Diversity is good and necessary, but we're not doing it in a smart way. Instead of copying
everything about a distro to make a completely separate project, it would be better to fork just a few things
and stay within the original project.
Desktop Linux is 400+ distros flying (as much as penguins can fly) in loose formation.
We should try to shift the culture toward some consolidation instead
of everyone creating new distros and apps.
Who needs 400 distros and 40 different tweak-OS-settings apps ? How about 20 and 3 ?
[Some argue that only a few distros and DEs really matter, and there's some
truth to that, but still there are a lot of combinations:
Dedoimedo's Linux distro dependency graph
omits a few such as Elementary OS, and skips a lot of variables.]
Prices we pay because of fragmentation:
One price we pay today for all the fragmentation is bugs and slow development. Have you run some of the
standard GUI apps from the CLI, and looked at the error messages that appear in the CLI window ?
Assertion failures, broken pipes, use of insecure or deprecated APIs, and more.
The quality of many major apps on Linux is bad.
Suppose much of the effort put into tweaking and packaging and testing and delivering and supporting many
of those distros were instead put into bug-fixing in the couple of dozen major distros ? Bugs would get fixed faster.
New features would get created faster.
The complexity of the forking and upstream/downstream and LTS/rolling dichotomy
means that bug-reporting and bug-tracking
are harder than they should be. Often a report filed against, say, Mint is closed with "probably an upstream bug,
you go figure out somewhere else to file it".
Today we have tremendous duplication and dilution of effort. It makes devs
and the whole community less effective. It seems every Linux project is saying "we need more devs !".
(e.g. Debian: "we need about 2-3 times our current work capacity".)
Same thing the general computer security industry is saying. This is not sustainable.
My sure-to-be-unpopular suggestions to Debian:
Work to bring derivative distros (and their devs) back into the main project.
If some distro forked off Debian simply because they wanted a different set of default
apps or something, maybe make "default set of apps" an install-time configuration choice.
Reverse some of the forking, somehow. Get those devs back.
"Today there are over 61,000 amd64 binary packages in Bullseye, the forthcoming release."
So push more of the work back onto the app devs: use flatpak or appimage or snap or docker
instead of native packages, where possible.
I haven't thought this one through, but: Redefine Debian from distro(s) to platform.
Have Debian concentrate on everything below the DE and distro-UI level, so the user has to go
to a Debian-derived distro (e.g. Ubuntu or MX Linux or LMDE etc) to get an ISO,
which has an installer, DE, default apps, settings, etc.
Another price we pay today is confusion among vendors and potential new users.
With Windows or MacOS, a user or vendor has a very small and easy choice of what to use or support.
Then they can customize on top of that.
With Linux, there are dozens of major distros and maybe 400 total distros. A new user or vendor is faced
with an intimidating variety. And whatever subset they choose to support, the other 80% of the
community will criticize them. Easier just to avoid Linux than to deal with that. Linux has somewhat-poor
support for some graphics, Wi-Fi, and Bluetooth partly because of that, I think.
Friction. I'm thinking of moving from one distro to another, probably to a different DE.
But I like one or two of the default apps in my current distro; they're better than
any alternative I've found. Turns out they're custom-built for my
current distro (Mint), and built using things (GTK) that may not be available in other distros or DEs.
Possible friction points, barriers to moving:
Your muscle-memory will be wrong: things will be in different places in the GUI,
keyboard shortcuts may be different.
Some CLI commands (package manager and init, mainly) may be different.
Your favorite app from the old distro may not be available on the new distro,
especially if it was one of the default apps written/forked specially for that distro.
Some system GUI apps may be quite different, especially the installer,
the software center/store/manager, the update manager, the system settings manager.
Sort-of-bleeding-edge stuff (WireGuard, ZFS, Wayland) may be supported or not,
especially by default and/or in the installer.
Some types of packaging (snap, flatpak) may be supported by default or not,
especially by the software installer and updater.
Some of your little scripts may have to be fixed because system files may be in different locations
or daemons may have different names.
Desktop icons or widgets may be different or unavailable if you change DEs.
If you're changing between LTS and rolling-release, or between distros of different ages,
you may find differences such as Python 2 no longer being supported, or the standard repo containing
older versions of apps than you used on your previous distro.
GUI inconsistency. Since various apps, tools, utilities and system features are parts of different
projects, and often built using different frameworks, there is no consistent or easy "theming".
A particular app or piece may be built on GTK (2.0 or 3.0), Qt, Java, Electron, or something else.
Then it might be packaged inside some container (Snap, flatpak, appimage), which can further affect how settings and themes apply.
There is no one place to say "make the scrollbars for all things 20 pixels wide".
The layouts and functionality of open-file and save-file dialogs may vary from app to app.
Some apps that support printing have print-preview and print-settings dialogs, others don't.
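For example, widening scrollbars is possible for GTK 3 apps only, via a per-user CSS file; Qt, Electron, GTK 2, and containerized apps each need (or lack) their own mechanism. A hedged sketch, assuming the standard GTK 3 user CSS location:
# ~/.config/gtk-3.0/gtk.css  -- affects GTK 3 apps only; Qt / Electron / snap'd apps ignore it
scrollbar slider {
  min-width: 20px;
  min-height: 20px;
}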
Every new distro represents a forking/multiplication/replication of the existing bugs
in the original code of kernel, user-space code, and apps.
Many of the distros "handle" the huge steady flood of bug-fixes and security fixes from upstream by ignoring it:
freezing on a specific release of the upstream distro or kernel. This keeps bugs (including security holes)
in place for years, to bite people again and again.
From discussion 2/2020 on /r/windows:
> What made you switch back to (or come to) Windows
> as your primary system after using Mac/Linux?
I used to use both but realized if you want to consume multimedia content comfortably you have to have Windows.
Linux is great for servers and stuff like that but as a daily OS it sucks. I tried bunch of different
distros but there always seemed to be an issue with drivers, apps or compatibility.
Ubuntu, Mint, Debian variants, Fedora. They work ok but there always seem to be some
form of tinkering requirements on a regular basis. Ain't nobody got time for that!
Linux is good if you want to waste 3 days getting your graphics card to work with 3D function.
Seriously, f*ck Linux. Waste of time system on a PC.
Before switching to Windows 10 I tried giving Linux a final chance because I was going to wipe my system anyways.
Ran into a driver issue with a RAID card. Downloaded the driver from Intel ... and it was in the wrong package format.
F*ck Linux. Constant problems like that.
Edit; fix your goddamn stupid driver support instead of creating a new distro every week!
The creator of the Linux kernel blames fragmentation for the relatively low adoption of Linux on the desktop.
Torvalds thinks that Chromebooks and/or Android is going to define Linux in this aspect.
From Chris Fisher on Linux Unplugged podcast episode 358 at 1:02:15:
There are so many areas where it feels like you're running a desktop environment
on top of a command-line environment which is running on top of a kernel.
You can feel that stack, sometimes.
From someone on reddit 5/2019:
My opinion is that the very things that we Linux users love about our platform are the same things that have
prevented it from becoming a real contender as a viable desktop alternative to Windows or macOS.
Linux is all about choice. Unfortunately that choice has splintered Linux into 300 active distributions.
There are now at least 18 desktop environments. There are over 40 music players alone to choose from.
There are even more than 20 sys init systems. The choices go on and on and forks, which can be done by
any person or group, add even more confusion. Can you imagine trying to manage a help desk for mainstream
Linux users who are lay people who purchased a computer running Linux pre-installed? It would be a nightmare!
Sure there are some hardware vendors who ship Linux systems but those are aimed for developers and Linux geeks
like us, not for mom and dad.
From someone on reddit 9/2020:
Just because you can do something, doesn't mean you should. Often it would be better for
the community if devs of projects would work together.
For example, there used to be 3 GTK-based Desktops: Gnome 2, XFCE and LMDE [probably meaning LXDE].
When Gnome 3 came out, much of the community didn't like it. So as a response they
made MATE, Cinnamon, Unity, Deepin, Budgie and Pantheon.
In a short time we went from 3 GTK-based DEs to 9 GTK-based DEs.
Seems a whole lot of repetitive work, and that 6 desktops really weren't necessary.
If those devs had worked together on one or two new DEs instead of 6, how much better
and more advanced could those be ?
From someone on reddit 9/2020:
> What do you dislike about Linux ?
The smug attitude that some Linux devs/users have that Linux is somehow superior to Windows or MacOS.
It's not. It's forked into infinity with hundreds of different attempts to solve the same problem
(and for the most part - failing). The UI is inconsistent, the look is mediocre at best, and the
applications for the most part are appalling (at least on the desktop).
Yet with all that crap - the Linux gang thinks they're the second coming of OS Christ.
Get over yourselves, Linux is a tool, for some jobs it's a good tool, but it's not a religion,
it's not the best, it's just a different approach to software that Windows and MacOS has
mastered many many decades ago and Linux still thinks they will (maybe, but not today).
[Even what may seem to be a single project may not be:]
From someone on reddit 6/2020:
In GNOME, just like in most other OSS projects, there is no "leader" that decides while the rest listens.
GNOME itself for example is just a collection of projects, where each maintainer for their own project
decides what to do with it. They agree on some things like a release schedule, and try to follow
the GNOME HIG, but that's basically it.
The startup sequence into the live session was relatively tame - I'm talking about what happens
from BIOS to desktop, and so far we've seen every single distro doing it ever so slightly differently.
Every single one. ...
Fonts remain a big problem. Among the 9,000 distros out there, one or two manage good, clear,
crisp fonts out of the box. ...
the Welcome screen has its own window decorations that are different from the rest of the system.
I guess some hybrid mix of Gnome and Xfce and Cinnamon.
... I am going to show you how to change font color in the MATE desktop, too, very soon.
This is all very similar - Gnome, MATE, Cinnamon, Xfce. And funnily, they all require editing CSS files manually.
In KDE, this is a simple, friendly, built-in thing ...
[Mint 20 XFCE:] ... The problem is, the differentiating factors by which the Linux desktop could once
sway hearts and create hope - especially for wavering Windowsers - are long long gone. So having a
decent desktop that checks some boxes simply isn't enough. Mint 20 Xfce is fast and does most of
the basics reasonably well.
But then, the ergonomics are off, the printing thing is weird, the software selection can be better,
there are quite a few rough spots, and at the end of the day, there are few super-awesome features
that would distinguish this system over dozens of other Linux distros. But as long as there's no
ultra-rigorous QA across the entire ecosystem, as long as even simple things like the boot sequence
or fonts cannot be taken for granted, the Linux desktop will not be the "killer" replacement for Windows. ...
There are already way too many distros, distro spins and distro editions out there.
Roughly 90% too many. Even maintaining a single version can be tough, for small or large teams alike,
and splitting thin resources to create an extra edition make things even worse.
Finally, what's the actual benefit? Is this going to sway the Windows masses or revolutionize the desktop? ...
Desktops (GNOME, KDE) are fine, and were okay 10 or 20 years ago.
Base apps (file managers, mail clients, browsers) are good.
What is missing is the third-party app ecosystem.
Missing from / problems in Linux: central developer portal, stable APIs, consistent desktop APIs,
consistent desktop functionality (e.g. no systray on some DEs), cross-platform toolkits and libraries (e.g. GTK
on Windows and Mac), packaging (app release cycle gets tied to distro release cycle; snap/flatpak/appimage are
promising but there should be only one; Electron is a symptom of devs avoiding Linux APIs).
Can we somehow merge/coordinate KDE/GNOME/Gtk/Qt a bit ?
Open app stores separate from distros (e.g. Flathub, but even better if federated).
Agree on one packaging format.
Integrate packaging into IDEs. You should be able to push a button in the IDE to package
an app and publish it into N stores.
We (the world) need a free and open desktop. We need it for privacy, freedom.
My opinion: What should be the set of base distros ?
Some distros have unique fundamental features or directions that justify their existence:
Void (static linking)
Qubes (compartmentalization / hypervisor)
Tails (non-persistent, onion)
Whonix (dual VMs, onion)
Kali (run as root, special network stack ?)
LFS (build from source, learn)
I'm sure I'm missing some that are unusual / unique in some way.
Others are pretty fundamental for reasons of organization or company or heritage.
IMO the rest should feel some pressure to un-fork, to merge back into the base and become
install-time or config-time options, maybe just a check-box that gives you a particular DE
and set of default apps. Today we have tremendous dilution of brands (an obstacle to potential
new users and vendors) and duplication of effort (all of these duplicate web sites and ISOs
and installers and repo maintenance and bug-trackers and forked apps etc).
DEs should be separate projects. KDE, GNOME, Cinnamon, XFCE, MATE, etc. Any distro can
let the user choose at install-time among supported DEs.
Other features should be available in all distros and
let the user choose to enable/disable them at install-time: snap, flatpak, docker, Wayland.
Other features have won their wars and should just become standard across all the major distros: systemd.
Rip out the old code to simplify things.
So, for example, instead of having a separate distro Mint Cinnamon, I would move the unique
changes of Mint (installer, eCryptfs, more) back into the Ubuntu source tree and bug-tracking system,
and have them appear to the user as install-time options. Move the forked changes of
Mint apps (Nemo, Pix, etc) back into their
original apps source trees and bug-tracking systems, and have them be build-time options.
The Cinnamon DE should come from the Cinnamon project.
I'm not talking about forcing or preventing people. I'm talking about persuading the leaders
of distros and projects to consider a different emphasis.
We keep adding new stuff without ever getting rid of old stuff. So the junk keeps building up,
the overall system keeps getting more complex, the available resources keep getting more and
more diluted, and effort keeps getting more and more duplicated.
Areas that should be "consolidated" a bit:
Default or standard apps (file explorer, text editor, image viewer, music player, etc).
Init / event systems (systemd, cron, Network Manager, etc).
"Some consolidation" is not something one developer can do. We need to change (by persuasion)
the culture, the attitudes, of the major devs and managers in the community.
Many people don't like to hear this. "HE'S SAYING NO ONE SHOULD DO ANYTHING NEW. HE WANTS TO STOP ME FROM
DOING WHAT I WANT. HE WANTS LINUX TO BE LIKE MICROSOFT OR APPLE. HE'S EVIL, BURN HIM !"
Actually, what I'm saying is that the adults, the managers and devs who do
the big work and run the major distros and projects,
should think about ways to consolidate things a bit. For example, do Ubuntu, Lubuntu, Xubuntu, Kubuntu,
Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, Cubuntu, Fluxbuntu, Ubuntu Mini Remix, UbuntuLite, Mint and many more
(see Ubuntu wiki's "Derivatives")
all need to be separate distros, or can they be one with various install and config options ?
There would be a benefit to the community, in terms of mindshare and bug-fixing etc, if they could be one.
Maybe there are technical reasons they can't be; I'm no expert.
And I'm sure there are organizational/political/legal/strategy conflicts that would prevent some of this.
But I'm putting forth the idea. Having 400 distros
(see GNU/Linux Distributions Timeline)
imposes costs and holds back Linux.
If all the *buntu's and Mint*'s and Elementary OS became one distro "Ubuntu+", then when you
fix a bug in that one distro, it's fixed in all the combinations. One distro name ("Ubuntu+").
One installer. One set of release images. One repo. One set of tests. One bug-reporting and bug-tracking system.
One set of documentation.
I'm told Debian does something like this ? Near the end of the installer, it gives you a list
of available DEs and says "pick one". One ISO and installer for all N configurations.
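I believe that installer screen is backed by tasksel, which you can also run on an installed Debian-family system. A hedged sketch (assumes the tasksel tool; the task names come from its own list):
# list the pre-defined "tasks" (each is roughly a DE plus a default set of apps)
tasksel --list-tasks
# install one of them, e.g. the Xfce desktop task
sudo tasksel install xfce-desktop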
I'm told Manjaro GNOME has something a bit like this ? In Manjaro "Hello", there is "GNOME layout manager" ?
[From someone on reddit 6/2020:
"OpenSUSE lets you try different DEs just by logging out. There's only one distro OpenSUSE
and it comes with KDE, Gnome, Xfce, Enlightenment, Mate, LXDE, LXQT, and more."]
Some of the biggest problems are political. I'm sure one reason that distro Y forked off from distro X
was that the Y devs/managers didn't agree with decisions made by the X devs/managers. They argued, split, and a fork happened.
Merging back in, or even submitting changes back to upstream, would be very difficult.
We need variety and choice, but a reasonable level of it. We never should prevent random person X from creating a new distro.
But we need more focus among the majority, the core, of the community.
Suppose other areas of the Linux/GNU ecosystem were more like the kernel and GNU ?
The Linux kernel, GNU, and util-linux generally work pretty well and don't have a lot
of duplicate effort and forks etc.
Why is that ? Because each has a single owner and standard. This does not eliminate
all "choice"; the kernel has pluggable drivers and modules. And it does not kill innovation;
the kernel gets new features, new CLI commands get added.
So, suppose other areas of the Linux/GNU ecosystem were handled the same way ?
Suppose there was an agreement that systemd was the only init system, and
there was a clear central owner of systemd ? It has modular plug-ins, you can
innovate on top of systemd, you can add units. The major projects all agree to (over time) rip out any old init
structures and only use systemd.
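For example, a minimal unit that anyone can drop in without touching the init system itself ("mybackup" and its script path are hypothetical):
# /etc/systemd/system/mybackup.service  (hypothetical example unit)
[Unit]
Description=Nightly backup of /home

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mybackup.sh

[Install]
WantedBy=multi-user.target

# then:
sudo systemctl daemon-reload
sudo systemctl start mybackup.service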
Now, someone could refuse to accept this, and use their own non-systemd init system.
But over time they would find fewer and fewer apps and devs and base distros supporting that.
The costs of being different would get higher. Just as if they forked the Linux kernel
and changed it, and based their distro on that forked kernel. Nothing stops them from
choosing to be different, but they'll be fighting against the tide.
Similar with package formats. Suppose Red Hat and Canonical and Debian etc were to get
together and say "look, let's try to reduce our differences.
let's add the best features of rpm/dnf packaging to dpkg/apt, and then we'll all use the enhanced dpkg/apt,
and eliminate any support for the old formats and managers".
Each of these changes would take many years. It would not be an overnight change.
But with a clear new standard, slowly people/apps would adopt the new standard.
Corporate funding idea (you won't like it):
Suppose Red Hat were to take a tiny chunk of its billions and say to the
Fedora, CentOS, Qubes OS teams: "we will fund you to help port your best features and apps back
into base Red Hat, and try to reduce the deltas between our distros. we will allocate some of our devs
to help you."
Suppose Canonical were to take a chunk of its millions and say something similar to the
Mint, Zorin OS, Elementary OS, Whonix, Pop OS teams: "we will fund you to help port your best features and apps back
into base Ubuntu, and try to reduce the deltas between our distros, let you become more like 'flavors' of Ubuntu.
We will allocate some of our devs to help you."
Suppose Red Hat and Canonical and Debian etc were to get together on packaging as described above
("let's add the best features of rpm/dnf packaging to dpkg/apt, then we'll all use the enhanced dpkg/apt"),
and the effort was staffed by employees of the corps, or funded by the corps.
Google and Microsoft and Apple have tons of money, and some stake in the success of Linux.
Any way to tap their funding to implement some consolidation and increased commonality ?
Critical bug reports filed against the Linux kernel often get zero attention and may linger for years
before being noticed and resolved. Posts to LKML oftentimes get lost if the respective developer is not
attentive or is busy with his own life.
... the fbdev, vt, and vgacon kernel subsystems. These subsystems aren't actively
maintained (receiving drive-by fixes only), and the kernel developers recommend
to not enable these subsystems if you care about security ...
[Those are kernel subsystems / drivers. If they were built as loadable modules and in use,
they would show up in the output of "lsmod"; but they're often compiled directly into the kernel,
so check the kernel build config as well.]
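[A hedged way to check on your own kernel (config symbol names vary a bit between kernel versions):
# these subsystems are usually compiled in rather than built as loadable modules, so check the build config
grep -E 'CONFIG_(VT|VGA_CONSOLE|FB|FRAMEBUFFER_CONSOLE)=' /boot/config-$(uname -r)
# and check for any modular framebuffer drivers that happen to be loaded
lsmod | grep -iE 'fb|vgacon'
]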
There seem to be a number of serious design flaws or gaps:
A long-standing security issue in the standard Xorg/X.11
display server system used for decades:
StackExchange's "Why has Ubuntu 18.04 moved back to insecure Xorg?".
TLDR: Nothing stops a Linux GUI application from spying on all the events/keys
input to other applications, or even injecting events/keys into the input queues
for other applications. Keylogging, essentially.
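A hedged demonstration of that, on an X session (needs the xinput utility; the keyboard id is whatever "xinput list" reports; this does not work the same way under Wayland):
# find your keyboard's device id
xinput list
# any unprivileged process can now watch every keystroke, no matter which window has focus
xinput test <keyboard-id>
# or watch all input events on the root window
xinput test-xi2 --root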
From /u/aioeu on reddit 7/2020:
> What was wrong with X that motivated Wayland?
> I read somewhere that the code base had become unmaintainable.
I don't think the long-term maintainability of X is really a major reason for Wayland.
After all, Xorg is still being maintained.
But there are other reasons for Wayland:
Security in X is largely non-existent. Any client can modify any window on the display,
or eavesdrop on events on any window. This is really fundamental in the design of X - it's how
window managers work, how programs that let you define hot keys work, and so on.
Things like devilspie are only possible because of X's lax security.
X is not a good fit with how modern graphics hardware works. A lot of X was designed
with the idea that 2D graphical primitives would be hardware-accelerated (stippled lines, oh my!).
That may have been the case in the 90s ... not so much now. Modern hardware doesn't even
bother accelerating those things.
While the supposed network transparency of X is often lauded, it really doesn't work
very well over anything with a bit of latency. The X protocol is quite chatty, and a lot
of operations require synchronisation between client and server.
X has a lot of legacy cruft that can't be removed. The core X protocol isn't even
used by most X applications using modern toolkits (they typically use the XRender extension instead),
but the core X protocol needs to be there because it's literally "the core X protocol",
and there are a few programs that do use it. Heck, my day-to-day text editor uses Motif
and still relies on the core X protocol.
A lot of hacks on X are really, really bad hacks. I'm amazed that drag and drop ever works at all.
I should add that there have been a few successes in removing the most egregious parts of X.
X has historically had hardware drivers in userspace - it couldn't actually rely on the most
useful features of your hardware having kernel APIs, right? - but as you can imagine that's
a terrible idea all round. It's the reason X had to be run as root. But a lot of those hardware
drivers have been removed now.
Xorg also once had a print server. After all, if you can render a window to a screen,
why not also use the same code to render paper documents? Xprint was only removed after
somebody added support for it to glxgears ...
From /u/Sh4dowCode on reddit 7/2020:
> Wayland and Xorg. Difference ?
Wayland is a protocol, while Xorg is a display server (using X11 protocol).
Wayland is the "new" thing and works differently than X11. Back when X11 was created, it was common
to have one powerful (time-sharing) server and multiple clients that are connecting to it.
Each of those clients had a server that was speaking X11, and the server could say
"render a line at xy to xy, draw a circle at xy with radius r". Nowadays those X11 primitives
are practically not used any more, but they still need to exist because some app might need them
(e.g. xclock uses those). Over time X11 got a lot of extensions; one of them was XRender,
which allowed pushing a bitmap(-image) over the X11 protocol. This ability is basically what
every application uses to render itself, because it gives you a lot more freedom in how
you design your stuff. The issue is that all the bitmaps are going through a socket.
And while today the xorg server and x-clients (applications) are on the same machine,
it still creates performance and memory overhead. Also any x-client can grab the entire screen,
get keyboard events etc, so it's not secure.
Wayland tries to fix this, by using shared memory. Meaning the wayland client just "sends" a
memory location to the server saying where the window "bitmap" is located. Also in Wayland
a window is its own thing and it doesn't know what other windows are open.
Your window manager / desktop env in xorg is just another client to the xorg server,
with the same permissions. Now in Wayland the window manager / desktop environment is the
Wayland server, so it decides what to render and then just gives the finished screen
to render to the Linux kernel.
To run legacy X11 clients on Wayland there is XWayland which is basically an X-Server that
takes the data provided by x-clients, renders it to a bitmap, and then gives them to the Wayland server.
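[A quick, hedged way to check which display system your own session is actually using (assumes systemd-logind):
echo $XDG_SESSION_TYPE                             # usually prints "x11" or "wayland"
loginctl show-session "$XDG_SESSION_ID" -p Type    # same information from logind
]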
Apparently the X display system is based on networking, so I can't make Firejail or AppArmor turn off networking
for some apps that on the face of it have no need for network access.
Someone said "Try running X with -nolisten tcp" (see "man Xserver"). In Mint and most modern distros
that already seems to be the default, and local clients reach the X server through a Unix-domain socket,
so it doesn't really address the per-app question.
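One way to verify what your X server is doing (a hedged check; the port range and process name assume a stock Xorg setup):
# is the X server listening on TCP ?  (ports 6000+ correspond to displays :0, :1, ...)
ss -ltn | grep ':600'
# check the X server's command line; most modern distros already start it with "-nolisten tcp"
ps ax | grep '[X]org'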
Also no way in Firejail or AppArmor to say "restrict networking to just domain D, and/or localhost,
and/or LAN addresses" ? Some apps (such as sudo ?) do a DNS lookup of the local hostname;
I want to allow that kind of thing while denying external access.
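Firejail can at least cut networking off per-app, or confine a sandbox to a netfilter rule set, even if it can't express "only domain D" directly. A hedged sketch ("someapp" and the eth0 interface name are placeholders; the nolocal.net rules file ships with Firejail, I believe):
# no network at all for one app
firejail --net=none gimp
# new network namespace on a specific interface, with a per-sandbox netfilter rule set
firejail --net=eth0 --netfilter=/etc/firejail/nolocal.net someapp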
As far as I can tell, networking/VPN/firewall needs a major redesign. Try to figure out why your
system is using a particular DNS, or stop it from doing so. Try to figure out
if your system ever does a single network access that bypasses the VPN at any time, from boot to shutdown.
Try to figure out what happens if your VPN connection goes down, and get informed if your public IP address changes.
You'll find yourself lost in a sea of modules and layers, from /etc/nsswitch.conf to
systemd to avahi to BIND to /etc/hosts to
dnsmasq to Network Manager and nm-connection-editor and nmcli and
VPN and iptables and netfilter and ufw and gufw and docker and more. This thing overrides that thing,
this falls back to that
if configured this way, three things add rules to iptables as they start up, etc.
There are even two GUIs for Network Manager in Ubuntu: one through
Settings / Network and another through "sudo nm-connection-editor", and their features overlap about 95%.
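A few commands that at least show which layer is currently winning (hedged; "resolvectl" is the newer name, older systemd versions use "systemd-resolve --status"):
# who is answering DNS right now ?
cat /etc/resolv.conf                  # often points at a local stub such as 127.0.0.53 or dnsmasq
resolvectl status                     # if systemd-resolved is in use (older: systemd-resolve --status)
nmcli dev show | grep -i dns          # what NetworkManager has configured per-interface
sudo ss -lunp | grep ':53 '           # which daemon is actually bound to the local DNS port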
There seem to be several areas where old and new architectures, including old and new device names,
old and new stacks, old and new utilities, are co-existing or pasted together:
Networking/VPN/firewall/DNS/hostsfile/Bluetooth/Wi-Fi (see previous item).
Audio (OSS, ALSA, pulseaudio).
Installation is done in many different ways by various apps or components.
Try running some GUI apps from the CLI instead of the normal way (clicking on icon).
After using them and quitting, look at the CLI window. Chances are you will see failed assertions,
broken pipes, and other alarming things. On Mint 19.1,
at least Firefox, Chromium, ShowFoto, xed (the default text editor), NetBeans IDE, OWASP ZAP do this.
I think there's something wrong/changed with Java: every Java app throws an "illegal reflective access operation"
warning as it starts (a warning added by the Java 9+ module system). Look in the ~/.xsession-errors file, and see various alarming errors.
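For example, a hedged sketch (substitute any GUI app for firefox):
# run a GUI app from a terminal and keep its complaints in a file
firefox > /tmp/firefox-stderr.log 2>&1
# after quitting the app, read what it printed
less /tmp/firefox-stderr.log
# and check the session-wide error file, if your session writes one
grep -iE 'error|warn|assert' ~/.xsession-errors | less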
Try running "sudo journalctl -p 3 -xb" or "egrep -i 'error|warn' /var/log/*g | more",
to see what happened as your system booted and ran.
Probably you will see some alarming error messages, about PAM failures and keyring failures and
apps that wouldn't start and files not found
and who knows what else. Does not inspire confidence.
For example, on my (working) Mint 19.3 Cinnamon system, I get things such as:
# On boot:
kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
kernel: random: 7 urandom warning(s) missed due to ratelimiting
kernel: ashmem_linux: module is from the staging directory, the quality is unknown, you have been warned.
kernel: ACPI Warning: SystemIO range 0x0000000000000540-0x000000000000054F conflicts with OpRegion 0x00000000
kernel: kvm: VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL does not work properly. Using workaround
kernel: uvcvideo 1-1.6:1.0: Entity type for entity Extension 5 was not initialized!
dbus-daemon: dbus: Unknown group "power" in message bus configuration file
dbus-daemon: dbus: Unknown username "whoopsie" in message bus configuration file
networkd-dispatcher: WARNING: systemd-networkd is not running, output will be incomplete.
ger: Error: can't open /lib/modules/5.3.0-24-generic/updates/dkms
udisksd: failed to load module mdraid: libbd_mdraid.so.2: ...
wpa_supplicant: dbus: wpa_dbus_get_object_properties: failed to get object properties: (none) none
lightdm: PAM unable to dlopen(pam_kwallet.so): ...
dbus-daemon: [system] Rejected send message, ...
# Startup after kernel is up
systemd: kerneloops.service: Found left-over process 2589 (kerneloops) ...
kernel: kauditd_printk_skb: 11 callbacks suppressed
systemd-udevd: Could not generate persistent MAC address for docker0: No such file or directory
vboxdrv.sh: vboxdrv.sh: failed: Look at /var/log/vbox-setup.log to find out what went wrong.
lightdm: gkr-pam: couldn't run gnome-keyring-daemon: No such file or directory
cinnamon-session: WARNING: t+1.51996s: Failed to start app: ...
[pulseaudio] bluez5-util.c: GetManagedObjects() failed ...
pam_ecryptfs: pam_sm_authenticate: /home/user1 is already mounted
colord: failed to get session [pid 5933]: No data available
colord: CdMain: failed to emit DeviceAdded: ...
upowerd: unhandled action 'bind' on /sys/devices/pci0000 ...
kernel: ecryptfs_decrypt_page: Error attempting to read lower page; rc = [-4]
kernel: parport 0x378 (WARNING): CTR: wrote 0x0c, read 0xff
# On shutdown:
systemd: systemd-coredump.socket: Failed to queue service startup job (Maybe the service file is missing ...
systemd-udevd: Process '/usr/sbin/tlp auto' failed with exit code 4.
systemd: netfilter-persistent.service: Failed with result 'exit-code'.
umount.ecryptfs: Failed to find key with sig [d5eaa71c805ac0fb]: Required key not available
systemd: Failed unmounting /home/user1.
systemd: Failed unmounting /home.
kernel: printk: systemd-shutdow: 41 output lines suppressed due to ratelimiting
I'm sure some of these are just tests for features my machine doesn't have, or
things that sound alarming but shouldn't be.
I noticed that most repository mirrors use HTTP, not HTTPS. I asked if that was
a problem, and mostly was told that it isn't because packages are signed.
Why does APT not use HTTPS?
I think the "apt 2.0" coming out in early 2020 is supposed for fix some of this ?
Next, download a few packages more or less at random from various PPAs and mirrors that you use.
Extract the contents of those packages and look for a file named "_gpgorigin". If you don't see that file,
then the package isn't signed.
In general, dpkg files aren't signed. apt supports it, but distributions are neglecting that security layer.
Instead, the "Release" file is signed, and that file has a hash for the Packages file.
The Packages file has hashes for each of the individual packages. That could be almost as good
as signing the packages directly, but if you look at the Release file, its hash is only an MD5.
MD5 has been deprecated for almost every security sensitive application because it's too easy to
create a collision, and that's the weak point in the apt security chain. If you can MITM or compromise
a mirror, and if you can generate a Packages file that has a matching hash, then you can replace a
package file and apt will believe it's valid.
Personally, I think that people are not taking that weakness nearly seriously enough, and in this thread
you'll see a lot of people asserting incorrectly that packages are signed. They aren't.
The prevailing wisdom is a myth.
HTTPS isn't necessarily the answer. It wouldn't protect you from a compromised mirror.
"yum" doesn't use HTTPS generally, either. But Red Hat based distributions (RHEL, CentOS, Fedora)
all sign their packages directly.
If you're concerned about security, I recommend using one of those.
Yes, that's correct, the packages themselves aren't signed.
But the bit about "Release" file using MD5 is a bit incomplete. Modern repos should have SHA256 hashes
... IIRC This was the cause of a bit of trouble back when this was enabled; apt would warn about
insecure hashes and Google's repos were affected. And the GPG signatures themselves use SHA512
in Ubuntu's official repositories. I'd guess the presence of MD5 is probably for backwards compatibility
with any older tools.
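[Some hedged commands to check these claims yourself ("hello" is just an example package; any Debian/Ubuntu mirror URL will do):
# download a package without installing it, then list the members of the .deb archive;
# a dpkg-sig-signed package would contain a member named "_gpgorigin" (most contain only debian-binary, control.tar.*, data.tar.*)
apt-get download hello
ar t hello_*.deb
# look at a repo's signed Release file and see which hash sections it carries (MD5Sum, SHA1, SHA256)
wget -qO- http://archive.ubuntu.com/ubuntu/dists/focal/Release | grep -E '^(MD5Sum|SHA1|SHA256):'
]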
I could be wrong, but it seems nothing makes a package you're installing tell you what directories
it's going to modify, and nothing forces the installer to stay within those boundaries ?
A malicious package could alter anything in the system ?
[To see what files a package would install: "dpkg-query --listfiles PKGNAME" lists the files of an already-installed
package; "dpkg-deb --contents FILE.deb" lists what a downloaded package would put on disk.
Neither includes the effects of any maintainer scripts that might be run.]
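A hedged sketch of how to inspect both, for an installed package and for a downloaded .deb ("somepackage.deb" is a placeholder):
# files owned by an installed package
dpkg-query --listfiles PKGNAME
# files a downloaded, not-yet-installed .deb would put on disk
dpkg-deb --contents somepackage.deb
# extract and read the maintainer scripts (preinst/postinst/...), which run as root and are not confined
dpkg-deb -e somepackage.deb ./control-files
ls ./control-files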
Similar questions for a PPA (Personal Package Archive). Suppose you add a PPA for app X to "software sources" for your system,
then an attacker cracks the PPA and adds a package to update, say, Cron. In Update Manager, you'd see
an update for Cron, and nothing would tell you that the update came from a PPA instead of the official repo, I think.
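One hedged way to check from the CLI where a package or a pending update would come from ("cron" is just an example):
# the "version table" shows each candidate version and the repo or PPA it comes from
apt-cache policy cron
# list every extra source you've added
grep -h '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/*.list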
A number of people, on both server and desktop, seem to take pride in staying on old LTS releases,
such as Ubuntu 14.04 or 16.04. From a security point of view, this is a bad idea. Yes, Canonical back-ports
serious security fixes to those releases. But plenty of fixes do not get back-ported.
The whole concept of "LTS" is somewhat flawed.
Bugs in open-source software, including that used by Linux or common apps/services on Linux,
can go undiscovered for years; the Heartbleed Bug in OpenSSL is a famous example.
From Daniel Micay (lead dev of GrapheneOS, I think) on reddit 4/2019
It's just a fallacy that open-source is more secure and privacy-respecting. It's quite often not the case.
There's also the mistaken belief that closed-source software is a black box that cannot be inspected / audited,
and the massively complex hardware underneath is the real black box. A lot of the underlying microcode / firmware
is also a lot harder to inspect.
From /u/longm0de on reddit 2/2020:
Many eyes prevents security backdoors and other security exploits right? Or at least gets them fixed faster?
Statistically there is no real and significant data that supports open-source or closed-source software
being more secure than the other. You can't easily gauge this statistic either since many proprietary
software suites incorporate open-source components as well. Closed-source software can also have "many eyes".
Thousands to millions of individuals/entities can be looking at the source code of Microsoft Windows through the
Shared Source Initiative.
Our government certainly takes advantage of that program.
Year 2014 was the most damning in regard to Linux security: critical remotely-exploitable vulnerabilities
were found in many basic Open Source projects, like bash (shellshock), OpenSSL (heartbleed), kernel
and others. So much for "everyone can read the code thus it's invulnerable". In the beginning of 2015
a new critical remotely exploitable vulnerability was found, called GHOST.
Year 2015 welcomed us with 134 vulnerabilities in one package alone: WebKitGTK+ WSA-2015-0002.
I'm not implying that Linux is worse than Windows/MacOS proprietary/closed software - I'm just saying
that the mantra that open source is more secure by definition because everyone can read the code
is apparently totally wrong.
Year 2016 pleased us with several local root Linux kernel vulnerabilities as well as countless
other critical vulnerabilities. In 2016 Linux turned out to be significantly more vulnerable
than the often-ridiculed and laughed-at Microsoft Windows.
The Linux kernel consistently remains one of the most vulnerable pieces of software in the entire world.
In 2017 it had 453 vulnerabilities vs. 268 in the entire Windows 10 OS. No wonder Google intends to
replace Linux with its own kernel.
[But: many bugs are not assigned a CVE number, and it's not clear if different OS teams have
similar reporting policies.]
We found the number of active open source projects has been shrinking since 2016 and the number
of contributors and commits has decreased from a peak in 2013. Open source -- although initially
growing at exponential rate -- is not growing anymore. We believe it has reached saturation.
A study [which mostly excluded mobile devices] of vulnerabilities - bugs that can
be a gateway for malware or allow privilege escalation
by an intruder - shows that Windows platforms have the most by far, but that they also tend to
be fixed quickly, compared to Linux systems or appliances like routers, printers and scanners.
... Microsoft platform assets get fixes faster than other platforms, according to the paper.
"The half-life of vulnerabilities in a Windows system is 36 days," it reports.
"For network appliances, that figure jumps to 369 days. Linux systems are slower to get fixed,
with a half-life of 253 days. ..."
A common misconception about the Linux kernel is that it's secure, or that one can go a long time without
worrying about kernel security updates. Neither of these are even remotely true. New versions of Linux
are released almost every week, often containing security fixes buried among the many other changes.
These releases typically
do not make explicit mention of the changes having security implications. As a result, many "stable"
or "LTS" distributions don't know which commits should be backported to their old kernels,
or even that something needs backporting at all. If the problem has a public CVE assigned to it,
maybe your distro will pick it up. Maybe not. Even if a CVE exists, at least in the case of
Ubuntu and Debian especially, users are often left with kernels full of known vulnerabilities
for months at a time. Arch doesn't play the backporting game, instead opting to provide
the newest stable releases shortly after they come out.
From Daniel Micay (lead dev of GrapheneOS, I think) on reddit 4/2019
The Linux kernel is a security disaster, but so are the kernels in macOS / iOS and Windows, although they are
moving towards changing. For example, iOS moved a lot of the network stack to userspace, among other things.
The userspace Linux desktop software stack is far worse relative to the others. Security and privacy
are such low priorities. It's really a complete joke and it's hard to even choose where to start in
terms of explaining how bad it is. There's almost a complete disregard for sandboxing /
privilege separation / permission models, exploit mitigations, memory-safe languages (lots of
cultural obsession with using memory-unsafe C everywhere), etc. and there isn't even
much effort put into finding and fixing the bugs. Look at something like Debian where software
versions are totally frozen and only a tiny subset of security fixes receiving CVEs are backported,
the deployment of even the legacy exploit mitigations from 2 decades ago is terrible and work on
systems-integration-level security features like verified boot, full system MAC policies, etc.
is near non-existent. That's what passes as secure though when it's the opposite. When people tell
you that Debian is secure, it's like someone trying to claim that Windows XP with partial security
updates (via their extended support) would be secure. It's just not based in any kind of reality with
any actual reasoning / thought behind it.
The traditional desktop OS approach to disk encryption is also awful since it's totally opposed to keeping
data at rest. I recommend looking at the approach on iOS which Android has mostly adopted at this point.
In addition to all the hardware support, the OS needs to go out of the way to support fine-grained
encryption where lots of data can be kept at rest when locked. Android also provides per-profile encryption keys,
but has catching-up to do in terms of making it easier to keep data at rest when locked. ... iOS makes it
easier by letting you just mark files as being in one of 2 encryption classes that can become at
rest when locked. It even has a way to use asymmetric encryption to append to files when locked,
without being able to read them.
Really, people just like saying that their preferred software stack is secure, or that open-source
software is secure, when in reality it's not the case. Desktop Linux is falling further and further
behind in nearly all of these areas. The work to try catching-up such as Flatpak is extremely flawed
and is a failure from day 1 by not actually aiming to achieve meaningful goals with a proper threat model.
There's little attempt to learn from other platforms doing much better and to adopt their privacy and
security features to catch up. It's a decade behind at this point, and falling further behind.
Also, all these things about desktop Linux completely apply to anything else using the software stack.
It doesn't matter if it's FreeBSD or whatever. FreeBSD also has a less secure kernel, malloc, etc.
but at least it doesn't have nonsense like systemd greatly expanding attack surface written with
tons of poorly written C code.
There are literally hundreds of serious, game-over vulnerabilities being fixed every month in the Linux kernel.
There are so many vulnerabilities that vulnerability tracking and patching doesn't scale to it at all.
It has no internal security boundaries. It's equivalent to running the entirety of userspace in a single process
running as full unconstrained root, written entirely in C and assembly code rather than preferring
memory-safe / type-safe languages. Watch this talk as a starting point:
Dmitry Vyukov's "Syzbot and the Tale of Thousand Kernel Bugs" (video)
> you've said Flatpak is flawed. is Snap any better as an app sandbox?
No, not really. They're both fundamentally flawed and poorly implemented. They're a lot worse than even
the very early Android sandbox from a decade ago before all of the work on hardening it and improving
the permission model. They're approaching it completely wrong and treating it as if they need to figure
out how to do things properly themselves, by not learning from existing app sandboxes.
... It's a fundamentally broken approach to implementing a sandbox. It doesn't draw an actual security boundary
and fully trusts the applications. The design choices are being made based on the path of least resistance
rather than actually trying to build a proper security model. There's a big difference between opportunistic
attack surface reduction like this and an application sandbox, which these are not implementing.
They cannot even be used to properly sandbox an application no matter how the application chooses to
configure the security policies, even if the app is fully trustworthy and trying to do it.
The implementation is not that complete. It could certainly be done properly but it would require a
huge amount of work across the OS as a whole treating it as a unified project, along with a massive
overhaul of the application ecosystem. I can't see it happening. It requires throwing out the traditional
distribution model and moving to a well-defined base OS with everything outside of that being contained
in well-defined application sandboxes with a permission model supporting requesting more access dynamically,
or having the user select data as needed without granting overly broad forms of persistent access.
From /u/longm0de on reddit 2/2020:
[In the context of "why do people go back to Windows"]
I feel security is a massive burden put upon Linux developers. Linux was not made to be "the most secure" system
in the world or even secure at all. Linux was made with portability in mind and with portability there
can be conflicts with security mechanisms.
Take an actual look at the Linux kernel, for a long time it lacked security that Windows NT had since its release.
It's important to know that the NT lineage of Windows is not based off of or even similar to MS DOS or OS/2-like
Windows 95 and etc. The NT lineage of Windows is initially based off of VAX/VMS (now known as OpenVMS) and still
largely is based off of that architecture as developed by Dave Cutler and his team. Windows NT from the get-go had users,
roles, and groups as well as proper access control. NT contains discretionary access control lists as well as
system access control lists which can be used for auditing in comparison to Linux which relied on rudimentary
RWX permissions with an owner-group-world philosophy. SELinux finally brought discretionary access control lists
to Linux as well as mandatory access control. SELinux is a great thing and should be treated as such - it implements
a form of MLS. Windows later added a form of MLS known as mandatory integrity control.
Nearly all objects in NT are securable with DACLs and auditing such as processes, threads, sockets, pipes,
mutexes, etc., NT has an underlying unifying security principle. In later versions of Windows (such as Vista)
UAC was implemented with the Administrator account so that even administrators didn't execute things as
administrators, but had to explicitly grant permissions. It's stated that UAC is insecure because in normal
implementations, it is just a "yes" or "no". This is largely untrue, UAC in its current default configuration
is ran in Secure Desktop Mode which prevents software input emulation as well as keylogging.
In Linux, if I want to run a program elevated, I have to use the terminal and on X11, I can just intercept
the key events and then log the users password without any high privileges. Where is the security in that?
Windows has exploit mitigation policies which are VERY similar to hardenedBSD and grsecurity/PaX. Many
Linux distributions don't even want to use grsecurity/PaX and the kernel developers don't even want to
support it because it may "break" some devices.
Again, Linux was made for portability, not security. It's not exactly "insecure", but it's not exactly
secure either. Also, I don't run any anti-malware on Windows (for resource purposes I even disabled
Windows Defender by setting -X on core files it requires), and my computer hasn't received any malware,
and years back the only time my PC did receive malware was due to being socially engineered.
There is nothing about Linux that magically prevents malware - nothing about its architecture as
compared to Windows accomplishes this. When somebody can make an actual case about its architecture - I will
change my mind. No, don't point out access control that Windows already has. Windows on the other hand
has driver signature enforcement, kernel patch protection, AppContainers, etc. You can even configure
Windows so that the only applications to run with administrative privileges have to be digitally signed.
There is a lot you can do in terms of security on Windows systems.
Not going to happen: RIIR: Rewrite Linux using Rust programming language:
An important point made by some people: we really should stop using the C programming language (created in 1972-3).
It is not memory-safe or type-safe, and it lacks language-level concepts such as exceptions (it always just
does something and keeps going), a managed heap, and real strings. Unfortunately, the Linux kernel and
much of the user-space code is written in it. This leads to tens of thousands of
bugs in Linux today, including security vulnerabilities. Maybe C is appropriate for very
low-level system programming, as an alternative to assembly language. But not for apps and services and modules.
This would not help/solve issues such as all of the kernel code operating in one
memory address space at one processor privilege level (lack of compartmentalization).
A bug in device driver X still could mangle something in iptables code Y, for example.
But it should help get rid of entire classes of errors such as buffer overflows and use-after-free bugs.
People bringing up this idea have provoked a "so you go do it" reaction.
"RIIR: You're telling existing devs to go do a ton of work." A fair point. Except that
the work would be pointless if at the end the existing devs reject the new code.
And indications are that devs all the way up to Linus Torvalds would reject it.
A rewrite would solve some classes of low-level problems, not fix bigger problems,
be an ENORMOUS amount of work, and be resisted by the existing devs. Not going to happen.
Now Linux desktop users are using the same browsers etc as the Windows people are, so vulnerabilities seen on Windows
are more likely to exist on Linux too. Same with PDF docs and Office macros. And with cross-platform
apps such as those running on Electron or Docker. And libraries (such as the SSL library) used on many/all platforms.
An exploit may work the same way regardless of the underlying OS type.
Software from third-party repositories (like PPA's) and external .deb installers, is untested
and unverified. Therefore it may damage the stability, the reliability and even the security
of your system. It might even contain malware ...
Furthermore, you make yourself dependent on the owner of the external repository, often only
one person, who isn't being checked at all. By adding a PPA to your sources list, you give
the owner of that PPA in principle full power over your system!
Gamers may have a tougher time with Linux than with Windows. Vendors target the biggest market first and best.
Same for video-editors and such.
Open-source software may be great, or may have one guy working on it occasionally and be really hit-or-miss.
From someone on reddit 4/2018:
> I'm sorry it's a bit of a rant and I might sound like
> a noob to you all, I'm really disappointed and not in a
> good mood at the moment. I've been using Linux only for at
> least 6 months and I've been in love with it when I decided
> to make the switch for good ... and I'm beginning to think
> it sucks. Tonight I had to do a simple slide-show for a client
> and I used mostly Shotcut but I tried Openshot and Kdenlive
> and the three of them was horribly buggy and a nightmare ...
> I really didn't enjoyed my experience and it pissed me off.
> I do not understand, most of those softwares have been in
> development for years and they look like in beta phases or as
> if only one person worked on it, but there's a big community
> and I keep seeing donations for open source I don't think money
> is really an issue. As for bugs I didn't even try to break them,
> they struggled with tasks such as fade in and fade out, transitions,
> adding texts, very basic stuff, I had to restart Shotcut like
> 4 times because it couldn't add the pictures on the timeline,
> and it's a well-known bug that is from around 2016. I'm on
> Ubuntu Mate and everyone says Ubuntu is a stable distro for gaming
> and doing work so I installed it. The only softwares that are
> stable to me is Blender, Krita, Gimp, Inkscape and Godot. As for
> Gimp it really is very powerful but there are some tools that are
> missing that Photoshop has, and if I go for the latest version
> it's very slow and not usable. I do a lot of multimedia and I
> don't think I will survive ... There's Natron, Fusion 9 that I
> didn't used yet but they are compositing softwares, I don't think
> I can do a lot with them as for video editing. It's already hard
> to not be able to play recent video games, if it also removes tools
> for working and being creative there's just no point to stay or to
> suggest it to anyone.
I think you have a wrong picture here. Most open-source projects indeed have only one (or very few) developers
working on them, and get very few (if any) donations.
Ubuntu is mostly stable in the sense of "let's not change it after the release". That's great to avoid
introducing new bugs, but not so great to remove old bugs.
From /u/BlueGoliath on reddit 9/2017:
As someone who previously used Windows and now uses Linux for 90% of my time now: If you are going to switch to Linux,
be ready to deal with bugs, piss-poor UI design, hardware incompatibilities, and other issues.
Despite what you hear on tech sites about how great the Linux community is, it really isn't.
If you complain about Linux you are most likely going to be met with one of the following:
You just don't like it because it isn't Windows.
It isn't Linux's fault, it's your computers fault.
Distro X just sucks, Distro Y is what you should be using.
You shouldn't complain because it's free.
Yeah it sucks but that's the current mentality of the Linux community. Be ready for it.
[Re: "piss-poor UI design": probably not a problem if you spend most of your
time in a browser, desktop, and a couple of major applications.]
From /u/OnlyScar on reddit 3/2018:
Around 6 months ago, I made the move to Linux. I am not a gamer, so it was easy for me. To make the experience more authentic,
I installed linux on my main machine and didn't dual boot Windows. It was only linux for me. It has been an interesting journey,
but sorry I can't take it anymore. Please note that I am strictly speaking as a non-developer, non-geek but a "power user".
My reasons might not apply for developers and very technical users. Below are the reasons am going back:
1) Windows vs Package Manager Repo System : Repeatedly I was told that the software repository and package manager
system of linux is much superior than the Windows system of downloading .exes from developer sites. This is such a lie
that it's not even funny. The reason: age of software. Win32 .exe softwares get updates independently from the base OS.
You can use Windows 7 and guess what, your favorite softwares will all run at the LATEST version. I repeat, you can use Windows 7 and
your Blender and Krita will be at the latest version. What the version of Blender and Krita on Ubuntu 16.04 or 14.04?
Is Ubuntu 14.04 even usable for normal desktop use anymore, consider its software repo age? And no, am not using any rolling distro
or Fedora because their stability doesn't hold a candle in front of Ubuntu, mint, debian stable, win 10 or macOS.
Also I shouldn't have to upgrade my OS just to get the next version of software. This is absolutely unacceptable and ridiculous.
The fact that my softwares stays fully cutting edge, up to date on Windows while the base OS stays same is extremely important.
2) Security and BSODs etc : Contrary to FUD, Windows 10 is actually very secure unless you want to download softwares
from crackedfreesoftwares.ru. You DO NOT need a separate antivirus, Windows Defender is now enough. It runs like a dream on most hardware.
And Windows do NOT force upgrades in the middle of work. BSODs have long been a thing of distant past. Basically am saying that
repeatedly using the boogeyman of security, bsods etc isn't working.
3) Atrocious Desktop Environments : My main reason of ditching linux. Linux DEs are such a sad joke compared to Windows (or Mac) DE
that it is not even funny. Let's start, shall we:
i: GNOME: The DE suffers from MEMORY LEAK for god's sake. Performance is pathetic, much much worse than Windows 10 or mac DE.
This is also the main default desktop of linux world, which actually says a lot about linux. It's absolutely unthinkable for us
to even use a DE which suffers from an extreme memory leak, and the developers don't even show any intention of fixing it.
It is just unthinkable on Windows. Gnome is also unusable out of the box, and you have to use random 3rd party hack job extensions
just to get a basic fully functional DE. You need to download software just to get a simple minimise button. Simply Unbelievable.
And you guys, like a bunch of callous users, continue to support it and use it while happily doing ALT+F2 -> r. Lame.
ii: KDE - So, so many small random but crucial bugs that it is really impossible to list them all. They try to emulate Windows,
and do a pretty poor job. For example, just use the "hover option" on KDE task bar. See the quality of preview. Do KDE devs even
know how important that single function is? Small random bugs like this simply make it inferior to Windows DE.
iii: XFCE - Thanks, but no thanks. It's 2018, not 1998. No hover option btw. Too basic and limited.
iv: Cinnamon - Too strongly tied to Linux Mint, a distro indulging in many questionable practices. Bad aesthetics.
What's up with that huge square-like menu? And why does the menu size increase when I add favorites?? It's already too big anyway.
It just looks like a cheap rip-off of Windows XP.
v: Mate - Still too basic compared to Windows.
vi: Tiling windows managers - Unusable and irrelevant for non-developers, non-geeks.
Anyway, for me default DE matters. Even if the perfect DE exists somewhere in the wild, if a distribution chooses a subpar DE,
it says a lot about them and their focus on user-friendliness. And since most of the linux world has enthusiastically
opted for Gnome 3, a pathetic subpar incomplete DE, it says a lot about you guys.
4) Sickening Hypocrisy of the Community : Let's start, shall we - i: Saw multiple caustic rants about how MS Windows 10 provides
a poor inconsistent UI because of 2 settings menu (legacy and metro). And you guys say this while primarily using a piece of jewel
like Gnome 3. /s ii: Linux is all about control. Just don't expect a fcking minimise button by default on popular DEs like Gnome and Pantheon.
OK got it. iii: The arrogance and know-it-all attitude of gnome devs and elementary OS devs will put the arrogance of MS and Apple to shame.
But i guess that's okay cause they are your own. iv: Continuously compare Windows from 2002 to Linux from 2017 and try to prove your
point about how linux desktop is superior. Continuously attack MS for telemetry and control while happily using Google services and FB.
Giving Apple a pass cause they are unix. The list goes on and on ...
5) Last but not the least, atrocious softwares - Yeah guys, accept it, LibreOffice and GIMP sucks balls compared to MS Office
and Photoshop. Krita gives MS softwares a run for their money, but LibreOffice and GIMP are simply cringy embarrassments.
You will get fired if you dare to make a presentation with LibreOffice Impress in a corporate environment. It is so bad.
VLC Media Player is outright bad compared to Pot Player on Windows. Nothing on linux compares to MusicBee on Windows.
I won't even embarrass you guys by talking about JRiver Media Center. Most linux desktop softwares simply lack the features,
polish and finesse of their Windows counterparts.
And no, it is not MS or Adobe's fault that those softwares are not available on Linux. You guys continuously rant about evil
proprietary software. Upstream major distros like Debian and Fedora don't even include proprietary softwares in their main repo.
Then why should proprietary software companies release their softwares on linux? What sort of a weird entitled demand is that?
Why should proprietary software companies accept second-class treatment on linux and hear some caustic remarks from Gnome devs
and Debian greybeards? It was up to you guys to provide a real 1:1 alternative to MS Office, Photoshop and various other proprietary
softwares, and you guys failed.
And yes, hardware support and display quality is much better on Windows. The fault again lies with Linux.
If you treat proprietary drivers and firmware as second-class citizens, don't expect hardware developers to go out
of their way to support Linux. That's an unfair demand.
Bye. After experiencing Linux, my respect for Microsoft and Windows 10 has increased by a 1000 times.
IMPORTANT EDIT - REASON FOR WRITING THIS POST - These problems have bugged me since the beginning.
But I came to linux at a tumultuous time, when Ubuntu had abandoned Unity (so Ubuntu Unity 16.04 is a dead horse),
and Ubuntu 17.04 and 17.10 are only interim releases. So I cut linux desktop and Canonical some slack and waited
for the next LTS. Today I tried Ubuntu 18.04 Beta and guess what? Lo and behold, the glorious memory leak is still present.
And my head just exploded in rage. :/ So much effort, so much time spent tweaking, so much distro hopping,
so much anticipation to permanently shift was all for naught. That's why I made this salty post.
From /u/UncleSneakyFingers on reddit 3/2018:
I have the same experience as you. This is my first comment on this sub, but a lot of users here are living in their own universe.
I see so many posts on the various Linux subs describing issues that are simply unthinkable. Windows just works, Linux just breaks.
I still try learning Linux though just to increase my skill set. But going from win10 to Linux is like going from a Mercedes to one
of those old cars you have to hand-crank to start up. It's just ridiculous.
So many users here are willing to spend an entire weekend fixing an issue with their Linux setup, but give up on Windows
the first time they f*ck up something basic and get an error message. This sub has really turned me off from Linux in general.
When they talk about Windows, it's like one of those infomercials showing someone trying to crack an egg and having it explode
all over the place. Just ridiculous exaggerations with no bearing on reality.
From /u/tonedeath on reddit 3/2018:
... The most important point that he made (in my opinion) is that if you install a distro like Ubuntu 16.04.x LTS (a distro
that is supposedly designed for non-techies, non-geeks, non-developers, you know regular computer users),
a lot of the software in the repos is not the latest versions of things. If you want to run the latest versions,
you probably end up Google-ing and finding out how to add PPAs. This is not hard, but it takes more effort and learning
than downloading installers on Windows or Mac and then getting update notifications. Why should a user of any current
version of a desktop distro not at least be offered updates to the latest versions of apps? It's a valid criticism
and it should be listened to and addressed. ...
From /u/knvngy on reddit 3/2018:
The Gnome thing is embarrassing. It looks amateurish; I don't understand what's going on there in the Gnome HQ.
But truth be told: Linux has never been really polished, optimized and focused for desktop. The focus on Linux has been: servers,
IT networks and now embedded/mobile, where the money is. In the desktop department Linux is OKish, it can be used just fine,
but I would agree that macOS and even Windows are better in that department.
From /u/ThePenultimateOne on reddit 3/2018:
Bluetooth audio is a pretty messy scene on Linux. For a long time I couldn't get any headset to work consistently on Kubuntu.
You would have to go through this painful connect-disable-disconnect-connect-enable loop every single time.
Now I have things working on Fedora ... except for my laptop, which now consistently gets very out of sync.
It didn't do this a month ago. It didn't do this on a previous version of Fedora. The whole thing sucks.
From /u/AlejandroAlameda on reddit 3/2018:
Once every few years, I try to give Desktop Linux another chance just for the kick. Here's my recent experience with Linux Mint 18.3. Enjoy :)
In a test VM with Linux Mint as a guest, VirtualBox guest additions can't be installed (some strange compilation error).
Installing Mint on real hardware then went quite smoothly, but:
USB WiFi interface won't be found on boot, only after unplugging it and plugging it back in. Need to manually add modules to init scripts.
Not what granny expects from a desktop system.
Installing Chromium is quite flaky: Clicking on "Install" in the Software Manager doesn't seem to do anything
(nothing happens) -- after multiple attempts, it somehow magically appears in the Start menu.
Software installation through the Software Manager is hit and miss in general.
Suddenly, I get a "Busy spinner" as a mouse cursor all the time, everywhere, forever.
Chromium: Switching themes gives huge graphical glitches, a mixture of all previously selected themes is used for
various slices of widgets.
Chromium: All taskbar buttons show the default Chromium icon, not the one belonging to the Chrome app.
Chromium: Each taskbar button has a strange vertical line before the window's title.
File Manager: Situations can arise easily where the File Manager recursively tries to copy a folder into itself,
yielding an infinite "Preparing to copy: 4298742398743298423789234789234 files (42723484329742389423 GB)" dialog.
VirtualBox installation (as host): Entire computer simply freezes (last seen outside of Linux in Windows 95) when launching
a Virtual Machine.
Installing current NVIDIA graphics drivers is impossible except if you're at least 3 rocket scientists.
Even if you manage to install them, nvidia-settings forgets its settings on each reboot (yes yes, I know you can put them
in a "Startup script" with special voodoo command line options, but Granny doesn't want to do that).
Mounted samba shares simply stop working after an update ("Input/Output error"). 2 hours of Googling and trial and error
reveals that the default protocol version simply changed from one version to the next and there's no mention about that,
no useful error message, and no fallback, anywhere.
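[The usual fix for that one is to pin the SMB protocol version explicitly in the mount options; a minimal sketch, assuming cifs-utils is installed and that the share name, mount point, username and version below are placeholders:
    sudo mount -t cifs //server/share /mnt/share -o username=me,vers=2.1
    # or the equivalent /etc/fstab line:
    # //server/share  /mnt/share  cifs  username=me,vers=2.1  0  0
]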
Desktop compositing is much, much slower and laggier than on Windows with exactly the same machine, graphics card, and official
NVIDIA drivers (verified to be working and in use). I mean, REALLY slow. Like 10 FPS. Dragged windows lag visibly behind mouse cursor.
OpenGL is extremely slow. 12 FPS on Linux, 20 FPS on Windows, exactly same machine and test (WebGL Aquarium, browser doesn't matter).
Lots of obscure character set problems when mounting network shares, too many details to mention.
Some apps don't "see" network shares mounted in certain ways. For example, FreeFileSync simply doesn't list
SMB shares mounted via the "Files" app, which makes it unusable except if you have mount -t cifs and fstab voodoo
(which aunt mary doesn't have).
From /u/MaxPayneNoir on reddit 3/2018:
And this is exactly why Linux desktop share is still ~2-3% (and not because it doesn't come preinstalled on laptops,
as Torvalds instead assessed: ChromeOS is an already popular Linux only because it "just works").
Not that Linux doesn't work, it works perfectly (significantly less troublesome than Windows and macOS, efficient,
lightweight, secure, performant, versatile, free, portable, privacy-keeping and well documented), but you need to
learn how to use it. And relying on GUI stuff only is not the right way of using it. Linux is CLI. You may use Graphical apps
all the day long, and that's perfectly fine, but system administration, configuration, maintenance, and troubleshooting
requires you to type commands in a terminal or on a virtual console. And most people don't like the idea (or are too afraid)
of getting their hands dirty on terminals.
Here lies the explanation for the fact that all the people I know who attempted Linux (even ~10 engineering, physics,
IT, and computer science students forced to install it by their university), except a single guy, dropped it after a while.
However if you bear it for the first 6 months you'll get accustomed to it, start appreciating it better and see reality for
what it is, and probably never look back.
From /u/theth1rdchild on reddit 4/2018:
I've been using Windows since I was 4 in 1993. We had a Windows 3.1 box. I've worked in IT for a decade and I still do,
but I have next to zero Linux experience.
How ... how does anyone do this? I tried to install Ubuntu server 16.04 raid 1 and every single step from partitioning
on required googling and a restart of the entire process. I tried for eight hours just to get a bootable system on
raid 1 and things just kept going wrong. Half the information I was looking up contradicted itself, documentation
is incomplete and advice is anecdotal and missing important information. Screw it, I thought. I'll install desktop
and get used to it before doing crazy stuff. Raid 1 was kind of a nice but not necessary thing. Surely a regular
desktop install will allow me to learn and I can try again in a few months.
But holy sh*t, every single thing I want to do that would be as simple as "Google thing I want, Grab newest version
from their website, Install or launch the exe" in Windows is a tedious stress-inducing headache in Linux.
As example: Google for a program to show sensor output like temperatures. Open hardware monitor looks cool.
Oh it has dependencies. I don't know what mono is. Will it take up a lot of space or break anything else?
Sh*t, I don't know. Oh, this forum post has another person trying to learn Linux and he wanted to use this program.
Everyone is being rude to him. Oh, Linux can't interface with open hardware monitor very well.
Why the f*ck was it the first answer on Google? There's no hardware sensor app like hwinfo for Linux?
Okay, I'll search the Ubuntu apps for a temp sensor at least. There's only one. The only notes say that it
needs something assigned in terminal to work. Why the f*ck doesn't the installer do that? Oh well, now I typed
what it said to in terminal and it didn't take. I don't understand why. Oh, the official page on the app is
misspelled for this command and I copied it directly. Okay, FINALLY I have a temperature sensor. And it doesn't
display anything beyond the current core temp. Great.
As opposed to: "Google temp sensor. Find speedfan or hwinfo. Install. It runs."
Is the problem me? Is my windows brain just too stuck in a rut to understand why all this tedious BS is necessary?
I think at the least I need a decent explanation of why these are so different so I can maybe understand
and work within my limitations better. Any guides I've followed are very straightforward "do ___ then do ___"
so I haven't really learned anything about why Linux is the way it is, which seems necessary to functioning in it.
Thanks to anyone who read all that and can help.
From /u/zincpl on reddit 4/2018:
I just had to set up Linux on my new machine for work. It took 4 different versions before one would actually install,
and then it started booting to a blank screen when I installed the software I needed. It took me 2 days of non-stop frustration,
but now I can finally do something productive.
Basically IMO Linux shouldn't be compared with Windows or Mac, it's made by engineers for engineers,
it's not designed to be user-friendly, rather it's designed to give power to the user and assumes the user knows what they're doing.
It really sucks that there isn't really anything between over-priced and underpowered macs with *nix power
and free-but-held-together-with-duct-tape linux.
From someone on reddit 5/2018:
So I stopped using [pirated] Windows a year ago since it was problematic. Buying is not an option.
So I switched to Linux since it was free, open source, and I am a Science student so I thought it would be pretty useful.
A year has passed and I am still a noob (was very busy with my exams already, learning Linux would have been a burden).
I have a Dell Inspiron Laptop with Intel HD Graphics 5500, 4 GB RAM and 1TB Hard Disk.
I have been switching distros and these are the experiences so far:
Ubuntu 16.04 - Was good but it was a little slow. Plus it wouldn't detect my headphone half of the time.
Elementary OS - Was extremely slow. Took 30 minutes just to boot to login screen.
Return to Ubuntu 16.04
Switching to Ubuntu 16.04 Budgie Remix - Was good. Better than the default Unity both in looks and performance.
Ubuntu 16.04 Xubuntu - Thought this would be lightweight, so installed it. The performance was OK and the look was really bad.
Ubuntu 17.10 - tried to install. My laptop crashed. Couldn't even get past booting screen.
Switch to Ubuntu 16.04 - Performance became slower day by day.
Ubuntu 16.04 Lubuntu - thought that my laptop is low spec, so why not switch to the lightest distro?
Well, surprise, Lubuntu encountered issues. The screen flickered often, especially when coming out of suspend.
Finally, now I am in Linux Mint 18.3 Sylvia - The performance is OKish, lags sometimes, hangs out of nowhere.
I will not talk about gaming experience, but in short it is awful.
So, those of you who are new to Linux, this is my message: be cautious before installing Linux and understand Linux very carefully.
Linux, as an interface for personal use, is terrible.
Some advice: Slow down on switching distros, and find out where your performance bottleneck is by looking at your system usage.
It could be the drivers you're using, or applications that aren't properly optimized to run on your OS.
Dell offers some Linux driver support; look into that and see if you can replace some of the generic ones with Dell's suggestions.
Sounds like some poor configuration or hardware interaction (5400 rpm disk?)
The slowdowns and hangs are probably something to do with the disk. At a guess is it made by Seagate?
They just love to stall for ages.
The other obvious hang is after doing a large disk write then flushing it to disk.
There are a few tunables for this. I wish the distros would fix these by default. The fix is to limit the
dirty cache relative to the performance of the disk.
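[The tunables in question are presumably the kernel's dirty-page-cache limits; a hedged example, with purely illustrative values:
    sudo sysctl vm.dirty_background_bytes=16777216    # start background writeback at ~16 MB of dirty data
    sudo sysctl vm.dirty_bytes=50331648               # block writers once ~48 MB is dirty
    # to persist, put the same key=value lines in /etc/sysctl.conf or a file under /etc/sysctl.d/
]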
Your problems are originating from "Intel HD Graphics 5500"
From people on reddit 6/2018:
Re: Windows vs Linux:
Over the course of the past ten years, I have tried Ubuntu on three separate occasions, on three separate laptops.
Each time, I ended up going back to Windows because I couldn't get Wi-Fi to work.
Linux is great if you're a dev. I've found that it hits hiccups any time you are trying to do something a bit more consumer oriented,
and have to interact with the world of Windows and macOS systems, as well as proprietary software.
Linux was also so customizable, and you could set up some pretty impressive desktop environments, however if something went sideways
it would be quite a bit of work to get it sorted. ...
Windows just works.
Linux is buggy and unstable, regardless of what people say (I'd rather use macOS over Linux everyday).
I've tried Linux multiple times and distros and it never takes more than a day to find a major bug in the system or a problem with software.
It depends heavily on the hardware, just like Windows. It's also heavily distro and version specific. I haven't been able to get Fedora
to boot from USB without failing in 10 years, but Ubuntu runs every time.
Laptops are another issue ... if you want a Linux laptop, you're best off buying one from System76, Pine64, or Dell/HP with Ubuntu pre-installed.
Wireless support has always been iffy if you try to install it on a laptop that was designed for Windows.
As someone who uses Ubuntu quite often - The Non-LTS releases are effectively betas. The newer, bleeding-edge ones are there
for those who want them, but you're a lot more likely to find bugs outside of the LTS release.
[Currently with LTS] the Ubuntu Store doesn't even work, a major bug that was reported on their channels.
I used to use Linux as my main OS. What happened is that I found I was needing to go into Windows more and more because of the lack
of support of programs and hardware I needed for working which became a much more present issue in my life as I got older and
spent less time casual computing. There were a lot of alternative software options for Linux but I found most of them to be
unpolished and buggy. If you're okay and enjoy the whole troubleshooting aspect, then Linux might be right up your alley.
I got to a point where I just wanted everything to work though and spend less time trying to make it work myself.
From someone on reddit 6/2018:
Linux can often break more frequently than Windows - no one likes to hear this and I'm sure people will say the problem is the user etc etc.
Especially for rolling-release distros, or for point releases when you do e.g. dist-upgrade, and other times with just regular updates, things can break.
With Linux it then becomes a cycle of 'hope you can find the answer on google, try it in terminal, see if it's fixed, try something else'
unless you are an expert. This is because of package dependencies in Linux: if you break one, others break too.
Often you need to compile from source, etc.
Windows has its own version of dll hell, but each program gets its own dependencies managed via WinSxs so you can't get global breakage
due to a package. People will tell you that Windows Updates can cause problems but that's really rare - they can be slow though.
You get all the benefits of open source, choice, no ads etc but let's dispel a myth - Linux isn't any more performant or stable than Windows 10.
Windows is rock solid stable, supports every hw ever made and is very fast. It also has better battery life (I've tried both powertop and tlp).
From someone on reddit 11/2018:
I love Ubuntu, but have no more time to resolve the endless bugs it creates.
I adore Linux (Lubuntu is my current distro of choice) and have been using
it for more than ten years. It has taught me a ton about how computers work
and even created some professional opportunities for me writing about tech.
But as an increasingly busy small-business owner, I no longer have an hour
a day to spare sifting through the endless amount of bugs that the OS throws
up and am reluctantly about to switch to Windows. I love customization,
but at this point in my life I also need something that just works and
doesn't impair my productivity.
This week alone:
Oracle Virtualbox has become basically unusable for me. I launch it
and just see a black box basically. Some weird theme-related bug that
even the good folks over on AskUbuntu have been unable to help me resolve.
I don't trust VMWare not to randomly break down again, as it has done in the past,
so I like keeping a VM on Virtualbox as a backup. Now I've zero backup
and there's a good chance that I won't be able to run a Windows VM at all
at some point in the near future.
Simple Screen Recorder no longer works. The Continue button is missing.
I spent hours trying to install the very-latest version only to continuously
run into problems compiling the package with Cmake.
Shutter has taken to not starting on system launch and occasionally crashing the system.
Pulseaudio has mysteriously decided to stop recognizing Chrome as an output stream,
meaning that although I can connect my Bluetooth headset through Bluetooth Manager,
I can't switch audio over to it - at least with this GUI.
Autokey has been great except when I try to add a new Unicode-based phrase,
which crashes the whole system. I've wasted hours trying to come up with
workarounds and attempting to debug with people on its users' Google Group.
I'm certain that there are a few more. And that if I knew more about Linux,
or had more time to devote to resolving these issues, that I could fix some
of the above. But I don't feel like I should have to.
Why do things have to be like this? It occurred to me yesterday that I would
be more than happy to pay an annual subscription to a service that both guaranteed
a level of customization that neither Windows nor MacOS offers, but also had some
inherent stability so that bugs like this aren't par for the course. I'm not a poor
student any more. But I still love Linux and the philosophy that underpins it.
Or perhaps asking for both stability and what we love about Ubuntu is chasing after the impossible.
From /u/deadbunny on reddit 11/2018:
... the Mint devs do many things badly.
Rather than type out a long reply here is a Debian dev explaining it:
"Linux Mint is generally very bad when it comes to security and quality.
First of all, they don't issue any Security Advisories, so their users cannot - unlike users of most
other mainstream distributions - quickly look up whether they are affected by a certain CVE.
Secondly, they are mixing their own binary packages with binary packages from Debian and Ubuntu
without rebuilding the latter. This creates something that we in Debian call a "FrankenDebian"
which results in system updates becoming unpredictable. With the result, that the Mint developers
simply decided to blacklist certain packages from upgrades by default thus putting their users
at risk because important security updates may not be installed.
Thirdly, while they import packages from Ubuntu or Debian, they hi-jack package and binary names
by re-using existing names. For example, they called their fork of gdm2 "mdm" which supposedly
means "Mint Display Manager". However, the problem is that there already is a package "mdm" in Debian
which are "Utilities for single-host parallel shell scripting". Thus, on Mint, the original
"mdm" package cannot be installed.
Another example of such a hi-jack are their new "X apps" which are supposed to deliver common apps
for all desktops which are available on Linux Mint. Their first app of this collection is an editor
which they forked off the Mate editor "pluma". And they called it "xedit", ignoring the fact that
there already is an "xedit", making the old "xedit" unusable by hi-jacking its namespace.
Add to that, that they do not care about copyright and license issues and just ship their ISOs
with pre-installed Oracle Java and Adobe Flash packages and several multimedia codec packages
which infringe patents and may therefore not be distributed freely at all in countries like the US.
The Mint developers do not deliver professional work. Their distribution is more a crude hack of existing
Debian-based distributions. They make fundamental mistakes and put their users at risk, both in
the sense of data security as well as licensing issues.
I would therefore highly discourage anyone using Linux Mint until Mint developers have changed
their fundamental philosophy and resolved these issues."
Read the comments for more fun examples of how bad the Mint dev team are.
If you want to run a Debian-based system, run Debian or Ubuntu.
Edit: No they have not resolved any of these issues in the last few years since this was posted.
The main issue is that Mint doesn't care about security. To quote glaubitz again:
"On Debian, I open up Google and type "Debian CVE-2015-7547" and I am immediately
presented with a website which shows me which versions of Debian are affected by the
recent glibc vulnerability and which are not. You cannot do that on Linux Mint which
therefore disqualifies itself for any professional use."
Due to the frankendebian issue mentioned in my previous post and the fact that Mint uses
Debian-compiled packages (they don't compile them themselves), they are reliant on Debian
for any and all security fixes. If their frankendebian isn't compatible with the
security patches made by debian (due to dependency issues) then you have to wait
for Clem et al. to actually patch it themselves. Given their history of rejecting patches
and their general security stance I don't have any faith in them to actually do things properly.
Mint also blacklists packages from updates, which means they won't get patched if there is a
security update for them. While there is an option buried within Mint to allow these
to update, this is not something a noob would be doing. This means your system could be
vulnerable even when you think it's fully patched. That is unacceptable.
Mint's selling point is its ease of use; unfortunately that ease of use comes from
the devs having a willful disregard for licensing issues. They ship their ISO files
with pre-installed Adobe Flash, Oracle Java packages as well as multimedia codecs (which
people want) which violate intellectual copyrights and patents. Unless the maintainers of
a distribution want to violate copyright laws intentionally and make themselves attractive
targets for lawyers, there is nothing they can do to alleviate that. Debian and others
aren't withholding those packages because they want to make life hard for their users;
it's because they cannot ship them, legally speaking.
(This is the reason Debian forked Firefox and Thunderbird and distributed them as Iceweasel/Icedove.)
In this respect Ubuntu actually has licencing agreements which allow them to distribute third-party
software through their official third party repos without violating the license terms of the software.
There's a class of reasons that I dislike Ubuntu specifically. Ubuntu has at least three completely
different installers, all of which use different sets of preseed commands. Documentation for
Canonical's own installers is pretty bad. Automating Ubuntu installs for a large environment
can be difficult, as a result. I think Canonical is a bad community member, with a history of
competing with the community rather than contributing. They repeatedly offer applications
which aren't as well supported as an application developed by the broader community, and then
after a few years, shut it down. (Examples: Mir, Unity, bzr, probably snaps). If I build something
new on top of a solution from Canonical, I'm probably going to have to rebuild it from scratch
in a few years' time. Partially as a result, if you look at contributions to almost any major
software project for GNU/Linux, Canonical is either very small, or absent completely. They're
more of a consumer of Free Software than they are a contributor.
Lots of people say that closing the lid of a laptop to make it sleep, and
opening to revive it, doesn't work well on Linux. Seems to be a common problem.
Apparently there is a long-standing problem with Linux reacting VERY badly to "RAM is nearly full":
Apparently there is a long-standing problem with Ubuntu and the ~/.Xauthority file that results in people unable to login.
From /u/TheChosenLAN on reddit 2/2020:
I was a full time Linux user for over a year (even bought a Dell Precision 5530 with Ubuntu preinstalled,
to support the movement and have HW fully supported by Linux). But after all that time I had to
go back to Windows. I just got really tired of tinkering around with my system. I just want something
that is standardized and works fully out of the box.
On the effing Laptop, those were some gripes I had:
I used my machine as my primary machine at work. That means I often needed to go to meetings with the machine,
internally and externally. Just take your Laptop, close it and throw it into a backpack, right? Noooo.
Suspend doesn't work reliably, even after countless tweaks from the internet. Often times it initially
appeared as if it was fully suspended, but after a couple of minutes it turned itself back on again.
In my backpack. In a sleeve. It overheated so bad, I couldn't touch it for more than a fraction of a second
without burning my fingers.
Wanna watch a movie over BT speakers? Well, sometimes BT worked, sometimes not. Oh and also it liked to turn
the screen off after the normal timeout, despite me watching the movie in full screen on VLC.
Wanna have proper RAW image previews in the file manager? Download this package from the official repo.
Oh, it's broken and crashes every time. Just manually install a newer version from a ppa and manually configure
the thumbnailer service.
Oh you tried to use more RAM than the System has? Apparently the OOM manager in the kernel is buggy
and without additional software like earlyoom, your system will just come to a screeching halt.
Oh hey, you wanna use more up to date packages than those from the standard repos? Either use a rolling release
distro, which might break more often (I tried to boot manjaro and it crashed on hecking boot, even with
failsafe graphics enabled). Or use a more recent version of (for instance) Ubuntu, which comes with Gnome
which doesn't work anywhere as smoothly as Unity7 and also has random crashes for some reason.
Oh, and the touchpad tapping doesn't register as reliably as on Ubuntu 16.04 on any other distro,
even Ubuntu ones. Oh and don't get me started on KDE. I don't like its default appearance and I don't want
to spend a couple of hours tinkering with my system to not have it look like trash.
Use a standard Logitech MX Master on your laptop over BT? No problem works fine. For 5 months.
And then it suddenly starts lagging extremely with no remedy besides reinstalling the entire OS (or maybe
debugging it for a couple of days but I don't have time for that or any interest).
Let's try out ElementaryOS since aesthetics of my system is important to me and it looks promising.
Boot the live CD. Whoopsie, when I click 'reboot' nothing happens and when I click 'shutdown' I get
a kernel panic. The heck?
Copy lots of large files to a USB drive. The progress bar moves instantly to 99% and you
have no clue, how long it is going to take. Also, the progress bar may reach 100% and the file manager
say "operation completed", but when I try to safely eject the USB drive it takes another 10 minutes!
before I can unplug it since the data has actually only been copied 10%. During that time, a lot of
file operations will be painfully slow or just not begin at all, since the disk scheduler or what
do I know is pinned 100%. Apparently it's an issue with the buffer sizes or something. So just copy
some configuration options into your sysctl.conf and now it actually works. But now some copy operations
take waaay longer than before (even accounting for the 10 minutes additional waiting time) and
always more time than on Windows.
Don't get me wrong. I still have a soft spot for Linux and think it is promising. And I fully understand and
support you, if you are running it on your own systems. I love it as a software development platform.
But I'm just tired of tinkering with my system and just want it to work. While having it not look like trash
and have recent up-to-date software available and being stable. Windows has its own slew of issues,
but none of them are so nagging as the Linux ones. At least for me.
Disclaimer: Two months ago I had to swap my mainboard, because apparently the Intel GPU was defective
(maybe an effect from the heat incident, who knows) and Windows (I was dual-booting at that stage) kept
crashing because of it. So it may very well be possible that some of those issues appeared because
I had a bad mainboard. But the thing is, I only discovered it because Windows clearly stated the
crashing module in the BSOD - so I very quickly found the culprit. I have no idea if I would have
found the source if I stayed purely with Linux.
Some flaws in Linux [I omitted some items which are outdated IMO]:
Lack of video game support.
No error feedback: "When you run a program through the panel or start menu in Linux and it fails for some reason,
you are not notified at all. You have to run it through the terminal if you want to see the error messages ..."
Software installation: "packages - of which there are many variants, all incompatible with each other."
No actual firewall: "In Linux, any application can connect whenever and wherever it wants to, while you are none the wiser.
... Windows has had better firewalls such as ZoneAlarm for a very long time ..."
And of course, choosing a distro is a struggle in itself that Windows users don't have to deal with.
[In email 10/2019:
Linux security is pretty much an illusion. An application can do what
it wants in the folders it has permissions in - which usually is your
whole home folder. Many distros run sshd by default on startup which
allows any shmuck to try to crack your password. And some distros have
really weak default passwords for root, which presents a real
danger. I actually had it happen recently; I guess I didn't even
realize the root user is enabled on Slackel. Why the f*ck would you have
sshd on by default though? It provides nothing but an entry point for hackers.
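[To check whether sshd is running and turn it off on a systemd-based distro (the unit is named "ssh" on Debian/Ubuntu, "sshd" on many others):
    systemctl status ssh
    sudo systemctl disable --now ssh
]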
My article is old and since writing it I've had way more stuff to add in:
GTK3 apps don't look the same as the GTK2 ones. Add Qt on top of
that and you've got three different looks. Terrible.
Editing the bootloader is a nightmare.
PS Vita [Sony PlayStation Vita] does not properly work with Linux.
Neither does Nintendo Switch ...
Certain applications use different save dialogs than the system-wide
one. Which means your bookmarks will be ignored (I think it's the
GTK3 vs GTK2 issue again, but not sure).
Many, many more. Windows is still worse though, so whatever. Not that
it justifies this stuff, just shows how much of a swamp we're in.
From /u/DistroHopper101 on reddit 8/2020:
I went from macOS to Ubuntu last year, then came back to macOS. This year I gave a chance
to Arch Linux and loved it! Stayed with it for about 4 or 5 months then ... I came back to macOS.
When it comes to specific desktop usage some things are really off on Linux.
Some points that made me switch back:
The lack of good (imo) proprietary software. Apps like Devonthink, Banktivity, Omnifocus and
Logic are really polished and unfortunately mac-only. Office apps and Adobe Suite are a huge deal too.
This won't affect you if you don't care for non-FOSS apps.
Xorg is a hot mess. Remapping keyboards in linux is HELL! The following is based on
personal experience: I've spent a whole week and a few more days learning about setxkbmap,
xkb, xcape and xmodmap. Set up everything? Good! Want to switch from X to Wayland?
Goodbye to all your cool keyboard hacks that you spent hours (maybe days?) programming.
Nothing I know of is even close to Karabiner Elements. It actually came to the point that
my keyboard workflow on macos is way more productive than using i3/awesomewm.
Example: my Caps Lock is a 3-mode modifier (Esc when I tap and release, command + control
when I hold Caps Lock). This in combination with any other key becomes any function you want,
and when I double-tap it and hold it becomes Control (very useful in vim, since this
would be Esc and Control instantly). Regular keys can act as modifiers without disrupting
the standard function key. e.g: If I press "S" key it outputs the "S" letter but if I hold
it and press h,j,k,l it controls arrow keys. "D" letter in combination with h,j,k,l becomes
my mouse cursor. Sane way of making custom deadkeys for accessing common characters used
when I'm programming, and the list goes on.
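[For the simpler cases, plain X11 options cover some of this without extra tools; e.g. a minimal sketch making Caps Lock act as Escape (X only, and it does not carry over to Wayland):
    setxkbmap -option caps:escape
    # or make Caps Lock an extra Ctrl key:
    setxkbmap -option ctrl:nocaps
The multi-mode keys described above would still need something like xcape layered on top.]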
GUI fragmentation. I use the terminal a lot but Apple set really good Human Interface Guidelines
for graphical applications. It makes the experience of using macOS more polished than the constant
battle of GTK vs QT on Linux.
macOS has a great integration with iOS. I hate this phrase but things just work.
Arguments in favor of snap (and similar) packaging:
Shift burden of packaging work from many distro packagers / repo maintainers to one app packager/dev.
Especially valuable for large and frequently-updated apps such as browsers,
and large app suites such as Office suites.
More direct connection between users and app developers.
No longer a distro builder/maintainer between them.
Single source for software (Snap Store), although that can be bypassed if you wish.
More familiar to new users who are used to single app/extension Stores in Android,
Apple, Chrome, Firefox, Burp Suite, VSCode, GNOME desktop, Thunderbird, more.
When installing a deb, any scripts provided by the app dev run as root and unrestricted.
When installing a snap, only snapd is running as root, any scripts from app dev are
running non-root and contained.
A user who does not have root privileges can install a snap but not a deb.
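As an illustration of the deb-scripts point: you can inspect what a deb would run as root before installing it. A minimal sketch; "somepackage.deb" is a placeholder for a locally downloaded package:
    dpkg-deb --control somepackage.deb ./control-dir    # extract the control area, including any maintainer scripts
    cat ./control-dir/postinst                          # postinst (if present) runs as root during installation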
Flatpaks, Snaps and Appimages solve the "dependency hell" issue by packaging all required dependencies with
the application itself in a separate environment. This solves an increasingly serious problem (inability to install
and run some applications) with another one -- an application's download and storage size and startup time go up.
By contrast, an application installed from the normal repositories must find all its dependencies (right
version and properties) in the installed libraries, which unfortunately is a declining prospect in modern times.
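To see the difference concretely, compare how the two formats resolve dependencies; a hedged example, assuming VLC happens to be installed both as a deb and as a snap:
    apt-cache depends vlc        # native package: dependencies are shared system libraries
    snap info vlc                # snap: everything it needs ships inside one squashfs image
    du -sh /snap/vlc/current     # the bundled image is correspondingly larger on disk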
From someone on reddit:
> What is the potential of snaps? What does it do better than apt?
Snaps are a great way to isolate the program you are executing from the rest of the system.
So the main idea behind Snaps is security and ease of install (distro-agnostic), as .deb based
programs (and many others like it) are able to access the entire disk (with read-only permission),
which can create a lot of security breaches in the system overall. With Snaps you are able to control
what the software can read/write, what kind of hardware it can access (i.e. webcam or a microphone)
and a lot of other options.
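[The control the quote describes is exposed through snap "interfaces"; a hedged example, assuming a snap named vlc that has a camera plug (the available plugs differ per snap):
    snap connections vlc            # list which interfaces are connected
    sudo snap disconnect vlc:camera
    sudo snap connect vlc:camera
]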
From someone on reddit:
"snaps are compressed, and are not uncompressed for installation -- certain snaps actually are
smaller than their installed deb-packaged counterparts"
From /u/timrichardson on reddit 1/2020:
Once, people said the GUI applications were way too full of bloat. And before that, people despised compilers;
hand-crafted assembly language is smaller and faster. The history of coding is to trade off memory and disk space
for more efficient use of humans; it's the history of the algorithms we use and the tools we use,
it's the reason for layer upon layer of abstraction that lets humans steer modern computers.
Like the arrow of time, this is a one-way trend, but unlike time, it doesn't just happen,
it happens because it saves the valuable time of the creators: the coders, the packagers.
Snaps and flatpaks are another example of this. The less time wasted repackaging apps for a
million different distributions, the more apps we all get. When you've got 2% market share of a stagnant
technology (desktop computing), you should grasp at all the help you can get, if you want to see it
survive and maybe even thrive.
And by the way, the binary debs you are used to are not targeted or optimised for your hardware,
they target a lower common denominator. The difference can be significant, look how fast Clear Linux is.
Maybe you should swap to Gentoo. My point is that you already accept bloat and performance hits in
the name of convenience, you are used to it so you don't notice. But traditional packaging is an
old technology; is it so surprising that there are new ideas?
From /u/10cmToGlory on reddit 2/2019:
The snap experience is bad, and is increasingly required for Ubuntu
As the title says. The overall user experience with snaps is very, very poor.
I have several apps that won't start when installed as snaps, others that run weird,
and none run well or fast. I have yet to see a snap with a start up time that
I would call "responsive". Furthermore the isolation is detrimental to the user experience.
A few examples:
Firefox now can't open a PDF in a window when installed as a snap on Ubuntu 18.04 or 18.10.
The "open file" dialog doesn't work. The downloads path goes to the snap container.
Stuff that I don't need isolated, like GNOME calculator, is isolated. Why do I care?
Because as a snap it takes forever to start, and the calculator I'd really like to have start quickly.
Other snaps like simplenote take so long to open I often wonder if they crashed.
Many snaps just won't open, or stop opening for a plethora of reasons.
Notables include bitwarden, vscode (worked then stopped, thanks to the next point),
mailspring, the list goes on.
The auto-updating is the worst thing ever. Ever. On a linux system I can disable
auto-updates for just about everything EXCEPT snaps. Why do I care? Well, one day,
the day before a deadline, I sat down to do some work, only to find that vscode
wouldn't open. A bug was introduced that caused it to fail to open, somehow.
As the snap auto-updated, I was dead in the water until I was able to remove it and
install it via apt (which solved the problem and many others). That little auto-update
caused me several hundred dollars in lost revenue that day.
Daemons have to be started and stopped via the snap and not systemd. This is a terrible design choice,
making me have to change my tooling to support it for daemons (which I'm not going to do, by the way).
A great example of that is Ansible - until very recently there was no support for snaps.
Logging is a nightmare. Of course all the logs are now isolated too, because for some reason
making everyone change where to look for help when something is not working just sounds like
a good idea. As if it's not enough that we have to deal with binary systemd logs,
now we get to drill into individual snaps to look for them.
Most system tools are not prepared for containerization, and make system administration
much more difficult. A great example is mount. Now we get to see every piece of software
installed on the system when we run mount. Awesome, just what I wanted. This is just one example of many.
Snaps are slowing down my system overall, especially shutdown. Thanks to its poor design,
there are multiple known issues with snaps and lxd, for example, shutting down running containers.
This is just one of many that makes me have to force shutdown my machine daily.
Creating a snap as a developer is difficult and the documentation is poor. You have to use a
Ubuntu 16.04 image to create your snap, which alone makes it unacceptable. I found myself
in dependency hell trying to snap package some software that used several newer libraries
than what Ubuntu 16.04 had on offer. The YAML file documentation is laughably bad,
and the process so obtuse that I simply gave up, as it just wasn't worth the effort.
This is just the short list, using mostly anecdotes. I won't waste my time compiling a more
extensive list, as I feel like the folks at Canonical should have done some basic testing
long ago and realized that this isn't a product ready for prime time.
As for Ubuntu in general, I'm at a crossroads. I won't waste any more time with snaps,
I just can't afford to and this machine isn't a toy or a hobby. It seems that removing
snaps altogether from a Ubuntu system is becoming more and more difficult by the day,
which is very distressing. I fear that I may have to abandon Ubuntu for a distro that
makes decisions that are more in line with what a professional software developer who
makes their living with these machines requires.
From /u/HonestIncompetence on reddit:
IMHO that's one of several good reasons to use Linux Mint rather than Ubuntu.
No snaps at all, flatpaks supported but none installed out of the box.
From /u/MindlessLeadership on reddit 10/2019:
... issues with Snap as a Fedora user.
The only "source" for Snaps, the Snap store, is closed-source and controlled by a commercial entity,
Canonical. Sure, the client and protocol are open source, but the API is unstable and the repository url
is set at build-time. Even a Canonical employee admitted at Flock that it was impractical to build another source right now.
Snap relies on many Ubuntu-isms; it's obvious it was never originally made as a cross-distro package format.
It's annoying to see it advertised as a cross-distro package format, when as a Fedora user, I can tell
you Snap does not work nicely with Fedora (it has improved somewhat in the last year), with SELinux issues etc.
At one point running Snap would make the computer nearly freeze up because the SELinux log would be getting flooded.
It also relies on systemd, although that itself isn't an issue but it raises design questions.
Similar to above, snapcraft only runs on Ubuntu. So you have to use Ubuntu to build a Snap.
/snap and ~/snap. If you don't do the former, you can't run 'classical snaps'. This not only violates the FHS,
but doesn't work when / is RO such as under OStree systems such as Silverblue.
The reliance on snapd and on loopback mounting. I don't really like df showing a line for each
application/runtime installed, even if it's not running, and the entire thing of at-boot needing to mount
potentially dozens of loopback files for my applications seems like a massive hack. A recent kernel update
on Fedora broke the way Snap mounts loopback files (although it was fixed). Snaps were also broken
because Fedora moved to cgroups2.
Since they're squashfs images, you can't modify them if you don't have the snapcraft file.
Flatpak as a comparison, stores files you can edit in /var/lib/flatpak.
If I wanted to use Ubuntu to run my applications (Snap uses an Ubuntu image), I would use Ubuntu.
snapd needs to run in the background to run/install/update/delete Snaps. This seems like a backwards
design choice compared to rpm and Flatpak, which elevate permissions where needed via polkit.
Canonical don't seem very interested in addressing any of these, which raises the question of whether it's to help
the "Linux desktop world" or just push Canonical/Ubuntu.
From /u/schallflo on reddit 10/2019:
Does not allow third-party repositories (so only Canonical's own store can be used).
[But you could download snaps manually and install with --dangerous. Someone said also you
could download and then "sudo snap ack yoursnap.assert; sudo snap install yoursnap.snap".]
Only has Ubuntu base images, so every developer has to build on Ubuntu.
Forces automatic updates (even on metered connections).
Depends on a proprietary server run by Canonical.
Relies on AppArmor for app isolation (rather than using cgroups and namespaces like everyone else),
which is incompatible with most Linux distributions, yet it keeps advertising itself as a
cross-distribution package format.
It's a bloated sandbox, tied to a proprietary app store, they've gone out of their way to make it as
difficult as possible to disable automatic updates, so now trust in all developers is mandatory.
Canonical is dismissive toward arguments against the update thing; they took the store proprietary,
and the excuse they offered was "nobody was contributing so we closed the source." Excuse me?
And all the while, they're trying to push vendors to use this thing, which means I am stuck with it.
And I'm stuck with the distro because they've got the market share, and that means this is the
distro with official vendor support for d*mn near everything.
From people on reddit 3/2020:
Snap is pretty much hard-wired not only to Ubuntu, but also to Canonical.
Snap can only use one repository at a time, and if it is not Canonical's,
users will miss most of the packages. ... Also, some snap packages simply assume that the DE is Gnome 3.
... currently Snap (on the server side I think) is not yet open-source.
I think also you get updates on the developer's schedule. So suppose some horrible security hole is
found in library X. Each snap (and flatpak and appimage) in your system may have its own copy of
library X. You can't update one copy (package) of library X and know that the issue has been handled.
[I'm told that flatpak allows sharing of libraries, if the developer sets that up explicitly, maybe
in a case such as N flatpak apps from the same vendor.]
[But see Drew DeVault's "Dynamic linking" (not about snaps).]
How is RAM consumption affected ? If I have 10 snaps that all have version N of a library, I'm told the kernel
will see that and share the same RAM for that library.
Suppose all 10 have SLIGHTLY different versions of that library, point-releases ?
Many people complain that Snaps are slow to launch. Explanation paraphrased from /u/zebediah49:
"Has to create a mount-point and mount up a filesystem, load up everything relevant from
it -- and since it's a new filesystem, we've effectively nuked our cache -- and then
start the application. In contrast to normal, where you just open the application,
and use any shared objects that already were cached or loaded."
Also see Daniel Aleksandersen's "Firefox contained in Flatpak vs Snap comparison".
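[You can see the per-snap loopback mounts that explanation refers to on any system with snaps installed:
    findmnt -t squashfs      # one squashfs mount per installed snap revision
    losetup -a               # the loop devices backing them
]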
From people on reddit 4/2020 - 6/2020:
closed-source server component.
hard-coded canonical repos.
limited control over updates.
ubuntu pushes it in situations users feel it isn't useful (some default apps are snaps,
apt can install snaps without the user noticing).
a few technical issues, like long startup time when launching an app for the first time (I've even
seen cases where the app didn't launch at all the first time), theming issues, a too-restrictive sandbox, etc.
you can't move or rename ~/snap.
there are some security functions such as limiting which directories the snaps can access, and with
development tools, having to redo your directory structures to accommodate draconian hard-coded restrictions is a PITA.
it is entirely within the control of canonical / Ubuntu with the snapcraft store being the
only place to distribute snap packages.
[But you could download snaps manually from anywhere and install them with --dangerous.]
it creates a bunch of virtual storage devices, which clutters up device and mount-point listings,
and maybe slows booting.
bloats system with unnecessary duplicates of dependencies both on disk and in RAM.
snap allows designation of only one repo for all snaps; you can't list multiple.
some people say snap introduces yet another variable into "why doesn't app X use the system theme ?"
snaps won't function if the /home directory is remoted in certain common ways.
snapd requires AppArmor [true], won't work under SELinux [means "SELinux alone, without AppArmor" ?].
all snap-packaged programs have horrible locale support.
snap software doesn't work with Input Method. That alone makes snap totally useless
for me as I cannot input my native language, Japanese, to the snap-packaged software.
4/2020 I installed Ubuntu 20.04 GNOME, and decided to let it use snaps as it wished:
Ended up with software store and 4 more snap apps in my user configuration (~/snap), and
a dozen more for all users (/snap). They seem to work okay, with one big exception:
when a snap app needs to launch or touch another app (Liferea launching any downloader,
or VSCode opening a link in Firefox).
This either fails (Liferea case), or works oddly (VSCode opens new FF process instead
of opening a new tab in existing FF process). But: KeePassXC is a snap app, and has no problem opening
a link in existing Firefox process. [Later someone said: VSCode is specifying profile "default", so
if you've changed to another profile, FF has to open another process. Let it open FF, then set your desired
profile as the default, and next time VSCode will open link in existing FF process.]
Some people complain that Ubuntu's store app prioritizes snaps ahead of debs (re-ordering search
results to do so), and even has some
debs (Chromium) that start as a deb but then install a snap.
I'm told: Pop!_OS has adopted a no-snaps policy, Elementary OS has adopted a flatpaks-instead-of-snaps policy,
Mint has a no-snaps-by-default policy.
The dev who packaged Liferea as a snap said fixing it is complicated, just about as I was giving up on
the snap version and changing to the deb version. The deb version works.
VSCode as snap had a couple of issues: won't open a new tab in existing FF process, and seemed
to be interpreting snap version of node incorrectly (said "v15.0.0-nightly20200523a416692e93"
is less than minimum needed version 8). I gave up, uninstalled the snap version and changed to the deb version.
The node-based FF extension I was developing can't contact Tor Browser.
Removed node.js snap, and did "sudo apt install nodejs" and "sudo apt install npm".
But that didn't fix the problem.
9/2020: Changed Firefox in my system from deb to snap. Flatpak and snap have almost the same versions in
them: snap is a fraction more recent. I don't see a developer or nightly version available in either store/hub.
Apparently to get the flatpak beta you need to add a flatpak beta repo.
Did "sudo apt remove firefox", "snap install firefox", then copied profile from old place to new place, works.
One under-handed thing that Ubuntu 20 does: the deb package for Chromium browser actually
installs Chromium as a snap. IMO that's deceptive. If it's available only as a snap,
don't provide a deb package at all.
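One way to spot this kind of package before installing it (hedged; the exact wording varies by release):
    apt show chromium-browser
    # on Ubuntu 20.04 the Description field indicates it is a transitional package that installs the Chromium snap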
Changes Canonical could make to eliminate most of the objections:
Support an "update never" setting for a snap. Perhaps there could be a mechanism for
notifying that an update exists, that the update fixes security issues, and/or current version is past EOL.
Open-source the proprietary part of the Snap store software.
See some details in Merlijn Sebrechts' "Why is there only one Snap Store?"
[A user could download snaps manually from anywhere and install them with --dangerous, but that's
a bit of an ugly solution.]
From predr on Snapcraft forum 8/2020:
"Only parts missing are server code, Amazon S3 buckets, snap signing (assertions), and database APIs.
You won't find these things open-sourced in any good store, for a reason. Everything else is open source."
Response from Merlijn Sebrechts:
"Canonical's official position is that the store is currently woven into their own internal
infrastructure. Open-sourcing it would require a massive effort to untangle this and they
don't think it's worth the effort."
Have some kind of policy board overseeing the store, one that includes outside people.
Ban use of any "deb that actually installs a snap" packages. More of a distro policy issue, but snap
could state it as the preferred policy.
Allow Ubuntu system owner to set policies such as "I don't want snaps in my system"
and "prioritize apt first" in the Ubuntu Software application.
How to prevent a snap from ever being updated:
Instead of running "snap install foo", do
"snap download foo" (which writes a file named something like foo_1234.snap), then "snap install ./foo_1234.snap --dangerous".
That sideloads the snap onto your system, so that it won't get updates from the store.
(Doesn't work for "core" snap.)
Kernel: mailing-lists (multiple) only; you have to figure out who the component maintainer is,
and what the relevant mailing list is (a script that helps with this is sketched after this list).
Some projects/distros (e.g. Mint): unified bug-tracking and feature-requests and source-control (e.g. on GitHub),
but with dozens or hundreds of components, and you have to figure out right component to file against.
Many component areas are stale or inactive or placeholders.
Some projects/distros (e.g. Ubuntu): separate strategies for bug-reporting
(Launchpad, logging in with an Ubuntu One account) and feature-requests (mailing list).
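[For the kernel case above, the source tree includes a script that maps a file to its maintainers and
mailing lists, which removes some of the guesswork; a minimal sketch (the file path is just an example):
    # from the top of a kernel source tree:
    ./scripts/get_maintainer.pl -f drivers/usb/core/hub.c
    # prints the maintainers and mailing lists (e.g. linux-usb@vger.kernel.org) to send the report to
The MAINTAINERS file in the same tree has the same information in bulk.]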
A given part may have a huge "stack" and you may have to figure out exactly where to report:
Example: Pix app in Linux Mint 19 Cinnamon:
Not really in linear order, there are forks in here.
Part of Mint Cinnamon distro.
Part of Mint family.
Part of Ubuntu family.
Part of Debian family.
Part of XApps project.
Pix app is forked from gThumb app.
gThumb is part of GNOME project ?
Built on top of GTK ?
Built on top of X windowing and glibc ?
Built on top of Linux kernel.
Example: GNOME desktop in Linux Ubuntu desktop 20.04:
Not really in linear order, there are forks in here.
GNOME "Icons" extension.
In Ubuntu 20.04 distro.
Part of Ubuntu family.
Part of Debian family.
Built on top of GTK ?
Built on top of X windowing and glibc ?
Built on top of Linux kernel.
From someone on reddit:
freedesktop.org is a project which aims to reduce the fragmentation of the Linux desktop.
They work on interoperability and "host" software such as systemd and wayland. It used
to be called the X Desktop Group (XDG), but now they are killing off X11 (the "death of Xorg"
will be beneficial for the Linux desktop as a whole), so they "rebranded" themselves.
GNOME and KDE work with them.
You don't send bug reports about anything to them. You can discuss "standards" stuff,
e.g. new wayland protocols, on their mailing lists.
GTK and Clutter are GUI libraries developed by the GNOME team. Qt is a GUI library
developed by the Qt company (KDE uses it). These libraries are used by various GUIs.
Usually, the programmers using them are the ones who file bug reports about them.
My issues with Linux Mint 19 Cinnamon in particular:
Apps in the repo sometimes are ancient versions, I guess because the Ubuntu LTS repo is
being used. The custom default Mint apps such as Pix are current, I think.
Removing a USB drive is much more sensitive than in Windows; it's easy to cause a FAT*
filesystem to become "dirty". And then the Nemo file explorer doesn't report an
issue when you mount a dirty filesystem, which is VERY bad behavior.
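[About the best you can do from the CLI is check and repair the flash drive yourself before using it; a rough
sketch (the device name is just an example):
    # with the drive plugged in but not mounted:
    sudo fsck.vfat -n /dev/sdb1     # read-only check; reports whether the dirty bit is set
    sudo fsck.vfat -a /dev/sdb1     # repair automatically
And always unmount (or at least "sync") before pulling the drive.]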
In Linux Mint 19.0 Cinnamon, I had "UI freezes" (underlying OS still running) or "complete freezes" (all dead)
until I stopped using the Synaptics touchpad driver.
In Linux Mint 19.3 Cinnamon with 5.3 kernel, I'm getting occasional freezes again. Sometimes
under high load using Veracrypt (not sure if relevant) and an external disk,
Nemo will crash or the system will freeze.
In several apps, including the standard apps xed and pix, printing to a European A4 printer does not work properly.
If the document has content starting at the left edge, that edge will be cut off by the edge of the paper
when printing. The apps involved have no "margin" settings in their print dialogs.
My experience 4/2019 after using Linux Mint 19 and 19.1 Cinnamon for about 8 months:
My opinion: installing / updating / package managers is a mess:
I'm not happy about the variety of package managers and installers you have to use.
I would like to deal with only Mint's Software Manager and Update Manager apps, but I also have to deal with
Flatpak, Docker, GitHub, apt,
pip (Python), bundler (Ruby), tar, npm (Node), yarn, and more things I don't know the names of. Some of these
operate at a different level than others; I'm not sure.
Some apps (such as Atom) have different
builds (of the same release, I think) that work differently.
Updating is done in many different ways:
Through Update Manager.
Most apps that use plug-ins (e.g. Firefox, VSCode,
Burp Suite, OWASP ZAP)
update them inside the app, using some custom mechanism.
XnviewMP and Master PDF Editor check
for updates internally and then you have to download and install them separately (not through Update Manager).
GNOME shell checks
for extension updates and then you have to download and install them
from the extensions site through the GNOME shell browser extension.
"Oh My Zsh" and npm check and update
themselves at the CLI.
Foxit Reader and Thunderbird seem to check and apply updates in a custom way.
Snap checks the Snap Store for package updates four times each day (by default) and applies them automatically.
And the Ubuntu updater doesn't tell you which snaps are being updated or any details about the updates.
(Commands to see and reschedule these refreshes are sketched after this list.)
The anti-virus packages
all install cron jobs to update signatures, some (Sophos) also update
the AV app that way.
Some apps (Atom, KeePassXC, OWASP ZAP, more ?) notify you that an update
exists, but then you have to download the update yourself, or go to the home web site
and download it, or do apt-get to get it.
Some apps (Windscribe, more ?) notify you of the
existence of an update and then stop working, until you update them through Update Manager or elsewhere.
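[For the snap item above: as far as I can tell you can at least see and reschedule the automatic refreshes,
even if you can't turn them off; roughly (the timer value is just an example):
    snap refresh --time                                  # show the refresh schedule and the last/next run
    snap changes                                         # recent changes, including what auto-refreshes updated
    sudo snap set system refresh.timer=sat,07:00-09:00   # narrow refreshes to a weekly window
These are standard snapd commands, as far as I know.]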
I had hoped Linux would have a more rational
install/update situation than Windows does, but it doesn't.
Causes of this:
Cross-platform apps find it easier to roll their own internal update mechanism rather
than use the different mechanisms on Linux, Windows, macOS, wherever else they run.
Cross-distro apps that need to update their database (e.g. security apps) find it easier
to roll their own internal update mechanism rather than build and submit update packages for
each different distro family and repo.
Simple one-dev apps find it too burdensome to build and submit packages for each
different package manager type and distro family and repo.
Apps with internal add-ons and add-on stores roll their own internal update
mechanisms for add-ons.
Older apps and services built before there were updaters/stores just used cron,
and continue to use it.
I find the Nemo file explorer to be slow (19.1 is faster). Maybe my laptop has too little RAM (3 GB). I should try a
different file explorer, and I'm tempted to try a lighter distro such as Xubuntu next time
I have to do a new install (I'm thinking of buying a new laptop).
On the other hand, I reported a series of Nemo crashes (on 19) and within days a dev had fixed it and
put out a new version. You're not going to see that on Windows.
Scrollbars are too thin, and I had to try a series of hacks to get them wider (one such hack is sketched below).
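[The usual hack is a user-level GTK 3 CSS override; a sketch, where the selector and sizes are assumptions
that may vary with GTK version and theme:
    ~/.config/gtk-3.0/gtk.css :
        scrollbar slider { min-width: 15px; min-height: 15px; }
Then restart the app (or log out and back in) for it to take effect.]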
Often it's unclear where to report a bug. Is it a Mint thing, or an Ubuntu thing,
or a Debian thing ? A Cinnamon thing, a GNOME thing, a freedesktop.org thing ?
Often it's unclear where to tweak something. Is it a theme thing,
or a Cinnamon thing, or a GNOME thing, or a Mint thing ? Some apps use GTK 2,
others use GTK 3, and the config files are separate and named differently (example below).
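[To illustrate the naming split, the same kind of setting lives in a differently-named file and format for
each toolkit (the theme name is just an example):
    ~/.gtkrc-2.0 (GTK 2, gtkrc syntax):
        gtk-theme-name = "Mint-Y"
    ~/.config/gtk-3.0/settings.ini (GTK 3, ini syntax):
        [Settings]
        gtk-theme-name = Mint-Y
Neither file affects the other toolkit's apps.]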
My MP3 players don't work well with Linux Mint 19; they worked fine on Windows.
Connect via USB cable and delete a file: Linux says it's gone, the MP3 player
says it's still there. Might be related to Linux not supporting formatting in FAT16 ?
But I think it happened even before I resorted to reformatting my MP3 players
to get rid of "ghost" files.
The upgrade from Mint 19 to 19.1 was done through Update Manager, but the upgrade didn't
appear in the normal window; instead, somehow you were supposed to notice that a new item had
appeared in the Edit menu of Update Manager ! But the upgrade went smoothly.
My issues with Ubuntu 20.04 desktop GNOME in particular:
The installer is still as much of a mess as the Mint 19.0 Cinnamon installer was,
when it comes to partitioning, encryption, and swap.
I was doing the simplest case (wipe Windows, use the whole disk
for Ubuntu) and still had confusion, errors, and no idea what swap settings I was getting.
The Ubuntu GNOME desktop is primitive and limited compared to the Mint Cinnamon desktop.
My 1/2019 response to "will Linux ever reach 10% share of the installed desktop OS market ?":
To me, a big barrier to people moving to desktop Linux is the bewildering number of variations.
Hundreds of distros, a dozen ways of packaging applications (package managers, then Docker, Flatpak, Snap, AppImage, etc).
I would love to see some consolidation inside each of the major distros. For example, some way that all
the Ubuntu flavors (including Mint) could become one Ubuntu, and then at install time you pick
DE and theme and list of installed apps. Same among the other major variants (Red Hat, Arch, Slackware, Gentoo ?).
That way someone moving from Windows or Mac really would be given 6 or 8 major choices, not 50 or 200.
And app developers and hardware developers and bug-fixers would have more focus, and less
duplication of effort. Linux would get better and better.
Also, installation (partitioning and dual-booting) is a big barrier. Even with installers
that try to make it easy, it's confusing. Certain options make things happen automatically,
others require that the user specify the partitioning. I installed Mint; it wasn't clear how to get
a swap file instead of a swap partition, and if I chose an encrypted /home then I had to do the
partitioning manually, etc. And the user has to know whether they have BIOS or UEFI (a check for that,
and a post-install swap-file recipe, are sketched below).
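[For reference, once a system is installed both of those have simple CLI answers; a rough sketch (the size
and file name are just examples):
    # check whether the machine booted via UEFI or legacy BIOS:
    [ -d /sys/firmware/efi ] && echo UEFI || echo BIOS
    # create a swap file after the fact, instead of a swap partition:
    sudo fallocate -l 2G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
But a new user shouldn't have to know any of this at install time.]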