Technical Basics

(For very basics, see Basics section of "Moving to Linux" page.)


[Layers, roughly from lowest to highest:]
  • Boot medium: hard disk (HDD), solid-state drive (SSD), flash drive (USB), read-only (CD, DVD), or network.

  • Disk partition table: either part of the Master Boot Record (MBR) sector,
    or a GUID Partition Table (GPT), which typically occupies the first 34 logical blocks of the disk.

  • Computer/boot firmware:
    Legacy BIOS (the MBR sector holds the partition table and the first-stage bootloader),
    or UEFI (loads a bootloader file from a FAT32 EFI System Partition),
    or coreboot.

  • Boot manager / boot menu / bootloader: usually GRUB, but there are others (Syslinux, LILO, rEFInd, systemd-boot (formerly gummiboot), EFISTUB (built into the kernel), OpenCore (Mac), Clover (Mac), BOOTMGR (Windows), U-Boot (embedded systems), more). [Also there is a "fallback" bootloader in /boot/efi/EFI/BOOT ?] Presents a menu of available kernels/systems; the user chooses one, and the bootloader loads that image and jumps to it. At various points in here, there may be decryption of the full disk or a partition.

  • Bootsplash: Plymouth, Splashy, RHGB, XSplash, more.

  • Kernel type: mainly a matter of config parameters ? Real-time, low-latency, clustering, stripped-down, more ? Could change the process scheduling algorithm or other things. There are various named configurations (AKA "spins"): Zen, Liquorix, XanMod.

  • Kernel: handles processes, memory, more. Uses drivers to connect to hardware devices, and modules to implement filesystems and network protocols and security mechanisms and encryption and more.
    Linux kernel map
    Interactive map of Linux kernel
    Do "lsmod" to see installed modules, drivers and filesystem types.
    Do "lspci -nn -k" to see which module/driver each device is using.
    Do "ls /sys/module/MODULENAME/parameters" to see parameters defined for a module.
    Do "sudo cat /sys/module/MODULENAME/parameters/PARAMNAME" to see a parameter value.

    • Modules and Drivers:
      "ls /lib/modules/$(uname -r)/kernel/drivers/*/*.ko"
      to see all available modules and drivers.
      Some drivers may contain/install maybe-proprietary binary "blobs" of code which run on CPU or GPU etc: article.
      There also could be a "blob" of CPU microcode which is installed inside the CPU to modify the instruction set.

    • Filesystems:
      "ls /lib/modules/$(uname -r)/kernel/fs/*/*.ko"
      to see all available filesystem types.

    Also the kernel's build-time configuration: "less /boot/config-$(uname -r)"
    Each distro can configure the kernel as it wishes.
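Two related things are worth distinguishing: the build-time configuration (what the distro compiled into the kernel) and the boot-time command line (what the bootloader passed to this kernel). A sketch of checking each:

```shell
# Options this kernel was booted with (set via the bootloader, e.g. GRUB):
cat /proc/cmdline
# Search the build-time config for one option, e.g. the timer frequency:
grep 'CONFIG_HZ' /boot/config-$(uname -r)
```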

  • libc API: glibc, musl, others.

  • GNU packages/utilities: shell, compiler, libraries, CLI utilities (coreutils: cat, ls, grep, others), more.

  • util-linux utilities: dmesg, fsck, kill, su, others.

  • Init system.

  • Windowing/display server protocol: X11, Wayland, Mir, OpenGL ?
    "X" section, "Wayland" section
    "echo $XDG_SESSION_TYPE", "loginctl show-session 2 -p Type", "glxinfo | less"

  • Display server: X.Org Server, XFree86, Wayland compositor (Weston, mutter) ?

  • Window manager: X11 manager (tiling: i3, dwm, bspwm, awesomewm, xmonad; stacking: Fluxbox, Openbox, mutter, Compiz, Metacity, kwin, xfwm, muffin, Gala, Window Maker), Wayland manager/compositor/more (Weston, sway, Way Cooler, mutter, Enlightenment, Moksha, Wayfire, more) ?

    article1, article2, article3, do "wmctrl -m" to see what you're using.

    From StackExchange answers:
    "Your display manager creates a nice graphical display where you can use a login manager to login to your X session which will start a window manager and may start a desktop manager."

    From someone on reddit:
    "The window manager puts the window decoration around the contents including the buttons to minimize or close. It allows resizing and moving the windows around, decides which window is on top."

    From someone on reddit:
    A window manager handles the placement, movement and geometry of windows. It also handles titlebars, borders and other decorations.

    A compositor is responsible for transparency of windows and other fancy effects such as fade in/out, preventing screen tearing, animations, more.

    A desktop environment is a set of tools, integrated together to give you a comfortable experience. It consists of window manager, compositor, status bar, settings manager, polkit agent, more.

    From someone on reddit:
    In many cases your window manager and compositor are the same. There are three types of window managers:

    • Tiling - these are less common, but their distinguishing feature is that they don't allow windows to overlap. They arrange windows on the screen so that you can always see the entire contents of all windows.

    • Stacking - this is a bit more typical. Windows used a stacking WM until Windows Vista. It allows windows to overlap, but when it draws the screen it uses the painter's algorithm to come up with the final image. That means that it actually overwrites the image of each window in memory as it stacks the windows. This means the window manager needs to ask each program for its contents to redraw a window, if for example you move a window out of the way of another.

    • Compositing - the key difference between stacking and compositing is that compositing window managers hold the contents of all windows in memory and create a final, composite image by manipulating all of them. This allows for effects like "peeking" windows or Mac OS expose-like functionality. It also allows for "fancy" effects like distorting windows.

    Kwin (used by Plasma), xfwm (used by Xfce), mutter (used by GNOME), more ... are compositing window managers, so there is no driving need for a separate compositor. You can still disable the compositing functionality of some (xfwm and kwin, not mutter) and add a separate compositor like picom.

    Also note that this all applies to the X window system. On Wayland, you must always have a compositor built into your window manager and they generally just call them "Wayland compositors" rather than window managers.

  • GUI framework/library/toolkit ?: GTK+, Qt, libwayland-client, libX, OpenGL ? Non-GUI: D-Bus, udev ?

  • App framework/library ?: KDE, GNOME, Unity, more ?

  • Desktop manager (taskbar/systray, application launcher, icons): KDE/Plasma, GNOME, Cinnamon, Xfce, LXDE, more.

A bundle of choices of the last 4-5 items and some apps usually is called a "Desktop Environment" (DE).
Active GUI desktop DE's include: Budgie, CDE, Cinnamon, COSMIC, CuteFishDE, Deepin/DDE, Enlightenment/E, Equinox/EDE, GNOME, GNOME Flashback, KDE Plasma, Lomiri, Lumina, LXDE, LXQt, MATE, Moksha, NsCDE, NX Desktop, Pantheon, PIXEL, Razor-qt, Regolith, Sugar, Trinity, UKUI, Unity, Xfce.
(There also are text-mode DE's, and Linux-phone DE's such as Phosh, Plasma Mobile, Ubuntu Touch, more. And discontinued GUI desktop DE's.)
article1, article2, article3

From someone on reddit:
"A DE gives you an overall user experience. It has the panels, the system menus, the starters, the status applets. It has a window manager. It might offer a default file explorer and viewer. To streamline, it might even contain default editor, terminal program, or even e-mailer, all made to look alike and work together."

Distributions (distros):

A distro is a set of choices of layers/parts/policies, all packaged together under one label.

Some major distros: Debian, Ubuntu, Kubuntu, Linux Mint, Red Hat, Fedora, Arch, openSUSE.

Debian Family Tree (!)
Which is just part of a bigger GNU/Linux Distributions Timeline (!!)

My "Linux Distros" page

Other pieces that usually vary by DE:

  • Display manager / login screen manager: sddm, gdm3, lightdm, KDM, MDM, SLiM, more.
    Christian Cawley's "What Is a Linux Display Manager?"
    "loginctl session-status", "loginctl show-session 2 -p Service", "sudo systemctl status display-manager --full --lines 1000", "less /etc/gdm3/daemon.conf", "ls /etc/sddm", "grep '/usr/s\?bin' /etc/systemd/system/display-manager.service".

  • Themes.

  • System Languages.

  • Fonts.

  • Desktop widgets/launchers/icons/shortcuts. Launchers: dmenu, Synapse, Albert, Ulauncher, more.

  • Taskbar/systray applets.

  • GUI-shell extensions.

  • GUI workspaces (AKA virtual desktops or multiple desktops; article).

  • KDE's GUI activities (desktops with different icons and widgets on each, and then you can have multiple workspaces inside each activity).

Other pieces that often vary by distro or distro-family:

  • Installer (Calamares, Ubiquity, Subiquity, curtin, Anaconda, Debian-Installer, YaST, Cnchi, Refracta, os-installer, more).

  • System GUI apps:
    • Updater.
    • Software manager/store (snap software-boutique, GNOME Software, KDE Discover, pamac, AppCenter, MintInstall, more; article1, article2).
    • File manager/explorer (Nautilus, Nemo, Caja, Thunar, Dolphin, Krusader, Pantheon Files, Ranger, others) and add-ons.
      ("xdg-mime query default inode/directory" to show file manager in use.)
    • Task manager.
    • Crash-reporter (whoopsie, apport, Dr. Konqi, others).
    • Settings manager.
    • Network manager.
    • Disk partitioning/formatting.
    • Screen-saver/locker: XScreenSaver, Light-Locker, GNOME Screensaver, XSecureLock, XLockmore, alock, xtrlock, more.
    • more ...

  • Default user GUI apps:
    • Terminal.
    • Text-editor.
    • Video player.
    • Browser.
    • Image viewer.
    • Image editor.
    • more ...

  • Repositories / App Store: apps and packages that are available for use in this distro.

    You'd hope that being in a repo means "has been tested/approved and is supported", but it may just mean "installs and runs without crashing". And various repos for a distro may be provided by various parties; e.g. Ubuntu has 4 repos (main, restricted, universe, multiverse; the universe and multiverse repositories are "community-maintained").

  • CLI apps.

  • Codecs.

  • Printer drivers: "lpstat -s" to see defined printers; "ls /etc/cups/ppd/*.ppd" to see the installed printer drivers; /usr/lib/cups/driver contains databases of PPD's.

  • Documentation.

Things that vary less often, or vary only from one distro-family to another:
  • Source or binary packages: most are binary; source-based distros include: Gentoo, LFS.

  • Package formats and managers.

  • Init/daemon/services system: sysvinit, runit, systemd, BSD-style startup scripts (Slackware), Upstart, Finit, OpenRC (Gentoo), more.
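To check which init system a machine is actually running, look at PID 1 (its name will be e.g. "systemd", "init" for sysvinit, or "runit"):

```shell
# PID 1 is the init process; its name identifies the init system
cat /proc/1/comm
# Equivalent, via ps:
ps -p 1 -o comm=
```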

  • Fairly standard daemons/services: Network management (DNS, DHCP, VPN, more), printing (CUPS), network services (FTP, SSH, more).

  • Fairly standard facilities: Authentication (PAM, certificates, more), access control, IPC (D-Bus, sockets, more).

Some other variations or modifications, some of them user choices:

  • Do you use the command-line much, or mostly stay in the GUI.

  • Does the distribution do "rolling releases" (constantly updating), or do periodic "stable releases" (AKA "point release" or "fixed release"). Rolling release: Arch, Solus, openSUSE, more. Stable release: Ubuntu, Mint, Fedora, Elementary OS. Some distros have both kinds available.
    FOSS Linux's "Linux Rolling Release vs Point Release, and which is better?"
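To see which distro and release a machine is running (and thus whether you're on a rolling or point release), /etc/os-release is the standard freedesktop.org place to look:

```shell
# Distro name, version, and IDs, in KEY=value form:
cat /etc/os-release
# Or, where installed, the LSB tool:
lsb_release -a
```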

  • There may be different stable, testing, experimental versions of the same distribution.

  • Emphasis / target of distro: desktop GUI, server, VM / container / cloud, micro-VM / unikernel / cloud, IoT, micro-computer (e.g. Raspberry Pi).

  • CPU architecture: x86, ARM, more.

  • 32-bit and 64-bit versions, as on Windows.

  • UEFI Secure Boot (Ubuntu's "SecureBoot"), or not.

  • Kernel / system emphasis: normal, high-availability (clustering), hardened, low-latency (e.g. Ubuntu Studio, Liquorix), real-time, run-as-root (e.g. old Kali), containers/cloud image (e.g. Flatcar), immutable (e.g. Fedora Silverblue/Kinoite, openSUSE MicroOS).

  • Disk encryption: full-disk, full-extended-Linux-partition, individual partition encryption, encrypted home directories, or no encryption.

  • Various command-line shells: bash, zsh, fish, dash, tcsh, ash, more.

  • Various GUI docks: Latte, Cairo, Docky, more.

  • Do you compile things yourself from source (LFS, BLFS, article), or use binaries created by other people/companies.

    About LFS, from someone on reddit:
    Just FYI ... as someone who has gone through Linux from Scratch ...

    Certainly you will learn some things about Linux by doing this. But honestly ... LFS is essentially a lot of very repetitive compiling -- you go through the compilation of all the basic tools for setting up a Linux system. But there's very little explanation about how things work, how all the pieces fit together, etc., and the final product is a bare-bones, minimal Linux install. Obviously, you can add more to it if you wish (and there is some guidance on the web site for that), but in general I think the best thing that LFS teaches is just sort of ... how programs are compiled from source code.

    I learned a lot more about how the pieces of Linux fit together (the init system, networking, package management, desktop environments, etc.) by installing Arch Linux. (Gentoo would work as well, but is a longer process given the compile time.) You're not "building it from scratch", but most of the interaction people have with a working Linux system is on the level of those building blocks, rather than "how do I ensure that this program in /bin is properly linked with the correct gcc compile tool". I found it much more useful to find out things like, "how does networking work in Linux? what's the difference between ALSA and PulseAudio?"

    And if you're really interested in how software is compiled, of course, you are certainly free to compile things within an already-working Linux distro! Installing something manually from the Arch User Repository could be a useful intro into this concept.

    I'm certainly not trying to dissuade you from going through Linux from Scratch. It's well-written, and there's certainly something cool to building your own Linux system entirely from the ground up. I'm just trying to offer my perspective, as someone who went through it myself for the purpose of learning about how Linux worked, and realized I didn't actually learn much at the end of it.

Dedoimedo's "The ultimate guide to Linux for Windows users"

Roles in the Linux world:

  • Policy / standards / licensing bodies and corporations.

  • Developers.

  • Packagers.

  • System administrators.

  • Users.

More-complicated things you can do:

  • Partition a disk into Windows and Linux partitions, so you can dual-boot (boot into either OS). A little dangerous; sometimes a Windows update will wipe out the Linux bootloader (only if using MBR ?).

  • Boot Linux, and then sometimes run a Windows emulator (such as WINE) to run some Windows applications.

  • Boot Linux, and then sometimes run a virtual machine (such as VirtualBox) to run a copy of some other OS (Windows, or a different Linux) inside the VM.

  • Boot Windows, and then sometimes run a Unix-style command-line environment (such as Git Bash or Cygwin) to do various operations.

  • Boot Windows, and then sometimes run a Linux compatibility layer (such as WSL 1, which translates Linux system calls rather than emulating hardware) to run some Linux applications.

  • Boot Windows, and then sometimes run a virtual machine (such as VirtualBox, or WSL 2) to run a copy of some other OS (Linux, or a different Windows) inside the VM.

See my "Moving to Linux" page

Linux Myths

From a 4/2000 interview of Bob Young, founder of Red Hat:
There are two big myths about the business, and the first is there is a single Linux operating system. Linux is a 16-megabyte kernel of the 600-megabyte operating systems that companies like Corel and Red Hat make. Linux might be the engine of your car, but if you plunk an engine in your driveway, you're not going to drive your kids to school.

Our job is making people understand this revolution is about open-source software and it is not about Linux at all. Linux is simply the poster boy for this movement. The other myth is that Linux is being written by 18-year-olds in their basements when actually most of it is being written by professional engineering teams.
Dawn Foster's "Who Contributes to the Linux Kernel?" (1/2017)

Another myth is that "free" software and "open-source" software are identical concepts.
Mark Drake's "The Difference Between Free and Open-Source Software"

In 2021, there are lots of articles about how shameful it is that so many people use FOSS without paying for it. Major internet incidents have been caused by a single under-supported package developer stopping work or making a mistake. But I think people pushing the "shameful" narrative are being naive. Many companies and people use FOSS exactly because it is free. They don't WANT to pay for it. They're not using it to take some principled stance about freedom.

Desktop users should realize: Enterprise Linux is where the money is, and it drives and enables many of the advances delivered to desktop Linux. PipeWire grew out of automotive Linux work; systemd grew out of the needs of enterprise admins; probably the same is true of ZFS, Btrfs, containers, security modules, firewalls, etc.

Linux Truths


If you want to contribute to the Linux community:

You could pick an app or distro and:
  • Donate money to the project.

  • Seed torrents for a distro's files.

  • If you find bugs during normal use, report them.

  • Get onto the beta or insiders or "proposed" track, and report crashes and bugs.

  • Help to test it, systematically, using a plan and maybe automated tools.

  • Help to improve the docs.

  • Help to improve the project or doc web sites.

  • If you're artistic, work on artwork or logos.

  • Help to port an app or service to more distros or more DEs.

  • Help to translate a distro or app to more human languages.

  • Help to make a distro or app or site more accessible to the disabled or impaired.

  • Do some bug-fixing (start small: a smaller app you know well, in a language you know, small bugs).

  • Donate money to fund development or fixing of a particular feature, or do specific work yourself for money: Bountysource.

Jason Evangelho's "8 Ways To Contribute To The Desktop Linux Community, Without Knowing A Single Line Of Code"
wikiHow's "How to Contribute to Open Source"
Linux-For-Everyone / contribute-foss
Open Source Guides' "How to Contribute to Open Source"

John Regehr's "Responsible and Effective Bugfinding"
LTP - Linux Test Project
See "Reporting Bugs" section of my "Linux Troubleshooting" page
Shubheksha's "How to find your first open source bug to fix"

Davide Coppola's "How to contribute to an open source project on GitHub"
Elizabeth K. Joseph's "4 ways I contribute to open source as a Linux systems administrator"
Quora discussion "How do I start contributing in Open Source projects?"
Sarah Drasner's "How to Contribute to an Open Source Project"
Command Line Heroes' S2E3 "Ready to Commit" (audio)
Debian's' "How can you help Debian?"

For development, start with an app, DE, or distro you use:
First, become a very good user:
  • Use the software, explore the features, read the docs and Help.

  • Update to the latest version, or the beta version.

  • File bug reports, and read the open issues/bugs.

  • Join the user forum or group or reddit sub.

  • Skim the source code, on GitHub or other site, or by cloning it to your local disk.

  • Build the app from source code and use the copy you built.
    Abhilash Mhaisne's "How to Install Software from Source in Linux"
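The classic autotools-style flow looks roughly like this (the app name and version are placeholders; read each project's README/INSTALL for the real steps, which may instead use cmake, meson, cargo, etc.):

```shell
tar -xaf someapp-1.0.tar.gz          # hypothetical tarball name
cd someapp-1.0
./configure --prefix="$HOME/.local"  # install under your home dir, not /usr
make -j"$(nproc)"                    # compile using all CPU cores
make install                         # no sudo needed with a home-dir prefix
```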

kamranahmedse's "Getting Familiar with Unfamiliar Codebase"
Parth Parikh's "General Guide For Exploring Large Open Source Codebases"
Mitchell Hashimoto's "Contributing to Complex Projects"
Jaideep Rao's "4 big lessons from my internship with open source"
Not about open-source: Samuel Taylor's "How to Join a Team and Learn a Codebase"

Learn git, although some projects use another source-control system (e.g. Subversion, Darcs, Mercurial, more).
Catalin's Tech's "How To Make Your First Open-Source Contributions"
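The usual GitHub-style contribution flow, sketched with a hypothetical repo URL and branch name:

```shell
git clone https://github.com/SOMEORG/someapp.git   # or clone your fork of it
cd someapp
git switch -c fix-readme-typo      # do work on a topic branch, not main
# ... edit files ...
git add README.md
git commit -m "Fix typo in README"
git push origin fix-readme-typo    # then open a pull/merge request in the web UI
```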

Development things to realize:
  • Probably you will be stepping into a project that has a 20-year history, an enormous base of scripts, code, docs, and issue-tracking, and a big somewhat-changing team of developers.

  • It will take time to learn the code and tools and style.

  • Flitting from one project to another may be difficult, if they all use different tools and languages etc.

  • People-issues and process will be very important; follow the rules, be humble, start very small.

  • If a project seems stalled (2 years since last release, bugs open for a long time, pull/merge requests outstanding for a long time), or going through a huge transition, or very undocumented, or broken in some way, maybe stay away (at least as a first project to join).

Big projects often have extensive guides for developers:
GNOME Wiki's "Building system components"
[GNOME] Tobias Bernard's "Community Power Part 1: Misconceptions"
KDE Community Wiki's "Get Involved"
Linux Mint Developer Guide
Debian Developers' Manuals
What can I do for Mozilla
What can I do for LibreOffice
What can I do for Fedora ?

From someone on reddit:
Anyone suitably capable can become an Ubuntu developer! ...

You start by providing fixes for existing Ubuntu developers to sponsor. Once you have a track record of good work and your existing sponsors are willing to endorse you, you can apply to become an Ubuntu developer yourself. ...
Ubuntu Packaging and Development Guide

Check the plans/roadmap for the unit you have chosen, to see the priorities, and what other people may be working on:
linuxmint / Roadmap

Find out the developer communication channels (mailing list, forum, bug-tracker, IRC, whatever) for the unit you have chosen, and join those channels.

Konrad Zapalowicz's "Three Ways for Beginners to Contribute to the Linux Kernel"
Adam Zerella's "How to become a Linux kernel developer"
Jason Wertz's "Kernel Basics" (video) (2013 but interesting)
torvalds / linux "HOWTO do Linux kernel development"
Linux Kernel Newbies
Arch: KernelBuild
Garrit Franke's "Compiling your own kernel"
Tamir Suliman's "Beginner's Guide to Writing your First Linux patch"
Sayli Karnik's "A checklist for submitting your first Linux kernel patch"
Kosta Zertsekel's "Who should I sent Linux Kernel patch to?"
Byte Lab blog
danvet's "Why GitHub can't host the Linux Kernel Community"

Ashish Vara's "Kernel Architecture Of Linux (Part 7/15)"
Wikipedia's "Linux kernel - Architecture"
linux-kernel-labs' "Overview of the Linux kernel"
Greg Kroah-Hartman's "Linux Kernel in a Nutshell"
Tigran Aivazian's "Linux Kernel 2.4 Internals"
online copy of Mel Gorman's book "Understanding the Linux Virtual Memory Manager"

The Linux Kernel documentation
Bootlin Elixir (Linux kernel source tree)
torvalds / linux (Linux kernel source tree)
Plailect / linux-devel (kernel development using VSCode and libvirtd)

Jesse Smith's "Benefits to building your own kernel"

Gaurav Kamathe's "Analyze the Linux kernel with ftrace"
Gaurav Kamathe's "Kernel tracing with trace-cmd"

To get a copy of the kernel source locally: "sudo apt install linux-source" and "ls /usr/src". You'll get a file "linux-source-NNN.tar.xz". Then maybe

mkdir ~/kernel && cd ~/kernel && tar -xaf /usr/src/linux-source-NNN.tar.xz
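After extracting, a minimal build sketch (NNN is a placeholder as above; this takes a long time and needs build tools such as gcc, make, flex, bison, and libssl-dev installed):

```shell
cd ~/kernel/linux-source-NNN
cp /boot/config-$(uname -r) .config   # start from the running kernel's config
make olddefconfig                     # accept defaults for any new options
make -j"$(nproc)"                     # compile the kernel and modules
sudo make modules_install install     # install; usually updates the bootloader too
```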

Paraphrased from the Open Source Security podcast:
The kernel mostly is written in C, but it's not standard C as you might think of it. There are kernel-specific conventions, styles, and macros that must be used.

From article:
"... the Linux kernel is not written in standard C, but rather in a dialect that relies on gcc extensions, code-style standards, and external tools. ..."

Hcamael's "How to Develop Linux Driver from Scratch"
Corbet, Rubini, and Kroah-Hartman's "Linux Device Drivers, Third Edition"
Salzman, Burian, Pomerantz, Mottram, and Huang's "The Linux Kernel Module Programming Guide"
Andrew Klotz's "Making your first kernel module"

From someone on reddit:
> I want to take a stab at creating a couple of GUI Linux apps
> to improve the usability of some parts of the system that
> are locked to CLI.

Honestly, coding probably isn't going to be the lynchpin issue you'll run into.

Most projects are ruled like fiefdoms, some are more reasonable/practical than others and as a result allow for more broader contributions, but you still run into issues where maintainers/developers will not accept or consider certain features because it either violates their vision or ethos for what they think their creation should be, or they don't want to support features they aren't interested in (desktop icons disappearing from Gnome3 is a prime example), whatever the case it's not a coding issue, the biggest problems with OSs are people issues.

Please don't create a new distro:

We have far too many distros already. GNU/Linux Distributions Timeline

If you want/need something, do it some other way. A configuration script that modifies an existing distro. A new DE or theme. A new kernel module. Whatever is appropriate for your need/idea.

Of course, if you have some great new world-beating idea that just can't be done in an existing distro, go for it ! But expect lots of work and little audience. Better if you could implement your idea in some major distro such as Ubuntu, and get it out to millions of users.

Make a Linux App

Linux is suffering from Fragmentation:
See "Fragmentation" section of my "Linux Problems" page.

Standards / projects / groups:

+/- Plenty of huge companies (Intel, Microsoft, Google, Samsung, more) and smaller companies/foundations (Mozilla, Apache, more) contribute code and/or money to Linux and FOSS.
Statistics & Data's "Top Companies Contributing to Open Source - 2011/2021"

Some subsystems have come from various corporations: CUPS (Apple sort of), ZFS (Oracle).

Unix Family Tree 1
Unix Family Tree 2

From Alan Pope's "Pitchforks set to Stun":
The 'community' of Linux users has a bit of a problem. It's not really a community at all. The Linux 'community' is a bunch of individuals who have an affinity for running the OS. ...

... there's no real wider unified 'Free Software' community either. There's the "Popular People's Front of FSF" and the "People's Popular Front of Open Source" who believe fundamentally different things and target different users. It's a giant sliding scale, ...
Difference between Free Software and Open Source, by South Park Goth kids (video)

Software / business models:

  • Open-Source: original software source code is published for anyone to see and use. It's considered good behavior to take any changes you make, open-source them, and offer them to the upstream project to help improve the base software.

  • Free: any changes to software source code must be published for anyone to see and use.

  • Open-Core: [definition is contested ?] uses open-source software at the core and then adds proprietary software around that.

Note that these really are talking about software (not services, or trademarks). Examples: Linux kernel is open-source but the name "Linux" is trademarked. Red Hat's software is open-source but the company sells support services and owns various trademarks. Canonical (Ubuntu) is open-source but sells certification services, support services, management services, and owns various trademarks.

Each project or company may have secret information (encryption keys, operating procedures, customer data, financial, etc) even though the software is free or open-source.

The CLI (terminal-and-shell; Command-Line Interface):

Where there is a shell, there is a way.

From procabiak comment on Guide: Migrating to Linux in 2019:
Stop recommending the command line to beginners. Disclaimer/TL;DR: I like the command line, use it every day, but it's not the user-friendly tool you make it out to be. It's a tool for veterans, power users, tech-savvy folk. It's dangerous in the wrong hands.

This is a Linux/nix culture we need to move away from if Linux is to be successful mainstream. There's a reason many distros have a GUI for package management. If everything was indeed easier on a command line for every user demographic, why waste time implementing the GUI package manager, just tell everyone to use the terminal, right? Wrong! (IMO)

There're people who don't speak English or have difficulty with spelling even the simplest words, people with typing disabilities, older aged people, children ... Just to name a few, but all who just want to enjoy a game without needing to learn the complexities of Linux or us vets pushing that need onto them.

With a GUI package manager, you don't need to know anything beyond the ability to explore, click around to find stuff you need, etc. Often times you'll find other software you didn't think about wanting. The guide already states package managers are akin to app stores. These days everyone knows what an app store is (well, I make the assumption - there are people who have never owned smartphones before), so it's not as difficult a concept as it used to be, so we can use fewer words to explain it. The instructions are pretty simple: "open the package manager (app store) from the menu, search for steam and click install, enter your password and click OK". It's not faster than a command line, but it's simpler for many people. The instruction is nearly universal for all distros, even if the GUI looks or behaves slightly differently.

Contrast with apt, yum or pacman. You'd also need to learn to use grep to get anything remotely useful out of their list-package outputs. You can't explore - you just execute a command like sheep and hope you really just installed steam from Valve and not some other kind of steam. And when it comes to instructions, you need more words to explain the command line (that many people are omitting): "open the Terminal, type sudo apt install steam, type in your password (it shows up blank but your password is really there, keep typing), press enter, then type Y and press enter" ... Not only that, that command only works for Debian-based distros using apt. It will confuse Fedora and Manjaro novices using yum or pacman (they might even try to install apt, just to be able to run your command!).

You can also very easily mislead a novice to run dangerous "sudo rm -rf /"-like commands. I assume it's backwards to suggest to the user to understand the command before executing it when they are learning the command in the first place, so we can't assume the novice user will know what every command does. Therefore I can maliciously explain that line means "sudo = as a superuser, rm = remind me, -rf = to read from, / = root", for example. If I am their only reading material, I've just tricked a user to wipe their OS, imagine what else you can do to that novice by exploiting shortly-named commands?

The reality is command lines are simpler for the person offering help, but not necessarily for the person on the other end.

From grady_vuckovic comment on Guide: Migrating to Linux in 2019:
I have experience in converting people to Linux and that terminal is absolutely poison. When I show a typical Windows user an OS like Linux Mint, they are interested until they see the terminal, then groan and lose interest immediately.

Even if GUIs change, it is still waaaay easier for a new user to figure out a GUI with obviously labelled buttons, text boxes and tabs, than figure out how to use a terminal.

Try to put yourself in the mindset of a person who has only ever used a computer for homework, Facebook, job-hunting, Amazon, and who mainly uses a smartphone for all their computer needs. A terminal itself is a foreign concept and not a pleasant one.

On most decent software managers for Linux distros, the GUI is mostly pretty self-explanatory, with a simple search box for finding new software. That's pretty easy for a new user to figure out how to use.

But it is almost impossible for them to figure out what 'sudo apt steam' means at a glance, (and no, taking the time to explain it doesn't help, it will likely result in a 'wait wait I just want to install an app, why do I need to know all this rubbish?') or figure out where they need to go to type it, because if your user doesn't know where the software manager is, they certainly don't know where the terminal is or how to use it, so for all you know they could end up typing those commands into a search box somewhere. Not to mention it's difficult for them to even remember something like that.

And when I say "how to use it" regarding the terminal, I mean that literally. I've seen new users half type in commands, causing the terminal to enter a mode of waiting for the rest of the input of the command, then they can't figure out why it isn't working as they type in more and more commands, and other frustrations. It is truly a TERRIBLE user experience and one that should only be reserved for veteran Linux users.

Also keep in mind, users like to install and uninstall software, how on Earth is a user meant to figure out how to uninstall software which they installed with 'sudo apt'? There's no list of installed software they can look at, no buttons to click, no GUI to navigate, all they can do is hopelessly start Googling for help on how to use apt commands (that's an appallingly bad user experience for someone new to Linux) or (more likely) just get frustrated and decide "Linux is rubbish!" and switch back to whatever they were using before.

On top of that, it looks pathetically antiquated in 2019 to use a terminal to do such a simple basic OS operation as installing software or installing some drivers.

It may be easier for you to give instructions in terminal commands, and the terminal may be easier for you to use than a GUI, but it is scaring away possible Linux converts. For Ubuntu at least you should include instructions on how to navigate to the Software Manager.

I personally wouldn't offer any advice at all for Linux newbies that includes terminal commands unless you're trying to scare them away from Linux.

My thoughts:
  • Recognition (seeing something in GUI and recognizing it's what they want) is easier than recall (knowing what they want to do, and then recalling what command does it) for most people.

  • CLI is the right tool for certain things, such as piping commands together to do text-manipulation.

  • CLI is great for documenting procedures, can be copy-pasted, can be used through SSH, even can be spoken to someone.

  • CLI can be faster for experienced users.

  • CLI is more standard (across distros) than GUIs.

  • Windows and Mac also have CLIs, so this is not an advantage or selling-point for trying to move people to Linux. Heck, DOS had ONLY a CLI. And the Unix CLI commands have been available for DOS and for the Windows CLI for a long time; you can use grep and sed etc on any of those CLIs, with the right stuff installed.

"Terminal" is not same as "shell"

  • Terminal is the GUI process (window controls, tab stops, copy/paste, drag/drop, font, background image/color, line-buffer, scrolling, multiple panes, char color, more).

  • Shell is the text-command process (variables, statements, pipes/redirection, launch commands, path, history, tab-completion, more).

  • Console (accessible via ctrl+alt+F1 etc) is a text-window process (line-buffer).

  • Terminal: gnome-terminal, xfce4-terminal, konsole, guake, rxvt-unicode, aterm, xterm, kitty, alacritty, tilix, terminator, tmux, PuTTY, termite, sakura, urxvt, st, foot, cool-retro-term, more.

  • Shell: sh, csh, bash, fish, zsh, ksh, ash, dash, more. "cat /etc/shells".

  • Console: done through getty, agetty, more.

Aram Drevekenin's "Anatomy of a Terminal Emulator"
Seth Kenlon's "Terminals, shells, consoles, and command lines"
Unix Sheikh's "The terminal, the console and the shell - what are they?"
Andreas Fuhrich's "What is the difference: terminal / console / shell?"

GUI Terminal:
A pseudo-terminal (pty) is a pair of char-device files which serves as a bidirectional pipe between terminal and shell.
See current pty and shell: "ps -p $$"
See terminal process: "ps -ax | grep terminal | grep -v grep"
Type of terminal to emulate ? "echo $TERM"
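A sketch combining those inspection commands (output values vary per machine; /proc is Linux-specific):

```shell
# Poke at the shell/terminal/pty relationship from inside any shell.
echo "shell PID: $$"
readlink /proc/$$/exe        # path of the running shell binary (via Linux /proc)
tty || true                  # pty slave device, e.g. /dev/pts/0; "not a tty" if none
echo "TERM=${TERM:-unset}"   # terminal type being emulated
ls /dev/pts 2>/dev/null || true   # slave ends of all open pseudo-terminals
```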

Similar happens in a Console:
A tty is a char-device file that connects the console (text window) and the shell.
See current tty and shell: "ps -p $$"

A multiplexer has one terminal controlling multiple sessions. Tmux uses a client/server architecture that lets one terminal control multiple sessions on a remote host. Terminator is a terminal that can control multiple sessions on the local host.

"Console" versus "Virtual TTY"

+/- From /u/aioeu on reddit:
There are at least three different things to consider here:
  1. The kernel knows how to drive a video adapter. It can run the hardware in a text mode (e.g. a VGA text mode) or in a graphical mode (the framebuffer).

  2. The kernel's virtual TTY subsystem associates a video adapter with a keyboard device and provides the kernel's built-in VT102 emulator. This is probably the thing you think of when running Linux in a text-only mode. It's the thing that provides the multiple screen buffers you switch between using Ctrl+Alt+Fn.

  3. Quite separately, the kernel always has a "console device" of some kind. The purpose of this console is to be somewhere for the kernel to log messages.

It should be emphasized that these three things are fairly independent. For instance, it is possible to run a system without the virtual TTY. You might use a userspace TTY implementation instead, or you might not care about having an interactive terminal at all.

Regarding point 3, although most people just use a video adapter as the kernel console, you can choose to use something else instead. For instance, you can boot the kernel and have it use a serial port as its console. You can have it shipped over the network to another machine. You can even use multiple output devices as "the" console: you can have the kernel output log messages to both a video adapter and a serial port, for example.


To a large extent, this is a big problem with naming. It's an utter mess. The virtual TTY, for instance, is often also called a "virtual terminal", and indeed the kernel config option for it (CONFIG_VT) is named with that in mind. But then this kernel subsystem goes and creates /dev/vcs* devices, and the documentation for that calls it a "virtual console". So who knows?

To make this more concrete, consider these two statements:
  • I entered my username and password at the console.

  • The kernel logged a message to the console.

Both reasonable sentences ... until you realise that the word "console" is being used to denote two completely different things. They only look like the same thing because they happen to be going through the same hardware device.

I guess my point is that you really do need to read between the lines when interpreting articles ... My earlier comment was my description at how these components fit together. I deliberately did not use the names "virtual console" and "Linux console", because different people interpret them different ways.

Expression of Change's "CLIs are reified UIs"

John Hammond's "How to move FAST in the Linux Terminal" (video)

Cool feature: in KDE's Dolphin file manager, press F4 to open a Terminal that will change dirs as you change dirs in Dolphin.

Package formats and managers:

A package is a file that encapsulates a lot of metadata and some files and maybe scripts. It is used to install files etc onto a target system. A package (and its filename) may contain:
  • Metadata: package name, version number, author, dependencies, description, more.
  • Files to install: usually a (shared library) file or a binary executable file (e.g. /usr/bin/firefox), but there could be multiple binaries, plus config files, maybe source files, man page, etc.
  • Scripts that do anything: usually tweak configuration files, but really could be large and do just about anything including interacting with the user.
See for example Wikipedia's "deb (file format)"
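As a sketch of that metadata, here is parsing of a made-up Debian-style control file (the package shown is invented; a real one is displayed by "dpkg-deb --info somefile.deb"):

```shell
# Write a sample RFC-822-style control file, then pull out fields
# the way a package manager's dependency resolver might.
control=$(mktemp)
cat > "$control" <<'EOF'
Package: example-app
Version: 1.2-3
Architecture: amd64
Depends: libc6 (>= 2.28), libexample1
Description: An example package
EOF
name=$(awk -F': ' '/^Package:/ {print $2}' "$control")
deps=$(awk -F': ' '/^Depends:/ {print $2}' "$control")
echo "name=$name"
echo "deps=$deps"
```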

A repository is a collection of packages, usually on some public server. By having separate repos and enabling/disabling them, it's possible for a user or system to choose repos that are "only FOSS" or "only officially-supported software" or "private" or "testing" or other groupings.
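On Debian/Ubuntu-family systems, for example, each enabled repo is a line in /etc/apt/sources.list or a file under /etc/apt/sources.list.d/ (the mirror URL and suite name here are just illustrative):

```text
# One repo = one "deb" line: type, mirror URL, suite, components.
# Comment a line out (or remove its file from sources.list.d/) to disable that repo.
deb http://archive.ubuntu.com/ubuntu focal main universe
deb-src http://archive.ubuntu.com/ubuntu focal main universe
```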

A package manager manages packages in your system, installing/removing/searching/listing them as you wish. Usually the packages come from repositories, but also you could install a package file you downloaded manually from anywhere. Each package could contain binary and/or source and/or config files and/or install scripts, or just be metadata pointing to other packages. Some managers may know how to build from source to make a binary.

An app store manager is a GUI app that may just be a front-end to a package manager, or may be a curated way of selecting and installing/removing just certain GUI apps.

Format       Distros / Family                        Binary Managers
rpm          Red Hat, Fedora, CentOS, SUSE, Mageia   rpm, yum, dnf, zypper, YaST, urpmi, Drakrpm
deb          Debian, Ubuntu, Mint                    dpkg, apt, apt-*, aptitude, Synaptic, wajig, cupt
tgz/txz      Slackware, VectorLinux, Zenwalk         pkgtools, slackpkg, slapt-get, Swaret, netpkg
pkg.tar.zst  Arch, Manjaro                           pacman, pamac, yay, yaourt, apacman, pacaur
tbz2         Gentoo, Sabayon                         emerge/Portage, entropy, equo
nix          NixOS                                   nix
xbps         Void                                    xbps
eopkg        Solus                                   eopkg
-            Intel Clear Linux                       swupd
apk          Alpine                                  apk
-            MocaccinoOS                             Luet
PET          Puppy Linux                             PPM
Some front-end managers that work on top of multiple package formats: smart, PackageKit.
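A sketch of coping with this variety: detect which manager a system has by probing $PATH for the binaries named in the table above (the list and its order are arbitrary):

```shell
#!/bin/sh
# Print the first package-manager binary found in $PATH, or "unknown".
detect_pkg_mgr() {
    for mgr in apt dnf yum zypper pacman emerge nix-env xbps-install eopkg swupd apk; do
        if command -v "$mgr" >/dev/null 2>&1; then
            echo "$mgr"
            return 0
        fi
    done
    echo "unknown"
}
detect_pkg_mgr
```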

Combination package manager / configuration manager ? GNU Guix (built on Guile), Nix, OSTree.

Devopedia's "Package Manager"
DistroWatch's "Package Management Cheatsheet"
Dan Lorenc's "In Defense of Package Managers"

Dedoimedo's "Broken apt, missing dependencies, what now?"

From /u/Linux4ever_Leo on reddit:
Honestly, there really isn't anything stopping the Linux world from standardizing on one single package management system except for egos. Each distro line deems its own package management systems to be the "best". Red Hat-derived distros use RPM packages; Debian-derived distros use DEB packages, Slackware uses good ol' tarballs. Gentoo and Arch prefer source-based packages. Still other distros have developed their own proprietary ways of managing packages. In short, it's sort of a mess. ... Personally, I don't care what sort of package manager wins the game so long as SOMETHING is standardized which will help further the widespread adoption of software on Linux and make things easier for developers to port their software to the Linux platform.

There are a couple of other levels of "management":
  • VMs and emulators.
  • Bundles/containers (Snap, Flatpak, appimage, Docker).
  • Language-specific module managers (node.js's npm, Ruby's gem, Python's pip, Go's get, Rust's cargo, Perl's cpan).
  • App store managers (GNOME Software, Ubuntu Software, SNAP Store, POP!_Shop).


See my "Package Managers" section.

Desktop Frameworks:

  • GTK: used by GNOME, Cinnamon, MATE, Budgie, Xfce, Pantheon, Sugar, Phosh, Unity 7, LXDE desktops.

  • Qt: used by KDE, DDE, UKUI, LXQt, Sailfish, Ubuntu Touch, Lumina, Unity 8, Trinity desktops.

  • EFL: used by Enlightenment desktop, IoT.

Wikipedia's "Comparison of X Window System desktop environments"
Joshua Strobl's "Building an Alternative Ecosystem"
"KDE Frameworks are 83 add-on libraries to Qt which provide a wide variety of commonly needed functionality ..."

Desktop Environments (DEs):

  • GNOME (originally "GNU Network Object Model Environment"):

    GNOME is based on the GTK toolkit (originally "GIMP ToolKit").

    There was a big transition from GTK 2 to GTK 3, and apparently it affected the UI. Some desktops (Unity, Cinnamon, MATE ?) stayed with a GTK-2-type UI while others (GNOME) went to a GTK-3-type UI ? GTK 4 was released at the end of 2020. Only the LXDE desktop still uses GTK 2 underneath ?

    "gnome-shell --version"

    From discussion on reddit 8/2020:
    ... there is no stable API for extensions to use.

    Instead, extensions that do major changes are actually doing monkey-patching, with fairly predictable impacts if there are either issues with the extensions ... Or if the code changes.

    And because there is no stable extensions API, there's really no good way for the developers to even know when a change is going to break an extension. The tools simply don't exist to tell them.

    And the more invasive the extension is, the more likely it is to break in a minor release.

    And sadly, the only real way to fix the problem would be to create a stable extension API, but that would break every single extension currently in existence, it would limit what extensions can do to what there is an API to do, and it would take an extremely large amount of time and energy.

    And the gnome-shell developers don't currently have that spare time and energy for the project.


    GNOME constantly removes important features (indicator icons, desktop icons, ...). Any time someone complains about that, the answer is always "just use an extension".

    GNOME constantly breaks extensions. Any time someone complains about that, the answer is always "well you shouldn't have been using extensions".

    It's actually maddening.

    This helps to explain why I don't like GNOME: Joey Sneddon article. I want system theming and extensions and preferences.

    Joshua Strobl's "Building an Alternative Ecosystem"

  • KDE (originally "Kool Desktop Environment"):

    KDE is a Qt-based framework, large set of applications (KDE Gear), and the Plasma desktop. "KDE Frameworks are 83 add-on libraries to Qt which provide a wide variety of commonly needed functionality ...". "KDE Gear is a set of apps and other software created and maintained by the KDE Community."

    For the very latest KDE, updated constantly, use the KDE Neon distro.
    For KDE with the latest kernel, more stable, use the Kubuntu distro.
    "plasmashell --version"
    liquidshell is an alternative to plasmashell.

    Paraphrased from Michael Tunnell 9/2019 in Linux Lad's podcast "Season 3 - Episode 1: KDE Konundrum":
    KDE / Plasma has a much better internal architecture / modularity than GNOME. For example, GNOME (shell) runs on a single thread, so with Wayland, if that dies, you have to reboot. Another example: it's very easy to just replace a settings module or something in KDE without disturbing the rest of the modules.

    GNOME has a more polished UI than KDE, although the latest Plasma desktop is getting there. KDE actually gives too much choice sometimes, and you may have to dig very deep to find the setting you want.

    KDE Connect (also somewhat ported to make Zorin Connect) is awesome. Through Wi-Fi, it connects smartphone and computer and provides all kinds of clipboard and file and UI sharing.

    [To change to the GNOME workflow:]
    Replace every panel with Latte dock (in panel-mode).

    Akonadi is a PIM/data framework for maintaining/accessing shared data such as address book, calendar, mail, etc. "akonadictl status" KDE PIM

    KDE developer platform
    niccolove's "Understanding KDE Plasma theming system"
    Dedoimedo's "Plasma desktop customization guide - How to for newbies"
    paju1986/PlasmaConfSaver (copy "com.pajuelo.plasmaConfSaver" folder into ~/.local/share/plasma/plasmoids/, then you can find it in Add Widgets)

  • Cinnamon:

  • MATE:

  • ...:

Liam Proven's "The sad state of Linux desktop diversity: 21 environments, just 2 designs"

Switching to another DE without re-installing the distro:
On *buntu systems, apparently you can switch DEs (without re-installing) by installing the "tasksel" utility and then doing "tasksel install ubuntu-desktop" or "tasksel install gnome-session" or such.

On Fedora, "dnf grouplist -v" to see available DE's, "sudo dnf install @SOMEDE" to install one, "sudo dnf install switchdesk switchdesk-gui" to install switcher, then run Desktop Switcher application and select a DE.
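Either way, you can see which DE sessions are currently installed: the login manager builds its menu from .desktop files in the session directories. A sketch:

```shell
# List installed desktop sessions: X11 sessions and Wayland sessions
# each get a .desktop file in a well-known directory.
for d in /usr/share/xsessions /usr/share/wayland-sessions; do
    echo "== $d"
    ls "$d" 2>/dev/null || echo "(none)"
done
```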

From reddit:
If you just want to learn, go for it but be prepared for something going wrong.

If you genuinely dislike DE A and want to change because you must, then just reinstall same distro with DE B. Safer and probably quicker option.


Make sure to set up Timeshift before you begin tinkering. It lets you set up savepoints. If things go wrong, you can restore a previous savepoint.


If you add DE B without removing DE A you'll be fine, some manual intervention might be needed like manually switching greeter, default apps.


It's not unsafe, but it's more trouble than it's worth and very hard to reverse/untangle.


As others have said, it can be really frustrating if something goes wrong. I recently played around with having multiple DEs installed and there were a lot of tiny issues. It wasn't anything too terrible to deal with, but I'm pretty familiar with how to fix stuff on Linux. If you're new to it, it could get really frustrating if something goes wrong.


It can be cumbersome when it comes to DEs that use different toolkits (GTK and Qt primarily). A simple migration from one GTK-based DE to another GTK-based DE should be easy for example. Some manual intervention involved but not much. If you want to switch between GTK-based and Qt-based, a reinstall is recommended for sure. [I wonder about switching between GTK2-based and GTK3-based.]

In general, to switch from DE A to DE B: install packages for B, reboot (coming up into B), then remove packages for A, reboot, hope all is well.

In general, if you plan to switch DEs, maybe use Debian or Arch, which are closer to server distros, and tend to have a cleaner separation between base and DE.

To configure Qt5 (KDE) apps while using some DE other than KDE Plasma desktop, use "qt5ct" with "QT_QPA_PLATFORM=qt5ct" environment variable set.

From a DistroWatch article: Some desktops use 3-D features (Cinnamon, GNOME, Unity), others are 2-D desktops (KDE Plasma, Xfce, LXQt). You can turn off visual effects or other features to try to gain speed.

Vivek Gite's "How to install and edit desktop files on Linux"
Codrut's "Edit linux application shortcuts"

probono's "Make. It. Simple. Linux Desktop Usability" (series of 6 articles)

Boot process:

Basic steps:
  1. Hardware power-on.
  2. System firmware.
  3. Bootloader (3 stages).
  4. Kernel (initial/temporary root filesystem, and then real root filesystem).
  5. OS Init.
  6. OS GUI.
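Step 2's firmware type can be checked from a running system: a UEFI-booted kernel exposes /sys/firmware/efi, a legacy-BIOS boot does not. A sketch:

```shell
# Which firmware path did this machine boot through?
if [ -d /sys/firmware/efi ]; then
    echo "booted via UEFI"    # "efibootmgr -v" would then list NVRAM boot entries
else
    echo "booted via legacy BIOS"
fi
```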

  1. Hardware power-on. Power supply sends signal to rest of hardware when power is stable.
  2. Firmware / controllers startup:
    1. CPU starts executing code (firmware) in ROM, and consulting stored values in CMOS or NVRAM.
    2. If there is a "management engine" on the board, it starts up too, with its own code, and may start running a TPM which creates an audit trail (recording such things as hash value of firmware contents).
    3. There may be a Hardware Security Module (HSM) or Trusted Platform Module (TPM) which stores crypto keys and can execute crypto algorithms, or even custom algorithms. It has to start up.
    4. There may be micro-controllers / boards which start up too, using their own internal processors and ROMs/NVRAMs. Keyboard controller, disk controllers, GPU, network interfaces, etc.
    5. It's likely that CPU microcode will be loaded.
    6. CPU code initializes hardware, does self-test (POST), displays manufacturer logo, etc.
    7. Some functions (e.g. RAID, video, hardware-encrypted disks ?) may have interactive setup functions in ROMs that have to be executed by the main CPU. So the main firmware may jump into these "Add-On ROMs" to do that processing, then control returns to the main firmware.

  3. Code checks CMOS/NVRAM settings, and looks for key-presses from user, to see what to do. Could go into setup menu, go into menu of boot devices, or step down list of boot device types and look for bootable devices.
  4. A boot device is found or specified.
[Following will assume 512-byte sectors on disk.]
  1. MBR (Master Boot Record) is read from sector 0 of boot device into RAM.
    Wikipedia's "Master boot record"
  2. The "post-MBR gap" is the disk space after MBR sector and before first partition. This is at least sectors 1-63 (31.5 KB), but more likely to be 2047 sectors (almost 1 MB) in modern disks.
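The MBR layout can be demonstrated on a scratch file instead of a real disk (writing the 0xAA55 signature at byte 510, just as a partitioning tool would):

```shell
# Sector layout: bytes 0-445 bootloader code, 446-509 partition table,
# 510-511 magic signature 0x55 0xAA.
mbr=$(mktemp)
dd if=/dev/zero of="$mbr" bs=512 count=1 2>/dev/null
printf '\125\252' | dd of="$mbr" bs=1 seek=510 conv=notrunc 2>/dev/null  # octal 0x55 0xAA
# Dump the last two bytes; sector 0 of a real bootable disk shows the same
# signature ("sudo dd if=/dev/sda bs=512 count=1 | od -A d -t x1").
od -A d -t x1 -j 510 "$mbr"
```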
[Following generally outlines Ubuntu/GRUB2 situation.]
  1. Selecting a kernel:

    If firmware is Legacy BIOS (Basic Input-Output System) and disk has MBR partitioning:
    1. Stage-1 bootloader code (in GRUB, AKA boot.img) is in first 446 bytes of MBR (actually, some bytes have other uses (timestamp, signature), so typical limit is 440 bytes). [That small amount of code can do the boot menu and a few basic commands, by calling functions in the BIOS.] Partition table (4 entries; primary partitions) starts at byte 446 and is 64 bytes. Then 2-byte magic number (0xAA55).

    2. Control jumps to start of MBR (start of stage-1 bootloader) in RAM.
    3. If stage-1 bootloader is for Windows, it would load a Volume Boot Record (VBR) which contains an Initial Program Loader (IPL), which would then load NT Loader (NTLDR). But we're assuming Linux and GRUB, so:
    4. Stage-1.5 bootloader code (AKA diskboot.img plus core.img) is in "post-MBR gap" after MBR sector and before first partition.
    5. Stage-1 bootloader copies stage-1.5 bootloader from post-MBR gap into RAM.
    6. Control jumps to start of stage-1.5 bootloader in RAM.
    7. Stage-1.5 bootloader finds partition in partition table that is marked as "active".
    8. Typical partitioning for legacy BIOS: / is Linux-type (usually ext*, ZFS, or Btrfs); /boot is usually ext* too (GRUB's stage 2 can read it), or just a directory in /; /home may be a Linux-type partition or just a directory in /.
    9. Stage-1.5 bootloader is big enough to understand common filesystems such as ext*, FAT, NTFS ? Or does the active partition have to be FAT* ?
    10. Stage-1.5 bootloader copies stage-2 bootloader files from /boot into RAM.
    11. Control jumps to start of stage-2 bootloader in RAM.
    12. Assume stage-2 bootloader is main body of GRUB (really, GRUB2).
    13. Which stage is smart enough to know about LVM, RAID, dm-crypt, LUKS, etc ?
    14. GRUB finds partition in partition table that is marked as "boot".

    15. GRUB reads /boot/grub/grub.cfg configuration file.
    16. GRUB may present a menu of kernel images and utility options, or just select a default kernel image.

    Else if firmware is UEFI (Unified Extensible Firmware Interface) and disk has GPT partitioning:
    1. First 446 bytes of MBR are ignored (actually, some bytes of that space have other uses, so might be used). Next 64 bytes are set to "protective" values showing one full-disk partition of a special type, so the disk looks full, with a strange partition type, if someone runs an MBR-only utility against this GPT disk. Then 2-byte magic number (0xAA55).
    2. GPT (GUID Partition Table) is in "post-MBR gap" after MBR sector and before first partition. Sector 1 of disk is the GPT header, and has a pointer to the partition table (Partition Entry Array), which typically starts at sector 2.
    3. Boot parameters are in NVRAM. "man efibootmgr".
    4. Typical partitioning for UEFI: /boot/efi is FAT*, / is Linux-type (usually ext*, ZFS, or Btrfs), /boot and /home may be Linux-type partitions or just directories in /.
    5. UEFI firmware understands at least FAT12, FAT16, and FAT32 filesystems, optionally may understand more.
    6. One of the partitions in the GPT has a unique GUID (C12A7328-F81F-11D2-BA4B-00A0C93EC93B; systemd's "The Discoverable Partitions Specification (DPS)") that identifies it as the EFI System Partition (ESP). The filesystem in that partition is specified as FAT-like. It usually ends up mounted on /boot/efi after Linux has booted.
    7. UEFI firmware can launch an application (bootloader, boot manager, utility, shell, kernel) from the filesystem in the ESP. One standard EFI application is GRUB ("grub*.efi"). Another EFI application could be a direct-launch kernel (using EFISTUB or systemd-stub; an EFI boot stub). Another EFI application could be a boot manager (systemd-boot or rEFInd). See "Bootloader / boot menu / boot manager" section of my "Linux Troubleshooting" page.

    8. UEFI firmware may present a menu of EFI applications and utility options, or just select a default application, or fall back to \EFI\BOOT\BOOTX64.EFI.
    9. Assume "grub*.efi" was chosen.

    10. If Secure Boot is enabled, verify authenticity of the EFI binaries by signatures. Uses certificates and hashes, blacklist database (DBX) and whitelist database (DB) and Key Exchange Key(s) (KEK) and Platform Key (PK), Machine Owner Key (MOK) and MOK Blacklist (MOKX), Secure Boot Standard Mode-ready bootloader SHIM. Signatures generally are provided by Microsoft. For Linux, there is a shim binary ("shim-signed"; signed by Microsoft; contains a cert from Canonical; "apt list | egrep '^shim[^m]'" and "efibootmgr -v") and a GRUB binary (e.g. "grub-efi-amd64-signed" or "grub-efi-arm64-signed"; signed by Canonical) to get through this process. Secure Boot has various modes including full, fast, minimal, custom. Various paths if checks fail. If available, TPM operates as a passive observer (creating audit trail) of all phases.

    11. GRUB reads configuration from ESP (EFI System Partition).
    12. That config file points to /boot/grub/grub.cfg
    13. Which stage is smart enough to know about LVM, RAID, dm-crypt, LUKS, etc ?
    14. GRUB reads /boot/grub/grub.cfg configuration file.
    15. GRUB may present a menu of kernel images and utility options, or just select a default kernel image.
    16. If Secure Boot is enabled, verify authenticity of the kernel image by signature etc. Initrd image is not validated.

    It's possible to have Legacy BIOS firmware boot a disk that has GPT partitioning (known as "BIOS/GPT boot"), but I'm skipping that.

    It's possible to have UEFI firmware boot a disk that has MBR partitioning and an ESP (identified as partition ID 0xEF), but I'm skipping that ("UEFI/MBR boot").
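In both the BIOS and UEFI paths, what GRUB finally reads is /boot/grub/grub.cfg, a script of menuentry blocks. A minimal hand-written sketch (the UUID and kernel version are made up; real files are generated by grub-mkconfig and shouldn't be hand-edited):

```text
# Illustrative menu entry in /boot/grub/grub.cfg:
menuentry 'Ubuntu, with Linux 5.15.0-00-generic' {
    insmod part_gpt
    insmod ext2
    search --no-floppy --fs-uuid --set=root 1111aaaa-2222-bbbb-3333-cccc4444dddd
    linux  /boot/vmlinuz-5.15.0-00-generic root=UUID=1111aaaa-2222-bbbb-3333-cccc4444dddd ro quiet splash
    initrd /boot/initrd.img-5.15.0-00-generic
}
```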

  2. [To see what kernel command line was used to boot your current system, do "sudo dmesg | grep 'Kernel command line'".]
  3. GRUB copies compressed kernel image (executable zImage or bzImage file; e.g. /boot/vmlinuz-NNN) into RAM.
  4. GRUB copies initrd (AKA "Initial RAM Disk") into RAM. The image can contain anything, but probably is a microcode blob followed by a cpio archive of kernel modules (from /lib/modules) including LVM and LUKS modules, encryption modules, filesystem modules, USB modules, video modules, etc.
  5. Control jumps to start of code plus compressed kernel in RAM.
  6. Possible that CPU microcode could be updated at this point, using microcode compiled into the kernel image.
  7. Kernel sets up memory-management, floating-point, interrupts, C stack and BSS, and other low-level things.
  8. Transition from kernel assembly code to (mostly) C code.
  9. Initialize console, detect memory size, initialize keyboard, initialize video.
  10. Transition into protected-mode memory management.
  11. Transition into 64-bit mode.
  12. Decompress the rest of the kernel (for ARM64 and some others, it was done by GRUB).
  13. Kernel creates an empty initial root filesystem (rootfs) in RAM. Then files are copied to rootfs, first from an initramfs embedded in the kernel binary during the kernel build, then from the initrd (Initial RAM Disk) in RAM. Both of those often are in cpio format, but many formats and variations have been used. Maybe it's more accurate to call them "archives". On Ubuntu 20, the initrd is almost empty, so the initramfs must have most of the files. "man dracut"
  14. Kernel loads modules/drivers as needed from root filesystem (rootfs). If Secure Boot is enabled, verify authenticity of the modules by signature etc.
  15. On Ubuntu 20, there is a CPU microcode file in initrd, so CPU microcode must be applied at this point ? I assume Secure Boot also checks that. See if your system has any "ucode" packages installed, as in "pamac search ucode".
  16. Initialize virtual device systems such as software RAID, LVM, ZFS, NFS.
  17. Kernel mounts real root filesystem, replacing temporary rootfs.
  18. Kernel may mount more filesystems ? systemd's "The Discoverable Partitions Specification (DPS)"

  19. Kernel creates scheduler process (pid 0).
  20. Kernel forks pid 0 to create pid 1 (init process; first user-space process), which executes /sbin/init.

  21. Init:
    If /sbin/init is standard/old init application, init process mounts non-root filesystems as specified in /etc/fstab, then reads /etc/inittab file to find out what to do, or processes all of the /etc/init.d/* files.

    If /sbin/init is a symlink to /lib/systemd/systemd, systemd process mounts non-root filesystems as specified in /etc/fstab, then uses files under /etc/systemd (including /etc/systemd/system.conf) to decide what to do. Maybe uses info from GRUB that specifies starting target under /etc/systemd/system/: default, rescue, emergency, cloud-final, more ?

  22. Init code loads kernel modules needed to handle detected devices (udev), sets up windowing system, starts getty's, sets up networking infrastructure, maybe connects to networks, starts cron, starts print server, starts audio service, etc.
  23. Init code could load microcode into CPU and other processors, using files in /usr/lib/firmware. "journalctl -k --grep='microcode'" to see log entries about updates.
  24. Eventually init code runs login manager, which presents a login screen to the user.
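The later steps can be cross-checked on a running system (a sketch; the paths are Linux-standard but PID 1 details vary per distro):

```shell
# Cross-check the boot steps against the currently running system:
cat /proc/cmdline                          # kernel command line the bootloader passed
[ -L /sbin/init ] && readlink /sbin/init || true   # systemd: symlink to /lib/systemd/systemd
cat /proc/1/comm 2>/dev/null || true       # name of PID 1 (e.g. systemd or init)
```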

Ramesh Natarajan's "6 Stages of Linux Boot Process"
Narad Shrestha's "A Basic Guide to Linux Boot Process"
Sarath Pillai's "Linux Booting Process"
David Both's "An introduction to the Linux boot and startup processes"
IBM's "Inside the Linux boot process"
Wikipedia's "Linux startup process"
Arch Wiki's "Boot loader"
NeoSmart Knowledgebase's "The BIOS/MBR Boot Process"
Debian Reference's "Chapter 3. The system initialization"
linux-insides / Kernel Boot Process
"man bootup"

Wikipedia's "GUID Partition Table"
Wikipedia's "EFI system partition"
Arch Wiki's "EFI system partition"
Pid Eins's "The Wondrous World of Discoverable GPT Disk Images"

OSDev Wiki's "UEFI"
Ubuntu Wiki's "UEFI / SecureBoot"
NSA's "Boot Security Modes and Recommendations"
"sudo mokutil --sb-state" [reports whether Secure Boot is enabled or disabled]
noahbliss / mortar
Drive-Trust-Alliance / sedutil
Chris Hoffman's "How to Check If Your Computer Has a TPM"
Dell's "How to troubleshoot and resolve common issues with TPM and BitLocker"
Shawn Brink's "How to Check if Windows PC has a TPM"
Linux: "sudo apt install tpm-tools" and "man -k tpm" and "sudo dmesg | grep -i tpm" and "lsmod | grep tpm"
Igor's Blog's "In-depth dive into the security features of the Intel/Windows platform secure boot process"

When booting, hold down Shift key to get into GRUB menu.
GNU GRUB Manual (or do CLI "info grub")
From someone on reddit:
"GRUB disaster recovery: only need to know two commands to use in GRUB shell: ls to find things, and configfile to get GRUB to load the right grub.cfg that the distro created that knows all the root filesystem uuids and other magic needed for booting. Those two commands, tab-completion, and understanding the (hdX,msdosY) device/partition syntax are enough."
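A sketch of such a recovery session at the grub> prompt (the (hd0,msdos1) device name is just an example; use ls and tab-completion to find the right one):

```text
grub> ls                                          # list detected disks and partitions
grub> ls (hd0,msdos1)/                            # browse a partition's filesystem
grub> configfile (hd0,msdos1)/boot/grub/grub.cfg  # load the distro's real config and menu
```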

Rob Day's "initrd and initramfs"
Wikipedia's "Initial ramdisk"