Basics



Cloud services:

+/-
  • Infrastructure as a Service (IaaS): For sysadmins or users. Virtual devices or storage, such as:

    • Block storage: appears to your local system as a block device, which you can format as a filesystem.

    • Object storage: image-hosting or file-hosting or JSON-hosting, accessed through URLs ?

    • Backup: file-storage, with versioning, and a client that can do compression and incremental backup etc.

  • Platform as a Service (PaaS): For developers. Virtual server, such as:

    • Virtual Private Server (VPS): virtual server onto which you install a virtual machine image, which contains OS and tech stack (e.g. LAMP = Linux+Apache+MySQL+PHP/Perl/Python, MEAN = MongoDB/Express.js/AngularJS/Node.js, Ruby stack, Django stack, more) and application(s). The applications could be packaged as containers.

    • Specialized server: web site server, database server, VPN server, file server, identity management (e.g. Active Directory), email server (e.g. MS Exchange), virtual desktop, CRM, etc.

  • Software as a Service (SaaS): For end users. Complete locked-down service, such as GMail, Google Docs, web site shared hosting, etc.

The huge cloud companies/products (Amazon/AWS, Microsoft/Azure, Google/GCP) offer solutions at every point of this structure, and more.



Abstraction Types:

+/-
  • Virtual Machine:

    A virtual machine has a complete copy of an operating system in it; the OS thinks it is running on bare metal, but actually it is not. The VM can provide emulated devices and networks, do access control, etc.

    The operating system chosen could be a "full" Linux distro such as Ubuntu server, a very small distro such as Alpine, a cloud-focused Linux such as Bottlerocket, or a severely-stripped-down kernel-plus (unikernel or micro-VM) such as nanos or OSv.

  • Emulator:

    An emulator is a baby-VM that has a veneer of a different operating system in it, but really is just mapping system-calls of the "inside" OS to system-calls of the real OS.

  • Container/Bundling System:

    A way of bundling one or more applications/services/processes together with the dependencies (libraries) they need. E.g. Snap, Flatpak, AppImage, Docker.

    [I'm told that "container" is the wrong term for this, since they don't use the Linux container mechanism. I don't know what the right term is. I guess I'll call them "bundles". Popey called Snaps "confined software packages". Jesse Smith refers to Snap/Flatpak/Appimage/Docker as "portable packages".]

    Each system has an "inside" OS API (inside each bundle), and then runs on top of a "base OS" (outside the bundles). In Docker, there is a fairly strong connection between the two; having a different OS inside and outside requires some "hacks" ? When you download a Docker image from the hub, you have to choose OS type and CPU architecture.

    A bundle shares a single base OS with other bundles, mediated by the container framework/engine. Usually the base OS will be a LTS version of a bare-metal OS (often a stripped-down version of Alpine or Debian ?), but it could be a hypervisor or a VM (especially in the case of running in a cloud service, sharing a server with many other VMs or VPSs).

Red Hat's "What's a Linux container?"
Mike Calizo's "6 container concepts you need to understand"
Docker's "What is a Container?"
Wikipedia's "List of Linux containers"
EDUCBA's "Docker vs VMs"
Weaveworks' "A Practical Guide to Choosing between Docker Containers and VMs"
Mike Coleman's "So, when do you use a Container or VM?"
Bryan Cantrill's "Unikernels are unfit for production"
Debian Administrator's Handbook's "Virtualization"


From someone on reddit:
Musl and glibc are two implementations of libc, as in the standard C library. This is basically the standardized interface between the kernel and userland. The kernel itself actually has no stable interface; the stability is guaranteed by libc, which is a bunch of C functions wrapped around binary system calls that make it easy to write programs. So now you can do fork() instead of having to manually copy numbers into registers to communicate to the kernel what you want to do; that is what libc does for you, amongst other things.

It also provides basic routine libraries like handling of strings and what-not.

...

The thing is that when applications are compiled, they are compiled against a specific libc for the most part and if you want to use them with another libc you have to recompile them. ...
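
You can see this linkage on a typical glibc-based system (Debian/Ubuntu-family paths shown; they vary by distro):

ldd /bin/ls        # lists the shared libraries ls was linked against, including libc.so.6
file /bin/ls       # "... dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2 ..."
/lib/x86_64-linux-gnu/libc.so.6    # glibc runs as a program and prints its own version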



The key concept of bundles / VMs, especially for cloud:

+/-
A container/bundle or VM doesn't contain any persistent data. Its definition (the "image") contains code and a configuration for executing that code. Each execution of it is transient.

There will be a transient state as it executes, but then when execution finishes (or crashes, or the session is abandoned by the user) the whole state will be discarded and the execution resources (CPU, RAM, IP address, etc) re-assigned.

The persistent data it operates upon is stored elsewhere, in a database or filesystem, usually across a network. article

Really, this is the same as the normal distinction between application and process. An application is the code, ready to execute, but contains no persistent data. A process is a transient execution, and operates on persistent data in a database or filesystem.
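
A minimal sketch of this with Docker (using the official postgres image as an example): the container is disposable; the persistent data lives in a named volume:

docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres

# destroy the container; the data in the volume survives
docker rm -f db

# a brand-new container picks up the same data
docker run -d --name db2 -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres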



One issue: Who created the VM or container/bundle you're going to use ? Many of them are created not by the original app dev, but by some unknown helpful third party. How do you know you can trust that third party ?





Diagrams



(In these diagrams, each horizontal rule separates a layer from the layer it runs on; items shown side by side above the same rule share the layer below them.)

Virtual machines:

    App #1   App #2             App #3   App #4
    VM #1: Linux                VM #2: Windows
    [^ API is libc ^]           [^ API is win32 ^]
    --------------------------------------------------
    Hypervisor: VMware                                   App #5   App #6
    [^ API looks like bare metal: CPU, RAM, devices, etc ^]
    ---------------------------------------------------------------------
    Native OS: Linux [^ API is libc ^]
    ---------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]

    (Apps #5 and #6 run directly on the native OS, beside the hypervisor.)


Emulators:

    App #1                      App #2
    Emulator #1: WINE           Emulator #2: Anbox
    [^ API is win32 ^]          [^ API is Android ^]         App #3   App #4
    -------------------------------------------------------------------------
    Native OS: Linux [^ API is libc ^]
    -------------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]


Docker:

    Docker container #1         Docker container #2
    Docker Engine [^ API is libc ^]                      Database #3   App #4
    -------------------------------------------------------------------------
    Native OS: Linux [^ API is libc ^]
    -------------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]

    Docker container #1         Docker container #2
    Docker Engine (includes a Linux VM) [^ API is libc ^]    Database #3   App #4
    -----------------------------------------------------------------------------
    Native OS: macOS [^ API is libc ? ^]
    -----------------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]

[Normal situation on Windows]

    Docker container #1         Docker container #2
    Docker Engine (includes a Linux VM) [^ API is libc ^]    Database #3   App #4
    -----------------------------------------------------------------------------
    Native OS: Windows [^ API is win32 ^]
    -----------------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]

[Unusual case: Docker Enterprise on Windows]

    Docker container #1         Docker container #2
    Docker Engine [^ API is win32 ^]                         Database #3   App #4
    -----------------------------------------------------------------------------
    Native OS: Windows with Hyper-V (Docker Enterprise) [^ API is win32 ^]
    -----------------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]


Flatpaks:

    Flatpak app #1         Flatpak app #2         Flatpak app #3
    (on freedesktop)       (on GNOME)             (on KDE)
    (on bubblewrap)        (on bubblewrap)        (on bubblewrap)       App #4
    --------------------------------------------------------------------------
    Native OS: Linux [^ API is libc ^]
    --------------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]


Snaps:

    Snapd          Snap app #1          Snap app #2
                   (on AppArmor)        (on AppArmor)        App #3   App #4
    ------------------------------------------------------------------------
    Native OS: Linux [^ API is libc ^]
    ------------------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]


AppImages:

    AppImage app #1        AppImage app #2        App #3   App #4
    -------------------------------------------------------------
    Native OS: Linux [^ API is libc ^]
    -------------------------------------------------------------
    Bare metal [^ CPU, RAM, devices, etc ^]




The key concept of container/bundle / VM layers:

+/-
There is a foundation layer with a fixed API.

For a container/bundle, it is an API provided by the container/bundle daemon / framework, which usually is the libc API for some LTS release of a Linux distro, such as Ubuntu 18.04 LTS.

For a VM, it is the API/devices/memory-map of a standard Intel PC.



Erika Caoili's "Linux Basics: Static Libraries vs. Dynamic Libraries"
package "apt show libc6"; library "locate libc.so" at /usr/lib/x86_64-linux-gnu/libc.so and /snap/gimp/273/usr/lib/x86_64-linux-gnu/libc.so etc. There also is /snap/gimp/273/usr/lib/x86_64-linux-gnu/libsnappy.so





Hypervisor



Also called a "Type 1 Hypervisor", "Bare-Metal Hypervisor", or "paravirtualization".

A thin abstraction layer between the hardware and the VMs. This acts as a referee that controls access to hardware from the VMs.

ResellerClub's "Type 1 and Type 2 Hypervisors: What Makes Them Different"
Kelsey Taylor's "10 Best Open Source Hypervisor"
IBM's "Hypervisors"

From Teknikal's_Domain article:
+/-
Type 1 hypervisors are, usually, entire operating systems, but the defining fact is that they run directly on top of the physical hardware, which means they have direct access to hardware devices, and can give better performance just by not having to deal with anything else except their own tasks. As a good example, VMware ESXi is a Type 1 hypervisor.

Type 2 hypervisors are closer to a conventional software package running as a standard process on the host hardware's OS. While they're a bit easier to deal with and usually easier to play with, competing with other programs and abstracted hardware access can create a performance impact. QEMU and VirtualBox are two good examples here.

Note that the lines here are kinda blurred, for example, Hyper-V is a Windows service that communicates at a bit lower of a level, meaning it has characteristics of both type 1 and type 2, and KVM for Linux uses the running kernel to provide virtualization, effectively acting like type 1, despite otherwise bearing all the classifications of type 2.








Virtual Machine



Also called a "Type 2 Hypervisor", or "Hosted Hypervisor". Runs on top of a host OS.

A virtual machine has a complete copy of an operating system in it; a container shares a single underlying OS with other containers, mediated by the container framework/engine. VMs are a much more mature technology and have CPU support, so are more secure in general. An emulator is a VM that has a veneer of a different operating system in it.

I think there are VM providers and then VM managers ? Each provider also includes a manager, but there are managers that can control many kinds of providers ? Not sure.

Bobby Borisov's "Virtualization on PC, Explained for Beginners with Practical Use Cases"
ResellerClub's "Type 1 and Type 2 Hypervisors: What Makes Them Different"
Kelsey Taylor's "10 Best Open Source Hypervisor"
IBM's "Hypervisors"



Providers:

+/-
  • VirtualBox:
    +/-
    Oracle VM VirtualBox

    Wikipedia's "VirtualBox"
    Available through Mint's Software Manager, but it's an older version.
    Oracle VM VirtualBox - User Manual

    OSBoxes' "VirtualBox Images"

    VirtualBox has these options for virtual network connections:
    • Not attached: network card but no connection.
    • NAT: VM traffic to LAN uses host's IP address.
    • NAT Network: private network on host machine, VM traffic to LAN uses host's IP address.
    • Bridged networking: VM uses its own network stack, connects to LAN, gets its own IP address, other machines on LAN can see it.
    • Internal networking: private network among VMs, no connection to outside.
    • Host-only networking: private network among VMs and host machine, no connection to outside.
    • Generic networking: strange modes.
    Linux Shell Tips' "Learn Different Networking Options in VirtualBox"
    6.2. Introduction to Networking Modes

    I installed VirtualBox 6.0.2 on Mint 19.1:
    +/-
    1. Went to the Download VirtualBox for Linux Hosts page and downloaded the Ubuntu 18.10 version.

    2. Also went to the Download VirtualBox page and downloaded the VirtualBox 6.0.2 Oracle VM VirtualBox Extension Pack.

    3. Ran deb file.

    4. Start the VirtualBox Manager ("Oracle VM VirtualBox" in Start menu), go to File / Preferences / Extensions, add the extension pack.

    5. Total installation took about 325 MB on /, according to "du -s -m /usr/bin/virtualbox /usr/lib/virtualbox /usr/share/virtualbox".

    6. By default, VM images are stored under your home directory, but you can change this.

    7. Download an ISO of the OS you want (if you don't want to use one from the default list from Oracle).

    8. Ran VirtualBox Manager. Clicked New, Type=Linux, Version=Other to create a VM image for Xubuntu 18.10. Chose 1024 MB RAM and 8 GB virtual disk (VDI, dynamically allocated). ISO itself is 1.5 GB.

      Clicked Start. Clicked folder icon to right of pull-down list. Picked Xubuntu's ISO file.

      Got "Starting VM / Creating process for VM" progress dialog in manager, and Xubuntu startup in VM window. Selected English and then Install. Selected Download Updates and Install Third-Party Software. Then "Erase disk and install Xubuntu" and "Install Now". Set user name and computer name and password (used same as for my real machine). Installer did a bunch of installing and downloading and updating. Eventually it said "Installation complete" and "You have to restart to use your OS". So I clicked Restart, and it rebooted and gave me a login screen. Logged in.

    9. On real OS, numbers look right. About 1+ GB more of RAM being used than before. About 6 GB of stuff under VM's folder under my home directory.

    10. In the VM, ran Software Updater and updated to latest Xubuntu apps etc. Did another restart, logged in, poked around, logged out. Went to VirtualBox Manager and Closed the VM, saving state. Now about 7.5 GB of stuff under VM's folder under my home directory.

    11. Downloaded a Lubuntu image from OSBoxes' "VirtualBox Images". Came as a 1.3 GB .7z archive, expanded into a 6 GB .vdi file.

    12. Ran VirtualBox Manager. Clicked New, Type=Linux, Version=Other to create a VM image for Lubuntu. Chose 1024 MB RAM and then "Use an existing virtual hard disk file". Clicked folder icon and then Add and then selected the .vdi file. Then clicked Create. New VM image appeared in the main list of the Manager.

      Selected the image and clicked "Start". Took about 20 seconds to get to a login screen. But I don't know the password. Back to OSBoxes, found username should be "osboxes", password "osboxes.org". Was able to log in.

      Took a while to figure out that the update manager is "Muon Package Manager". And I don't like it, but it does seem to be working. No updates available. Logged out and went to Manager and saved the state of the image.
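
    The same kind of setup can be scripted with the VBoxManage CLI instead of the GUI; a rough sketch (VM name, sizes, and ISO path are examples; options vary a bit between VirtualBox versions):

    VBoxManage createvm --name "Xubuntu" --ostype Ubuntu_64 --register
    VBoxManage modifyvm "Xubuntu" --memory 1024 --nic1 nat
    VBoxManage createmedium disk --filename "$HOME/VirtualBox VMs/Xubuntu/Xubuntu.vdi" --size 8192
    VBoxManage storagectl "Xubuntu" --name "SATA" --add sata
    VBoxManage storageattach "Xubuntu" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "$HOME/VirtualBox VMs/Xubuntu/Xubuntu.vdi"
    VBoxManage storageattach "Xubuntu" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium "$HOME/Downloads/xubuntu-18.10-desktop-amd64.iso"
    VBoxManage startvm "Xubuntu"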


    David Both's "Convert your Windows install into a VM on Linux"



  • VMware:
    +/-
    Wikipedia's "VMware"
    Mint's Software Manager seems to have some utilities for VMware, but not VMware itself.
    Mini vLab's "Hello, ESXi: An Intro to Virtualization (Part 1)"
    I've heard that VMWare is better than VirtualBox for heavy graphics use.

    VMware has three options for virtual network connections: bridged, NAT, and host-only.
    • Bridged: VM connects to LAN, gets its own IP address.
    • NAT: private network on host machine, VM traffic to LAN uses host's IP address.
    • Host-only: private network on host machine, VM not allowed to do traffic to LAN.



  • KVM and Qemu-kvm:
    +/-
    KVM (Kernel-based Virtual Machine): a kernel module providing most of the infrastructure that can be used by a virtualizer. Actual control of the virtualization is handled by a QEMU-based application. qemu-kvm only provides an executable able to start a virtual machine. libvirt allows managing virtual machines in a uniform way. Then virt-manager is a graphical interface that uses libvirt to create and manage virtual machines.

    To restate: KVM is the kernel-based virtual machine, QEMU is the quick-emulator (next level up), then libvirt orchestrates everything, and virt-manager (or GNOME Boxes) is the GUI.

    Wikipedia's "Kernel-based Virtual Machine"
    Wikipedia's "QEMU"
    QEMU
    ArchWiki's "KVM"
    ArchWiki's "QEMU"
    wimpysworld / Quickemu

    Mauro Gaspari's "Getting Started With KVM Hypervisor, Virtual Machines"
    Alistair Ross's "How to setup a KVM server the fast way"
    Carla Schroder's "Creating Virtual Machines in KVM: Part 1"
    Linuxize's "How to Install Kvm on Ubuntu 20.04"
    Alan Pope's "GNOME OS 40 without GNOME Boxes"
    I've heard that KVM really was designed for headless operation, but can do more.

    From Debian Administrator's Handbook's "Virtualization":
    +/-
    KVM, which stands for Kernel-based Virtual Machine, is first and foremost a kernel module providing most of the infrastructure that can be used by a virtualizer, but it is not a virtualizer by itself. Actual control for the virtualization is handled by a QEMU-based application.

    Unlike other virtualization systems, KVM was merged into the Linux kernel right from the start. Its developers chose to take advantage of the processor instruction sets dedicated to virtualization (Intel-VT and AMD-V), which keeps KVM lightweight, elegant and not resource-hungry. The counterpart, of course, is that KVM doesn't work on any computer but only on those with appropriate processors. For x86-based computers, you can verify that you have such a processor by looking for "vmx" or "svm" in the CPU flags listed in /proc/cpuinfo.

    With Red Hat actively supporting its development, KVM has more or less become the reference for Linux virtualization.
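
    A minimal command-line session, assuming Ubuntu/Debian package names (adjust the ISO path and os-variant):

    # check for hardware virtualization support (non-zero result means supported)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # install KVM, libvirt, and tools
    sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst

    # create and start a VM from an installer ISO
    sudo virt-install --name testvm --memory 1024 --vcpus 1 \
        --disk size=8 --cdrom ~/Downloads/xubuntu.iso --os-variant ubuntu18.04

    sudo virsh list --all    # list all VMs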

    From "Linux Bible" by Christopher Negus:
    KVM is the basic kernel technology that allows virtual machines to interact with the Linux Kernel.

    QEMU Processor Emulator: One qemu process runs for each active virtual machine on the system. QEMU provides features that make it appear to each virtual machine as though it is running on physical hardware.

    Libvirt Service Daemon (libvirtd): A single libvirtd service runs on each hypervisor. The libvirtd daemon listens for requests to start, stop, pause, and otherwise manage virtual machines on a hypervisor.

    The Virtual Machine Manager (virt-manager) is a GUI tool for managing virtual machines. Besides letting you request to start and stop virtual machines, virt-manager lets you install, configure, and manage VMs in different ways.

    The virt-viewer command launches a virtual machine console window on your desktop.

    From someone on reddit: "QEMU is L2 hypervisor and KVM is L1, which makes it a lot faster. QEMU works in the userspace, and KVM is a kernel module."

    From someone on reddit: "KVM-based solutions seem to need quite a lot of fiddling for the initial setup. KVM-based VMs also lack ease-of-use features like folder-sharing or USB-passthrough."

    From post by Ryan Jacobs: "libvirt+kvm is wayyyyy better than Virtualbox. My Android VM is incredibly snappy now. The mouse integration is better too."



  • Bochs:

    A PC emulator that emulates Intel CPU, common I/O devices, and a custom BIOS.
    Bochs


By the way, virtualenv for Python is just a way of running a Python app with a certain set of libraries. Despite the name, it is not a virtual machine, and the app is not isolated from the OS.
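
For example (standard Python tooling; the app still runs as a normal process with full access to the OS):

python3 -m venv ~/venvs/myapp        # create the environment: just a directory of libraries
source ~/venvs/myapp/bin/activate    # adjusts PATH so this environment's python/pip are used
pip install requests                 # installs into ~/venvs/myapp, not system-wide
deactivate                           # back to the system environment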

WSL2 on Windows 10 is a VM: you run a Linux kernel inside it.
Joey Sneddon's "How to Install WSL 2 on Windows 10"




Chromebook:
+/-
Crostini (AKA "embedded Linux (beta)") on Chromebook is a VM: you run a Linux kernel inside it.
From /u/rolfpal on reddit 9/2019:
Crostini is the same as the embedded Linux (beta). It runs an instance of Linux in a container, the container is "sandboxed", the beta comes with tools allowing you to run pretty much anything in the Debian distro. It does support GPU acceleration, but you have to set it up. Crostini is an official project of Google and is a work in progress.

Crouton is an un-official script that allows you to run an instance of Linux in "chroot", meaning it uses the Linux kernel of Chrome as the foundation for the distro of your choice. Crouton is more of a hack, and is suspect from a security point of view, but sometimes you can do more with it, particularly if hardware hasn't been activated in Crostini yet.
From /u/LuciusAsinus on reddit 9/2019:
Crostini can be run by a supported Chromebook as-is. Crouton requires you to put your computer into "developer mode", which is less secure, and requires a dangerous prompt whenever you reboot (dangerous in the sense that it says, essentially, "Something has gone horribly wrong, hit space NOW to make it all better", but if you DO hit space you wipe your computer, including your Linux partition). I lost my Linux 3 times when my kids used my computer; very pleased that Crostini doesn't have that problem, even if it's a bit less powerful than Crouton.
Crostini: Don Watkins' "Run Linux apps on your Chromebook"




Managers:

+/-
ZeroSec's "Learning the Ropes 101 - Virtualisation"
Bryant Son's "6 open source virtualization technologies to know in 2020"
da667's "Resources for Building Virtual Machine Labs Live Training"
SK's "How To Check If A Linux System Is Physical Or Virtual Machine"
SK's "OSBoxes - Free Unix/Linux Virtual machines for VMWare and VirtualBox"





Emulator



As described above: an emulator is a baby-VM that presents a veneer of a different operating system, mapping system calls of the "inside" OS to system calls of the real OS, rather than running a complete copy of the inside OS.

I think WSL1 on Windows 10 is an emulator; it maps Linux syscalls to Windows syscalls.





Container/Bundle Systems Overview



A virtual machine has a complete copy of an operating system in it; a container/bundle shares a single underlying OS with other containers/bundles, mediated by the container/bundle framework/engine. VMs are a much more mature technology and have CPU support, so are more secure in general. An emulator is a VM that has a veneer of a different operating system in it.



Wikipedia's "Linux containers"
Alistair Ross's "What is Docker (and Linux containers)?"
Opensource.com's "What are Linux containers?"
Merlijn Sebrechts' "Why Linux desktop apps need containers"
Ingo Molnar's "What ails the Linux desktop" (2012)



OS building blocks:

+/-
Containers on Linux generally use namespaces, cgroups, seccomp, and maybe SELinux to confine the app and strip services from its environment. I think chroot and filesystem mounting are used on top of those. (See the example after the man pages below.)

Nived V's "4 Linux technologies fundamental to containers"

man namespaces
man cgroups
man seccomp
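
You can see namespaces in action without any container engine, using unshare from util-linux:

# start a shell in new PID and mount namespaces
sudo unshare --pid --fork --mount-proc bash
ps aux    # inside, only bash and ps are visible, and bash is PID 1
exit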



Some comparisons (focusing mainly on differences):

+/-
  • Docker: intended for server; multi-app; sandboxed; cross-platform; needs installed foundation.

  • Snap: for desktop and server and IoT; single-app; sandboxed; Linux-only; needs installed foundation.

  • Flatpak: desktop-only; single-app; sandboxed; Linux-only; needs installed foundation.

  • AppImage: for desktop and server; single-app; not sandboxed; Linux-only; no installed foundation.

  • Native binaries: for desktop and server and IoT; multi-app; not sandboxed; Linux-only; no special installed foundation.


I think usually a bundle/container image is built from native packages (the build recipe, e.g. a Dockerfile, specifies packages), but the resulting image contains a filesystem of installed files, not packages. So for example the recipe might specify package ffmpeg, and the image would contain the file /usr/bin/ffmpeg.
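
A minimal sketch of that with Docker (assuming the official debian base image; the recipe names a package, the built image contains the installed files):

cat > Dockerfile <<'EOF'
FROM debian:stable
RUN apt-get update && apt-get install -y ffmpeg
EOF

docker build -t myffmpeg .
docker run --rm myffmpeg which ffmpeg    # prints /usr/bin/ffmpeg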



Some issues that containers/bundles could/do solve:

+/-
  • Dependencies:

    From /u/lutusp on reddit 4/2020:
    +/-
    [They] solve the "dependency hell" issue by packaging all required dependencies with the application itself in a separate environment. This solves an increasingly serious problem (inability to install and run some applications) with another one -- an application's download and storage size and startup time goes up.

    By contrast, an application installed from the normal repositories must find all its dependencies (right version and properties) in the installed libraries, which unfortunately is a declining prospect in modern times.

    Note: This works in reverse, too: I once did a "sudo apt remove" of some packages, which unexpectedly removed my whole desktop with them ! I quickly re-installed the desktop packages. But the potential damage was a bit limited by the fact that several apps important to me (including password manager) are running as snaps, and a couple more (including Firefox) are running as flatpaks.

  • Separate the app from the distro:

    App updates independent from system updates, if user wishes. E.g. you could use a LTS system/distro while doing rolling updates of snap apps.
    Merlijn Sebrechts' "Why Linux desktop apps need containers"

    Shift burden of packaging work from many distro packagers / repo maintainers to one app packager/dev. Especially valuable for large and frequently-updated apps such as browsers, and large app suites such as Office suites.

    More direct connection between users and app developers. No longer a distro builder/maintainer between them.

  • Single source (store or hub) for software:

    (Although that can be bypassed if you wish.)

    Using multiple repos and PPAs is insecure, has lots of duplication, and is confusing to some new users.

    Many new users are familiar with app/extension Stores in Android, Apple, Chrome, Firefox, Burp Suite, VSCode, GNOME desktop, Thunderbird, more.

  • Per-app permission model:

    Many new users are familiar with an Android or iPhone model where they can set permissions per-app.

[For me, the two main items I want are bug-reporting straight to app developer (no distro middleman) and per-app permissions (in a way easier than AppArmor or Firejail).]



Michal Gorny's "The modern packager's security nightmare"



My cautions about app containers/bundles:
+/-
App containers/bundles are a good idea, but the current implementations are lacking:
  • Many containers/bundles have bugs with directory access or launching helper apps.

  • I wish there was a requirement that only the original dev of an app could make a container/bundle of it. How to know if I should trust some unknown "helpful" person who made a container/bundle for a popular app ?

  • Flatpak has a surprising and bad permission structure involving "portals", and apparently snap is adopting it too.






Docker    Docker logo



Basics
+/-
Intended for server; multi-app; sandboxed; cross-platform; needs installed foundation.

Docker seems to be mostly for server applications that the user connects to through a web browser, or that other apps connect to through some web API such as a RESTful API. But it IS possible to run a normal GUI app in a Docker container, by connecting from the app to the X display system in the base system: article1, article2

One difference between Docker and Snap/Flatpak/Appimage: you can run a whole collection of apps/services in one Docker container, with layers and IPC etc. The others generally are single-application (it could launch child processes, but I think they'd be outside the containment).

  • "Docker Hub" is a repo of images, but anyone can push to it, so no security guarantees, and many images have no descriptions at all. Better to use Official Images. Also there's LinuxServer.io.
  • An "image" is a static file in the hub or installed into your system.
  • A "container" is a running instance of an image.
  • A "container ID" is a unique ID that identifies a running container.
  • A "swarm" is a cluster of running Docker Engines (probably spread across multiple hosts that can be managed together.

Note: Docker Hub and docker.com seem allergic to Privacy Badger or Facebook Container or Firefox, not sure. I have to use a Chrom* browser to access them.

From Teknikal's_Domain article:
+/-
Docker containers are really meant to be tight little self-contained boxes meant to do one thing and one thing only. If your app needs a web server and a database, make two containers, one for each, and link them together into their own little isolated network. In this sense a Docker container is really just running any regular command inside an isolated space where it can't interact with anything other than what it's given and what's been explicitly allowed.

Docker uses a tiny little bit of a runtime, containerd, that makes a slight bit of an abstraction layer. Each container is formed from an image, which is a filesystem and some extra configuration data. That filesystem is a series of layers, each representing modifications (deltas) to the previous. Each container also has an entrypoint, an executable program in the container namespace to use as process 1. This can be a shell like /bin/bash, but it can also be an app wrapper that does nothing else except start the service. The two main ways a container can interact with the outside world are through volumes and ports.

A volume is a named location in the container filesystem that the host's filesystem can be mounted at, either as a raw path, or a named volume managed by Docker. For example, to give a container access to the Docker runtime, you can map the host's /var/run/docker.sock to the container's /var/run/docker.sock.

A port is a network port that the container image has stated it expects traffic on. An image for a web server might, say, call out ports 80/tcp and 443/tcp as ones that it's going to use. These can be mapped to any available host port (through some Linux networking magic), but generally are mapped into the ephemeral port range of 32768-60999 (at least for Linux).
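
For example, mapping a port and a volume when starting a container (using the official nginx image):

# host port 8080 -> container port 80; host directory mounted read-only into the container
docker run -d --name web -p 8080:80 -v "$PWD/site":/usr/share/nginx/html:ro nginx
curl http://localhost:8080/    # served from ./site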



Docker
Wikipedia's "Docker"



Images and getting started


Details
+/-
Ubuntu 20.04 has a snap for Docker, and a "docker.io" deb, but no "docker-ce" package in the distro repos.
Most articles recommend installing straight from the Docker site, not distro repos, which usually are a bit outdated.
If you want the latest deb, you'll have to add Docker's own apt repository:
Linuxize's "How to Install Docker on Ubuntu 20.04"
Bobbin Zachariah's "How to Install Docker on Ubuntu 20.04"
Docker's "Install Docker Engine on CentOS"

On Mint, maybe "sudo apt install docker", "man docker", "man docker-run", "docker help", "sudo docker info", "sudo docker images", "sudo docker ps".

Apparently there are multiple versions of Docker: docker (old), docker-engine (old), docker.io, docker-ee (Enterprise Edition), docker-ce (Community Edition).

For Mint 19, use name "bionic" anywhere you see "$(lsb_release -cs)" and follow instructions in Docker Docs' "Get Docker CE for Ubuntu"
Also Tenbulls' "Installing Docker on Linux Mint"

Installed Docker-CE on Mint 19.1:
+/-

+/-
# get rid of some stuff from previous attempts
sudo apt remove docker docker-engine docker.io containerd runc
sudo rm -rf /var/lib/docker
sudo rm /etc/docker/key.json
# reboot for good measure

sudo apt update

# Install packages to allow apt to use a repository over HTTPS:
sudo apt install apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Verify that you now have the key
sudo apt-key fingerprint 0EBFCD88

# add stable repository
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

# install docker CE
sudo apt update
sudo apt install docker-ce
# at the end got "Job for docker.service failed because the control process exited with error code."
systemctl status docker.service
# many msgs, ending with "Failed to start Docker Application Container Engine."
# rebooted to see if changed, and it did, looks good

# Verify that Docker CE is installed correctly by running the hello-world image
sudo docker container run hello-world

# Another check
sudo docker run -it ubuntu bash

# allow your normal user account to access docker socket
sudo groupadd docker
sudo usermod -aG docker $USER
# log out and back in
# test that it works
docker run hello-world
# failed with "Got permission denied while trying to connect to the Docker daemon socket ..."
# but after overnight/reboot, it works

# if you see "WARNING: Error loading config file ..."
sudo chown "$USER":"$USER" /home/"$USER"/.docker -R
sudo chmod g+rwx "$HOME/.docker" -R

# list docker images
sudo docker image ls
# Yikes !  Docker is taking up about 6 GB on / !
# 5.6 GB for the OpenVAS image alone.

# make docker run upon boot (didn't do this)
sudo systemctl enable docker

# tried creating /etc/docker/daemon.json containing:
{
	"dns": ["8.8.8.8", "8.8.4.4"]
}
but it didn't fix my problems with OpenVAS

Later an update for Docker came through the normal Mint Update Manager, but the update failed a bit, some errors in the final scripts.

Docker creates a lot of rules in the iptables FORWARD chain. It also creates a bridge device: a "docker0" network interface you can see via "ip -c addr".


Installed snap of Docker on Ubuntu GNOME 20.04:
+/-

# if deb/apt is installed, remove it:
apt list docker.io
sudo apt purge docker.io

sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker	# launch new shell, changing group of current user to "docker"
snap install docker

snap info docker
docker --version
docker.help
docker --help | less
docker info | less

docker images | less	# show currently installed images
docker ps				# show currently running containers

docker run hello-world

docker run -p 2368:2368 ghost 	# on port 2368; see https://hub.docker.com/_/ghost

docker search weather
# To see DockerHub page for an image, go to "https://hub.docker.com/r/IMAGENAME".
docker pull rivethead42/weather-app	# install image into system
docker images | less	# show currently installed images
docker image inspect IMAGENAME | less
docker image history --no-trunc IMAGENAME >image_history		# see steps used to build the image
docker run -it IMAGENAME sh	# IF there is a shell in it, runs image and gives shell, so you can see contents
docker run -p 3000:3000 rivethead42/weather-app
# says "listening on port 3000"
# should be accessible via http://localhost:3000/
docker container list
docker stop CONTAINERID

docker images
docker rmi IMAGEID
# if it says there are stopped containers:
docker rm CONTAINERID

docker system prune -a --volumes
docker images
snap remove docker --purge
id		# if you're still running as group "docker", ctrl+D to get out
sudo delgroup --system docker

[Recommended:] Installed deb of Docker on Ubuntu GNOME 20.04, generally following Bobbin Zachariah's "How to Install Docker on Ubuntu 20.04":
+/-

sudo apt update
sudo apt install apt-transport-https ca-certificates curl software-properties-common gnupg-agent
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update

apt-cache policy docker-ce
# see that all the versions are on docker.com
sudo apt install docker-ce

sudo systemctl status docker

sudo usermod -aG docker $USER
newgrp docker	# launch new shell, changing group of current user to "docker"
id				# see that you're in group "docker" now

docker run hello-world

docker run -p 2368:2368 ghost 	# on port 2368; see https://hub.docker.com/_/ghost
docker ps -a
# in browser, go to http://localhost:2368/

docker pull spiffytech/weather-dashboard
docker images | less	# show currently installed images
docker image inspect spiffytech/weather-dashboard | less
docker inspect spiffytech/weather-dashboard | grep -i -A 1 'ExposedPorts'
docker inspect -f '{{ .Config.ExposedPorts }}' spiffytech/weather-dashboard
# see steps used to build the image:
docker image history --no-trunc spiffytech/weather-dashboard >image_history

docker run -p 8080:8080 spiffytech/weather-dashboard
# says http://127.0.0.1:8080 or http://172.17.0.2:8080
# server ran, but browsers see a placeholder page from the app, probably not a Docker issue
# server says ctrl+C to kill, but that doesn't work
docker container list
docker stop CONTAINERID

docker images
docker image rm --force spiffytech/weather-dashboard

docker run -it IMAGENAME sh	# IF there is a shell in it, runs image and gives shell, so you can see contents

docker info | grep "Docker Root Dir"
sudo ls /var/lib/docker
docker info | grep "Storage Driver"
sudo ls /var/lib/docker/overlay2

# remove Docker
docker container stop $(docker container ls -aq)
docker images
docker system prune -a --volumes
docker images
sudo apt purge docker-ce
sudo apt autoremove
cat /etc/apt/sources.list
sudo add-apt-repository -r "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
cat /etc/apt/sources.list
sudo apt update




Create an image


Evaluations
+/-
From people on reddit 5/2020:
+/-
Docker gives similar benefits to a VM, but is very lightweight. There is negligible difference between running a normally-installed app and that same app in a container, in terms of memory, cpu performance, or disk speed. There can be significant disk space overhead, however, depending on the images you use.

A container image usually consists of a distro [really just libraries ?] and application(s), but not a kernel or init system. The host kernel is shared with the containers.

There are many benefits:

  • If set up properly (e.g. non-root user), a compromised app cannot affect your host system. No services run in a container, only the app you specify. No ports are accessible, except those you specify. You can make the entire filesystem read-only. You can isolate a container's network from the host and other containers.

  • Most or all of the app setup and configuration is packaged into a container, often by an expert. This can be a huge time-savings.

  • You can start over at any time. You can blow away a container in seconds. If you want, you can spin it back up in the original state.

  • You can run many containers at a time. There is little overhead except for whatever resources the apps take.

TBH, I don't see many downsides compared to a traditional app install. Maybe some people are scared of the unknown? Keeping containers updated can be an issue. There are other similar technologies I like better, like podman. I'd use Kubernetes if I was dealing with a web-scale app.

...

Be careful on advertising the security angle. Containers definitely provide separation but they don't actually have much in the way of security provisioning. I work in this segment of the industry and we don't even really trust actual VMs (eg, Xen or KVM) as security features and containers even less so.

...

I think the security angle of containers is an overhype situation. Any security they do provide is minimal and could be done better with SELinux and standard DAC.

...

From a security perspective, containers DO provide an atomic deployment model, which prevents bespoke server configs, and enables easy patching. I think that's a boon.

...

There's plenty of docker images that run as root, or ask you to mount the docker socket inside the container itself.

The docker daemon is a security nightmare. And in addition, containers allow you to easily bypass auditing: "docker run -v /:/host fedora:latest bash -c 'rm -rf /host'"

There's plenty of ways of shooting yourself in the face. Podman follows a much better security model and allows rootless containers (but still lacks some features, such as nested podman in unprivileged containers).

...

I would say one of the biggest negatives is security updates. Your system has a whole bunch of packages and you do updates and it updates all the stuff on your system. Notably, it doesn't update the stuff your docker images contain, so all your containers miss out on all system updates. You instead must rely on the image maintainer to update the image and then you must pull the updated image. Often the image never gets updated. This means that while docker is pretty effective at keeping stuff in the containers, it tends to be easier to attack them and the stuff inside the containers is more vulnerable.

...

One advantage not mentioned is you can just run Docker containers without actually installing anything other than Docker. You can run a dozen containers and then just remove them all and have your vanilla OS. It makes it very easy to get your machine up and running again after a format or swapping out your pi's SD card.

...

Also avoiding "dependency hell" or conflicting packages on the same system. Want to run 4 different versions of Ruby on the same box with relative ease? Docker.

...

Docker versus Snaps: they are similar only in that both are called "containers" and both have runtimes written in golang. Completely different purposes and design. Docker puts a custom execution environment around a process or maybe a couple and knits together all the different bits and pieces through virtualized network communication (this is to make things "scalable", you can knit together several servers as a cluster and spawn more processes of one kind or other as they're needed). Snaps put a custom execution environment around a complete application with the idea that it is decoupled from the host system in a few interesting ways. Snaps are all-in-one, and meant to be a way for system users to consume software. Docker containers are convenient minimal execution platforms for small pieces of a larger service system.

...

I like that I can basically just build an image and run it wherever I need to whether it be from container registry or building from a Dockerfile. It works really well with making infrastructure more immutable. Rather than patching stuff, I just create a new updated image, regression test and swap the new one in which results in less downtime overall. I don't really want to install a ton of dependencies on hosts if I can avoid it. I'd rather just isolate them to a container.

The benefits grow too when adding container orchestration like Kubernetes. It makes it easier to use containers in things like HA, services that need load balancers, maintaining healthy services, etc.

From someone on reddit:
TBH I don't like to use someone's image [from Docker Hub] unless they have a GitHub repo link and I can inspect the Dockerfile and any scripts added to the image. Even then I like to build my own image based off theirs. This is also a good excuse to learn how to build a docker image. I've used this method and moved all my workloads minus system mail messages to docker containers. Makes rebuilding and redeploying a server super simple and fast.

Paraphrased from someone on reddit:
"The Docker user/group can command the Docker daemon through a socket. If that user/group issues a command such as 'docker run -v /:/mnt:rw debian', the root user inside the container will have full root access to the host filesystem. So you must treat that Docker user/group as if it has sudo or root privileges."

Docker has a known problem where it adds an iptables rule, breaking firewalling ? issue 4737 and article and ufw-docker. Best way to fix is Docker's "Docker and iptables" ? Also: "fail2ban by default operates on the filter chain, which means Docker containers are not being filtered the way you might expect."
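
Docker's suggested mitigation is to put your own rules in the iptables DOCKER-USER chain, which is evaluated before Docker's own forwarding rules; for example (interface and subnet are examples, adapted from Docker's "Docker and iptables" page):

# allow only one subnet to reach published container ports via eth0
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP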





Flatpak    Flatpak logo



Basics
+/-
Desktop-only; single-app; sandboxed; Linux-only; needs installed foundation.

(Originally xdg-app)
Wikipedia's "Flatpak"
Flatpak.org
Joey Sneddon's "Linux Mint Throws Its Weight Behind Flatpak"
Cassidy James Blaede's "elementary AppCenter + Flatpak"



Images and getting started


Details
+/-

man flatpak
# if not installed:
sudo apt install flatpak

flatpak install flatseal	# for managing permissions

# list known remote repositories
flatpak remotes
# your distro may have a distro-specific or DE-specific repo specified
# if empty, do:
sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

# find flatpaks
flatpak search weather

flatpak install flathub Meteo
flatpak list
flatpak run com.gitlab.bitseater.meteo
flatpak uninstall Meteo

# list info about installed flatpaks
flatpak list
flatpak list --app
flatpak list --all

flatpak install APPNAME
flatpak list					# make note of APPID
du -sh /var/lib/flatpak/app/*	# sizes of all installed apps
ls -l ~/.var/app				# user's config should be under here
flatpak info APPID
flatpak permission-show APPID
# to change permissions, run flatseal
flatpak info -m APPID	# shows SOME of the settings done by flatseal
flatpak info --show-permissions APPID	# shows rest
# BUT: any directory permissions are overridden by anything
# the user does in a file open/save dialog; no way to stop
# the user from reading/writing where they choose in a dialog.

ls /var/lib/flatpak

# if app has command-line options:
flatpak run APPFULLYQUALIFIEDNAME --help

# cleanups:
sudo bash -c "! pgrep -x flatpak && rm -r /var/tmp/flatpak-cache-*"
flatpak uninstall --unused

# Removing Flatpak entirely
flatpak list --app     # see apps only
# remove the apps
flatpak list           # see everything
# only support runtimes should be left
# now run native package manager to remove flatpak
# then cleanup:
sudo rm -r /var/tmp/flatpak-cache-*

To get beta versions of apps, you need to add the Flathub Beta repo:

sudo flatpak remote-add flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo
sudo flatpak install flathub-beta APPNAME
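
Static permissions can also be changed from the command line with "flatpak override" instead of Flatseal; for example, using the Firefox flatpak:

flatpak override --user --nofilesystem=home org.mozilla.firefox    # revoke home-directory access
flatpak override --user --show org.mozilla.firefox                 # list current overrides
flatpak override --user --reset org.mozilla.firefox                # remove all overrides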

Disappointed with Flatpak security model:
+/-
I installed the flatpak image of Firefox browser, and thought it had a bug. I have flatpak permissions set (through flatseal) to not allow access to ~/Documents, yet I can save files into there.

Got response from someone on reddit:
There are two different kinds of permissions, static and dynamic ones. Static permissions are those that an app always has (as long as it is running); Flatseal is for managing those. Dynamic permissions are granted (via something called "portals") only when they are needed, and that involves a user interaction (in your case, that was when you selected the file in the file chooser).

Generally the goal is that apps should use portals as much as possible, so that the user doesn't have to manually manage permissions. Another advantage of portals is that the permissions are much more targeted; an app doesn't need read access to all of ~/Documents just to read one file from there.

But of course porting apps to portals (and implementing the necessary portals) takes time, and that's why static permissions exist.

...

[There is no way to prevent the user from writing a file to anywhere they choose. User action in a GUI dialog overrides any flatpak/flatseal permission settings.]

[The purpose of Flatpak's sandboxing is to restrict what each app can do (when not in a user-driven dialog).]

This is surprising to me, and not what I want. I want something consistent with the Linux permissions model, where you set a rule (directory permissions, for example) and that rule is enforced always. The rule is not context-dependent (e.g. CLI app can violate it but GUI app can't, or something).

I want to set my browser so it can access about 4 dirs in my home directory, and that's it, no matter how it tries to access anything else. I want to set my password manager so it can never do any network access, no matter how it tries to do so.

From someone else on reddit:
The file selection dialog is not Firefox, it's part of the desktop portal implementation, it's running directly on the host without the Flatpak sandbox. If you chose a file with the file dialog then you gave Firefox permission to read and write to the file, that's up until Firefox drops the file handle, so it's one-time access.

There's no way to do that with folders, the file selection dialog can only give access to files.

If an app by default has access to a folder then you can use an override to block it. You can also do block access globally and then enable per app, and this is what I'm doing.

> how do I set Firefox permissions to say
> "only access to these 4 dirs in my home dir, nothing else anywhere" ?

...

> By default, I seem to be able to save to ANY folder.

That's not Firefox; this is the FileChooser portal from the XDG Desktop Portal implementation, which is running on the host and which allows one-time access to a specific file (or creating one), and that's only if you clicked on the O.K./confirmation button. The browser by itself can only access the downloads folder.

> This seems to be the heart of the issue. I see one app, Firefox.
> You seem to see two. I don't care if an access it does is stimulated by the user,
> a timer, JS on a web page, a network transaction, whatever. I want that access
> to be restricted by the permissions I set on the Flatpak image.
>
> Flatpak seems to have invented a bifurcated permission structure, where
> accesses stimulated by user action get treated one way, and all others
> get treated another way.
>
> And even in Flatseal, when you set the sliders to "off" for
> "filesystems / all system files" and "filesystems / all user files",
> you're just affecting the non-portal accesses.
>
> And there's no way to make the permission settings apply to the portals, right ?
>
> I want confidence that my app X can NEVER access any files under folder F
> in ANY way, even if I push buttons to try to make it do that. I deliberately
> set a security policy, and I should not be able to casually violate it with
> a file-choose dialog.

From Flatpak's "Sandbox Permissions":
"In many cases, portals use a system component to implicitly ask the user for permission before granting access to a particular resource. For example, in the case of opening a file, the user's selection of a file using the file chooser dialog is interpreted as implicitly granting the application access to whatever file is chosen."

Apparently "portals" are a Flatpak thing, even though they're named "org.freedesktop.portal.*":
Flatpak's "Portal Documentation"
mclasen's "Flatpak - a look behind the portal"

Apparently this is designed, will-not-fix behavior of Flatpak: "Filesystem permissions lead to bypassing the sandbox and that is something people must be conscious of." response to issue 3637.

And the point of that bug report is that these permission settings are kind of useless, because any user of the app could save a new permission config file over top of the existing one.

I submitted Flatpak feature request 3977. Also filed Flatseal feature request 196.

[Now, talking to the Snapcraft guys, I hear they're ALSO going to adopt portals ! This ruins container permissions for me !]

Someone said there is an option to do "flatpak run --no-documents-portal", but it may break some apps.

Much smaller issue: If you save a file somewhere, the app grants permission to itself to access that file, which is surprising and seems likely to create clutter and dangling grants.

Daniel Aleksandersen's "How to back up Flatpak user-data and apps"



Evaluations
+/-
Flatpak - a security nightmare
Flatpak - a security nightmare - 2 years later
TheEvilSkeleton's "Response to flatkill.org"
Joey Sneddon's "Flatseal Makes it Easy to Manage Flatpak Permissions"
Claim on reddit: "Flatpak has huge security loophole called portal where an app can get access to private data."

From someone on reddit:
The reason that Flatpak doesn't require root permissions [for installs] is because it doesn't change any files that require root permissions. In other words, it installs applications on a per-user basis in your home folder. You'll notice that the software you install for one user doesn't appear for the others.

From someone on reddit:
Re: Flatpak when compared with AppImage?

Portability:

Flatpak is a sandbox containing a complete runtime and application. This means it is portable to a large range of systems without any extra testing or integration work required.

AppImage has no concept of "runtimes" and it relies on the host for the majority of libraries, meaning it is only as portable as the packager explicitly tests it to be, and it is very easy to make mistakes or introduce incompatibilities.

Security:

Flatpak as mentioned is a sandbox and is being developed in coordination with other projects to improve security.

AppImage has no sandbox and you have to rely on using external tools to add such things. In some cases not being sandboxed would be considered an advantage.

Distribution:

Flatpaks are distributed via repositories which anybody can host, so users can get efficient updates in a timely manner.

AppImage again relies on extra tooling for this so you often don't get updates and they are not distributed in an efficient format.

The reason for AppImage's success is that the developer is very prolific in doing the work of packaging everything he finds on the internet. And I guess it's a cool demo to just click on a binary. It isn't a particularly advanced or forward-looking technology.

I guess I'll also put a disclaimer that I contribute to Flatpak; Not because I am paid to or anything, it is just a solid technology that improves the ecosystem in my eyes.
From someone on reddit:
Re: Flatpak when compared with AppImage?

The one huge advantage of Appimages is that it allows you to keep multiple versions of the same software around, in a very trivial and easy to understand manner. The App is just a file after all.

Flatpak does not do this, it has some support for multiple versions, but the control over what versions you are allowed to keep is up to the developer not the user. So the developer can offer a "beta" branch, but if the beta borks the user has no way to go back to the previous beta version.

One area were both fail is modular tools. They both assume that your software comes as one big monolithic executable. If your software comes as a bunch of little tools that can be used together, you are SOL with both of them.

The sandboxing of Flatpak is so far mostly smoke and mirrors, as it lacks granularity and faking. Wouldn't trust it to stop any real attempt at breaking stuff at this point. Might change in the future.

The way Flatpak deals with dependency is honestly complete garbage, as it only allows three dependencies: Gnome, KDE and Freedesktop. That's it. If you have a simple Qt app, you have to suck in all of KDE. No granularity or extensibility.

Overall I prefer AppImage, since it's much simpler and easier to understand from a user's point of view. Flatpak adds a lot of complexity, while still failing to solve any of the interesting problems.

From people on reddit 10/2019:
Re: Flatpak when compared with Snap?

One big difference is that Flatpak is desktop only, snap is designed to also work for servers. As a developer/power user I use many CLI applications including proprietary ones, so from this point of view snaps are more flexible.

...

Flatpak is designed a lot cleaner imo. Snaps are squashfs files which integrate TERRIBLY in modern filesystems. On the other side, flatpaks use the OCI format which is a nicer approach for developers of applications and distributions.

...

Also Flatpaks should be generally much more storage-efficient as individual flatpaks can share runtimes and files for all installed flatpaks are automatically deduplicated.

From someone on reddit: "flatpak can't run container engines ... and so things like lxd, firejail, and/or docker can't run in a flatpak"

From someone on reddit 5/2020:
  • AppImage tends to blow up the size and is easy to get wrong, because it seems to work but then doesn't on other distros if you forget to bundle some library.

  • Snap only works well on few distros (full functionality requires AppArmor, which is incompatible with SELinux).

  • Flatpak is pretty much universally supported.


From someone on reddit 6/2020:
Terminal apps just work on snap, that's why Canonical heavily pushes snap on the server, while flatpak is completely useless here. So here I am, using flatpak on the workstations and snap on the servers.

...

The first issue is that flatpaks only work if a desktop is loaded. I think this is an issue that can be solved tho. The second issue is that flatpaks depend on so-called portals, again something that does not currently work without a desktop loaded. Then we have the issue that snaps can be called like normal apps in the terminal, while you have to do something like "flatpak run org.something.whatever", or you add a certain folder to your path and only have to call "org.something.whatever".

From someone on reddit:
> pros and cons of using the Firefox flatpak compared to a Firefox apt/rpm ?

For something performance-critical like a web browser, I would really stick with installing through [package manager]. Firefox's Flatpak is maintained by Mozilla themselves which is a nice touch. The sandboxing in Flatpak probably won't do too much for you in Firefox's case since you're gonna be poking a lot of holes for camera, sound, filesystem, etc. Take it with a grain of salt since I haven't actually read the manifest myself.

My experience with performance in Flatpaks comes from running Dolphin Emulator and dropping 30 frames a second compared to its DNF equivalent on my machine. I do use Flatpak for apps like RuneScape and Minecraft though and can't complain.

My usage of Flatpak is pretty strictly bound to installing closed-source third-party software. Discord, Slack, Spotify, etc. I do have Element (Riot) and Fractal installed through Flatpak for various reasons.






Snap (Snapcraft, Canonical)



Basics
+/-
Intended for desktop and server and IoT; single-app; sandboxed; Linux-only; needs installed foundation.

Snapcraft
Snapcraft.io forum (feature requests etc)



Images and getting started


Details
+/-

snap list							# see all snaps installed in system
sudo du -sh /var/lib/snapd /snap	# disk space of all snaps installed
snap find PATTERN					# find new snaps to install

snap install SNAPNAME
ls -l ~/snap					# user's config should be under here

snap info SNAPNAME				# show name, description, ID, etc
snap info --verbose SNAPNAME	# adds info about permissions
# see https://snapcraft.io/docs/snap-confinement

sudo snap get SNAPNAME			# show configuration (key-value pairs)
sudo snap get SNAPNAME KEYNAME	# show nested values for a key

sudo snap get system			# show configuration (key-value pairs)
sudo snap get system refresh	# show nested values for a key

# By default, last 3 versions of each app are cached.
# Change to minimum:
sudo snap set system refresh.retain=2

sudo apparmor_status			# see that snaps have profiles

# Where are the AppArmor profiles ?  I think the basic one is:
more /etc/apparmor.d/usr.lib.snapd.snap-confine.real
# and then individual tweaks are in:
cd /var/lib/snapd/apparmor/profiles
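
To poke at one snap's profile (the snap.SNAPNAME.COMMAND naming is what I see on my system; verify on yours):

ls /var/lib/snapd/apparmor/profiles                           # one profile per snap command
less /var/lib/snapd/apparmor/profiles/snap.firefox.firefox    # hypothetical example name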

Snap permissions:
+/-

snap info --verbose APPNAME		# overall info, including confinement
snap connections APPNAME
# the "plug" is the consumer, the "slot" is the provider
snap interfaces APPNAME

snap connect APPNAME:PLUGINTERFACENAME APPNAME:SLOTINTERFACENAME
snap connect APPNAME:PLUGINTERFACENAME :SLOTINTERFACENAME       # a system/core slot
Maybe add a connection to slot ":system-files" or ":removable-media" ?

Snapcraft's "Interface management"
Snapcraft's "Supported Interfaces"
But I don't see for example how to restrict KeePassXC to one dir for database and another for attachments. The app just provides a "keepassxc:home" plug, and that is connected to the ":home" slot, so app has full access to everything under my home directory ? There seems to be no way for me to define a custom slot, and the two names have to match anyway. My only choices are to disconnect completely, or connect completely ? [By default, an app can read/write in ~/snap/APPNAME/common and ~/snap/APPNAME/current.] Relevant: jdstrand comment 3/2018. Created feature request.
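
So in practice the choice looks like this (keepassxc plug/slot names as reported on my system):

snap connections keepassxc                # shows plug keepassxc:home connected to slot :home
sudo snap disconnect keepassxc:home       # revoke ALL access to home
sudo snap connect keepassxc:home :home    # ... or grant it ALL back; nothing in between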

I think permissions for a Snap image can be managed through the Snap Store ?

Only the snap's dev can change this: Snapcraft's "Snap confinement"

Ubuntu's "Create your first snap"
Jamie Scaife's "How To Package and Publish a Snap Application on Ubuntu 18.04"
Merlijn Sebrechts' "Verifying the source of a Snap package"
A snap can be built on top of various versions of the Ubuntu API ("bases"); 16.04 and 18.04 are available. See for example "core18" in output of "snap list".



How to prevent a snap from ever being updated:

Instead of running "snap install foo", do "snap download foo ; snap install foo.snap --dangerous". That sideloads the snap onto your system, so that it won't get updates from the store. (Doesn't work for "core" snap.)
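
Expanded a bit (the downloaded filename includes a revision number; foo_NNN is a placeholder):

snap download foo                             # fetches foo_NNN.snap plus foo_NNN.assert
sudo snap install foo_NNN.snap --dangerous    # sideloaded: no store signature check
# or keep signature checking by acking the assert file first:
sudo snap ack foo_NNN.assert
sudo snap install foo_NNN.snap                # still sideloaded, so I believe it won't auto-refresh either way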

Alan Pope's "Disabling snap Autorefresh"



dr knz's "Ubuntu without Snap"
Pratik's "How to Remove Snap From Ubuntu"



Alan Pope's "Snap Along With Me"
Alan Pope's "Hush Keyboards with Hushboard" (building a snap)





Snap Evaluations



Good things / intended features:

+/-
  • Install / remove app without affecting the rest of the system, especially other apps.

  • Bring dependencies with snap, so easier to install / remove.

  • Bring dependencies with snap, so fewer combinations / environments for devs and Support people to deal with.

  • There are some cases where app A wants to see glibc version N in your system, and app B wants to see glibc version N+1, and you want to use both apps A and B. With snaps, you can do that.

  • There are some cases (mostly devs, or multi-user systems) where you want to be able to install and run both version N of app A and version N+1 of app A in your system. With snaps, you can do that.

  • App updates independent from system updates, if user wishes. E.g. you could use a LTS system/distro while doing rolling updates of snap apps.
    Merlijn Sebrechts' "Why Linux desktop apps need containers"

  • App updates without having to reboot OS (some distros are forcing OS restart if you update any native packages).

  • Shift burden of packaging work from many distro packagers / repo maintainers to one app packager/dev. Especially valuable for large and frequently-updated apps such as browsers, and large app suites such as Office suites.

  • More direct connection between users and app developers. No longer a distro builder/maintainer between them.

  • Single source for software (Snap Store), although that can be bypassed if you wish. More familiar to new users who are used to single app/extension Stores in Android, Apple, Chrome, Firefox, Burp Suite, VSCode, GNOME desktop, Thunderbird, more.

  • When installing a deb, any scripts provided by the app dev run as root and unrestricted. When installing a snap, only snapd is running as root, any scripts from app dev are running non-root and contained.

  • A user who does not have root privileges can install a snap but not a deb.

Alan Pope's "8 Ways Snaps are Different"
Interview of Alan Pope

From /u/lutusp on reddit 4/2020:
+/-
Flatpaks, Snaps and Appimages solve the "dependency hell" issue by packaging all required dependencies with the application itself in a separate environment. This solves an increasingly serious problem (inability to install and run some applications) by creating another one -- an application's download size, storage size, and startup time go up.

By contrast, an application installed from the normal repositories must find all its dependencies (right version and properties) in the installed libraries, which unfortunately is a declining prospect in modern times.

From someone on reddit:
+/-
> What is the potential of snaps? What does it do better than apt?

Snaps are a great way to isolate the program you are executing from the rest of the system. So the main idea behind Snaps is security and ease of install (distro-agnostic), as .deb based programs (and many others like it) are able to access the entire disk (with read-only permission), which can create a lot of security breaches in the system overall. With Snaps you are able to control what the software can read/write, what kind of hardware it can access (i.e. webcam or a microphone) and a lot of other options.

From someone on reddit:
"snaps are compressed, and are not uncompressed for installation -- certain snaps actually are smaller than their installed deb-packaged counterparts"

From /u/timrichardson on reddit 1/2020:
Once, people said the GUI applications were way too full of bloat. And before that, people despised compilers; hand-crafted assembly language is smaller and faster. The history of coding is to trade off memory and disk space for more efficient use of humans; it's the history of the algorithms we use and the tools we use, it's the reason for layer upon layer of abstraction that lets humans steer modern computers. Like the arrow of time, this is a one-way trend, but unlike time, it doesn't just happen, it happens because it saves the valuable time of the creators: the coders, the packagers. Snaps and flatpaks are another example of this. The less time wasted repackaging apps for a million different distributions, the more apps we all get. When you've got 2% market share of a stagnant technology (desktop computing), you should grasp at all the help you can get, if you want to see it survive and maybe even thrive.

And by the way, the binary debs you are used to are not targeted or optimised for your hardware; they target a lowest common denominator. The difference can be significant: look how fast Clear Linux is. Maybe you should swap to Gentoo. My point is that you already accept bloat and performance hits in the name of convenience; you are used to it so you don't notice. But traditional packaging is an old technology; is it so surprising that there are new ideas?




Negative views:

+/-
From /u/10cmToGlory on reddit 2/2019:
+/-
The snap experience is bad, and is increasingly required for Ubuntu

As the title says. The overall user experience with snaps is very, very poor. I have several apps that won't start when installed as snaps, others that run weird, and none run well or fast. I have yet to see a snap with a start up time that I would call "responsive". Furthermore the isolation is detrimental to the user experience.

A few examples:
  • Firefox now can't open a PDF in a window when installed as a snap on Ubuntu 18.04 or 18.10. The "open file" dialog doesn't work. The downloads path goes to the snap container.

  • Stuff that I don't need isolated, like GNOME calculator, is isolated. Why do I care? Because as a snap it takes forever to start, and a calculator is exactly the kind of app I'd really like to have start quickly.

  • Other snaps like simplenote take so long to open I often wonder if they crashed.

  • Many snaps just won't open, or stop opening for a plethora of reasons. Notables include bitwarden, vscode (worked then stopped, thanks to the next point), mailspring, the list goes on.

  • The auto-updating is the worst thing ever. Ever. On a linux system I can disable auto-updates for just about everything EXCEPT snaps. Why do I care? Well, one day, the day before a deadline, I sat down to do some work, only to find that vscode wouldn't open. A bug was introduced that caused it to fail to open, somehow. As the snap auto-updated, I was dead in the water until I was able to remove it and install it via apt (which solved the problem and many others). That little auto-update caused me several hundred dollars in lost revenue that day.

  • Daemons have to be started and stopped via the snap and not systemd. This is a terrible design choice, making me have to change my tooling to support it for daemons (which I'm not going to do, by the way). A great example of that is Ansible - until very recently there was no support for snaps.

  • Logging is a nightmare. Of course all the logs are now isolated too, because for some reason making everyone change where to look for help when something is not working just sounds like a good idea. As if it's not enough that we have to deal with binary systemd logs, now we get to drill into individual snaps to look for them.

  • Most system tools are not prepared for containerization, and make system administration much more difficult. A great example is mount. Now we get to see every piece of software installed on the system when we run mount. Awesome, just what I wanted. This is just one example of many.

  • Snaps are slowing down my system overall, especially shutdown. Thanks to its poor design, there are multiple known issues with snaps and lxd, for example, shutting down running containers. This is just one of many issues that make me have to force shutdown my machine daily.

  • Creating a snap as a developer is difficult and the documentation is poor. You have to use an Ubuntu 16.04 image to create your snap, which alone makes it unacceptable. I found myself in dependency hell trying to snap-package some software that used several newer libraries than what Ubuntu 16.04 had on offer. The YAML file documentation is laughably bad, and the process so obtuse that I simply gave up, as it just wasn't worth the effort.

This is just the short list, using mostly anecdotes. I won't waste my time compiling a more extensive list, as I feel like the folks at Canonical should have done some basic testing long ago and realized that this isn't a product ready for prime time.

As for Ubuntu in general, I'm at a crossroads. I won't waste any more time with snaps, I just can't afford to and this machine isn't a toy or a hobby. It seems that removing snaps altogether from a Ubuntu system is becoming more and more difficult by the day, which is very distressing. I fear that I may have to abandon Ubuntu for a distro that makes decisions that are more in line with what a professional software developer who makes their living with these machines requires.

From /u/HonestIncompetence on reddit:
IMHO that's one of several good reasons to use Linux Mint rather than Ubuntu. No snaps at all, flatpaks supported but none installed out of the box.

From /u/MindlessLeadership on reddit 10/2019:
+/-
... issues with Snap as a Fedora user.
  • The only "source" for Snaps, the Snap store, is closed-source and controlled by a commercial entity, Canonical. Sure, the client and protocol are open source, but the API is unstable and the repository url is set at build-time. Even a Canonical admitted at Flock it was unpractical to build another source right now.

  • Snap relies on many Ubuntu-isms; it's obvious it was never originally made as a cross-distro package format. It's annoying to see it advertised as a cross-distro package format when, as a Fedora user, I can tell you Snap does not work nicely with Fedora (it has improved somewhat in the last year), with SELinux issues etc. At one point running Snap would make the computer nearly freeze up because the SELinux log would be getting flooded. It also relies on systemd; that itself isn't an issue but it raises design questions.

  • Similar to above, snapcraft only runs on Ubuntu. So you have to use Ubuntu to build a Snap.

  • /snap and ~/snap. If you don't create the former, you can't run 'classic snaps'. This not only violates the FHS, but doesn't work when / is read-only, such as on OSTree systems like Silverblue.

  • The reliance on snapd and on loopback mounting. I don't really like df showing a line for each application/runtime installed, even if it's not running, and the whole business of needing to mount potentially dozens of loopback files at boot for my applications seems like a massive hack. A recent kernel update on Fedora broke the way Snap mounts loopback files (although it was fixed). Snaps were also broken when Fedora moved to cgroups2.

  • Since they're squashfs images, you can't modify them if you don't have the snapcraft file. Flatpak, as a comparison, stores files you can edit in /var/lib/flatpak.

  • If I wanted to use Ubuntu to run my applications (Snap uses an Ubuntu image), I would use Ubuntu.

  • snapd needs to run in the background to run/install/update/delete Snaps. This seems like a backwards design choice compared to rpm and Flatpak, which elevate permissions where needed via polkit.

Canonical don't seem very interested in addressing any of these, which raises the question of whether it's to help the "Linux desktop world" or just push Canonical/Ubuntu.

From /u/schallflo on reddit 10/2019:
+/-
Snap:
  • Does not allow third-party repositories (so only Canonical's own store can be used).
    [But you could download snaps manually and install with --dangerous. Someone said also you could download and then "sudo snap ack yoursnap.assert; sudo snap install yoursnap.snap". Also, see lol.]
  • Only has Ubuntu base images, so every developer has to build on Ubuntu.
  • Forces automatic updates (even on metered connections).
  • Depends on a proprietary server run by Canonical.
  • Relies on AppArmor for app isolation (rather than using cgroups and namespaces like everyone else), which is incompatible with most Linux distributions, yet it keeps advertising itself as a cross-distribution package format.



Merlijn Sebrechts' "Why is there only one Snap Store?"
lol

Wikipedia's "Snap (package manager)"



From /u/ynotChanceNCounter on reddit 1/2020:
It's a bloated sandbox, tied to a proprietary app store, they've gone out of their way to make it as difficult as possible to disable automatic updates, so now trust in all developers is mandatory. Canonical's dismissive toward arguments against the update thing, they took the store proprietary and for their excuse they offered, "nobody was contributing so we closed the source." Excuse me?

And all the while, they're trying to push vendors to use this thing, which means I am stuck with it. And I'm stuck with the distro because they've got the market share, and that means this is the distro with official vendor support for d*mn near everything.

From people on reddit 3/2020:
+/-
Snap is pretty much hard-wired not only to Ubuntu, but also to Canonical. Snap can only use one repository at a time, and if it is not Canonical's, users will miss most of the packages. ... Also, some snap packages simply assume that the DE is GNOME 3.

...

... currently Snap (on the server side I think) is not yet open-source.



Snap automatic update issues

I think you also get updates on the developer's schedule. So suppose some horrible security hole is found in library X. Each snap (and flatpak and appimage) in your system may have its own copy of library X. You can't update one copy (package) of library X and know that the issue has been handled. [I'm told that flatpak allows sharing of libraries, if the developer sets that up explicitly, maybe in a case such as N flatpak apps from the same vendor.] [But see Drew DeVault's "Dynamic linking" (not about snaps).]



How is RAM consumption affected ? If I have 10 snaps that all have version N of a library, I'm told the kernel will see that and share the same RAM for that library. Suppose all 10 have SLIGHTLY different versions of that library, point-releases ?



Apparently at boot time there is a "mount" operation for each of your installed snap apps; see output of "systemd-analyze blame | grep snap-". But they're not actually slowing down startup: in my system, "sudo systemd-analyze critical-chain" shows about 1 msec due to snap stuff, and it's not those mount operations.
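
To check on your own system:

systemd-analyze blame | grep snap-                # one mount unit per installed snap revision
sudo systemd-analyze critical-chain               # are any snap mounts on the critical path ?
systemctl list-units --type=mount | grep snap     # the squashfs mounts themselves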



Many people complain that Snaps are slow to launch. Explanation paraphrased from /u/zebediah49: "Has to create a mount-point and mount up a filesystem, load up everything relevant from it -- and since it's a new filesystem, we've effectively nuked our cache -- and then start the application. In contrast to normal, where you just open the application, and use any shared objects that already were cached or loaded."
Daniel Aleksandersen's "Firefox contained in Flatpak vs Snap comparison"



I think Snap (and Flatpak) has no built-in crash-reporting mechanism similar to ubuntu-bug on Ubuntu or abrt on Fedora: something that gathers info for you and sends you off to the right place to report.



From people on reddit 4/2020 - 6/2020:
+/-
  • closed-source server component.
  • hard-coded canonical repos.
  • limited control over updates.
  • ubuntu pushes it in situations users feel it isn't useful (some default apps are snaps, apt can install snaps without the user noticing).
  • a few technical issues, like long startup time when launching an app for the first time (I've even seen cases where the app didn't launch at all the first time), theming issues, a too-restrictive sandbox, etc.
  • you can't move or rename ~/snap.
  • there are some security functions such as limiting which directories the snaps can access, and with development tools, having to redo your directory structures to accommodate draconian hard-coded paths is a PITA.
  • it is entirely within the control of canonical / Ubuntu with the snapcraft store being the only place to distribute snap packages.
    [But you could download snaps manually from anywhere and install them with --dangerous.]
  • it creates a bunch of virtual storage devices, which clutters up device and mount-point listings, and maybe slows booting.
  • bloats system with unnecessary duplicates of dependencies both on disk and in RAM.
  • snap allows designation of only one repo for all snaps; you can't list multiple.
  • some people say snap introduces yet another variable into "why doesn't app X use the system theme ?"
  • snaps won't function if the /home directory is remoted in certain common ways.
  • snapd requires AppArmor [true], won't work under SELinux [means "SELinux alone, without AppArmor" ?].
  • all snap-packaged programs have horrible locale support.
  • snap software doesn't work with Input Method. That alone makes snap totally useless for me as I cannot input my native language, Japanese, to the snap-packaged software.

The package manager is a constantly-running daemon (snapd), which just seems wrong and un-Linuxy.



One under-handed thing that Ubuntu 20.04 does: the deb package for the Chromium browser actually installs Chromium as a snap. IMO that's deceptive. If it's available only as a snap, don't provide a deb package at all.



Infrastructure-type additions that some people don't like: directory "snap" created in home directory, more mounts cluttering outputs of df/mount commands.



Apparently snapd requires use of systemd, and some people don't like systemd.
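
The dependency is easy to see; snapd is a long-running systemd service plus a socket:

systemctl status snapd.service snapd.socket
systemctl list-unit-files | grep '^snap'          # services, sockets, timers, mounts added by snapd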



Nitrokey's "NextBox: Why we Decided for and Against Ubuntu Core"



4/2020 I installed Ubuntu 20.04 GNOME, and let it use snaps:

+/-
Ended up with software store and 4 more snap apps in my user configuration (~/snap), and a dozen more for all users (/snap). They seem to work okay, with one big exception: when a snap app needs to launch or touch another app (Liferea launching any downloader, or VSCode opening a link in Firefox). This either fails (Liferea case), or works oddly (VSCode opens new FF process instead of opening a new tab in existing FF process). But: KeePassXC is a snap app, and has no problem opening a link in existing Firefox process. [Later someone said: VSCode is specifying profile "default", so if you've changed to another profile, FF has to open another process. Let it open FF, then set your desired profile as the default, and next time VSCode will open link in existing FF process.]

Some people complain that Ubuntu's store app prioritizes snaps ahead of debs (re-ordering search results to do so), and even has some debs (Chromium) that start as a deb but then install a snap.

Heard: the Chromium snap is broken for 2-factor authentication keys (U2F). Reported since mid-2018, some fixes in the pipeline, but still broken. Relevant: Ask Ubuntu's "How to remove snap completely without losing the Chromium browser?"

I'm told: Pop!_OS has adopted a no-snaps policy, Elementary OS has adopted a flatpaks-instead-of-snaps policy, Mint has a no-snaps-by-default policy.

The dev who packaged Liferea as a snap said fixing it is complicated; that came just as I was giving up on the snap version and changing to the deb version, which works.

VSCode as snap had a couple of issues: won't open a new tab in existing FF process, and seemed to be interpreting snap version of node incorrectly (said "v15.0.0-nightly20200523a416692e93" is less than minimum needed version 8). I gave up, uninstalled the snap version and changed to the deb version. Worked.

The node-based FF extension I was developing can't contact Tor Browser. Removed node.js snap, and did "sudo apt install nodejs" and "sudo apt install npm". But that didn't fix the problem.

9/2020: Changed Firefox in my system from deb to snap. Flatpak and snap have almost the same versions in them; the snap is a fraction more recent. I don't see a developer or nightly version available in either store/hub; apparently to get the flatpak beta you need to add a flatpak beta repo. Did "sudo apt remove firefox", "snap install firefox", then copied my profile from the old place to the new place; works. But then I started finding a host of bugs in Firefox, mostly having to do with pathnames.
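
For reference, the copy step was roughly the following; the snap profile path is my best recollection, so verify it on your system:

sudo apt remove firefox
snap install firefox
# old profile lives under ~/.mozilla/firefox ; I believe the snap reads from:
cp -a ~/.mozilla/firefox/. ~/snap/firefox/common/.mozilla/firefox/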

An attraction of containers is that the app dev can build the image and set the permissions, and you report any bugs straight back to the app dev (no middleman). But I'm finding a lot of containers where the app dev has NOT built the image, some third party built it. Which defeats much of the purpose.

11/2020: Changed Firefox in my system from snap to Flatpak. Snap version just had too many bugs.



Changes Canonical/Snapcraft could make to eliminate many objections:

+/-
  • Support an "update never" setting for a snap. Perhaps there could be a mechanism for notifying that an update exists, that the update fixes security issues, and/or current version is past EOL.

  • Open-source the proprietary part of the Snap store software.
    See some details in Merlijn Sebrechts' "Why is there only one Snap Store?"
    [A user could download snaps manually from anywhere and install them with --dangerous, but that's a bit of an ugly solution. Also see lol]
    From predr on Snapcraft forum 8/2020:
    "github.com/snapcore
    github.com/canonical-web-and-design/snapcraft.io
    Only parts missing are server code, Amazon S3 buckets, snap signing (assertions), and database APIs. You won't find these things open-sourced in any good store, for a reason. Everything else is open source."
    Response from Merlijn Sebrechts:
    "Canonical's official position is that the store is currently woven into their own internal infrastructure. Open-sourcing it would require a massive effort to untangle this and they don't think it's worth the effort."

  • Have some kind of policy board overseeing the store, that includes outside people.

  • Ban use of any "deb that actually installs a snap" packages. More of a distro policy issue, but snap could state it as the preferred policy.

  • Allow Ubuntu system owner to set policies such as "I don't want snaps in my system" and "prioritize apt first" in the Ubuntu Software application.






AppImage



For desktop and server; single-app; not sandboxed; Linux-only; no installed foundation.

Doesn't have the security/isolation features of other container systems, but does have the "all dependencies bundled with the app" feature.



AppImage
Wikipedia's "AppImage"
Abhishek Prakash's "How To Use AppImage in Linux"
Alexandru Andrei's "What Is AppImage in Linux?"
AppImageHub

Just find the site for an app you want, and see if they have an AppImage available, matching the CPU architecture you have. Download it and set execute permission on the file. Then run it.
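
The whole lifecycle is just file operations (URL and filename made up for illustration):

wget https://example.com/SomeApp-1.0-x86_64.AppImage
chmod +x SomeApp-1.0-x86_64.AppImage
./SomeApp-1.0-x86_64.AppImage
rm SomeApp-1.0-x86_64.AppImage        # "uninstall" is just deleting the file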

See what AppImage apps are installed: "sudo find / -name "*.AppImage" -type f -print | less" ?



The Changelog's "Please stop making the library situation worse with attempts to fix it"





Others



LXC

+/-
Wikipedia's "LXC"
linuxcontainers.org
Project home
Rubaiat Hossain's "Everything You Need to Know about Linux Containers (LXC)"
John Ramsden's "A Brief Introduction to LXC Containers"
LXD is a container management system which provides a VM-like experience using LXC containers. Each container (group of processes) can have a different (restricted) view of the system's process identifiers, network configuration, devices, mount points. An LXC container is not a VM, in that it just uses the host kernel, not an additional kernel.

From Debian Administrator's Handbook's "Virtualization":
+/-
Even though it is used to build "virtual machines", LXC is not, strictly speaking, a virtualization system, but a system to isolate groups of processes from each other even though they all run on the same host. It takes advantage of a set of recent evolutions in the Linux kernel, collectively known as control groups, by which different sets of processes called "groups" have different views of certain aspects of the overall system. Most notable among these aspects are the process identifiers, the network configuration, and the mount points. Such a group of isolated processes will not have any access to the other processes in the system, and its accesses to the filesystem can be restricted to a specific subset. It can also have its own network interface and routing table, and it may be configured to only see a subset of the available devices present on the system.

These features can be combined to isolate a whole process family starting from the init process, and the resulting set looks very much like a virtual machine. The official name for such a setup is a "container" (hence the LXC moniker: LinuX Containers), but a rather important difference with "real" virtual machines such as provided by Xen or KVM is that there is no second kernel; the container uses the very same kernel as the host system. This has both pros and cons: advantages include excellent performance due to the total lack of overhead, and the fact that the kernel has a global vision of all the processes running on the system, so the scheduling can be more efficient than it would be if two independent kernels were to schedule different task sets. Chief among the inconveniences is the impossibility to run a different kernel in a container (whether a different Linux version or a different operating system altogether).

From Teknikal's_Domain article:
+/-
LXC uses one additional program, lxd, and native features of the Linux kernel to orchestrate everything (kinda like Docker, but more extreme).

A Linux container, conceptually, is meant more as a general-purpose Linux environment, and is also, conceptually, simpler: a filesystem archive and a configuration metadata file. Yes, that's all there is to it. Every container is a full Linux userland: same systemd, same file tree, same everything. Unlike Docker images which are more meant to be specific to one "thing" at a time, a Linux Container is more like an entire VM that shares its kernel with its host.

...

... for LXC, a container is a cgroup/namespace combination: the cgroup to set up the container's resource limits, and a namespace that defines the container's boundaries and filesystem access and limitations.

All a container is, really, is a specified filesystem mount, and a configuration specifying what to allow. ...

...

Containers literally use the same kernel running as the host, meaning for the most part, they're free to interact with the outside world as long as it's within the bounds of their namespace. The only real controls to give are extra filesystem mounts that are allowed into said namespace, and what network interfaces and network abilities are permitted within said namespace. One advantage of using kernel features like that is that resource allocations can be changed live, unlike a VM, or, for that matter, a Docker container, which, by default, has no upper limits unless explicitly stated.

Using LXC:

man lxc
man lxc.conf
man lxc.container.conf
cat /etc/lxc/default.conf
cat /etc/lxc/lxc-usernet
ls /usr/share/lxc/templates
sudo lxc-checkconfig | less

sudo apt install lxc-templates
ls /usr/share/lxc/templates

sudo lxc-create -t alpine -n test-container
sudo lxc-start -n test-container
sudo lxc-console -n test-container
# log in as root, no password
# to disconnect the console without stopping the container, press Ctrl+a then q
sudo lxc-stop -n test-container


https://linuxcontainers.org/lxc/getting-started/
https://www.ubuntupit.com/everything-you-need-to-know-about-linux-containers-lxc/
https://www.how2shout.com/how-to/how-to-install-and-use-lxc-linux-containers-on-ubuntu.html
https://www.redhat.com/sysadmin/exploring-containers-lxc

lxc-create -n foo -f /etc/lxc/default.conf -t /usr/share/lxc/templates/lxc-local	# create container "foo" from a config file and template

lxc-execute -n foo [-f config] /bin/bash	# run an application (as PID 1)
lxc-start -n foo [-f config] [/bin/bash]	# run a system (lxc-init will be PID 1)

lxc-ls -f		# list all containers
lxc-info -n foo		# show state, PID, IP address etc of container "foo"

lxc-monitor -n ".*"		# monitor states of all containers

lxc-stop -n foo -k		# stop it (-k kills instead of requesting clean shutdown)

lxc-destroy -n foo		# delete the container's storage and config

Using LXD:
Vivek Gite's "Install LXD on Ubuntu 20.04 LTS using apt"
Alan Pope's "LXD - Container Manager"



Also see Firejail section of my Linux Controls page (very similar to LXC, but on a single-application basis, not a client-daemon architecture).

[There seems to be a fuzzy dividing line between permission-controls (AppArmor, Firejail, SELinux, seccomp) and containers (LXC, snap, flatpak, Docker). The former are facilities with permissions defined and stored in OS/framework, while the latter are packaging facilities with permissions and network configuration etc defined and stored in each package. Both of them have sandboxing/permissions and share one kernel among all packages.

There are clear dividing lines between those and virtual machines (which have a kernel per package) and bundles such as appimage and Python Virtualenv (which don't have sandboxing/permissions).]



Python Virtualenv


Doesn't have the security/isolation features of other container systems, but does have the "all dependencies bundled with the app" feature.
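
A minimal sketch using the venv module built into Python 3:

python3 -m venv ~/venvs/myapp          # create the isolated environment
source ~/venvs/myapp/bin/activate      # its python/pip now shadow the system ones
pip install requests                   # installs into ~/venvs/myapp, not system-wide
deactivate                             # back to the normal environment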

Virtualenv



Zero Install


Zero Install
Wikipedia's "Zero Install"



Kaboxer


Kaboxer - Kali Applications Boxer



Looking Glass


Run Windows 10 in a VM on top of Linux.
Requires two GPUs, one for the host and one for the VM ?
Looking Glass



Systemd's "Portable Services":
Pid Eins's "Walkthrough for Portable Services"
"Portable Services are primarily intended to cover use-cases where code should more feel like 'extensions' to the host system rather than live in disconnected, separate worlds."



Containerd and runc: low-level run-times for containers. Cloud Native Computing Foundation - containerd





Container System Comparisons



Nitesh Kumar's "Comparison: Snap vs Flatpak vs AppImage"
OSTechNix's "Linux Package Managers Compared - AppImage vs Snap vs Flatpak"
AppImage / AppImageKit - "Similar projects"
Verummeum's "Snap, Flatpak and AppImage, package formats compared"
Merlijn Sebrechts' "A fundamental difference between Snap and Flatpak"
TheEvilSkeleton's "Some people think that the problems plaguing Snap also apply to Flatpak, but this is untrue"

From someone on reddit:
Snap is hard-wired to Ubuntu and does not contain basic libs that exist in Ubuntu.
Flatpak is designed to be cross-distro, and packages everything.
AppImage contains as many libs as its developer decided to put in it.

From /u/galgalesh on reddit 8/2020:
+/-
One of the issues with Docker is that confinement is all-or-nothing. You cannot give Docker containers curated access to host capabilities such as audio, serial, hypervisors etc.

Flatpak has the same issue as Docker in that it's very hard to give applications mediated access to specific host features. The xdg desktop portals are an attempt to solve this, but they require applications to be rewritten to use the new APIs. As a result, most Flatpaks run unconfined.

...

Snap heavily uses the AppArmor Linux Security Module for confinement. This is on top of cgroups and namespaces. This allows them to give apps fine-grained permissions to access different capabilities of the host system. This makes some cool things possible:
  • Applications like Docker, KVM and LXD can run in a secure container. As comparison: You can run KVM in Docker, but you need to turn off the container security in order to do that.

  • You can give an application access to a USB camera without giving it access to USB sticks.

  • You can give an application access to play audio but not record audio.

Flatpak uses Bubblewrap for confinement, which is a much more traditional container where you have to turn off the confinement completely in order to use advanced features of the OS.

Both Snap and Flatpak use XDG Desktop Portals, which is a new API for applications to securely access things like the filesystem and the camera. This, for example, allows Flatpaks to access the camera without turning off the confinement. The downside is that applications need to be rewritten in order to use the secure API. As a result, most Flatpaks have much of the security disabled.

Because Snap uses AppArmor, it can mediate the existing Linux APIs for accessing the camera and other things, so applications can run in a secure container without any code modifications. The downside of using AppArmor is that some distributions use a different Linux Security Module and you can only run one at a time. On Fedora, you have to choose: if SELinux is enabled, snaps will not be confined. If SELinux is disabled, snaps will be confined. Canonical is working with many other devs in order to put "Linux Security Module Stacking" into the kernel, which will make it possible to turn on Snap confinement together with SELinux. This won't be finished for a long time, though.

...

> I'm really torn about centralization, or the gatekeeper concept

I personally think centralization and gatekeeping are important. Flatpak tried the decentralized approach initially, but they are now pushing Flathub much more because a decentralized approach has a lot of issues. Ubuntu tried the decentralized approach too, btw, with PPAs. Snap was explicitly centralized because of the lessons learned from PPAs.

With snap, there is no gatekeeping for the applications. There is gatekeeping for the permissions, however. Snaps describe which permissions they want to use, but they do not describe which permissions they are allowed to use. The default permissions are part of a Snap declaration. This is additional metadata also hosted in the Snap Store. Users can override the default permissions themselves.

When you publish a snap in the snap Store, it only has the permissions which are deemed "safe". For example, snaps do not have access to the camera by default because that is very privacy-sensitive. If your application needs a webcam, then you can either try to convince the user to manually enable the webcam or you can go to the Snapcraft forum and ask for additional default permissions. The Snapcraft developers then act as a gatekeeper, they decide which additional permissions are allowed based on a number of documented criteria.

I think this is a really good model. The current issue, in my view, is that Canonical is the only one who creates the criteria for additional permissions. I think this should be done by an independent committee instead, so that it can remain neutral. Right now, the Snapcraft developers are completely independent of the Ubuntu developers, so Ubuntu has no more power over the Snap Store than other distros. This is not enough, however. We really need an independent committee.

For comparison (AFAIK; I'm not an expert in Flatpak), the default Flatpak permissions are set by the Flatpak itself. So Flathub without gatekeepers would not be possible: it would allow anyone to have complete control over your system by publishing a Flatpak on Flathub.

...

> Snap and Flatpak are less secure than distribution-supported software

Indeed, the point of Snaps and Flatpaks is that the packages are created by the upstream developers instead of the distro maintainers. Traditionally, the distro maintainers would make sure that apps are "safe", and you lose most of this advantage by using Flatpaks and snaps. The advantage is that a lot more software is available.

But I think the comparison of "snaps" vs "repositories" is a bit misleading. Most users already install third-party software from PPA's, binaries from web sites, installation scripts etc. If you compare snap and Flatpak to PPA's, they are actually a lot more secure. Even if you completely trust the person who created the binary or the PPA, there is still the issue of stability. The worst a broken snap package can do is "not start". The worst a broken PPA can do is "make your system unbootable".

From Luis Lavaire's "The future of app distribution in Linux: a comparison between AppImage, Snappy and Flatpak":
AppImage is not a package manager or an application store ... AppImage is just a packaging format that lets anybody using a Linux distribution run it ...



My evaluation

+/-
I wanted two things: bug-reporting straight to app developer (no distro middleman) and per-app permissions (in a way easier than AppArmor or Firejail).

As of 11/2020, I haven't gotten them.

Many container images are built by a helpful third party, not the original app developer. This introduces a party of unknown trustworthiness into the chain, and just replaces one middleman with a different middleman. Sometimes there are several different images for the same version of app A, and it takes detective work to figure out which one you should try.

On permissions:
  • Docker doesn't help me because I'm mostly not running server apps.
  • Appimage doesn't do permissions.
  • Flatpak's permission model is strange, has gaping holes ("portals").
  • Snap has a very limited permission set for files/dirs: an app can get all of home, all of system files, removable media, or nothing. And apparently it's going to implement "portals" too.
I feel I'm being forced onto AppArmor and Firejail.






Container Managers (orchestration)



Merlijn Sebrechts' "What's up with CRI-O, Kata Containers and Podman?"



Jack Wallen's "Monitor Your Containers with Sysdig"





Miscellaneous



Most of these containers have a good/bad feature: they allow your system to have N versions of library L at the same time. That's bad if many of those versions have security vulnerabilities. Better hope that the container's sandbox works properly.



Abnormal brain
Grype - A Vulnerability Scanner For Container Images And Filesystems
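
Basic Grype usage, to make the point concrete (image name and path are examples):

grype alpine:3.14              # scan a container image for known-vulnerable packages
grype dir:/path/to/unpacked    # scan a directory tree, e.g. an extracted image or bundle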



Qubes OS

+/-
An operating system with a lot of VMs, running on top of a Xen hypervisor. "Secure" in that the integrity of the OS is protected, apps are protected from each other, you can open dangerous documents and damage will be limited to inside a VM.

There are different domains; the administrative environment runs in domain 0 (dom0), directly on top of Xen.

There are different types of VMs: disposable, vault (network-less).
There are Template VMs (e.g. Fedora, Debian, etc), App VMs, and Standalone VMs.
The Official templates are Fedora (I think no KDE) and Debian; the Community templates are Whonix, Ubuntu, Arch, CentOS, Gentoo.

Operations (e.g. copying files, cut-and-paste) can be done between VMs, but user needs to give explicit consent each time. Qubes has a Thunderbird add-on that opens attachments in disposable VMs.

You can run Windows 7 or 10 in a VM, with some limitations. You can use Katoolin to install Kali tools on a Debian Template VM.

There is a sys-usb (domain?) for handling USB devices (including microphone and camera), and you explicitly connect devices to VMs. There is a similar sys-net for network devices.

No: 3D acceleration for graphics / gaming, Bluetooth, MacOS, Android. A fair amount of the security configuration is CLI-only.

Need minimum of 16 GB RAM to run Qubes decently; 32 GB better ?

Qubes OS
Micah Lee talk 9/2018 (video)
Dorian Dot Slash demo (video)
Jon Watson's "Qubes, Whonix, or Tails: which Linux distro should you use to stay anonymous?"
Jesse Smith's "Types of security provided by different projects"
Thomas Leonard's "Qubes-lite With KVM and Wayland"
Hardware Compatibility List (HCL)

A response to Thomas Leonard's "Qubes-lite With KVM and Wayland":
+/-
What a great deep-dive into replicating some of the features in Qubes OS. I used Qubes OS for a year, and loved it. It did feel sluggish at times, and video conferencing had too much latency, passing the webcam and mic through a USB qube.

My new system doesn't boot Qubes OS, and I'm not technical enough to build or write my own. However, the ideas in Qubes seeped into my daily workflow. I have a reasonably powerful host system with lots of RAM running Windows 10 Enterprise and VMware Workstation. I keep the host OS as minimal as possible and do all my work in several Linux and Windows VMs. The separation of projects is nice, and VMware's rolling snapshot feature is a good safety net. I even have a disposable VM for surfing the web for research. Video conferencing in a VMware VM is not terrible. It's probably 80% of the benefit of Qubes OS with 20% of the hassle.




Fedora Silverblue

+/-
"Immutable" kernel. OS files are updated in one packaging system (rpm-ostree), in whole-system updates. Apps are installed as Flatpaks, although native apps can be installed into the OS tree (disouraged). Then you have "toolboxes" (containers), which appear as various versions of Fedora.

Main selling-point seems to be increased stability because of the atomic system updates, separate package-systems, and containers.

From Fabio Alessandro Locati's "My immutable Fedora":
"... with an immutable OS, when the OS is running, the OS filesystem is in read-only mode. Therefore no application can change the OS or the installed applications."

DorianDotSlash's "Fedora Silverblue could be the future!"



Self-hosting

+/-
Reasons to do it:
+/-
  • Learn how to install and run servers and services.
  • Share files/services among home users and family/friends.
  • Resilience: (if on LAN) files/services still available if internet goes down.
  • Resilience: (if in cloud) files/services still available if your house has a disaster.
  • Control: avoid email/social accounts closed if service doesn't like something you said/did.
  • Privacy: files/services kept on server you control.

Locations:
+/-
  • On-premise: on a server in your house, on your LAN.
    Requires: buying server, having space, electricity, cooling, maybe a UPS, maintenance, backups, maybe opening ports into your LAN.
    Gives: best performance for users on LAN.

  • Cloud: on a virtual server in a data-center.
    Requires: monthly fee.
    Gives: best performance for users on internet; easy scalability; less maintenance.


Typical software:
+/-
  • Services to users:
    • Web server (nginx, Apache, more).
    • Blog server (Wordpress, more).
    • Email server.
    • Password manager server.
    • Chat/messaging server (Matrix, more).
    • Game server (Minecraft, more).
    • Photo/video gallery.
      (Linux Unplugged podcast episode 409)
    • Nextcloud: file-hosting, media-streaming, photo gallery, calendar, contacts, RSS, bookmarks, more.
    • FreeNAS: file-hosting, media-streaming.
    • Synology: file-hosting, media-streaming, video-recording. Large app-store, nice UI, lots of things are push-button and very easy-to-use. Maybe not for someone who wants to learn all the details of how to set up VMs and Docker etc.
    • Plex: media-streaming.
    • Jellyfin: media-streaming.
    • Terramaster: NAS.
      Kevin Norman's "Declouding my life - Replacing Google Photos"
    • Outbound VPN client (in router ?).
    • DNS ad-blocking (Pi-Hole).
  • Infrastructure:
    • Proxmox: VM/container management platform.
    • Unraid: NAS, app server, VM management platform.
    • Inbound VPN server: remote client machine gets full access to LAN.
    • Reverse proxy: routes inbound requests to appropriate servers. Nginx, HAProxy.
    • DNS.
    • Backup.
    • Monitoring (Prometheus ?).
    • Intrusion Detection (IDS article).
    • Router/firewall (pfSense, more).
Awesome-Selfhosted

From someone on reddit 2/2021:
+/-
Self-hosting lessons learned from over the years...
  • Keep it simple

    Every time I create a very complex setup it always comes back to bite me, one way or another. Complex networks, or complex configs, custom scripts and other hacks just add to complexity and make it difficult to maintain whatever it is you're putting together. Complex stuff also demands very good documentation so you can remember what the hell you did three months later when something stops working. If it's something simple, a few notes and a link to some official doc might get you going quick in the future.

  • Enterprise hardware is not a must

    I've bought used enterprise servers before, but the outdated CPUs and the power consumption costs made me realize I can do more with a lot less after I was annoyed and started researching alternatives. Back in 2020 one of my goals was to replace my enterprise crap with small/low-power servers, so I settled with Dell 5060 thin clients and a couple of APU4s from PCEngines. There are plenty of other options out there, NUCs are very awesome too. My only 2 enterprise servers are my pfSense firewall at home and my colocation server at a local DC because it was required in order to host it there.

  • Take notes, document and add comments to config files

    You don't have to be a professional tech writer, but simple notes related to each server, quick steps for replicating the config and some comments in your config files will definitely help you remember how stuff is running. When I change a config file somewhere, I usually add a note with a date and reason why, or quick explanation. When I go back to it 8 months later I don't have to try to remember why I did it.

  • Not all tutorials and how-tos are of the same quality

A quick web search will give you tons of how-tos and tutorials on how to set something up. I've had the bad luck of following some of these in the past that had terrible decision-making, didn't follow best practices, and were just all-around crappy tutorials, even if well written. Now I follow official documentation whenever possible, and might take a look at other tutorials for reference only. Not only that, tutorials can become outdated, whereas official docs are typically kept up by the devs.

  • Everything behind firewall/VPN if at all possible

    Opening up your services to the outside is risky for multiple reasons, and requires your stuff to be updated constantly, plus you should know about zero days, common exploits and mitigations, bla bla bla, etc. This is a huge time sink and if you have to be doing this kind of stuff, you should be getting paid for it :)

  • Reverse proxy is awesome

    A well-configured reverse proxy is an easy way to host multiple services behind a single server, public or not, and to me seems easier to manage than to have to keep track of all my stuff separately. It's also a cheap way to park domains, redirect domains and have auto-renewals for your SSL certificates (and to force HTTPS). My suggestions are Caddy v2 or Nginx Proxy Manager (nice little GUI). Good ol' NGINX by hand also works great.

  • Adding new services out of necessity vs for fun

    At certain points in time I've had tons of different services running, especially since there are so many cool projects out there. I am tempted to spin up a new VM/container for some new shiny app, but find myself not using it after a few weeks. This snowballs into a massive list of different systems to maintain and it will consume a lot of time. Now I only host stuff that solves a real big problem/need that I have, that way I only have to worry about maintaining a few things that are really useful to me and are worth the work.

  • Backups

    Have a good backup system, preferably located elsewhere than your main home lab. You don't really need to implement a full disaster-recovery system, but having copies of important config files, databases and your notes/docs is very useful. I run a lot of stuff in Linux containers, so snapshots and lxc backups are also very useful and can save you time if some change or update breaks something. And if you have those configs/files saved away also, it makes it even easier.

Feedback from others: have monitoring software, decommission stuff you aren't using, use VLANs, and keep a separation between "production", which other family members depend on, and "lab", which you can mess around with.

Deny access from all external IP addresses, then whitelist IP addresses you want to allow access from.
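
For example, with ufw (203.0.113.5 stands in for an address you trust):

sudo ufw default deny incoming
sudo ufw allow from 203.0.113.5 to any port 443 proto tcp
sudo ufw enable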

From 2.5 Admins podcast: if family/friends are going to log into services from outside the LAN, don't expose the various service login pages to the open internet. Instead, set up a Wireguard connection from each friend's machine to your LAN. That means anyone who needs to get in will have an automatic connection with a good installed credential/key, before they get to see any login pages.
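
Once a friend's machine has a WireGuard config with your server as a [Peer] (assumed here to be /etc/wireguard/home.conf), it's roughly:

sudo wg-quick up home      # bring the tunnel up
sudo wg show               # confirm a handshake before trying any service logins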

If all you're going to do is file-sharing from server to clients, you don't need a full NAS such as FreeNAS. You can just use Samba or other standard facilities of your server's OS. Using a NAS would add things such as web UI for administration, management of ZFS pools, maybe VPN, plug-ins for other stuff.
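
A minimal Samba share as a sketch (Debian/Ubuntu package names; the share name and path are examples):

sudo apt install samba
sudo tee -a /etc/samba/smb.conf <<'EOF'
[shared]
   path = /srv/shared
   read only = no
EOF
sudo systemctl restart smbd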

TheOrangeOne's "Exposing your Homelab"
CyberHost's "Where to start with Self-Hosting - A Beginners Guide"
TheOrangeOne's "LAN-only applications with TLS"
TheOrangeOne's "Securing public servers"
pwn.recipes' "Don't mindlessly follow security-gospel"
Josheli's "A New, Old Hobby: Self-hosted Services"
Leon Jacobs' "building a hipster-aware pi home server"
Hayden James' "Home Lab Beginners guide - Hardware"
Ctrl Blog's "What domain name to use for your home network"
TheOrangeOne's "Backing up and restoring containers"
reddit's /r/selfhosted

Inbound:
+/-
Reverse proxy: have one server (usually a web server) on the LAN handle lots of incoming requests from the internet on one port (usually 443) and route the requests to various other servers on the LAN, thus hiding internal details from external clients. Can do sophisticated things such as load-balancing, authentication, etc.

Port forwarding: rules in the router or firewall so incoming traffic to various ports gets redirected to particular IP addresses and ports on the LAN, thus hiding internal details from external clients.
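
Minimal sketches of both approaches, assuming an internal web server at 192.168.1.10 (illustrative, not hardened):

# reverse proxy with Caddy v2: terminate TLS here, forward to the internal server
caddy reverse-proxy --from example.com --to 192.168.1.10:8080

# port forwarding with iptables NAT on the router
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443
sudo iptables -A FORWARD -p tcp -d 192.168.1.10 --dport 443 -j ACCEPT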

Cloud hosting:
+/-
From people on reddit 2021:
Azure is for big business, and too complicated.
AWS and GCP also for business.
Self-hosters are better off with Digital Ocean, Vultr, or Linode.