Disk management



Layers:

  1. GUI apps to manage multiple layers: GNOME Disks, GParted, VeraCrypt, Stratis (XFS-only), etc.

  2. Linux standard organization of files and directories in a system: /, /bin, /etc, /etc/passwd, /usr, and so on.
    Chris Hoffman's "The Linux Directory Structure, Explained"


  3. Linux OS API to a filesystem: inodes which represent files (some of which are directories), operations such as read/write/createfile/deletefile/makedir/rmdir.
    Wikipedia's "File system"

    Format on disk (of ext* filesystem):
    - Boot block ?
    - Superblock (maybe run "sudo dumpe2fs -h /dev/sda1 | less" to see info).
    - Inode (index node) table. (Each inode points to data blocks)
    - Dentries (translate between names and inode numbers, and maintain relationships between directories and files).
    - Data storage blocks (actual contents of files and directories).
    Diagram from Simone Demblon and Sebastian Spitzner's "Linux Internals - Filesystems"
    M. Tim Jones' "Anatomy of the Linux file system"
    Kernel.org's "ext4 Data Structures and Algorithms - High Level Design - Special inodes"
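
    A quick way to poke at some of these structures (real commands; the file and device names are just examples):

    ls -i ~/.bashrc        # show the inode number of a file
    stat ~/.bashrc         # show inode metadata: size, blocks, link count, times
    df -i /                # show inode usage/capacity of the mounted filesystem
    sudo debugfs -R 'stat <2>' /dev/sda1   # dump the root directory's inode (inode 2) on an ext* filesystem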


  4. Plaintext or individually encrypted files: e.g. normal files and directories; app-encrypted files such as password manager databases or encrypted SQL databases.

  5. Filesystem mounted instances: mount-point name, and type, and device name.
    e.g. "/" is the mount location of an ext4 filesystem stored on /dev/sda5.
    Run "df -khT" to see the mounted filesystems.




  6. [I'm told layers 6 through 10 really can be mixed in any order; you can stack any block device on top of any other. Consider the following to be a typical order.]

  7. Upper (stacked) filesystem formats: eCryptfs; EncFS; gocryptfs; Windows' Encrypting File System (EFS); AVFS.

    Wikipedia's "ECryptfs"
    SK's "How To Encrypt Directories With eCryptfs In Linux"
    I'm told the eCryptfs code is considered old and unmaintained, so Ubuntu has dropped that option.
    Wikipedia's "Encrypting File System" (EFS)


  8. Base filesystem formats: format of data stored inside a partition. E.g. ext4, fat32, NTFS, Btrfs, ZFS.
    Jim Salter's "Understanding Linux filesystems: ext4 and beyond"
    ArchWiki's "File systems"

    ext4 can have a file/directory encryption module added to it: fscrypt (article1, article2).

    Scan filesystem and map bad blocks to "don't use" inode (ext* filesystem only): "e2fsck -c".

    Xiao Guoan's "How to Fix 'can't read superblock' Error on Linux (ext4 & Btrfs)"


  9. Manager: e.g. Linux's LVM (Logical Volume Manager), or software RAID, or ZFS, or Btrfs.

    Jesse Smith's "Combining the storage space of multiple disks"

    "Device mapper" is a framework that things such as LVM and dm-verity and dm-crypt talk to.
    "sudo dmsetup info"
    Wikipedia's "Device mapper"

    LVM is oriented toward providing flexibility, while RAID is oriented toward providing reliability.

    I think LVM can be used in two opposite ways:
    • [On a large system:] Present a single "virtual partition" to the layer above, but the data is stored across multiple physical partitions and devices.

    • [On a small/old single-disk MBR system limited to 4 physical partitions:] Present multiple "virtual partitions" to the layer above, which can use them for swap, /, and /home, but the data is stored in a single physical "extended partition" on disk.


    Ubuntu Wiki's "LVM"
    Wikipedia's "Logical volume management"
    Wikipedia's "Logical Volume Manager (Linux)"
    Sarath Pillai's "The Advanced Guide to LVM"
    terminalblues' "LVM Lab Setup With VirtualBox"
    With LVM, see partition type "lvm" in lsblk.
    LVM concepts, from lowest: PV (Physical Volume), then VG (Volume Group), then LV (Logical Volume).
    Corresponding LVM commands: "sudo pvs --all", "sudo vgs --all", "sudo lvs --all".
    "sudo lvm fullreport"
    Example corresponding LVM names: /dev/sda6, vgubuntu-mate, root.
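
    A minimal sketch of building those layers from scratch (hypothetical device /dev/sdb1 and names "myvg"/"mylv"; this destroys any data on the partition):

    sudo pvcreate /dev/sdb1              # mark the partition as a Physical Volume
    sudo vgcreate myvg /dev/sdb1         # create a Volume Group containing it
    sudo lvcreate -n mylv -L 10G myvg    # carve out a 10 GB Logical Volume
    sudo mkfs.ext4 /dev/myvg/mylv        # put a filesystem on the LV
    sudo mount /dev/myvg/mylv /mnt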

    Heard on a podcast: LVM can do snapshots, but they kill performance.

    Software RAID: "mdadm" command. (article)
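
    A minimal mdadm sketch (hypothetical devices /dev/sdb1 and /dev/sdc1; destroys any data on them):

    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    cat /proc/mdstat                 # watch the mirror build
    sudo mkfs.ext4 /dev/md0          # then use /dev/md0 like any other block device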


  10. Device-mapper and full-volume/block-level encryption: e.g. dm-crypt (a LUKS-compliant implementation); VeraCrypt "full-disk" (really, full-partition) encryption; BitLocker.

    Wikipedia's "dm-crypt"
    Wikipedia's "Linux Unified Key Setup" (LUKS)
    ArchWiki's "dm-crypt / Encrypting an entire system"
    Wikipedia's "VeraCrypt"
    Wikipedia's "BitLocker"
    Dislocker (access BitLocker drive on Linux)
    With dm-crypt / LUKS, see partition type "crypt" in lsblk.
    With VeraCrypt, see partition type "dm" in lsblk.

    Device mapping also can be used to implement other things, such as:
    block integrity-checking: dm-integrity


  11. Physical partitions: e.g. /dev/sda5, /dev/sdb1. And a partition table (Master Boot Record (MBR) or GUID Partition Table (GPT)) to list the partitions.

    From someone on StackExchange:
    "Volume implies formatting and partition does not. A partition is just any continuous set of storage sectors listed in some table (e.g. MBR or GPT). A volume is a set of sectors belonging to the same filesystem, i.e. an implemented filesystem.".

    A key thing I would put at this level: for container files, command "losetup" makes a regular file appear as a block device (misleadingly named as a "loop" device). Try "losetup --list".
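
    A sketch of that (file name and size arbitrary):

    dd if=/dev/zero of=container.img bs=1M count=100   # make a 100 MB file
    sudo losetup --find --show container.img            # attach it; prints e.g. /dev/loop0
    losetup --list                                       # confirm
    sudo losetup --detach /dev/loop0                     # detach (use whatever name was printed above)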

    To see device block size (but it may lie to you): "sudo blockdev --getbsz /dev/sdaN"
    "cat /sys/block/sda/queue/logical_block_size"
    "cat /sys/block/sda/queue/physical_block_size"
    "udevadm info -a -n /dev/nvme0n1p2"

    To see performance stats every 10 seconds:
    "iostat -xz 10" (from sysstat package)





  12. Disk hardware striping/mirroring (if any). E.g. hardware RAID.
    Wikipedia's "RAID"
    Wikipedia's "RAID-Z" (with ZFS)

  13. Intermediate/bridge physical interfaces to media: e.g. USB, SCSI, RAID controller.
    Try doing "cat /proc/scsi/scsi" and "lsusb" and "udevadm info -a -n /dev/nvme0n1p2".

  14. Physical interfaces to media: e.g. IDE, SCSI, SATA.
    Wikipedia's "Hard disk drive interface"

  15. Disk controller and firmware.

    Some drives support eDrive or Opal encryption (AKA "self-encrypting drive"; SED).
    Generally the system BIOS has to support using it.
    ArchWiki's "Self-encrypting drives"

    Most/all SSDs do block-remapping at the firmware level, to do wear-leveling.

  16. Raw media: e.g. spinning hard disk, SSD, flash drive, SD card, CD-ROM, DVD-ROM, floppy disk.
    Appears as e.g. /dev/sda, /dev/sdb.
    Use GParted View/DeviceInformation to see info.
    Use "badblocks" (not on SSD) to test for bad blocks on a raw empty partition.




Example 1, my Linux Mint 19.2 system:

$ df -hT
Filesystem             Type      Size  Used Avail Use% Mounted on
/dev/sda5              ext4       33G   25G  7.0G  78% /
/dev/sda6              ext4      259G  182G   65G  74% /home
/dev/sda1              ext4      945M  175M  705M  20% /boot
/home/user1/.Private   ecryptfs  259G  182G   65G  74% /home/user1
  1. Standard Linux organization, except my personal files under /home.
  2. Standard Linux filesystem API.
  3. My password manager database file "KeePassDatabase.kdbx" is app-encrypted.
  4. It is under "/home/user1", which is the mount location of an eCryptfs filesystem stored on "/home/user1/.Private".
  5. "/home/user1/.Private" is using upper filesystem format eCryptfs.
  6. The base filesystem format of "/home" is ext4.
  7. Manager: none; not using LVM or RAID.
  8. Device-mapper and full-volume/block-level encryption: none ?
  9. Physical partitions: /home is on /dev/sda6.
    Partition table on /dev/sda is a Master Boot Record (MBR) table.
  10. Disk hardware striping/mirroring: none.
  11. Intermediate/bridge physical interfaces to media: SCSI.
  12. Physical interface to media: SATAII / Serial-ATA/300.
  13. Disk controller: whatever is on the circuit board attached to the disk (probably mainly some FPGA chip); no hardware encryption.
  14. Raw media: Western Digital model "ATA WDC WD3200BEVT-7" spinning hard disk, 298 GiB (320 GB), 5400 RPM 2.5" diameter, appears as /dev/sda.

Example 2, a VeraCrypt container mounted in my Linux Mint 19.2 system:

$ df -ahT
Filesystem             Type             Size  Used Avail Use% Mounted on
/dev/sda6              ext4      259G  182G   65G  74% /home
/home/user1/.Private   ecryptfs  259G  182G   65G  74% /home/user1
/dev/mapper/veracrypt1 ext4             2.0G  1.1G  750M  60% /media/veracrypt1
  1. Standard Linux organization, except my personal files under /home.
  2. Standard Linux filesystem API.
  3. Plaintext file "MyBankInfo.txt" is in a 2.0GB VeraCrypt container on /home/user1.
  4. It is under "/media/veracrypt1", which is the mount location of an ext4 filesystem stored on "/dev/mapper/veracrypt1".
  5. "/dev/mapper/veracrypt1" is using upper filesystem format ???.
    Both VeraCrypt and eCryptfs are in here somewhere.
  6. The base filesystem format of "/dev/mapper/veracrypt1" is ext4 ?
  7. Manager: none; not using LVM or RAID.
  8. Device-mapper and full-volume/block-level encryption: none ?
  9. Physical partitions: /home is on /dev/sda6.
    Partition table on /dev/sda is a Master Boot Record (MBR) table.
  10. Disk hardware striping/mirroring: none.
  11. Intermediate/bridge physical interfaces to media: SCSI.
  12. Physical interface to media: SATAII / Serial-ATA/300.
  13. Disk controller: whatever is on the circuit board attached to the disk (probably mainly some FPGA chip); no hardware encryption.
  14. Raw media: Western Digital model "ATA WDC WD3200BEVT-7" spinning hard disk, 298 GiB (320 GB), 5400 RPM 2.5" diameter, appears as /dev/sda.



From someone on reddit:
My view on it is that there are no layers. There are just different combinations, abstractions, attachments, slices and mirrors of block devices. Upon which you can either build other block devices, or store raw data which could include filesystems.

...

The root of it is that the Linux block device is the base unit and since the other entities present block devices as their product, it gets confusing since the system is making block devices from other block devices and parts of block devices.

...

The first two items in #5 [VeraCrypt containers; eCryptfs] are special types of filesystems, but the 3rd thing [Windows' Encrypting File System (EFS)] is referring to something that becomes a block device. Once it is a block device, then it can be used wherever a block device is used.

#6 is talking about filesystems and "partitions". But it's only a partition if it is referred to in a partition table (GPT, MBR, Sun, SGI, BSD). And even then, the OS only sees that data through the lens of a block device. See "man fdisk".

Trying to represent this as layers breaks pretty fast. For example with LVM, the LV is in a VG. And a VG encompasses one or more PVs. An LV can be spread across multiple PVs.

As I say, in the end actual data is on storage that shows up in Linux as a block device. http://www.haifux.org/lectures/86-sil/kernel-modules-drivers/node10.html

> [me trying to defend layers:]
> For example, can a VeraCrypt container be below (contain) a LVM
> volume ? I don't think so, but maybe I'm wrong.

In Linux, VeraCrypt can encrypt a file. That file can contain general data, a filesystem, or a partition table that divides up the file into partitions.

Also as a file, you can attach it to a loop device and then you can use that as an LVM PV (physical Volume) -- the first bullet here: https://www.veracrypt.fr/en/Home.html



"sudo blkid" is the best way to see what type a filesystem is if you're using non-standard filesystems. In output of blkid, exFAT displays as 'SEC_TYPE="msdos" TYPE="vfat"'; NTFS displays as 'TYPE="ntfs" PTTYPE="dos"'. Most other commands show them as "fuseblk" or no-type.

"mount | column --table" may show an amazing number of mounted filesystems, including snaps, tmpfs's, cgroups, mappers.
"findmnt -A"

Vivek Gite's "Linux Hard Disk Encryption With LUKS [ cryptsetup encrypt command ]"
Beencrypted's "How To Encrypt Disk In Linux Securely"
ArchWiki's "Disk encryption"
"man cryptsetup"

There is another mechanism that lets a non-sudo user mount LUKS volumes:
"man cryptmount", "man cryptsetup-mount", "man cmtab", "cat /etc/cryptmount/cmtab"

ZFS is a somewhat-new-on-Linux [in 2020] system that integrates several layers (logical volume manager, RAID system, and filesystem) into a unit, and includes features such as checksums and snapshots and copy-on-write. Mostly oriented toward server/enterprise. ZFS works best when it is allowed to manage entire disks, although it can also be given an individual partition.

Magesh Maruthamuthu's "13 Methods to [Identify] the File System Type on Linux"



If you boot from USB, how to mount LVM/LUKS hard disk:

random neuron misfires' "HOWTO mount an external, encrypted LUKS volume under Linux"
Vivek Gite's "Linux mount an LVM volume / partition"


apt list | grep lvm2/ | grep installed
# if not:
sudo apt install lvm2

lsmod | grep dm_crypt
# if not:
sudo modprobe dm-crypt

# encrypted LUKS volume contains an encrypted LVM

# do LUKS
lsblk
sudo cryptsetup luksOpen /dev/sda6 VGNAME  # "VGNAME" is an arbitrary device-mapper name
# give passphrase
stat /dev/mapper/VGNAME
sudo mkdir -p /mnt/VGNAME

sudo mount /dev/mapper/VGNAME /mnt/VGNAME/
# should get "mount: unknown filesystem type 'LVM2_member'"

# do LVM
sudo vgscan               # find LVM volume groups
# should see a VG named something like "vgubuntu-mate"
sudo vgchange -a y vgubuntu-mate   # activate the volume group found by vgscan
sudo lvdisplay
sudo lvs
# see LV device paths, something like "/dev/dm-2"
ls -l /dev/vgubuntu-mate/   # suppose it has devices home, root, swap
sudo mkdir -vp /mnt/VGNAME/{root,home} # create mount points
sudo mount /dev/dm-2 /mnt/VGNAME/root

df -T | grep VGNAME
ls -l /mnt/VGNAME/root

sudo umount /dev/dm-2
sudo vgchange -a n vgubuntu-mate   # de-activate the volume group



Vivek Gite's "How To List Disk Partitions"





Filesystem Types



Wikipedia's "File system"
Wikipedia's "Comparison of file systems"

As far as I know, the only common Linux local filesystems that do check-summing to fight "bit rot" (failed sectors on disk) are ZFS and Btrfs. Check-summing does not protect you from data loss, but it prevents such data loss from going undetected. If you want to repair the errors without losing data, you need to be using parity or some forms of RAID.
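
For example, on Btrfs the check can be run on demand by scrubbing (real commands; the mount point is just an example):

sudo btrfs scrub start /mnt/data
sudo btrfs scrub status /mnt/data    # reports any checksum errors found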

Some things that are not filesystems: partition table, boot, swap, LVM group.




# List filesystem types available to load:
ls /lib/modules/$(uname -r)/kernel/fs

# List filesystem types currently loaded:
cat /proc/filesystems

# Can't list all available FUSE filesystem types;
# any app could make a type available at any time.

# List FUSE filesystem types currently loaded:
mount | grep 'fuse\.' | cut -d ' ' -f 5 | uniq

# Descriptions of filesystem types:
man fs





Michael Larabel's "XFS / EXT4 / Btrfs / F2FS / NILFS2 Performance On Linux 5.8"



From someone on reddit 8/2020:
It's worth mentioning that while btrfs and zfs both have features that make snapshots easier to take (and a bunch of other awesome features), ext4 is actually more resilient. When ext4 breaks, it's almost always fixable without having to reformat. When zfs and btrfs break, the data is usually recoverable, but it can sometimes require a reformat to get the drive operational again.

Source: I do data recovery professionally.

...

[I asked why: better journaling, tools, what ?]

I'm not 100% sure why ... When it comes to repair, I use the same tools as everyone else. And ext4's pretty much always work (which isn't the case for any filesystem on any OS, from what I can tell). I think ext4 being developed and patched by so many more people, for so many years as the default for pretty much all of Linux, and as a result of its ubiquitousness, has resulted in a rock-solid piece of technology that's almost impossible to break.

It's worth noting that NTFS on Windows can be broken beyond repair, and we regularly see that. As can Apple's filesystems, APFS and HFS+ (HFS+ was actually surprisingly fragile).

...

I'm talking about worst-case scenarios anyway. As someone who does data recovery professionally, nobody calls me when things are going well, lol. Btrfs is a fantastic, and almost always stable, filesystem. ZFS even more so (just because it's a more mature code-base).

I also install, configure, and maintain Linux systems professionally (both for standard desktop users and servers). And the majority of filesystem errors on all common Linux filesystems are repairable, even when caused by power outages or hardware failure (which are the worst-case scenarios for a filesystem). My comments were mostly meant to highlight the bulletproof nature of ext4, not to call out the next generation filesystems.

...

[Guessing from a comment by someone else: Advanced filesystems such as Btrfs and ZFS may be totally fine and repairable if you use them as simple filesystems. Perhaps it's when you start using them "full stack" and RAID and such and then have a hardware failure that you can get into rare irreparable situations.]



Hard link: two or more directory entries (filenames) contain same inode number. Made with "ln". All entries have equal status, and one can be deleted without affecting others. All entries must be on same filesystem as the file. Can't hard-link to a directory.

Symbolic / soft link: a special file whose contents are the name of another file. Made with "ln -s". If the real file is deleted, the link becomes dangling. Can link to a file on another filesystem. Can symbolic-link to a directory, or to another symbolic link.
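
A quick demonstration (file names arbitrary):

touch original.txt
ln original.txt hard.txt            # hard link: a second name for the same inode
ln -s original.txt soft.txt         # symbolic link: a new file whose contents are the name
ls -li original.txt hard.txt soft.txt   # first column shows the inode numbers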





Encryption of data at rest



Things you could encrypt:

  • A single file (maybe use 7zip, GnuPG, ccrypt).

  • An archive of files (maybe use 7zip, rar).

  • A directory tree (maybe use eCryptfs, gocryptfs, zuluCrypt, Plasma's Vault).

  • A container file that you then mount as a data disk (LUKS, VeraCrypt, zuluCrypt).

  • A disk partition (maybe use LUKS, VeraCrypt, zuluCrypt).

  • An entire data disk (maybe use LUKS, VeraCrypt, zuluCrypt).

    Danger: if you attach an encrypted disk to Windows, and Windows doesn't recognize the encryption format, it will assume it's a new unformatted disk and ask if you want to format it.

  • Most of a boot/system disk (LVM/LUKS on Linux, or VeraCrypt on Windows).

  • An entire boot/system disk (hardware encryption).




How is the plaintext data presented to you ?

  • A single file. (GnuPG, ccrypt)

  • A directory tree from a mountable filesystem, perhaps encompassing everything you see from "/" on down, or everything from your home directory on down, or other places. (VeraCrypt, zuluCrypt, LUKS, eCryptfs, gocryptfs, AVFS)

  • Files inside an archive manager application, and you can extract them to the normal directories. (7zip, Engrampa, Archive Manager)




My strategy:

For critical security software, I want: open-source, standard part of stock OS, widely used, lots of devs, simpler.

So for system/boot disk encryption, I use whatever is built into the OS. Usually LVM/LUKS.

For data disks and containers, I had been using VeraCrypt. But I got a little spooked about the TrueCrypt devs being anonymous, VeraCrypt's license status, and I wonder how many devs are working on VC. It's not "simple" in that it has a lot of features I don't need: cross-platform, hidden volumes, encrypted boot disk (on Windows), in-place encryption, many cipher algorithms.

I used to think cross-platform was important to me, but I changed my mind. In an emergency, I can boot from a Linux live-session on a USB stick, and copy needed files to a non-encrypted filesystem.

Then I found that LUKS can do container files as well as partitions, so I switched to LUKS for everything. It seems VeraCrypt is mostly a GUI on top of code similar to LUKS1: all the same features are there in LUKS. And in fact VeraCrypt seems to be sticking with older settings (LUKS1) to preserve compatibility with old software. But using LUKS directly, I have LUKS2. And people are telling me that LUKS2 uses more secure algorithms.

On Ubuntu, software automatically detects a LUKS-encrypted disk and asks for passphrase and opens and mounts it, no scripts needed. For container files, the behavior varies by file manager.



Alternatives:

Archive encryption: "zip -e", zipcloak, gpgtar.
Single-file encryption: "vim -x", aescrypt, bcrypt.

"7z" files: "sudo apt install p7zip-full", and then maybe Archive Manager will handle 7z files. If not, "7za x FILE.7z" or "7za x FILE.7z.001" to extract files.

Cryptomator: started as cloud-only, but now supports local folder encryption ? article





VeraCrypt



VeraCrypt
VeraCrypt on SourceForge
Tails' "Using VeraCrypt encrypted volumes"
Security-in-a-Box's "VeraCrypt for Windows - Secure File Storage"
reddit thread about VeraCrypt and Windows updates



Installed VeraCrypt by downloading a script and then running it via "sudo bash", but I don't see VeraCrypt in the GUI menu of applications. It showed up later. Made a couple of containers, and they work fine.

To update VeraCrypt:
Download the veracrypt-*-setup.tar.bz2 or veracrypt-*.deb file from Downloads.
If *.bz2, double-click it and extract the files from it.
Unmount all VC volumes and quit out of VeraCrypt.
Double-click on veracrypt-*-setup-gui-x64 or *.deb file.
See text-GUI installer, click buttons to install.

There is a PPA: https://launchpad.net/%7Eunit193/+archive/ubuntu/encryption But then you're trusting whoever owns that PPA. [Apparently VeraCrypt is not in the standard repos and stores because the TrueCrypt license is not clear/standard, the TrueCrypt devs are anonymous, it's unclear whether the VeraCrypt license is valid at all.]



Choices and policies:

If you want an encrypted Container or partition to be accessible on Windows, choose exFAT or NTFS (not ext*) for the filesystem inside it. There are freeware utilities such as Linux Reader that can read ext* filesystems on Windows, and now WSL2 can do ext4, but maybe it's better to just use a filesystem that Windows can understand natively. Relevant discussion. On Linux, you may lose symlinks when copying from ext4 to exFAT or NTFS ? exFAT is best choice: Linux kernel will be adding deeper support for it mid-2020, and Mac OSX also supports it. On Linux 5.4 kernel, "sudo apt install exfat-fuse exfat-utils". But exFAT doesn't allow some characters in filenames, which complicates things when copying from ext4 to exFAT.

For a big external disk, it's far easier/quicker to make a full-disk/full-partition VC-encrypted volume, rather than create a non-encrypted partition with a filesystem in it and then a VeraCrypt container file in that filesystem. Just leave the drive "raw" (no partitions), use VeraCrypt to make an encrypted volume (Linux: on the drive /dev/sdb, not a partition /dev/sdb1), using VC's "quick format" option. You will have to have the VeraCrypt application already installed on any computer where you want to access that drive, which seems okay. One danger: when you attach such a drive (full-volume VC-encrypted) to a Windows machine, Windows will say something like "drive looks unformatted, want me to format it for you ?". You must be VERY careful to say "no" every time you attach that drive. [Some people say you can disable this behavior via "delete the drive letter in Disk Management".]

If you want extra security, when you create an encrypted container or partition, you could choose a stacked cipher instead of the default AES, and some hash function other than the default HMAC-SHA-512. And if you create multiple containers/volumes, you could use different settings for each. You also could use keyfiles, hidden containers, PIM settings. But at some point you are more likely to fool or confuse yourself rather than some adversary. It might be best to just stick to the defaults, or use just one group of settings and apply it to all of your volumes.

Good practice: right after you create an encrypted container, before you put anything in it, turn off compression, disable COW, turn off atime updating: "chattr -c +C +A FILENAME". These settings are ignored and harmless in filesystems that don't support them.

Good practice: after you create an encrypted container or partition, click the "Volume Tools ..." button and select "Backup Volume Header ..." and save the backup to somewhere else. Might save you if the volume gets corrupted somehow. [Although actually VeraCrypt already maintains a backup copy of the header inside the volume. So you'd have to lose a bunch of blocks to lose both the primary and backup headers inside the volume, and need to use your external backup copy.]



Quirks and issues:

A couple of quirks with VeraCrypt containers, at least in Linux: Firefox doesn't properly remember "last folder saved to" if it's in a VC container, and Nemo renaming a folder in a VC container doesn't work if all you're changing is capitalization ?

A quirk with VeraCrypt containers related to cloud backup: When you modify a file in the container, the modified-time and checksum of the whole container change. So if you add a 1-KB file to a 10 GB container, the backup software will say "okay, have to write this whole 10-GB file up to the cloud". (Same is true of any other aggregate format, such as ZIP or RAR.)

I used MEGAsync for a while, but had a couple of bad experiences where somehow it appeared that the file on my laptop (what I considered the master) was older than the copy in MEGAsync (what I considered a backup), and MEGAsync synced the old file down to my laptop, and I lost data. Seemed to happen with VeraCrypt containers in use; I would forget to dismount them, and MEGAsync would see them as old. VeraCrypt on Linux has some quirks with updating the modified time on the container file. https://sourceforge.net/p/veracrypt/tickets/277/

Auto-mount functionality is only for encrypted volumes/partitions, not for containers.
Same with quick-format; not available when creating a container.
To mount a container using CLI:

veracrypt --slot=1 /home/user1/.../MYCONTAINERFILENAME /media/veracrypt1
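
And to dismount it later (I believe "--dismount" / "-d" with no argument dismounts all mounted VeraCrypt volumes, or you can give it the container path):

veracrypt --dismount /home/user1/.../MYCONTAINERFILENAME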

Apparently there is a longstanding performance issue with VeraCrypt, although it may show up only on heavy multi-threaded benchmarks or under extreme loads, mostly with SSD ? reddit thread



Checking and fixing a VeraCrypt volume:

It seems there are really just two things that can go wrong:
  • VeraCrypt volume header gets damaged somehow.

    Even with correct password, you will get a message "can't mount because wrong password or not VeraCrypt volume".

    In this case, maybe copy whole volume to a known-good disk, check original for disk errors, then restore VeraCrypt volume header from a header backup.

  • Contents of VeraCrypt volume after the VC header gets damaged somehow.

    The volume will be listed in VeraCrypt as mounted properly, but the operating system will say "unrecognized filesystem" (if the filesystem header is damaged) or something else if blocks after the FS header are damaged.

    In this case, maybe copy whole volume to a known-good disk, check original for disk errors, then run filesystem-repair utilities.


VeraCrypt's "Troubleshooting"

From VeraCrypt's "Frequently Asked Questions":
File system within a VeraCrypt volume may become corrupted in the same way as any normal unencrypted file system. When that happens, you can use filesystem repair tools supplied with your operating system to fix it. In Windows, it is the 'chkdsk' tool. VeraCrypt provides an easy way to use this tool on a VeraCrypt volume: Right-click the mounted volume in the main VeraCrypt window (in the drive list) and from the context menu select 'Repair Filesystem'.

I think the procedure (for a VC container) is:
There are TWO filesystems: the one inside the disk partition (call it FS-D) and the one inside the container file (call it FS-C).

Now, a sector goes bad that happens to be used by the container file.
Do this:
  1. First, make sure the disk is not throwing hardware errors. Maybe use a SMART utility. article
  2. Run fsck/chkdsk or whatever to repair the disk filesystem (FS-D). The OS might do this automatically. The filesystem has to be unmounted while it's being checked and repaired. (Note: "chkdsk /r" checks much more than "chkdsk /f".)
  3. Open VC container without mounting filesystem that's inside it (FS-C): Click Mount button, click Options button to expand the dialog, near bottom check the box "Do not mount". Type password, device will appear in list but the "Mount Directory" column will be empty.
  4. If opening the VC container fails, both volume headers inside the container are bad. Use a backup copy of the volume header, that you saved elsewhere. VC has a "repair" function to do this ?
  5. Right-click on the device in the list and select either the Check Filesystem or Repair Filesystem menu item. A small terminal window will open and FSCK will run. If instead you get an error dialog "xterm not found", go to CLI and run "apt install xterm", then try again.
  6. Mount the container in VeraCrypt, and check dmesg to see that there are no error messages. Nemo does not report dirty filesystems (bad).
  7. Then you're good, no need to copy or move the container.

John Wesorick's "Running fsck on a Truecrypt Volume"
CGSecurity's "Recover a TrueCrypt Volume"
Silvershock's "Opening/Decrypting VeraCrypt drives without mounting them (for fsck)"
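
For reference, a CLI equivalent of the Check/Repair step (a sketch; assumes the opened-but-not-mounted volume appears as /dev/mapper/veracrypt1 and contains an ext* filesystem):

sudo fsck -fv /dev/mapper/veracrypt1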



How secure is a VeraCrypt volume ?




Resizing a VeraCrypt volume:

VeraCryptExpander utility. Windows-only.

First, outside of VeraCrypt (Windows Disk Management), resize the partition. Then run VeraCryptExpander to resize filesystem inside VC volume ?



Bug-reporting: VeraCrypt / Tickets
Martin Brinkmann's "How to change the PIM of a VeraCrypt volume"
Andrew D. Anderson's "Auto-mounting a VeraCrypt volume under Ubuntu / Debian Linux"



Make a Btrfs filesystem inside a VeraCrypt volume:

I'm using Ubuntu GNOME 20.04.

Install software:

sudo apt install btrfs-progs
man mkfs.btrfs

Important: In the following steps, change device names and labels as appropriate for your system. Best to have no extra removable or optional devices attached while doing operations, to avoid confusion.

Try Btrfs first on a real device (no VC) to make sure it works:

# Attach a USB drive to system.

# Check device name:
sudo dmesg		# probably /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

sudo wipefs --force --all /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

# Make Btrfs filesystem:
sudo mkfs.btrfs --force --label LABEL /dev/sdb
# If it says "ERROR: /dev/sdb is mounted", do "sudo umount /dev/sdb".

sudo btrfs check /dev/sdb

# See that new filesystem got mounted, maybe as /media/user1/LABEL:
df | grep sdb			# doesn't show up here ?
blkid | grep sdb		# doesn't show up here !
lsblk --fs /dev/sdb

sudo umount /dev/sdb
# Detach USB drive from system.
# Attach USB drive to system.

sudo chmod a+w /media/user1/LABEL

# Go to file explorer, see that new filesystem got mounted.
# Copy some files to it.
# In file explorer, unmount, remove drive, attach again, see it appear again.

Mount an existing VeraCrypt-encrypted volume to check device names:

# Attach USB drive to system.

# Run VeraCrypt GUI and mount volume.

# Check names:
lsblk --fs /dev/sdb
df -h | grep verac
# On my system, /dev/sdb to /dev/mapper/veracryptN mounted onto /media/veracryptN

# In VeraCrypt GUI, dismount volume.

# Detach USB drive from system.

VeraCrypt-encrypt a volume then add Btrfs:

# Attach USB drive to system.
# In file-explorer, if drive appears, ignore it.

# Run VeraCrypt GUI to create volume.
# Choose filesystem type "Linux Ext4" and "Quick format".
# Choose "I will mount the volume only on Linux".

# In VeraCrypt GUI, mount the volume.

# In file-explorer, in "Other Locations", drive should appear, click to unmount it.

# Check names:
df -h | grep verac		# volume does not appear

# Make Btrfs filesystem:
sudo mkfs.btrfs --force --label LABEL /dev/mapper/veracryptN

sudo btrfs check /dev/mapper/veracryptN

lsblk --fs /dev/mapper/veracryptN

# In VeraCrypt GUI, dismount the volume.

# Detach USB drive from system.

# Attach USB drive to system.
# In file-explorer, if drive appears (shouldn't), ignore it.

# In VeraCrypt GUI, mount the volume.

sudo chmod a+w /media/veracryptN

# In file-explorer, in "Other Locations", drive should appear.
# Copy files to it.

# Dismount in VC GUI, detach, attach, mount in VC GUI.
# Check files.

# Now mount/unmount can be done through VeraCrypt
# as usual; no need to do any special Btrfs commands.

One complication: it's best to mount a non-system Btrfs filesystem with the "noatime" flag specified, to avoid triggering COW on metadata when you read a file. In VeraCrypt GUI, specify that in Settings / Preferences / Mount Options / mount options. In VeraCrypt CLI, add "--fs-options=noatime". I would do this for all non-system volumes, regardless of filesystem type. Probably not a good idea to do it for a system volume, although you could do it to everything under your home directory via "chattr -R +A ~".

To see the flags after a volume is mounted:

mount | grep veracrypt5
# for Btrfs you probably want similar to: rw,noatime,space_cache,subvolid=5,subvol=/

I ended up doing that for all of my VeraCrypt volumes, regardless of filesystem type.

Make an encrypted ZFS filesystem (not using VeraCrypt):

I was going to try making a ZFS filesystem inside a VeraCrypt volume, but ZFS supports encryption natively, so no need to use VeraCrypt.

I'm using Ubuntu GNOME 20.04.

https://itsfoss.com/zfs-ubuntu/
https://wiki.ubuntu.com/Kernel/Reference/ZFS

I've read on /r/ZFS that ZFS is not intended for use with removable/USB drives. It's intended for large, static, enterprise configurations. It should work over USB; it's just that USB is inherently less reliable.

From someone on /r/ZFS:
"When you mount the ZFS pool to the system, it mounts to a directory in your filesystem. It won't show as a separate volume."

Apparently installing zfs-fuse would remove zfsutils-linux. I'm told zfs-fuse would give lower performance.

Install software:

sudo apt install zfsutils-linux zfs-dkms
# Probably get a license dialog, have to press Return for OK.
# May have to reboot at this point.

man zfs
man zpool
zfs list

Important: In the following steps, change device names and labels as appropriate for your system. Best to have no extra removable or optional devices attached while doing operations, to avoid confusion.

Try ZFS unencrypted first on a real device (no VC) to make sure it works:

# Attach a USB drive to system.

# Check device name:
sudo dmesg		# probably /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

sudo wipefs --force --all /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

zfs list
sudo zpool create MYPOOL /dev/sdb
sudo zpool status MYPOOL
zfs list
df -h | grep POOL
mount | grep zfs

# Make ZFS filesystem:
sudo zfs create MYPOOL/fs1
# If it says "ERROR: /dev/sdb is mounted", do "sudo umount /dev/sdb".
zfs list
df -h | grep POOL
mount | grep zfs
ls -ld /MYPOOL/fs1
lsblk --fs /dev/sdb
# Now filesystem is mounted and usable.
sudo chmod a+w /MYPOOL/fs1

cp /bin/ls /MYPOOL/fs1/aaa		# copy a file to it

sudo zpool scrub MYPOOL		# test data integrity
sudo zpool status -v MYPOOL	# if "scrub in progress", do again

sudo zpool export MYPOOL
sudo zpool status MYPOOL
# Detach USB drive from system.

# Attach USB drive to system.
sudo dmesg
sudo zpool import MYPOOL
zfs list
sudo zpool status MYPOOL
ls -l /MYPOOL/fs1

sudo zpool export MYPOOL
# Detach USB drive from system.

ZFS encrypted on a real device:

# https://www.medo64.com/2020/05/installing-encrypted-uefi-zfs-root-on-ubuntu-20-04/
# https://www.medo64.com/2020/04/installing-uefi-zfs-root-on-ubuntu-20-04/
# https://www.medo64.com/2020/06/testing-native-zfs-encryption-speed/
# https://blog.heckel.io/2017/01/08/zfs-encryption-openzfs-zfs-on-linux/
# https://linsomniac.gitlab.io/post/2020-04-09-ubuntu-2004-encrypted-zfs/

# Attach a USB drive to system.

# Check device name:
sudo dmesg		# probably /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

sudo wipefs --force --all /dev/sdb
# If you look in file explorer and see that the device got mounted, ignore that.

zfs list
sudo zpool create -O encryption=aes-256-gcm -O keylocation=prompt -O keyformat=passphrase -f MYPOOL /dev/sdb
# Passphrase minimum of 8 chars.

# Make ZFS filesystem:
sudo zfs create MYPOOL/fs1
# If it says "ERROR: /dev/sdb is mounted", do "sudo umount /dev/sdb".
ls -ld /MYPOOL/fs1
lsblk --fs /dev/sdb
# Now filesystem is mounted and usable.
sudo chmod a+w /MYPOOL/fs1

cp /bin/ls /MYPOOL/fs1/aaa		# copy a file to it

# Test data integrity:
sudo zpool scrub MYPOOL
sudo zpool status -v MYPOOL		# if "scrub in progress", repeat

sudo zpool export MYPOOL
# Detach USB drive from system.

# Attach USB drive to system.
sudo zpool import -l MYPOOL
ls -l /MYPOOL/fs1

sudo zpool export MYPOOL
# Detach USB drive from system.

ZFS using an encrypted VeraCrypt volume (FAILED):

# Attach USB drive to system.
# In file-explorer, if drive appears, ignore it.
sudo dmesg
sudo wipefs --force --all /dev/sdb

# Run VeraCrypt GUI to create volume.
# Choose filesystem type "Linux Ext4" and "Quick format".
# Choose "I will mount the volume only on Linux".

# In VeraCrypt GUI, mount the volume.

# Check names:
df -h | grep verac

# In file explorer, Other Locations, find device and unmount it.

sudo zpool create -f MYPOOL /dev/mapper/veracryptN
zfs list
sudo zpool status MYPOOL

# Make ZFS filesystem:
sudo zfs create MYPOOL/fs1
ls -ld /MYPOOL/fs1
lsblk --fs /dev/mapper/veracryptN
# Now filesystem is mounted and usable.
df -h | grep MYPOOL
sudo chmod a+w /MYPOOL/fs1

cp /bin/ls /MYPOOL/fs1/aaa		# copy a file to it

# Test data integrity:
sudo zpool scrub MYPOOL
sudo zpool status -v MYPOOL		# if "scrub in progress", repeat

sudo umount /MYPOOL/fs1
sudo zpool export MYPOOL
# In VeraCrypt GUI, dismount the volume.
# Detach USB drive from system.

# Attach USB drive to system.
# In VeraCrypt GUI, mount the volume by clicking "Mount",
# type password, click "Options", check "Filesystem - do not mount".
#sudo zpool create -f MYPOOL /dev/mapper/veracryptN
sudo zpool create -f MYPOOL
sudo zfs create MYPOOL/fs1
sudo mkdir /MYPOOL/fs1
# FAIL: can't figure out how to get filesystem defined in pool without creating it anew
lsblk --fs /dev/mapper/veracryptN
sudo zfs mount MYPOOL/fs1
sudo mount -t zfs /dev/mapper/veracryptN /MYPOOL/fs1

ls -l /MYPOOL/fs1

sudo zpool export MYPOOL
# In VeraCrypt GUI, dismount the volume.
# Detach USB drive from system.




VeraCrypt on Linux uses FUSE to implement the filesystem "driver". Apparently the veracrypt application itself is used as GUI app, CLI app, and FUSE adapter/handler/daemon. "man fuse" https://github.com/libfuse/libfuse



Sarbasish Basu's "How to mount encrypted VeraCrypt or other volumes on an Android device"
EDS (Encrypted Data Store)





LUKS encryption



LVM is a volume manager; LUKS (Linux Unified Key Setup) is an encryption module.

LVM/LUKS as used on Ubuntu* distros to do "full-disk-encryption" really isn't "whole disk" encryption: partition table and boot partition are not encrypted. (Apparently there is a tricky way to also encrypt the boot-loader second-stage file-system: article)
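
A quick way to see which pieces are inside the encryption and which are not (real command; output varies by system):

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT
# /boot and any EFI partition show up as plain partitions;
# root and swap LVs show up nested under a "crypt" device.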



Milosz Galazka's "How to backup or restore LUKS header"
Vivek Gite's "How to backup and restore LUKS header"
Alvin Alexander's "Linux Mint: How to change the disk encryption password"
Diverto's "Cracking LUKS/dm-crypt passphrases"
Tails' "Creating and using LUKS encrypted volumes"
Tyler Burton's "How to migrate from TrueCrypt to LUKS file containers"
Vivek Gite's "How to change LUKS disk encryption passphrase"
Vivek Gite's "How to enable LUKS disk encryption with keyfile on Linux"
Lennart Poettering's "Unlocking LUKS2 volumes with TPM2, FIDO2, PKCS#11 Security Hardware on systemd 248"
Kees Cook's "GRUB and LUKS"

Michael Larabel's "The 2019 Laptop Performance Cost To Linux Full-Disk Encryption"

Pawit Pornkitprasan's "Full Disk Encryption on Arch Linux backed by TPM 2.0"
ArchWiki's "Trusted Platform Module"
linux-luks-tpm-boot
"lsmod | grep tpm"




# see all algorithms available in your kernel
cryptsetup benchmark

# dump parameters
sudo cryptsetup luksDump /dev/sda6 >saved.luksDump.sda6.txt

# make backup of header
sudo cryptsetup luksHeaderBackup /dev/sda6 --header-backup-file saved.luks_backup_sda6



Creating a LUKS encrypted full-disk volume


# started with Disks app showing no partitions

sudo cryptsetup --type luks2 --iter-time 4100 --verify-passphrase luksFormat /dev/sdb
# I added the --iter-time setting; just wanted something not default.
# will ask twice for passphrase

# Make a backup of the LUKS header:
sudo cryptsetup luksHeaderBackup /dev/sdb --header-backup-file LUKS2.HeaderBackup.MYVOL1

# MYVOL1 is an arbitrary, temporary name.
# Device /dev/mapper/MYVOL1 will appear.
sudo cryptsetup luksOpen /dev/sdb MYVOL1
# will ask for passphrase

# Format the volume.
# Use --mixed if size less than 109 MB.
# VOLNAME is a permanent name for the filesystem.
sudo mkfs.btrfs --label VOLNAME --mixed /dev/mapper/MYVOL1

# If you forget the label (I did), later do:
sudo btrfs filesystem label /dev/mapper/MYVOL1 VOLNAME

sync

sudo cryptsetup luksClose MYVOL1

eject /dev/sdb		# probably get "unable to open"

# unplug the USB drive, and plug it in again

# File manager should detect the drive, see that it is LUKS,
# ask for passphrase, and mount it.

# https://unix.stackexchange.com/questions/319592/set-default-mount-options-for-usb
# Change mount options:
# Run Disks app.
# Select the device in the left pane.
# Select the filesystem (lower "Volume") in the main pane.
# Click on the "gears" button below.
# Select "Edit mount options". 
# Slide "User Session Defaults" to left to turn it off.
# Un-click "Mount at system startup".
# Add a name in the "" field, it shows up in fstab later.
# Edit mount options (maybe add "noatime").
# Click Save.
# Quit out of Disks app.

cat /etc/fstab		# to see mods made by Disks app
# note the UUID of the new disk

# In file manager, dismount filesystem.
# Unplug drive and plug it in again.
# Filesystem may get mounted automatically.

mount				# to see the new mount flags
sudo chmod 777 /mnt/THEUUID




Creating a LUKS encrypted container

From Oleg Afonin's "Breaking LUKS Encryption":
"LUKS can be used to create and run encrypted containers in a manner similar to other crypto containers such as VeraCrypt."

Following article1 (seems best to me):

dd if=/dev/zero of=vol1.luks conv=fsync bs=1 count=0 seek=50M

sudo cryptsetup --type luks2 --iter-time 4100 --verify-passphrase luksFormat vol1.luks
# I added the --iter-time setting; just wanted something not default.
# will ask twice for passphrase

# Make a backup of the LUKS header:
sudo cryptsetup luksHeaderBackup vol1.luks --header-backup-file LUKS2.HeaderBackup.MYVOL1

# MYVOL1 is an arbitrary, temporary name.
# Device /dev/mapper/MYVOL1 will appear.
sudo cryptsetup luksOpen vol1.luks MYVOL1
# will ask for passphrase

# Format the volume.
# Use --mixed if size less than 109 MB.
# VOLNAME is a permanent name for the filesystem.
sudo mkfs.btrfs --label VOLNAME --mixed /dev/mapper/MYVOL1

# If you forget the label (I did), later do:
sudo btrfs filesystem label /dev/mapper/MYVOL1 VOLNAME

# vol1 is an arbitrary, temporary mount-point name.
sudo mkdir /mnt/vol1
# I like to use noatime; maybe you don't.
sudo mount -o defaults,noatime /dev/mapper/MYVOL1 /mnt/vol1
sudo chown -R $USER /mnt/vol1

sudo umount /mnt/vol1
sudo cryptsetup luksClose MYVOL1

Following article2:

SIZE=500
FILE=xxx.luks
fallocate -l ${SIZE}M $FILE
dd if=/dev/urandom of=$FILE conv=fsync bs=1M count=$SIZE

# run Disks utility
# select "Attach Disk Image"
# un-check "Set up read-only loop device"
# select file xxx.luks
# click Attach
# Now the file is attached as if it were a hard drive.

# Select the new "drive", click the Gears icon,
# choose type Encrypted (LUKS + ext4), set passphrase etc.
# Click Format.

# Eject the drive in Disks or in file manager.

Use Disks application to Attach the container volume when you want to use it.

Or: associate file-extension ".luks" with "Disk Image Mounter" application, then you can double-click on any "*.luks" container file to mount it. BUT: it will mount read-only ! You have to REMOVE the association to "Disk Image Mounter" and create an association to:

'/usr/bin/gnome-disk-image-mounter' --writable

# Note: This is not available in KDE, can't find any equivalent.
# Maybe clevis-luks, clevis-luks-bind ?
# Maybe create a new Dolphin "service menu" ?
# I created a new "Service Menu" to do it: lukscontainerfile.

Change mount options to add noatime: Run Disks application.




How secure is a LUKS* volume ?

My understanding is that there is no stored hash of the password (passphrase) itself in the LUKS header, so there is nothing to extract and then match directly with hashcat or similar. What is stored in each key slot is an encrypted value V of the master key K that encrypts the actual data. When the user opens the volume, the passphrase P and a salt S are run through a slow key-derivation function to decrypt V and produce a candidate K. The header also stores a digest of the correct master key, so cryptsetup can tell whether the candidate K (and therefore P) is right without reading the data area; but an attacker still has to repeat the slow key derivation and key-slot decryption for every password guess.

From https://blog.elcomsoft.com/2020/08/breaking-luks-encryption/ :
"Unlike TrueCrypt/VeraCrypt, LUKS does store the information about the selected encryption settings in the encryption metadata, making it possible to detect the encryption settings prior to launching the attack."

Mike Fettis' "Cracking linux full disc encryption, luks with hashcat"
Forensic Focus's "Bruteforcing Linux Full Disk Encryption (LUKS) With Hashcat"
Cryptsetup Wiki
Milan Broz's "LUKS2 On-Disk Format Specification 1.0.0"
Darell Tan's "Bruteforcing LUKS Volumes Explained"
glv2 / bruteforce-luks (uses cryptsetup API to be faster)

Make a LUKS1 volume, and dictionary-attack it with hashcat:

sudo apt install hashcat	# or hashcat-nvidia
hashcat --version			# has to be 3.5.0 or better

# LUKS1 container file will be vol1.luks
dd if=/dev/zero of=vol1.luks conv=fsync bs=1 count=0 seek=50M

# Cracking will NOT work if you specify --iter-time
sudo cryptsetup --type luks1 --verify-passphrase luksFormat vol1.luks

# Cracking will NOT work unless you make a filesystem inside the container.
sudo cryptsetup luksOpen vol1.luks MYVOL1
sudo mkfs.ext4 /dev/mapper/MYVOL1
sudo cryptsetup luksClose MYVOL1

# LUKS1 container file is vol1.luks
# Dictionary is dict.txt; have container's password on one of the lines in it.
hashcat --force -m 14600 -a 0 -w 3 vol1.luks dict.txt -o luks_password.txt
# Keep typing "s" for "status" until done.
# See "Status ......: Cracked".
sudo cat luks_password.txt

Apparently hashcat 5.1.0 does not support attacking LUKS2 volumes.



Sarbasish Basu's "How to mount encrypted VeraCrypt or other volumes on an Android device"
EDS (Encrypted Data Store)





Miscellaneous



Periodically check the health of your drives

Run SMART utility (Disks application, or smartctl).

If HDD:
"sudo apt install smartmontools" and then "sudo smartctl -a /dev/sda | less"
"sudo apt install libatasmart-bin" and then "sudo skdump /dev/sda | less"


If drive is an NVMe SSD:
"sudo apt install nvme-cli" and then "sudo nvme smart-log /dev/nvme0"
"sudo apt install smartmontools" and then "sudo smartctl -a /dev/nvme0n1p2 | less"

Thomas-Krenn's "SMART tests with smartctl"
Wikipedia's "S.M.A.R.T."
Chris Siebenmann's "SMART Threshold numbers turn out to not be useful for us in practice"

Disk test in BIOS or GRUB/EFI ?

I get an email every day from smartd daemon about "number of Error Log entries increased" on my SSD. Support for my laptop said "Those errors are not related to the SSD drive status and SMART status is OK, it's just telling you that there are two additional entries to the log." and pointed me to openmediavault thread



Drive Interfaces


Form-factors

  • 2.5 Inch:

    Standard form-factor for hard disks in laptops.

  • M.2:

    Smaller form-factor; successor to the mSATA (Mini-SATA) standard. Too small to hold an HDD. Other peripherals such as Wi-Fi cards may also make use of M.2 connectors.

    From Josh Covington's "NVMe vs. M.2 vs. SATA - What's the Difference?":
    M.2 is just the form factor. M.2 drives can come in SATA versions and NVMe versions, which describes the bus they use to electrically communicate with the other PC components. SATA M.2 SSD drives and 2.5" SATA SSDs actually operate at virtually identical spec. NVMe M.2's on the other hand, definitely do not ...

    From Chris Siebenmann's "Getting NVMe and related terminology straight":
    The dominant consumer form factor and physical connector for NVMe SSDs is M.2, specifically what is called 'M.2 2280' (the 2280 tells you the physical size). If you say 'NVMe SSD' with no qualification, many people will assume you are talking about an M.2 2280 NVMe SSD, or at least an M.2 22xx NVMe SSD.



Electrical interfaces and protocols

  • AHCI SATA:

    Communicates through SATA controller before getting to CPU.

    Limited to 1 command queue and 32 commands per queue.

    From Chris Siebenmann's "Getting NVMe and related terminology straight":
    Traditional SATA SSDs are, well, SATA SSDs, in the usual 2.5" form factor and with the usual SATA edge connectors (which are the same for 2.5" and 3.5" drives). If you simply say 'SSD' today, most people will probably assume that you mean a SATA SSD, not a NVMe SSD. Certainly I will. If I want to be precise I should use 'SATA SSD', though. SATA comes in various speeds but today everyone will assume 6 Gbits/s SATA (SATA 3.x).


  • NVMe (Non-Volatile Memory Express):

    Can communicate directly to CPU.

    Up to 64K command queues and up to 64K commands per queue.

    From Chris Siebenmann's "Getting NVMe and related terminology straight":
    NVMe, also known as NVM Express, is the general standard for accessing non-volatile storage over PCIe (aka PCI Express). NVMe doesn't specify any particular drive form factor or way of physically connecting drives, but it does mean PCIe ...

    From Josh Covington's "NVMe vs. M.2 vs. SATA - What's the Difference?":
    NVMe ... developed to allow modern SSDs to operate at the read/write speeds their flash memory is capable of. Essentially, it allows flash memory to operate as an SSD directly through the PCIe interface rather than going through SATA and being limited by the slower SATA speeds.

    ...

    NVMe drives provide write speeds as high as 3500 MB/s [PCI Express Gen 3 bandwidth]. That's 7x over SATA 3 SSDs and as much as 35x over spinning HDDs!

    Most consumer NVMe drives use the M.2 form-factor, but NVMe drives also come in other form-factors such as U.2 and PCIe add-in cards.

    Faster NVMe chips and drives coming in 2021 or later will support PCI Express Gen 4 bandwidth, maybe 5000 to 7000 MB/s.

    Figure out how many PCIe lanes the SSD (controller) is using:
    
    sudo lspci -vv | grep 'Non-Volatile'                # get device number
    sudo lspci -vv -s NN:NN.N                           # see all info for device
    sudo lspci -vv -s NN:NN.N | egrep 'LnkCap:|LnkSta:' # see lane info
    # you could search for datasheet for NVMe controller chip to see total lanes
    

    Wikipedia's "NVM Express"



Christopher Harper's "NVMe vs M.2 vs SATA: Which is the best for your SSD?"
Anthony Spence's "NVMe vs SATA vs M.2 : What's the difference when it comes to SSDs?"



Solid-State Drive (SSD)

Note: an SSD is not just an HDD with chips instead of platters. Usually an SSD will have a cache, and firmware that does block-level (maybe 128 KB) operations, and implements a mapping between sector numbers from OS and block numbers in the chips, and does wear-leveling, and has over-provisioning. [Some fancier HDDs may have the same features.]

So the SSD may lie to you about performance; "overwriting" a sector probably won't actually erase the old data from the chips; and yet "deleted" data usually cannot be recovered just by running a recovery utility.

ArchWiki's "Solid state drive"
Wikipedia's "Trim (computing)"
Alan Formy-Duval's "Extend the life of your SSD drive with fstrim"
speed guide's "SSD Linux Tweaks"
Justin Ellingwood's "How To Configure Periodic TRIM for SSD Storage on Linux Servers"
stevea's "Setting up and using SSD drives in Fedora Linux"
Chris Siebenmann's "Understanding plain Linux NVMe device names"


sudo apt install nvme-cli
man nvme

sudo nvme list
sudo nvme fw-log /dev/nvme0n1
sudo nvme error-log /dev/nvme0n1 | less

# see if an "NVMe fabric" is running ?
sudo nvme discover -t rdma -a IPADDRESS

# get block size, but it may lie to you:
sudo blockdev --getbsz /dev/sdaN
sudo blockdev --getbsz /dev/nvme0n1
cat /sys/block/sda/queue/logical_block_size
cat /sys/block/sda/queue/physical_block_size

sudo nvme id-ctrl /dev/nvme0n1 -H | less

# see if drive supports TRIM:
sudo hdparm -I /dev/sda | grep TRIM
sudo ??? /dev/nvme0n1p3 | grep TRIM
lsblk --discard

# Is OS doing TRIM via "discard" option in /etc/fstab,
# cron job, or systemctl fstrim.timer ?
mount | grep discard
systemctl status fstrim.service fstrim.timer

# Is the firmware updatable ?
sudo fwupdmgr get-devices
# Look for updates:
sudo fwupdmgr refresh

Swap

Most distros run a TRIM via a systemd service about once/week.
From Fedora docs:
"The Linux swap code will issue TRIM commands to TRIM-enabled devices ..."
But I don't see the kernel code doing so, such as in swapfile.c
And from ArchWiki:
"If using an SSD with TRIM support, consider using discard in the swap line in fstab. If activating swap manually with swapon, using the -d/--discard parameter achieves the same."
Note: swap is not mounted as a filesystem, so mount's "discard" flag does not apply, and fstrim only operates on mounted filesystems.
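
A sketch of the fstab line and the manual command that the ArchWiki quote describes (device name hypothetical):

# /etc/fstab swap entry with discard:
/dev/mapper/cryptswap1  none  swap  sw,discard  0  0
# or, when activating manually:
sudo swapon --discard /dev/mapper/cryptswap1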

You could have no swap, or use zram, instead of a swap partition or swap-file. Doing so would reduce use of SSD.
"swapon --show"
"sudo swapoff -a"
But this is not persistent across reboot.
Comment out swap line in /etc/fstab.
Also check contents of /etc/crypttab.
Don't remove /dev/vgkubuntu-swap_1 partition; I did that and something didn't like it.

You could change "swappiness" to 0 to reduce traffic to swap partition on SSD.
"sudo sysctl vm.swappiness"
Edit /etc/sysctl.conf and add "vm.swappiness=0"

Reducing writes to disk

Reducing writes really is not necessary; modern SSDs have around a 100 TBW lifetime, which would be 10 years while writing 30 GB per day.

[Most of this assumes you have plenty of RAM.]

Firefox and Thunderbird have settings to put cache in RAM instead of disk. Doing so would reduce use of SSD. Can't find similar in chromium browsers.

/tmp and /var/crash and ~/.cache (also /var/log, but I don't recommend that) can be mounted in RAM (tmpfs) instead of disk. Doing so would reduce use of SSD. I have seen cautions that you should not put /var/tmp on tmpfs. [Note: if you have swap, there are times when data from tmpfs can be swapped out.] [Note: "echo $TMPDIR" may show another place used for temp files.]
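
A sketch of /etc/fstab entries for putting /tmp and /var/crash on tmpfs (sizes are just examples):

tmpfs  /tmp        tmpfs  defaults,noatime,mode=1777,size=2G   0  0
tmpfs  /var/crash  tmpfs  defaults,noatime,size=512M           0  0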

Do Timeshift backups to external disk, not to system disk (safer, too).

Don't have a swap partition or swap-file on SSD.

System journal can be kept in RAM, not disk, but then you lose past history. Edit /etc/systemd/journald.conf to have "Storage=volatile" in [Journal] section.
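
The relevant part of /etc/systemd/journald.conf would look like this (then "sudo systemctl restart systemd-journald" to apply):

[Journal]
Storage=volatile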

From thread on StackOverflow:
There is no standard way for a SSD to report its page size or erase block size. Few if any manufacturers report them in the datasheets. (Because they may change during the lifetime of a SKU, for example because of changing suppliers.)

...

Best leave about 5% to 10% of unallocated space outside any partitions. Having overprovisioned space is of great help to SSDs in maintaining their performance in time.

[This] will leave a number of blocks on the "never touched by the OS" list. This will simplify the life of the microcontroller which needs to juggle block allocations and erasures; remember that the poor little microcontroller has no notion of file systems, and from its point of view all the blocks touched by the operating system are in use, unless of course the OS is kind enough to trim unused blocks from time to time. This is especially important for external SSDs, which quite often may not even expose a trim interface.

... so the microcontroller has more space to move around stuff coming from within the partitions, without touching the interior of the partitions in the intermediate stages.

Some SSD drives have two types of storage in them, a small faster cache (MLC), and then the slower main storage (TLC). Samsung EVO has this; Samsung Pro has only the faster storage (all MLC). Even faster is SLC.

Seen many places: using 4 KB block size everywhere is best, and align partitions to 1 MB or 2 MB boundaries.
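
To check whether an existing partition is aligned (real commands; the device and partition number are just examples):

sudo parted /dev/sda align-check optimal 1    # checks partition 1
sudo fdisk -l /dev/sda                        # start sectors divisible by 2048 mean 1 MiB alignment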

"Some SSD drives have factory 'over-provisioning', which reserves a few % space of total capacity."

I asked about partitioning and over-provisioning, on reddit:
> I plan to have one partition for / and /home.
> I think I will always have 20% free and be
> doing weekly TRIMs. Should I just use all of the space
> visible to me as the single big partition ?

... just go ahead and use the entirety of space ... Modern SSDs have more space than advertised specifically for balancing, they do it on their own.

...

Always use LVM. Create the logical volumes as you need/want. But given that it is very easy to extend logical volume, create them with the size you need. Say, have a 30 GB logical volume for root filesystem, 1 GB volume for swap and 150 GB for home. You do not have to fill up all your SSD. If in a few months time you see that you need more space in /home, extend it.

Might as well do fsck at every boot. For / as ext4 filesystem on LUKS: "sudo tune2fs -c 1 /dev/mapper/vgkubuntu-root"

For drives that support encryption on the SSD (in SSD firmware or hardware), a special utility app from the manufacturer will be needed to enable/disable the encryption.



Note: USB flash drives really are not intended for heavy-duty use, and can heat up and/or fail.

Note: SD cards can be very unreliable, and can fail suddenly and catastrophically.



Deliberately creating a damaged device, and more:
Michael Ablassmeier's "dd, bs= and why you should use conv=fsync"
dm-flakey



AMD's PSP (Platform Security Processor) and CCP (Cryptographic Coprocessor) hardware

Wikipedia's "AMD Platform Security Processor"
"What is known about the capabilities of AMD's Secure Processor?"
"AMD PSP 2.0 AMD Secure Processor"
Apparently this just verifies firmware contents, it has no remote capability ? But see: reddit thread


sudo lshw -class generic
sudo ss -lptun | grep :8732	# supposedly listens here; not on my system

sudo dmesg | grep -i ccp
grep -i ccp /proc/crypto
modinfo ccp
Greg Marsden's "Using AMD Secure Memory Encryption with Oracle Linux"
CCP-related source code in kernel
more kernel code
OpenSSL and AMD Cryptographic CoProcessor (CCP)
"apt show librte-pmd-ccp20.0"
https://doc.dpdk.org/guides/cryptodevs/ccp.html
"apt show dpdk"
https://forum.gigabyte.us/thread/9479/bug-linux-x570-aorus-initialize
https://elixir.bootlin.com/linux/latest/source/drivers/crypto/ccp/ccp-dev-v5.c#L791
AMD CCP dev says it's a BIOS issue.