Basics
- Network control / firewalls:
Inside the kernel, there is netfilter, which is a set of hooks for kernel modules to control networking.
Wikipedia's "Netfilter"
On top of netfilter, there are iptables and nftables (use one, not both).
Each consists of a set of kernel modules and a set of user-space tools.
For iptables, the kernel modules are ip_tables, ip6_tables, arp_tables, and ebtables, and the user-space tools are iptables, ip6tables, arptables, and ebtables.
For nftables, the kernel module is nftables, and the user-space tool is nft.
Wikipedia's "nftables"
Debian Wiki's "nftables"
nftables wiki
On top of iptables, there are ufw and gufw or firewalld (use one).
On top of nftables, there is firewalld.
There also is an iptables-nft compatibility layer that lets you use iptables on top of nftables, and an iptables-translate utility to translate many existing iptables rules into equivalent nftables rules.
- User Control:
Control what files and directories a user or group can access.
- Application Control and Security:
Control what files, directories, system calls, and networks an application can access.
Derived from Ubuntu Security Podcast episode 83 7/2020:
Stacking == using multiple security modules in the same system.
As of the 5.8 kernel, stacking is limited, but this is being changed.
Current (5.8) stacking rules:
Major modules (SELinux, AppArmor, Smack): can't stack with another major module, because they all try to attach their security data blobs to the same hooks inside the kernel.
Minor modules (TOMOYO, Yama, LoadPin): can stack.
Some modules might be allowed to stack, but it may not make sense to stack them on each other, because they conflict or duplicate each other.
Wikipedia's "Linux Security Modules"
Network Control and Firewalls
This section is for tools that generally run unattended. For tools used by a person, see the Network Monitoring section.
Some terms:
- IDS: Intrusion Detection System.
- SIEM: Security Information and Event Management.
Ubuntu's "DoINeedAFirewall"
Adrian Grigorof's "Open Source Security Controls"
You can change your MAC address to any value, either for Wi-Fi or for wired Ethernet, via the Network or Network Manager application.
Firewalls
"Netfilter is the framework in the Linux kernel, which implements the rule and filters provided by the user, through an interface available to user called iptables."
GUFW and UFW (simplified UIs to use instead of iptables)
Coming in KDE: plasma-firewall, a GUI on top of multiple types of back-end firewalls such as ufw and firewalld.
- /etc/hosts file:
- iptables:
"modinfo ip_tables"
There are 5 "tables": filter, nat, mangle, raw, security. We only care about the filter table, which has these built-in "chains" of rules: INPUT, FORWARD, OUTPUT. You can create new chains if you wish. Each rule ends with an action which can be: ACCEPT, DROP, LOG, or name of another chain. (There is more, but that's all we need to know.)
To see how much traffic is passing through each section of rules in the filter table, do "sudo iptables -L -v" (can reset counters via "sudo iptables -Z"). On my system, with Windscribe VPN active, after doing a bunch of downloading, that gives:
Chain INPUT (policy ACCEPT 2005K packets, 2860M bytes)
 pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source destination

Chain OUTPUT (policy DROP 306 packets, 21425 bytes)
 pkts bytes target prot opt in out source destination
  486 75925 ACCEPT all -- any any  anywhere 192.168.0.0/16
 1104 72461 ACCEPT all -- any any  anywhere 10.0.0.0/8
    0     0 ACCEPT all -- any any  anywhere 172.16.0.0/12
 495K  860M ACCEPT all -- any any  anywhere localhost
 1339  139K ACCEPT all -- any any  anywhere 104.20.122.38
 1369  143K ACCEPT all -- any any  anywhere 104.20.123.38
 417K   55M ACCEPT all -- any tun+ anywhere anywhere
    0     0 ACCEPT all -- any any  anywhere localhost
 423K   78M ACCEPT all -- any any  anywhere 89.238.nnn.nnn
To block SSH connections from any address, do "sudo iptables -A INPUT -p tcp --dport ssh -j DROP". Can do the same with http and https if you're not running a web server.
There is "ip6tables" which is separate but mostly has the same syntax as "iptables".
There is a "conntrack" module which lets you do things such as "ctstate" in rules.
To save changes so they survive across system restart, on Ubuntu-type systems, do "sudo apt install iptables-persistent", then turn off Windscribe VPN and firewall, then do "sudo su", and then "iptables-save >/etc/iptables/rules.v4" and "ip6tables-save >/etc/iptables/rules.v6".
You can write commands such as "-P INPUT DROP" into a file such as "iptables.txt" and then run "sudo su" then "iptables-restore <iptables.txt".
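Note that iptables-restore expects the same format that iptables-save produces (a table header, chain policies, rules, then COMMIT), so a minimal rules file looks roughly like this (a sketch):
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
COMMIT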
How-To Geek's "The Beginner's Guide to iptables, the Linux Firewall"
Supriyo Biswas's "An In-Depth Guide to iptables, the Linux Firewall"
Ravi Saive's "Basic Guide on IPTables (Linux Firewall) Tips / Commands"
Mitchell Anicas's "Iptables Essentials: Common Firewall Rules and Commands"
IP sets
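Basic ipset usage, a sketch (the set name and the address are made-up examples):
sudo apt install ipset
sudo ipset create blocklist hash:ip
sudo ipset add blocklist 203.0.113.5
sudo iptables -I INPUT -m set --match-set blocklist src -j DROP
sudo ipset list blocklist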
I ran a bash file with commands:
iptables -P INPUT DROP
iptables -I INPUT -i lo -j ACCEPT
iptables -I INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -P FORWARD DROP
Ran Windscribe VPN client, which resulted ("iptables -L") in:
Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere

Chain FORWARD (policy DROP)
target prot opt source destination

Chain OUTPUT (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere 192.168.0.0/16
ACCEPT all -- anywhere 10.0.0.0/8
ACCEPT all -- anywhere 172.16.0.12
ACCEPT all -- anywhere localhost
ACCEPT all -- anywhere 104.20.123.38
ACCEPT all -- anywhere 104.20.122.38
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere localhost
ACCEPT all -- anywhere 89.238.nnn.nnn
That anywhere-anywhere rule actually has "out=tun+" attached to it (can see that via "iptables -L -v"), so all that traffic goes through the VPN.
Put DROP on all IPv6 chains; my ISP doesn't support IPv6.
Ran a couple of net-testing apps on my phone (which is on my LAN), targeting my PC, and they show all ports blocked, no response to ping, no services offered.
If you want to log packets that don't match any rule, and end up getting DROPed by the chain policy, add a rule at the END of the chain via a command such as "iptables -A FORWARD -m limit --limit 1/minute -j LOG".
Default logfile for iptables is /var/log/kern.log;
"sudo dmesg -T" command shows same log, with some useful coloring added.
I would like to log/detect all applications that create output connections. But it seems this is very difficult to do in a human-readable way:
Super User's "With Linux iptables, is it possible to log the process/command name that initiates an outbound connection?"
Akkana's "Find out what processes are making network connections"
You can get instantaneous snapshots (not cumulative logs) by running "ss -tp" or "netstat -A inet -p".
To see IP address and other details about each network interface, run "ip -d address".
I want to log any incoming attempts on protocols/ports I don't use.
Found this: "You can add the --syn flag to make the rule match only packets with the SYN flag set, which is set only on new connection attempts."
Maybe do this:
"iptables -I INPUT -p tcp --dport ssh --syn -m limit --limit 1/minute -j LOG --log-prefix "Incoming SSH attempt ""
Do similar for FTP, Telnet, HTTP, HTTPS.
I didn't bother to make rules for TeamViewer (port 5938), Remote Desktop (port 3389), SMB (port 139), NFS (port 2049), VNC (port 5900), http-alt (port 8080), RTelnet (port 107), TFTP (port 69), Simple FTP (port 115), rsh (port 514), rsync (port 873), Telnet-TLS (port 992), PPTP (port 1723), SSDP (port 1900), CIFS (port 3020), UPnP (port 5000), Socks Proxy (port 1080), Microsoft-DS (port 445) attempts. You can go a little nuts with this stuff. I think I don't have listeners active for any of it, but it would be nice to log and drop the packets.
But it turns out you can make one rule for multiple ports, so I made:
"iptables -I INPUT -p tcp --match multiport --dports 5938,3389,139,2049,5900,8080,107,69,115,514,873,992,1723,1900,3020,5000 --syn -m limit --limit 1/minute -j LOG --log-prefix "Incoming suspicious port attempt ""
Limit of 15 port numbers per rule; had to split it into two.
If one of your rules logs incoming HTTP attempts, test it by putting address "localhost:80" into browser's address field, then looking in logs. Or test by running "curl localhost" on the CLI.
If one of your rules logs incoming HTTPS attempts, test it by putting address "https://localhost" into browser's address field, then looking in logs. Or test by running "curl localhost:443" on the CLI.
If your rules log incoming SSH and/or Telnet and/or FTP attempts, test them by running "ssh localhost" and/or "telnet localhost" and/or "ftp localhost" in CLI, then looking in logs. Or doing "curl localhost:22" and/or "curl localhost:23" and/or "curl localhost:21" in CLI, then looking in logs.
But I want to drop the packets after logging them. So I did this:
iptables -N I_LOG_DROP
iptables -A I_LOG_DROP -m limit --limit 4/minute -j LOG --log-prefix "IPTABLES-I-LOG-DROP: " --log-level 6
iptables -A I_LOG_DROP -j DROP
iptables -P INPUT DROP
iptables -A INPUT -p tcp --match multiport --dports 20,21,22,23,80,5938,3389,139,2049,5900,8080,107 --syn -j I_LOG_DROP
iptables -A INPUT -p tcp --match multiport --dports 69,115,514,873,992,1723,1900,3020,5000 --syn -j I_LOG_DROP
At some point, maybe it's easier to list the ports that should be open, instead of those that should be blocked. But maybe some high port numbers get opened dynamically.
"iptables -I INPUT -p tcp --match multiport ! --dports 80,443 --syn -m limit --limit 1/minute -j LOG --log-prefix "Incoming suspicious port attempt ""
Did this (in a shell script file):
iptables -F
iptables -Z
iptables -N I_LOG_DROP
iptables -A I_LOG_DROP -m limit --limit 4/minute -j LOG --log-prefix "IPTABLES-I-LOG-DROP: " --log-level 6
iptables -A I_LOG_DROP -j DROP
iptables -N O_LOG_DROP
iptables -A O_LOG_DROP -m limit --limit 4/minute -j LOG --log-prefix "IPTABLES-O-LOG-DROP: " --log-level 6
iptables -A O_LOG_DROP -j DROP
iptables -P INPUT DROP
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -m tcp --syn -j I_LOG_DROP
# UDP 1194 for ProtonVPN, UDP 5353 is multicast DNS / Avahi
iptables -A INPUT -p udp --match multiport ! --dports 22,67,68,80,443,1194,5353 -j I_LOG_DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# port 23189 used by SFTP
iptables -A OUTPUT -p tcp --match multiport ! --dports ssh,http,https,23189 --syn -j O_LOG_DROP
# UDP 1194 for ProtonVPN, UDP 5037 when using Wireshark
iptables -A OUTPUT -p udp --match multiport ! --dports 22,53,67,68,80,123,443,1194,1900,5037,5353,30000:64000 -j O_LOG_DROP
The whole thing gets more complicated because your VPN probably does iptables stuff too. Windscribe VPN adds rules and changes chain policies. I had to have a big shell script to add rules before the VPN starts, then a small script to do a couple of tweaks after VPN has started.
And the number of ports keeps growing and growing. Apps such as Firefox and torrent-client open lots of high port numbers. Most/all of them may be on localhost, so maybe you have to start wiring addresses into your rules. The whole thing just gets too complicated.
If you see an iptables LOG line in /var/log/kern.log that doesn't show source and dest ports, but gives "PROTO=n", look up that protocol number in /etc/protocols.
Do "sudo netstat -tulpn" to see what ports have listeners. (Also "sudo ss -lptu")
Around this time, I mostly gave up on iptables. I think it was the wrong approach. Instead, concentrate on reducing and understanding the number of listeners you have. It doesn't matter if an incoming packet gets through iptables, as long as no process is listening on that port.
Let the VPN do what it wants with iptables. Maybe do "sudo iptables -L -v" occasionally to see how much traffic is hitting each rule.
I think it would be different with a server, running only a few services. There you could allow only 10 or so open ports.
Maybe there are some "listeners" built into the protocol stack ? Turn off protocols you know you aren't using. Maybe do this to see unusual protocols:
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
...
# assuming your INPUT chain policy is ACCEPT, put these at the end of the chain
iptables -A INPUT -p tcp -j ACCEPT
iptables -A INPUT -p udp -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -m limit --limit 1/minute -j LOG --log-prefix "Incoming IPv4 protocol "
- ufw and gufw:
GUFW and UFW (simplified UIs to use instead of iptables)
Virdo, Boucheron and Juell's "How To Set Up a Firewall with UFW on Debian 10"
Vivek Gite's "How To Configure Firewall with UFW on Ubuntu 20.04 LTS"
Jahid Onik's "How To Configure Firewall with UFW on Ubuntu"
Daniel Aleksandersen's "How to switch firewalls from FirewallD to UFW"
"less /etc/default/ufw"
"less /etc/ufw/sysctl.conf"
"sudo ufw status verbose"
Files in /etc/gufw: "sudo less /etc/gufw/gufw.cfg" to find which profile is in use, then "sudo less /etc/gufw/Home.profile" to see that profile.
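Typical basic UFW usage from the CLI (a sketch; allow only the services you actually run):
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh          # or "sudo ufw limit ssh" to rate-limit it
sudo ufw enable
sudo ufw status verbose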
- firewalld:
firewalld
LinuxTeck's "15 basic useful firewall-cmd commands in Linux"
Brian Boucheron's "How To Set Up a Firewall Using firewalld on CentOS 8"
GUI for firewalld: firewall-config. Very detailed, a bit overwhelming.
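Typical firewall-cmd usage, a sketch (zone and service names will vary):
sudo firewall-cmd --state
sudo firewall-cmd --get-active-zones
sudo firewall-cmd --list-all
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --reload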
Avoid duelling firewalls. You can't run firewalld and ufw/gufw at the same time. firewalld and ufw both use iptables.
"whereis -b -B /usr/bin /usr/sbin -f gufw ufw firewalld" - plasma-firewall:
Coming in KDE: plasma-firewall, a GUI on top of multiple types of back-end firewalls such as ufw and firewalld.
- Nftables:
This is a replacement for iptables, ip6tables, arptables, and ebtables. It is a more procedural language, I think. Should avoid a lot of duplication that you can get in iptables rules, and scale better. Also combines IPv4 and IPv6 in one structure. Not supported by VPNs yet, I think. Still in development 11/2018, I think.
Debian / wiki / nftables
Nftables wiki
Alistair Ross's "Hello nftables, Goodbye iptables"
"apt show nftables"
"nft list tables"
- BPF (Berkeley Packet Filter; AKA cBPF) and eBPF:
Possible replacement for iptables and nftables.
BPF has a fairly small VM inside the kernel, and byte-code can be placed in it to do network processing.
eBPF adds attaching code in many more places inside the kernel, adds a Just-In-Time compiler to native code (for some CPU architectures), and adds global state (in arrays, key-value pairs).
Also has ability to jump out to user-land code, bypassing much of the standard network stack, for example, in cases where reduced features and higher performance make sense. Or go the other way: where there used to be a (costly) jump out to user-land, instead do the operation inside the kernel, in the VM.
Wikipedia's "Berkeley Packet Filter"
Filip Nikolovski's "TIL: eBPF is awesome"
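For a quick look at classic BPF, tcpdump compiles its filter expressions into BPF instructions, which it can dump instead of capturing (a sketch):
sudo tcpdump -d 'tcp dst port 443'    # print the compiled BPF program, one instruction per line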
Eric Geier's "Little Known GUI Firewall Options for Linux"
IDSs and loggers and file-checkers
- OSSEC:
OSSEC
File integrity monitoring, log monitoring, rootcheck, and process monitoring.
- Snort:
Snort
Can be used as a straight packet sniffer like tcpdump, a packet logger (useful for network traffic debugging, etc), or as a network intrusion prevention system. Can perform protocol analysis, content searching/matching, and can be used to detect a variety of attacks and probes.
I installed it (version 2.9.7.0):
Got "interface 'eth0' is invalid". I have valid interfaces "enp19s0" (wired Ethernet), "wlp18s0" (Wi-Fi), and "tun0" (VPN). I switch among all three; usually VPN is on, but sometimes it's off, usually I'm on Ethernet, but sometimes on Wi-Fi. Am I going to have to reconfigure snort each time ?
Chose interface "enp19s0" even though VPN is on. I guess installer completed okay; no message.
At CLI, run "snort -V" to see that it's installed.
Snort logs to the /var/log/snort directory by default.
NIDS config file is /etc/snort/snort.conf
Run "sudo snort" to start in packet dump mode AKA sniffer mode.
Run "sudo snort -vde -c /etc/snort/snort.conf" to start in verbose NIDS mode.
Add "-i INTERFACENAME" to specify the network interface to use.
Ctrl+C to terminate it.
[These were already done in my version:]
Edit /etc/snort/snort.conf to:
Un-comment line starting with "output unified2"
Comment out line starting with "output log_tcpdump"
"snort -T -c /etc/snort/snort.conf" to syntax-check the config file.
Run "sudo snort -A fast -d -N -s -c /etc/snort/snort.conf" to start in NIDS alerting non-logging mode.
"cat /var/log/snort/alert" to see alerts.
I did a LAN-discovery from a phone app (Fing) and snort wrote entries about uPnP discovery scanning, into the alert file.
After about 8 hours, the alert file was over 105 KB, mostly alerts about malformed uPnP stuff.
I signed up for a Snort account, but there is no ruleset available for the old version I have (2.9.7.0).
A serious setup where logs/alerts couldn't be deleted by an attacker probably would send the info to some other machine, probably to a SQL database. And view the database with Splunk or something.
OccupyTheWeb's "Snort IDS for the Aspiring Hacker, Part 1 (Installing Snort)"
OccupyTheWeb's "Snort IDS for the Aspiring Hacker, Part 2 (Setting Up the Basic Configuration)"
OccupyTheWeb's "Snort IDS for the Aspiring Hacker, Part 3 (Sending Intrusion Alerts to MySQL)"
OccupyTheWeb's "How to Read & Write Snort Rules to Evade an NIDS (Network Intrusion Detection System)"
OccupyTheWeb's "How to Evade a Network Intrusion Detection System (NIDS) Using Snort"
Noah Dietrich's "Snort 2.9.9.x on Ubuntu"
- Suricata:
Suricata
Capable of real-time intrusion detection (IDS), inline intrusion prevention (IPS), network security monitoring (NSM) and offline pcap processing.
- psad:
CipherDyne's "psad: Intrusion Detection and Log Analysis with iptables"
- Graylog:
Graylog
Teknikal's_Domain's "Graylog, and the Syslog Protocol, Explained"
- Zeek (formerly Bro):
Network Security Monitor.
Zeek
Daniel Berman's "6 Open Source SIEM Tools"
Honeypots and tar-pits
- Dionaea:
Honeypot.
DinoTools / dionaea
- Thug:
Honeypot.
buffer / thug
- Canarytokens:
Honeypot. Mixture of Windows-only and OS-independent items.
Thinkst's "Canarytokens"
KaliLinux.in's "Canarytokens -- Danger For Attackers"
- OpenCanary:
Honeypot. Maybe same as Canarytokens ? By same company.
OpenCanary
thinkst / opencanary
- Cowrie:
Honeypot for SSH and Telnet.
cowrie / cowrie
- Modern Honey Network (MHN):
Software to deploy and manage honeypots in a network.
pwnlandia / mhn
- T-Pot:
Software to deploy and manage honeypots in a network.
Deutsche Telekom's "T-Pot: A Multi-Honeypot Platform"
- Endlessh:
A "tar-pit" which ties up SSH attackers for a long time.
skeeto / endlessh
Chris Wellons' "Endlessh: an SSH Tarpit" (gives HTTP example too)
- Responder:
Listens for and poisons responses to Link-Local Multicast Name Resolution (LLMNR), NetBIOS Name Service (NBT-NS), Web Proxy Auto-Discovery (WPAD), more.
SpiderLabs / Responder
- xtables-addons-dkms:
Iptables add-ons to implement "tar-pit" rules.
moblog's "Howto: Install TARPIT on Debian Stable"
Ubuntu manuals' "Xtables-addons - additional extensions for iptables, ip6tables, etc."
- SNARE and Tanner:
A web-app that emulates vulnerabilities and waits for attackers to try things such as SQL injection, URL parameter mangling, etc.
mushorg / snare
mushorg / tanner
Do you have to supply your own app, and then SNARE clones it ?
paralax / awesome-honeypots
Application Control and Security
Maybe decide what is important to you.
Isolate applications from each other and the OS:
Solutions from lightest to heaviest:
- No isolation: userid gives every app access to all of your files, X gives apps access to
each other's event-queues. Native packaging: deb, rpm, etc.
- Sandbox created by someone other than app dev: AppArmor, Firejail.
- Container (and permissions) created by app dev: Snap, Flatpak, Docker. All containers run
on top of same, shared OS.
- Micro-VM or unikernel for each app ? A severely stripped-down server image. If you're going
to run a single process, no GUI, no SSH, single user, no command-line shell,
then strip lots of stuff out of the Linux kernel-plus. Even run kernel and app in same address space.
- VM for each app. OS is a "full" server distro such as Ubuntu server or a minimal distro such as Alpine.
- Separate bare metal for each app. OS is a "full" server distro.
Latest software:
Solutions from fastest to slowest:
- Container / image created by app dev: Snap, Flatpak, Docker, appimage.
- Distro maintainer does native packaging: deb, rpm, etc.
- LTS distro / stable repo.
- VMs created using LTS software.
Low-level kernel stuff and building blocks:
Lowest-level stuff
- Namespaces:
Wikipedia's "Linux namespaces"
Mahmud Ridwan's "Separation Anxiety: A Tutorial for Isolating Your System with Linux Namespaces"
Ed King's "Linux Namespaces"
Namespaces are a key feature used to implement isolation for container systems (Docker, flatpak, snap, LXC), applications and daemons (AppArmor, Firejail), and multi-process applications (some browsers).
man namespaces
man lsns
lsns          # list namespaces in use by current user
sudo lsns     # list namespaces in use by all users
A program uses the clone() system call instead of fork(), to make a copy of itself running in a separate space. That can mean separate process trees (clone can't see parent process or any pre-existing processes), different views of the network connections, separate mount spaces (/etc/fstab's, I think), pseudo-root privileges, separate IPC spaces, more. There can be sockets set up to communicate between namespaces.
In the CLI, you can use the "unshare" command to run a process in a separate namespace. The "ps" command has options to display namespace IDs for processes.
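A quick way to see PID-namespace isolation in action (a sketch):
sudo unshare --fork --pid --mount-proc bash    # start a shell in new PID and mount namespaces
ps -ef                                         # inside: only bash and ps are visible, and bash is PID 1
exit                                           # back to the normal namespace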
- cgroups:
Not a security mechanism. Used to control resource contention among groups of processes. Defines 13 resource categories ("controllers"), and lets you specify that process group A can use amount N of resource category R ?
So, for example, normally if 2 applications start 5 processes each, those 10 processes all contend for resources on an equal footing. But suppose you want the 5 processes for application A to get higher priority for CPU than the 5 processes for application B ? Define each set of 5 processes as a separate cgroup.
David Both's "Managing resources with cgroups in systemd"
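On a systemd system, systemd-run is an easy way to try this without writing cgroup files by hand (a sketch; the limits and COMMAND are just examples):
sudo systemd-run --scope -p CPUQuota=25% -p MemoryMax=1G COMMAND
systemd-cgtop    # watch per-cgroup CPU/memory/IO usage
systemd-cgls     # see the cgroup tree and which processes are in each group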
- Access Control Lists (ACLs):
A way to set finer-grained permissions than the base UGO permissions.
Kuldeep Sharma's "Secure Files/Directories using ACLs (Access Control Lists) in Linux"
man acl
man setfacl
grep -i acl /boot/config*    # see if enabled in kernel
mount | grep acl             # see if any filesystems mounted with ACL enabled
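A quick usage sketch ("alice" and "report.txt" are made-up names):
setfacl -m u:alice:r report.txt    # give one extra user read access
getfacl report.txt                 # show the ACL; "ls -l" will show a "+" after the permission bits
setfacl -x u:alice report.txt      # remove that entry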
Works on most Linux filesystems, but not vfat, exFAT, and some other non-Linux filesystems.
- Capabilities:
man page
K3A's "Linux Capabilities in a nutshell"
Good info in Chapter 39 "Capabilities" of "The Linux Programming Interface" by Michael Kerrisk.
"The goal of capabilities is to divide the power of superuser into pieces".
There is a set of defined capabilities: audit-control, audit-write, network admin, network broadcast, etc.
grep 'define CAP_' /usr/src/linux-headers-$(uname -r)/include/uapi/linux/capability.h | grep -v \( | grep -v LAST_CAP
man capabilities
A set of capabilities can be granted to a file, and then a process that execs that file gets those capabilities. Threads in a process can each have different capabilities. Capabilities can be inherited in various ways by daughter processes.
Most applications probably aren't written to know or care about capabilities, and don't have any capabilities assigned to their executable file.
getcap /bin/* /usr/bin/*
And anything run by root user gets all capabilities.
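A quick sketch of granting one capability to a binary (the path is a made-up example):
sudo setcap cap_net_bind_service=+ep /usr/local/bin/mywebserver    # let it bind ports below 1024 without root
getcap /usr/local/bin/mywebserver
sudo setcap -r /usr/local/bin/mywebserver                          # remove the capability again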
Mid-level stuff
- iptables:
You could assign an application to belong to a particular group, then make iptables rules about that group using "-m owner --gid-owner" qualifiers in iptables rules.
discussion
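A rough sketch of the idea (the group name is made up, and sudoers must allow running with another group):
sudo groupadd no-net
sudo iptables -A OUTPUT -m owner --gid-owner no-net -j DROP    # drop all output generated by that group
sudo -u $USER -g no-net APPNAME                                # launch the app with that group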
- seccomp and seccomp-bpf (SECure COMPuting with filters):
A Linux kernel facility that filters the syscalls a process can make.
The Linux Kernel's "Seccomp BPF (SECure COMPuting with filters)"
Wikipedia's "seccomp"
- SELinux:
A Linux kernel security module. More fine-grained than AppArmor and Firejail, which run at the application level, only at the interface between app and OS.
Steven Vaughan-Nichols' "How to set up SELinux right, the first time"
Alex Callejas' "A sysadmin's guide to SELinux"
Mike Calizo's "Secure your containers with SELinux"
Wikipedia's "Security-Enhanced Linux"
RHEL's "SELinux User's and Administrator's Guide"
SELinux Project
SELinux Project on GitHub
Fedora's "Getting started with SELinux"
Barrow's "Use SELinux Targeted Policy to Secure Your Hosts"
Good info in Chapter 24 "Enhancing Linux Security with SELinux" of "Linux Bible" by Christopher Negus.
From Lubos Rendek's "How to disable/enable SELinux on Ubuntu 20.04 Focal Fossa Linux":
Ubuntu offers AppArmor as an alternative to SELinux. While SELinux is available on Ubuntu, it is rather in an experimental stage and most likely will break your system if set to enforcing mode. In case you must use SELinux, make sure to disable AppArmor first. Also set SELinux first to permissive mode and check your logs for potential issues before you enable enforcing mode.
# See if it is installed and/or enabled:
sestatus
getenforce

# Install it:
sudo apt install selinux-utils selinux-basics policycoreutils setools setroubleshoot setroubleshoot-server
sudo setenforce permissive
sudo selinux-activate

# Allow anything to run:
sudo setenforce permissive
grep CONFIG_SECURITY_SELINUX_DEVELOP /etc/selinux/config    # want "y"

# Possible modes are: disabled permissive enforcing
# In permissive mode, there can be a lot of logging going on
# if lots of operations are (silently) failing security checks.
# Note: changing mode can trigger (upon next reboot) a scan
# of all files to relabel them (check security contexts).

# Possible policy types are: targeted mls minimum
# Edit /etc/selinux/config to set value in line SELINUXTYPE=whatever
# If you set policy type to mls, first install package selinux-policy-mls
# If you set policy type to minimum, first install package selinux-policy-minimum
cat /etc/selinux/config

man -k selinux
man runcon
man sandbox    # runcon is a little dangerous, better to use sandbox
secon -urt     # show security context of current process
id             # show security context of current user

journalctl -t setroubleshoot --since=14:20

# If you have auditing enabled:
aureport | grep AVC
ausearch -m avc

# If you don't have auditing enabled:
journalctl | grep sealert
SELinux is installed and enforcing by default on RHEL and Fedora ?
- Yama:
A Linux kernel security module. Seems to be very narrowly focused on ptrace ?
Michael Boelen's "Protect against ptrace of processes: kernel.yama.ptrace_scope"
Yama
- Lockdown:
A set of Linux kernel features to optionally close off interfaces that allow root to modify the kernel.
Matthew Garrett's "Linux kernel lockdown, integrity, and confidentiality"
Arch Wiki's "Kernel lockdown mode"
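On kernels built with lockdown support, you can check the current mode (a sketch):
cat /sys/kernel/security/lockdown    # e.g. "[none] integrity confidentiality"; the bracketed word is the active mode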
- SMACK:
A Linux kernel security module.
Wikipedia's "Smack"
The Smack Project - Home
- TOMOYO:
A Linux kernel security module.
Wikipedia's "Tomoyo Linux"
TOMOYO Linux
eLinux's "TomoyoLinux"
- Bubblewrap:
ArchWiki's "Bubblewrap"
# all available kernel modules:
find /lib/modules/$(uname -r)/kernel -name '*.ko' -print | sort | less
lsmod | sort | less      # see loaded modules
modinfo MODNAME          # see info about module, incl params when loaded
cat /proc/cmdline        # see kernel launch command line
sudo sysctl -a | less    # see kernel parameters
# https://www.kernel.org/doc/html/v4.14/admin-guide/kernel-parameters.html
less /etc/sysctl.conf    # see system variables
Service and app-level stuff:
- Firejail:
An application that restricts the environments of other apps, running them with access only to certain directories, or to fake copies of system directories, or no network access.
Firetools/Firejail
Easy Linux tips project's "Run your web browser (and other apps) in a secure sandbox"
xenopeek's "Using firejail as security sandbox for your programs"
"firejail --version" (shows more than just version number)
You could use Firetools or Firejail Configuration Wizard apps in the GUI. But building a security profile doesn't seem to be persistent, and the Firetools monitor shows a much-too-high number for memory usage. And the UI is really annoying.
To just run an app while denying network access:
firejail --net=none APPNAME
"sudo apt install firejail firejail-profiles firetools".
Profiles stored in /etc/firejail.
Poked around a bit, quickest way to see the restrictions is to run "mount" in CLI before and after running "firejail bash".
When trying to debug a Firejail profile, launch the app in CLI ("firejail APPNAME"), not via an icon. Apps often put out error messages on stderr. Also, do "sudo ps -ax | grep firejail" to make sure an app isn't hanging; reboot if there is one.
Firejail project home: netblue30 / firejail
Firefox with Firejail:
[You could run Firetools, right-click, Configuration Wizard, select Firefox, click "Build a custom security profile", Continue. But the changes are not persistent; they're used for only one launch ?
Instead:]
- Do "mkdir -v ~/.config/firejail".
- Do "cp -v /etc/firejail/firefox.profile ~/.config/firejail".
- Do "xed ~/.config/firejail/firefox.profile".
[Following works on FF 63.0; other versions may be different.]
Key lines I did/didn't change:
- ~/Downloads is already whitelisted.
- Add whitelist lines for ${HOME}/Videos or any other directories you want to give access to.
No way to specify read-only ?
- "caps.drop all": I didn't change it.
- "netfilter": I didn't change it.
- "noroot": I didn't change it.
- Set the "protocol" line to "protocol unix,inet,inet6,netlink".
- Set the "seccomp" line to "seccomp.drop
@clock,@cpu-emulation,@debug,@module,@obsolete,@raw-io,@reboot,
@resources,@swap,acct,bpf,fanotify_init,io_cancel,io_destroy,
io_getevents,io_setup,io_submit,ioprio_set,kcmp,keyctl,mount,
name_to_handle_at,nfsservctl,open_by_handle_at,personality,
pivot_root,process_vm_readv,remap_file_pages,setdomainname,
sethostname,umount,umount2,userfaultfd,vhangup,vmsplice" (all on one line, no spaces in the list)
["seccomp.drop mount,umount2,swapon,swapoff" also worked]. - "shell none": I didn't change it.
- "private-etc" commented out: I didn't change it.
- Right-click on the orange Firefox icon near the Start button, click
Edit, and change Command from "firefox %u" to "firejail firefox %u".
Some articles say add "-no-remote", but that prevents my password manager from sending keystrokes to Firefox.
- Launch Firefox, and test to see if it works normally.
- Try Ctrl+O to open a page from disk, and only the allowed directories should be shown.
- In CLI do "firejail --tree" or "firejail --list" and you should see Firefox there.
- Test password manager sending info to Firefox.
If you caused a bunch of FF crashes while debugging, look in "~/.mozilla/firefox/'Crash Reports'/pending" and delete them.
Using Firefox under Firejail, I'm seeing some cases where FF can't access the internet, and some where it doesn't shut down properly. It's not saving my uMatrix settings changes, either. Stopped running it under Firejail routinely.
Mint 19 has Firejail 0.9.52-2 as of 14 Nov 2018. "sudo apt install firejail" says that is latest version. But the project page says there is a 0.9.56-LTS version. Download page says download DEB file and do "sudo dpkg -i firejail_X.Y_1_amd64.deb". Did that, and got some error "while trying to overwrite /etc/firejail/etr.profile". I don't see anything special about that file. Removed that file, tried again, same error. Firejail still works, still says version 0.9.52-2.
So brought over just new firefox.profile, firefox-common.profile, and created empty firefox.local. Seems to work better, but still some problems, end up with zombie Firefox processes, etc. Stopped using Firejail on FF for now.
KeePassXC with Firejail:
Mint comes with a Firejail profile for it, "/etc/firejail/keepassxc.profile". But it disables all the "click on URL to open in browser" and "auto-type" features of KeePassXC. You'd be reduced to copy-switch-paste to copy info from manager to browser.
- AppArmor:
Very similar to Firejail, but implemented as a Linux kernel security module.
Do "sudo apparmor_status" to see info.
To install more: "sudo apt install apparmor-profiles apparmor-profiles-extra apparmor-utils apparmor-easyprof"
"man -k apparmor"
Wikipedia's "AppArmor"
AppArmor
Making profiles:
Ubuntu Tutorials' "How to create an AppArmor Profile"
Uzair Shamim's "The Comprehensive Guide To AppArmor: Part 1"
The Debian Administrator's Handbook's "14.4. Introduction to AppArmor"
In Mint, AppArmor (the user-space parser utility) was installed by default.
I installed apparmor-utils through Software Manager.
Profiles are stored in /etc/apparmor.d directory.
You could turn on the profile for Firefox by doing "sudo aa-enforce /etc/apparmor.d/usr.bin.firefox".
To turn it off, do "sudo aa-disable /etc/apparmor.d/usr.bin.firefox".
If you do a disable when enforce is not on, you'll see a "Profile doesn't exist" error.
To see the list of active profiles, do "cat /sys/kernel/security/apparmor/profiles".
To see AppArmor activity, do "grep apparmor /var/log/kern.log".
When trying to debug an AppArmor profile, launch the app in CLI, not via an icon. Apps often put out error messages on stderr. Also, do "sudo ps -ax | grep APPNAME" to make sure an app isn't hanging; reboot if it is.
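The usual profile-building workflow with the apparmor-utils tools looks roughly like this (APPNAME / PROFILENAME are placeholders):
sudo aa-status                                  # list loaded profiles and their modes
sudo aa-genprof APPNAME                         # exercise the app, then let the tool scan logs and propose a profile
sudo aa-complain /etc/apparmor.d/PROFILENAME    # log violations but don't block
sudo aa-logprof                                 # fold newly-logged violations back into profiles
sudo aa-enforce /etc/apparmor.d/PROFILENAME     # switch to enforcing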
Firefox with AppArmor:
Already had Firefox working in Firejail, so all of the following is with FF running in Firejail.
Started enforcing AppArmor for Firefox, launched Firefox in Firejail, it showed some "welcome to Mint - file access failed" page, and the kernel log shows a dozen or more AppArmor "denied" messages.
Quit Firefox, disabled AppArmor for Firefox, launched Firefox, and my Firefox user profile is gone ! Bookmarks, settings, add-ons, everything gone !
Fortunately, I was able to find it all under a directory in "~/.mozilla/firefox", and was able to restore it by editing "~/.mozilla/firefox/profiles.ini" to point at the old profile instead of the new one. Whew ! Do a backup of that stuff before trying again.
Looking at "denied" messages in /var/log/kern.log, it looks like access to "/run/firejail/mnt/" is denied, maybe also access to "/home/.ecryptfs/USERNAME/...". Edited "/etc/apparmor.d/usr.bin.firefox". Above "# so browsing directories works", I added lines "/run/firejail/mnt/** r," and "/home/.ecryptfs/** rw,".
Still got "bookmarks and history system will not be functional because one of Firefox's files is in use by another application. Security software can cause this problem." error message.
Tried AppArmor on, Firejail off, and FF works ! So the problem is somewhere in the interaction of AppArmor and Firejail. Did "sudo xed /etc/apparmor.d/firejail-default /etc/apparmor.d/usr.bin.firefox ~/.config/firejail/firefox.profile". Couldn't fix the problem. Tried adding lines "network unix stream, network netlink stream," and "/run/firejail/mnt/** r, /home/.ecryptfs/** rw," to "/etc/apparmor.d/usr.bin.firefox", didn't help.
If you caused a bunch of FF crashes while debugging, look in "~/.mozilla/firefox/'Crash Reports'/pending" and delete them.
KeePassXC with AppArmor:
Mint has no AppArmor profile for KeePassXC.
Made a profile the stupid way, trial-and-error, not using the profiling tools. Started with no restrictions.
"sudo aa-enforce /etc/apparmor.d/usr.bin.keepassxc"
"sudo aa-disable /etc/apparmor.d/usr.bin.keepassxc"
"sudo apparmor_parser -r /etc/apparmor.d/usr.bin.keepassxc"
Got it to work, despite showing "qt5ct: D-Bus global menu: no" error (same error when AppArmor is disabled).
Tried to turn off networking, got "Could not connect to any X display" error. Adding "network unix stream" fixed that. Very unfortunate that I had to do that; my main goal for this profile was to turn off networking.
Took a lot of fiddling to get the directory-tree permissions okay; I'm sure they're still too generous.
Ended up with this.
- OpenSnitch:
evilsocket / opensnitch
But LinuxUprising's "How To Install OpenSnitch Application-Level Firewall In Ubuntu" says it's beta and you have to build from source (compiling in Go). The source project says it edits iptables rules and interacts with netfilter.
7/2019: Developer says he's dropping it for a while, discouraged by no help.
nixCraft's "OpenSnitch: The Little Snitch application like firewall tool for Linux"
Step 3 in TokyoNeon's "Using Ubuntu as Your Primary OS, Part 4 (Auditing, Antivirus & Monitoring)"
Do Son's "opensnitch: GNU/Linux port of the Little Snitch application firewall"
gustavo-iniguez-goya / opensnitch (fork that IS active)
Linux Uprising's "OpenSnitch 1.3.0" (original is active again)
- Douane:
Douane
Douane on GitLab
Tuxdiary's "Douane: easy firewall with app rules"
Dedoimedo's "Linux per-application firewalls - Doable? Douane"
Project-insanity's "Application firewall Douane for ArchLinux"
- Trickle:
ArchWiki's "Trickle"
Reduce bandwidth available to a single process.
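Typical usage, a sketch (rates are in KB/s; the URL is a made-up example):
trickle -d 200 -u 50 wget https://example.com/big.iso    # limit just this one command
trickled -d 500 -u 100                                    # optional daemon: a shared limit for all trickle-launched apps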
My evaluation
- Firejail better because:
- Can have different icons that launch an app with/without Firejail.
- Can have private copies of /dev, /bin, /tmp, /etc.
- Easy to say nodvd, noshell, noroot.
- In Mint, there are far more Firejail profiles than AppArmor profiles.
- AppArmor better because:
- No way to evade controls when you launch an app.
- Lots of fine-grained controls on D-Bus.
- Can set detailed rwxmk permissions on directory trees and files; Firejail just has whitelist/blacklist.
- Has some kind of controls on "signals".
- Neither AppArmor nor Firejail supports restricting network access to just X system, or just localhost,
or just LAN addresses.
- Trying to use both AppArmor and Firejail on same app usually fails.
- In Mint, by default, AppArmor is controlling several daemons,
such as dhcp, cups, mysql-akonadi, Clam updater, ippusbxd (USB printer).
- I think I would try to get my key applications running in Firejail. Leave
AppArmor enforcing on the daemons.
Opinion: firewalls / app control / security is a mess:
Firejail, AppArmor, SELinux, Yama, iptables, ip6tables, nftables, bpf, ufw, gufw, firewalld, netfilter, VPN, IDS, IPS, etc. Why can't we have one integrated solution, with a body of rules that can do filesystem and network access and syscall access and service and app and login control, and take user ID, app ID, MAC address, IP address, IP port, etc as input to the rules ?
Examples:
- I'd like to control Firefox now. I go to the Security GUI, select Firefox, and I see everything about it:
what files/dirs it can access, what syscalls it can do, what devices, what IP ports,
domain whitelists and blacklists, what users/groups can access it, everything.
All in one place. I don't care if separate kernel modules do the networking and the
filesystem control, and some module inside Firefox does the domain whitelist/blacklist.
- Same for any other app or service or module. I go to the Security GUI and choose my password manager app.
In one place, I can control everything about it: filesystem access, network access, syscalls, etc.
- I don't want my system to do anything with IPv6.
I go to the Security GUI and choose IPv6.
In one place, I can control everything about it: turn it off for everything, turn off incoming, whatever.
Later, maybe I decide only Firefox can do IPv6. I can check a box to allow that.
- "Networking" should have zones: can/can't access internet, can/can't access router,
can/can't access other devices on LAN, maybe also services listening on localhost. And internally
X11 does networking, so that has to be allowed.
- I don't want user "skype" (which owns Skype app and the cron job that
updates it) to be able to contact any domains
except Microsoft's Skype domain.
I go to the Security GUI and choose user "skype" and set the appropriate things.
System Hardware Monitoring and Control
- TLP:
Tecmint's "TLP: Quickly Increase and Optimize Linux Laptop Battery Life"
TLP - Optimize Linux Laptop Battery Life
Installed it.
Rebooted to make it take effect.
Not in Ubuntu 20 stores.
On Ubuntu 20, do:
sudo apt install tlp tlp-rdw
sudo tlp start
It's a CLI-only tool.
Config file is /etc/default/tlp.
After changing that file, do "sudo tlp start".
# see if the services are running:
sudo systemctl status tlp.service
sudo systemctl status tlp-sleep.service
sudo tlp-stat -s          # check status
sudo tlp-stat | less      # see all settings and status
Since I'm running on AC power, I probably won't be able to see any difference.
Software Resource Monitoring
- System's GUI apps:
Run System Monitor application.
- service:
To see what services are running, run "service --status-all".
- pstree -T:
Nice tree-view of running processes.
- top:
It shows process/memory info, but not networking.
Run "top", then:
"?" to see list of commands in top,
"f" for field management,
"c" to see command-line of each process,
"V" to see tree-view of processes,
"q" to quit.
Useful command-line flags: "top -d 5.0"
- Quick test for overall memory leaks:
Do "watch 'free -m'".
- auditd:
Ran "sudo auditctl -l" and it says there are no rules.
Ran "sudo auditctl -s" to see status of the auditing subsystem.
Also "sudo systemctl status auditd" to see status of the auditing subsystem.
Ran "sudo cat /etc/audit/audit.rules" to see contents of rules file.
Ran "sudo ausearch --interpret" to see what has been logged.
Run "sudo /etc/init.d/auditd restart" (start/stop/restart) to control auditd service.
Log file is "/var/log/audit/audit.log".
Stopped auditd via "sudo /etc/init.d/auditd stop", but after reboot it was running again.
Did "sudo systemctl disable auditd", which seemed to succeed, but auditd is still running.
"sudo auditctl -a exit,always -F arch=b64 -S connect -k MYCONNECT" to log all outgoing network connections (warning: log file size will increase by about 1-2 KB/sec in an "idle" system!).
"sudo auditctl -d exit,always -F arch=b64 -S connect -k MYCONNECT" to remove that rule so log file isn't flooded.
LinOxide's "Auditd - Tool for Security Auditing on Linux Server"
Aaron Kili's "Learn Linux System Auditing with Auditd Tool on CentOS/RHEL"
Paul Brown's "Customized File Monitoring with Auditd"
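A simpler, quieter example: watch one file instead of a syscall (the key name is arbitrary):
sudo auditctl -w /etc/passwd -p wa -k passwd-watch    # log writes and attribute changes to /etc/passwd
sudo ausearch -k passwd-watch --interpret             # see what was logged
sudo auditctl -W /etc/passwd -p wa -k passwd-watch    # remove the watch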
- Process accounting:
Sandra Henry-Stocker's "Managing process accounting on Linux"
Why Did Something Get Installed ?
With deb/apt/dpkg packages:
ls -lt /var/lib/dpkg/info/*.list | less
zcat -f /var/log/dpkg.log* | grep -i PKGNAME | egrep "\ install\ |\ upgrade\ "
# grab part of datetime from that, and do:
zcat -f /var/log/dpkg.log* | grep -i DATETIME | egrep "\ install\ |\ upgrade\ "
sudo apt-get remove PKGNAME --simulate
apt-cache rdepends --installed --recurse PKGNAME
aptitude why PKGNAME --show-summary
Network Monitoring
This section is for tools used by a person. For tools that generally run unattended, see the Network Control section.
Some terms:
- IDS: Intrusion Detection System.
- SIEM: Security Information and Event Management.
Traffic monitoring and analysis
- netstat:
To see what incoming ports are open and/or have listeners on them, do "sudo netstat -tulp".
To see what connections have been established and by what app, do "sudo netstat -tp".
Mehedi Hasan's "Linux Netstat Command Tutorial for SysAdmins"
- lsof:
To see established connections, do "sudo lsof -i".
- iptraf-ng:
Ran "sudo iptraf-ng". If you want names instead of numbers, go into "Configure" and turn on "Reverse DNS lookups" and "TCP/UDP Service names". Exit out of Configuration and then go to "IP traffic monitor".
Shows both UDP and TCP.
- iftop:
Ran "sudo iftop -P". Type "h" to get into and out of help screen.
It does show cumulative totals; you may have to press "T".
But it just shows IP addresses and port numbers; no app or service names.
- Nethogs:
Ran "sudo nethogs".
But it seems to show only currently active processes, not a historical/cumulative log.
Aha ! "sudo nethogs -v 3" will show cumulative, so apps don't disappear from the list after they terminate.
Tried to install hogwatch, but got a lot of "error: invalid command 'bdist_wheel'" problems.
- ntopng:
But it won't start up, says address or port 3000 is busy. Turned off VPN, maybe iptables is stopping it, gave up.
- conntrack-tools:
conntrack-tools: Netfilter's connection tracking userspace tools
conntrack (8) - Linux Man Pages
To install: "sudo apt install conntrack".
- Justniffer:
Thought about installing Justniffer, but it looks old, decided against it. If I'm going to learn some big complicated tool, it's going to be Wireshark.
- Wireshark:
Tried running Wireshark from GUI Start button, but none of the 4 "capture filters" it shows me seem appropriate.
Ran "sudo wireshark" from CLI, and it gave a warning but then showed me interfaces I expected, including my Ethernet adapter. Chose that, and it started live monitoring of packets.
If you see a lot of UDP traffic labeled as "QUIC", Wireshark probably is guessing wrong, and: "If you don't want the QUIC tag, simply go to the "Analyze" menu and select "Enabled Protocols" from the list. Find the entry for QUIC and uncheck the box."
Did "sudo usermod -a -G wireshark MYUSERNAME" so I can run Wireshark without sudo.
In Wireshark, do "Capture / Options" and turn off "promiscuous mode" to see traffic just for your computer, not all traffic on the LAN.
Chris Hoffman's "How to Use Wireshark to Capture, Filter and Inspect Packets"
David D Warden's "Everything You Need to Know About Wireshark"
OTW's "Network Forensics: Wireshark Basics, Part 2"
Ceos3c's "Wireshark Tutorial Series 1 - Introduction, lab setup and GUI overview"
FromDev's "30+ Best Free Wireshark Tutorials PDF & eBooks To Learn"
Brad Duncan's "Using Wireshark: Identifying Hosts and Users"
elitest's "Decrypting TLS Browser Traffic With Wireshark - The Easy Way!"
There's also a CLI version/part of Wireshark: tshark
PA Toolkit (Pentester Academy Wireshark Toolkit)
- Nikto:
Nikto2
Web server testing.
- arpwatch:
Tool for seeing ARP traffic, alerting if a new MAC-to-IP address mapping appears or an existing one changes.
Wikipedia's "arpwatch"
- gupnp-tools:
See uPNP traffic.
"sudo apt install gupnp-tools"
Run "GUPNP Universal Control Point" application (on CLI: gupnp-universal-cp) to display traffic.
No man page, but see "gupnp-universal-cp --help".
Works with VPN on or off: I see uPNP traffic from our smart TV.
Another application: "GUPnP AV Control Point" application (on CLI: gupnp-av-cp) to get a remote control UI.
I can't get it to control our smart TV, but if I use the TV's remote control to change the volume, the app's volume slider moves to reflect the change.
Hayden James' "Linux Networking commands and scripts"
Martin Bruchanov's "Linux Network Administration"
Monitor the traffic in/out of your LAN. Best ways probably are custom software in your router, and a Pi-hole doing DNS filtering. From Security in Five Podcast - Episode 746: an investigation of traffic volume exceeding a data cap found that iCloud was uploading/downloading an entire collection any time one item was added, and after that was fixed almost 50% of all traffic was due to blockable scripts (ads, trackers).
Endpoint monitoring
I want a monitoring app on my Linux system that tells me, for each device on the LAN (Windows laptops, Android phones and tablets, printer): how much free storage/disk it has, when it was last updated, and whether it is on the network right now. That's about it. I don't really care about network traffic monitoring.
It looks like all the managers work by using DNS to name and access devices. You'll have to set static IP assignments in the router, and put names/numbers in /etc/hosts on manager machine, then "sudo systemctl restart systemd-resolved", then "ping NAME".
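For example, entries like these in /etc/hosts (made-up names and addresses; match whatever static assignments you set in the router):
192.168.1.20   win-laptop
192.168.1.21   android-phone
192.168.1.22   printer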
Some people are telling me that there is a market separation between "network monitoring (SNMP)" products [LibreNMS, Icinga, more] and "mobile device management (MDM)" products [Quest Kace, JAMF, Spiceworks, Intune, more], and you will not find one product which can monitor both servers and smartphones, for example.
System Types
- Custom agent on each client: install a custom "agent" app or service on each client device,
get lots of detailed information and control.
- SNMP agent on each client: install or enable an SNMP agent app or service on each client device,
get varying amount of information and control.
- Agentless: manager does network access (ping, HTTP, whatever)
to each client to see if it is alive and seems
to be functioning as intended. Little information, probably no remote control.
Agent Types
- Passive: agent just collects data and responds to requests/orders from manager. So all clients
have to have fixed known IP addresses ?
- Trap: when some trigger happens (e.g. free disk space goes below 10%),
agent sends an event to manager. So only manager
has to have fixed known IP address ?
- Both
Network Managers
- EventSentry Light:
Limited to 5 devices ? Maybe limited only for updating/control ?
EventSentry
Server runs on ???
Agents available for ???
- Zabbix:
Zabbix
Server runs on Linux.
Agents available for Linux, Mac, Windows.
There is an Android app "Unofficial Zabbix Agent" by dentier.
"docker pull zabbix/zabbix-server-mysql" and "docker run -p 8080:8080 zabbix/zabbix-server-mysql"
"docker pull zabbix/zabbix-web-nginx-mysql" and "docker run -p 8080:8080 zabbix/zabbix-web-nginx-mysql"
- LibreNMS:
LibreNMS
Server runs on Linux and Windows/Docker.
No agents, devices must have static IP address and SNMP.
LibreNMS was forked from Observium before Observium became a commercial product.
Installing LibreNMS
There is a Docker image, "docker pull librenms/librenms" and "docker run -p 8080:8080 librenms/librenms", but it expects you to have a running MySQL database first.
Tried native install:
# See https://docs.librenms.org/Installation/Install-LibreNMS/
# I followed the "sudo -s" / Ubuntu / Nginx path,
# but starting mariadb and OpenIPMI services failed.
# Removed everything:
# Do NOT remove mysql-server and mariadb-server ! desktop went with them !
sudo rm -r /opt/librenms
sudo deluser librenms
sudo delgroup librenms
# Ended up in a state where neither MySQL nor mariadb will re-install.
- Icinga 2:
Icinga
Server runs on Linux.
No agents, devices must have static IP address and SNMP.
Can use Nagios plug-ins.
There is an Android app "Probe for OMD and Nagios" by Marco Ribero.
"docker pull jordan/icinga2" and "docker run -p 8080:8080 jordan/icinga2"
"sudo apt install icinga2"
- Observium Community:
Observium
Server runs on Linux, more ?
Agents available for xxx
Tagged as "simple to use" in reviews.
It seems that a client device MUST have an agent running on it (responding to SNMP queries) before you can define the device in the manager application; there's no way to define the device and then just get traps from it (at least in CE).
Tried Docker image (FAILED):
docker pull uberchuckie/observium
# See https://hub.docker.com/r/uberchuckie/observium
# paths have to be full paths !
#docker run -d -v /your-config-location:~/projects/observium/config -e TZ="Madrid" -v /path-to-logs:~/projects/observium/logs -v /path-to-rrds:~/projects/observium/rrd -p 8668:8668 uberchuckie/observium
docker run -d -v /your-config-location:/home/user1/projects/observium/config -e TZ="Madrid" -v /path-to-logs:/home/user1/projects/observium/logs -v /path-to-rrds:/home/user1/projects/observium/rrd -p 8668:8668 uberchuckie/observium
docker container list
# Browse to http://localhost:8668
# Log in as observium / observium
# Fail: it just ran and ran, never created subdirs,
# didn't respond to browser.
# Later, the image owner said there are a whole
# lot of undocumented reqts, such as run inside unRAID,
# assumes that there is a "nobody" user with user id 99 and
# group id 100. I didn't retry.
# When done:
docker stop CONTAINERID
docker images
docker rmi IMAGEID
# if it says there are stopped containers:
docker rm CONTAINERID
Tried native install:
# See https://docs.observium.org/install_debian/
wget http://www.observium.org/observium_installscript.sh
chmod +x observium_installscript.sh
sudo ./observium_installscript.sh
# choose Community Edition
# Said "no" to installing snapd and agent.
# set MySQL password
# set Observium admin acct name and password
# Observium CE 20.9.10731
sudo systemctl restart apache2
# Browse to http://localhost:8668 or http://127.0.1.1:8668/
# FAIL
# Browse to http://localhost
# Log in; works okay.
# Decided to remove it.
sudo apt remove apache2 php7.4-mysql mysql-server mysql-client rrdtool
sudo rm -fr /opt/observium*
sudo rm /etc/cron.d/observium
sudo deluser observium
sudo delgroup observium
- MeshCentral:
MeshCentral
/r/MeshCentral
Seems very beta 11/2020.
Server runs on Linux or Windows.
Agents available for Linux, Windows
- Xymon Monitor:
The Xymon Monitor
Server runs on Linux.
Agents available for Linux, Windows.
Need to have web server (e.g. Apache) installed, and C compiler and make utilities.
Seems to expect a dedicated server, where you log in as user "xymon" and run a daemon.
Latest update 2019-09.
- Linux snmp CLI utilities:
There is no constantly-running manager command ?
http://net-snmp.sourceforge.net/wiki/index.php/TUT:MRTG
https://www.comparitech.com/net-admin/snmpwalk-examples-windows-linux/
There IS a trap-handler daemon, which will receive messages from clients and maybe send email to notify. First, set up the Linux agent software (see "agents" section). Then:
sudo systemctl start snmptrapd
# Snmptrapd service says:
# Warning: no access control information configured.
# This receiver will *NOT* accept any incoming notifications.
man snmptrapd
man snmptrapd.conf
sudo edit /etc/snmp/snmptrapd.conf
# un-comment line "authCommunity log,execute,net public"
sudo systemctl restart snmptrapd
sudo systemctl status snmptrapd
sudo netstat -tulpn
sudo journalctl --unit='snmp*' --pager-end
- Simple Android manager-type apps, mainly for testing:
- "Fing - Network Tools" by Fing Limited.
Under a machine's "Network Details" section, it will show SNMP machine name, contact, location. But in a port scan, it will not show machine's port 161 as open, I think because SNMP is using UDP.
- "SNMP MIB Browser" by Zoho Corporation.
When installed, just has one MIB "RFC1213-MIB", stored on phone as three files in "Internal shared storage / mibs" folder. There is a similar file "/var/lib/snmp/mibs/ietf/RFC1213-MIB" on my Linux system. Copied all MIB files from remote device (e.g. /var/lib/snmp/mibs/ from my Linux laptop) to phone, NOT replacing the RFC1213-MIB that was there already.
When you add a host to the "Host List", the circle to the left of it will turn green if the SNMP connection to it is successful.
When polling for an ObjectId value, you can put in an ObjectId such as ".1.3.6.1.2.1.1.1.0", or a name if you've loaded the MIB where that name is defined. Some names and their MIBs are "sysDescr" (SNMPv2-MIB), "sysLocation.0" (SNMPv2-MIB), and "hrSystemProcesses.0" (HOST-RESOURCES-MIB). If you put in a name and start polling and get no response at all, probably you have not loaded the MIB needed to translate that name.
After you've successfully connected to the client system, you can go into "SNMP MIB Browser" and you will find that all the fields have been filled in with the values obtained from the client.
- "Fing - Network Tools" by Fing Limited.
All of the above systems seem to be huge overkill, and few of them support Android phones.
SNMP Agents
- Linux snmp CLI utilities:
To get SNMP agent working locally:
# http://net-snmp.sourceforge.net/
sudo apt install snmp
man snmpget
man snmpcmd
man snmpstatus
man snmptrap
man -k snmp

sudo apt install snmp-mibs-downloader
# https://medium.com/@CameronSparr/downloading-installing-common-snmp-mibs-on-ubuntu-af5d02f85425
sudo sed -i 's/mibs :/# mibs :/g' /etc/snmp/snmp.conf

sudo apt install snmptrapd    # can log incoming SNMP notifications to syslog etc

sudo systemctl status snmpd
sudo systemctl status snmptrapd
sudo systemctl enable snmptrapd
sudo systemctl start snmptrapd
sudo netstat -tulpn

# https://help.ubuntu.com/community/SNMPAgent
# man snmpd.conf
# sudo edit /etc/snmp/snmpd.conf to add:
rocommunity public default
includeAllDisks 10%    # or e.g. "disk /home %10"
# Note: default frequency for disk-checking is every 10 mins ?
# But I don't see it logging anything, ever.

# Then:
sudo systemctl restart snmpd
sudo systemctl status snmpd

snmpwalk -c public -v1 localhost | less
# notice that lines start with various "DISMAN-EVENT-MIB",
# "SNMPv2-MIB", "IF-MIB", "SNMPv2-SMI", "IP-MIB",
# "HOST-RESOURCES-MIB", more.

snmpdf -v 2c -CH -c public localhost
snmpps -v 2c -c public localhost | less
watch --interval 5 snmpstatus -v 2c -c public localhost

# https://www.debianadmin.com/linux-snmp-oids-for-cpumemory-and-disk-statistics.html
# every 5 seconds show Total RAM free:
watch --interval 5 snmpget -v 2c -c public localhost .1.3.6.1.4.1.2021.4.11.0
snmpdelta -v 2c -Cp 5 -c public localhost .1.3.6.1.4.1.2021.10.1.3.1
# notice that lines start with "UCD-SNMP-MIB"
To make SNMP agent accessible from network:
# https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-an-snmp-daemon-and-client-on-ubuntu-18-04
# edit /etc/snmp/snmpd.conf to comment out
# "agentAddress udp:127.0.0.1:161" line and add:
agentAddress udp:161
# or just comment out all the agentAddress lines.
# add lines: NO !!!
createUser suser1 MD5 SOMETMPPASSWORD DES
rouser suser1 priv
# A different way to do it: NO !!!
sudo apt install libsnmp-dev
net-snmp-create-v3-user --help
sudo systemctl restart snmpd
sudo systemctl status snmpd
# Run gufw and add rules to allow IN/OUT on ports 161 and 162
# from/to anywhere, with UDP, with IPv4.
# get your LAN IP address (call it THEIPADDRESS):
ip addr | grep 192.
snmpdf -v 2c -CH -c public localhost
snmpdf -v 2c -CH -c public THEIPADDRESS
snmpget -u suser1 -l authPriv -a MD5 -x DES -A SOMETMPPASSWORD -X SOMETMPPASSWORD THEIPADDRESS 1.3.6.1.2.1.1.1.0
# should get machine's "name", which is output of "uname -a"
# Go to manager app, such as Android apps in previous section,
# and try to query values from this system.
# Got stuck at this point, rebooted my Linux system,
# and things started working ! SNMP agent (snmpd) is working.
sudo journalctl --unit='snmp*' --pager-end
- Android agent apps:
I don't see a FOSS SNMP agent app for Android. Is SNMP built into Android by default, or just not available ?
https://github.com/brnunes/Android-SNMP
https://sourceforge.net/projects/libandroidsnmpsdk/
https://gist.github.com/issamux/6709513
http://www.net-snmp.org/
https://www.javatpoint.com/simple-network-management-protocol
https://agentpp.com/
https://www.programcreek.com/java-api-examples/?api=org.snmp4j.Snmp
https://apkpure.com/snmp-agent-4a/jp.snmp_agent4a
https://joyent.github.io/node-snmpjs/agent.html
Android "trap" app that sends to manager when some event happens:- "SNMP Trap" by Maildover LLC.
http://www.maildover.com/
http://www.maildover.com/joomla/
- "SNMP Trap" by Maildover LLC.
Phone can NOT be using VPN-to-internet (at least, strongSwan the way I have it configured), when acting as a manager.
Linux system CAN be using VPN-to-internet (at least, OpenVPN the way I have it configured), when acting as a client.
Mobile Device Managers (MDMs)
- Desktop Central:
Desktop Central
Server runs on Windows only, I think.
Agents available for ???
- Spiceworks "Inventory":
Spiceworks
Server runs on Windows.
Agents available for ???
Scans by IP address. No mention of mobile devices.
- Esper:
Esper
Server runs on ???
Agents available for Android only.
More of a DevOps thing.
- Miradore:
Miradore
Server runs on ??? Not Linux.
Agents available for all but Linux.
- Countly:
Countly
Server runs on Linux.
Your app on mobile has to be compiled with their SDK, to talk to server.
- ITarian:
ITarian
Server ("manager") runs on Mac, Windows, Linux.
Agents available for Android and iOS mobile devices.
- Relution:
Relution
Can use their cloud server (SaaS) up to 5 devices.
Agents available for Android and iOS mobile devices.
- AppTec EMM:
AppTec EMM
Server runs on ???
Agents available for Android and iOS mobile and Windows Mobile devices.
- ManageEngine:
ManageEngine
Server runs on Windows, but can use their cloud server (SaaS) up to 25 devices.
Agents available for Apple, Android, Windows, Chrome.
At this point, I'm thinking of rolling my own solution: an agent on each client that just does an HTTP GET of the manager's web server every hour, passing in some information (a sketch follows below).
"MacroDroid - Device Automation" by ArloSoft Tools (easy to send IMEI and Battery%, not much else)
Syslog protocol
Teknikal's_Domain's "Graylog, and the Syslog Protocol, Explained"
Wikipedia's "Syslog"
RSyslog Documentation
Port 514 on UDP is simplest way.
# See if rsyslog service is enabled and listening:
systemctl status rsyslog
sudo netstat -tulp
# If not, enable and check again:
less /etc/rsyslog.conf
# Edit /etc/rsyslog.conf to enable UDP-514. Then:
sudo systemctl restart rsyslog
# Then check again.
echo "<29>mymachine mycmd[2]: hello" > /dev/udp/127.0.0.1/514
sudo grep hello /var/log/syslog
# Apparently, if the message starts with a date-stamp, that stamp
# will be used in syslog; otherwise rsyslog will add its own stamp.
# Change data to start with different leading value to end up in
# different log files. Or omit leading "<nn>" entirely.
sudo apt install sendip
sudo sendip -p ipv4 -is 127.0.0.1 -p udp -us 514 -ud 514 -d 'Hello' -v 127.0.0.1
sudo grep Hello /var/log/syslog
# In router admin, set a fixed address for your computer. Then:
sudo sendip -p ipv4 -is 192.168.1.131 -p udp -us 514 -ud 514 -d 'Hello Again' -v 192.168.1.131
sudo grep Hello /var/log/syslog
# Now go to Android phone and install app and send messages from there.
# This app works for testing purposes but mainly is logging
# its own internal failure msgs:
# https://f-droid.org/en/packages/sk.madzik.android.logcatudp/
sudo tail /var/log/syslog
# If firewall is running on your computer, disable it or add exception for port 514.
# Probably have to disable VPN on phone.
# System log will aggregate repeated identical messages into
# one "repeated N times" message, which is a real pain while testing.
# Edit /etc/rsyslog.conf to set "$RepeatedMsgReduction off".
# Various rules in /etc/rsyslog.d/50-default.conf
# Client for Windows:
# https://www.solarwinds.com/free-tools/event-log-forwarder-for-windows
# https://en.freedownloadmanager.org/Windows-PC/syslog-ng-Agent-for-Windows-FREE.html
"snap install graylog"
"Edit /var/snap/graylog/common/server.conf to set the admin password and mongodb connection string. Connection information for elastic search is also needed."
Security Testing and Penetration Testing
- Smartphone apps or web sites to do port scanning of your system:
See the "Port scanning and router testing" section of my "Computer Security and Privacy" page. - Nmap / Zenmap:
Turn off VPN; it prevents operations Zenmap/Nmap tries to do.
Run via "Zenmap as root" menu item.
To get started, type "127.0.0.0" into Target field, select "Quick Scan" in Profile field, and click Scan.
Use Target of "192.168.0.0/24" to scan your whole LAN, maybe with Profile set to "Quick Scan". Equivalent of "nmap -F 192.168.0.0/24" ? Also try "sudo nmap -O -F 192.168.0.0/24".
Try scanning your own machine with Profile set to "Intense scan plus UDP". - WebMap:
A visual dashboard that uses a capture file from nmap.
- BabySploit:
M4cs / BabySploit
- Sparta:
SPARTA
WonderHowTo's "Using Sparta for Reconnaissance"
- Lynis:
A security-auditing package. Ran "sudo lynis audit system". It complained about outdated version of Lynis. Followed instructions at Cisofy's "Software Repository (Community)" to update Lynis. Ran "sudo lynis audit system" again. Went to BOOT-5122 - Set boot loader password but decided not to do it.
Maybe try "lynis --check-all --quick" ?
Later got this on reddit:
"Tool author here. Lynis does not make changes to the system. It only makes suggestions on what you can do."
Ran the tool again, it ran fine, afterward do: "sudo grep Sugg /var/log/lynis.log"
Balaji N's "Lynis - Open Source Security Auditing & Pentesting Tool - A Detailed Explanation"
Alan Formy-Duval's "How to read Lynis reports to improve Linux security"
- Vuls:
Vuls
Scans system for vulnerabilities listed in various vulnerability databases.
Savik's "How To Use Vuls as a Vulnerability Scanner on Ubuntu 18.04"
- Pompem:
rfunix / Pompem
Scans system for vulnerabilities listed in various vulnerability databases.
- Vulmap:
Vulmap
Scans system for vulnerabilities listed in a vulnerability database.
- cvescan:
Ubuntu-only, snap-only, from Canonical.
I think it's based on OpenSCAP.
Scans system for CVEs listed in official list from Canonical.
No man page, just CLI help.
No version option in CLI. "snap list" gives version.
CLI-only, no GUI.
Default "cvescan" shows high/critical-priority CVEs for which fixes are available but not yet applied to your system.
"cvescan --priority medium --unresolved | less" shows medium/high/critical-priority CVEs for which fixes are not available.
https://github.com/canonical/sec-cvescan
- ubuntu-security-status:
Ubuntu-only, from Canonical.
Gives support/update status (e.g. "will receive updates with LTS until 4/2025") of installed packages.
- linux-exploit-suggester:
mzet- / linux-exploit-suggester
- checksec.sh:
slimm609 / checksec.sh
Dejan Lukan's "Gentoo Hardening: Part 3: Using Checksec"
- security-assessor:
Scripts that scan a Linux system looking for security and robustness problems.
stevegrubb / security-assessor
- Microsoft Attack Surface Analyzer:
Attack Surface Analyzer 2.0
microsoft / AttackSurfaceAnalyzer
Does a diff of security configuration before and after installing new software.
- Aircrack-ng:
Wi-Fi penetration and analysis.
Aircrack-ng
Maybe run it via the wifite or wifite2 front-end application.
- systemd-analyze security:
Requires version 240 or later; "systemd-analyze --version"
"systemd-analyze security"
"systemd-analyze security UNITNAME"
"Should be used with caution. Not every security setting makes sense for every unit. You should therefore know what you are doing. The tool is therefore less suitable for end users but more for administrators."
"For example, sshd carries a status of '9.6 UNSAFE'. Most of this is because it requires running as UID 1 (root), loading kernel modules and lots of net based capabilities. To get sshd.service to a safe status would completely break the service and render it not even capable of performing its basic functions."
Daniel Aleksandersen's "systemd service sandboxing and security hardening 101"
Daniel Aleksandersen's "Limit the impact of a security intrusion with systemd security directives"
Arch Wiki's "Security guidelines - Systemd services"
Adrian Grigorof's "Open Source Security Controls"
Tightening Security
Really, it seems that 95% of the vulnerabilities are eliminated if you just don't run a web server on your machine. Also don't run SSH or FTP or other login-type services, and keep software updated, and you're above 99%.
From older version of Easy Linux tips project's "Security in Linux Mint: an explanation and some tips":
"Don't install Windows emulators such as Wine, PlayOnLinux and CrossOver, or the Mono infrastructure, in your Linux, because they make your Linux partially vulnerable to Windows malware. Mono is present by default in Linux Mint; run 'sudo apt remove mono-runtime-common' to get rid of Mono."
[First run 'sudo apt --simulate remove mono-runtime-common' to see what else you'd lose.]
Ask Ubuntu's "What are PPAs and how do I use them?"
But: "One thing to keep in mind about using PPAs (Personal Package Archives) is that when you add a PPA to your Software Sources, you're giving Administrative access (root) to everyone that can upload to that PPA. Packages in PPAs have access to your entire system as they get installed (just like a regular package from the main Ubuntu Archive), so always decide if you trust a PPA before you add it to your system."
It's a good idea to get CLI mail working, and check it regularly, since various services and packages will send failure or security notices to root's email. See "Getting Linux local CLI mail working" section.
- Turn off services and network listeners:
Run the System Settings application and click through many of the icons, turn off anything you don't need.
See what incoming ports are open and/or have listeners on them, and close down as many as possible. Do "sudo netstat -tulpn" or "sudo netstat -tulp". (Also "sudo ss -lptu".) For all ports, do "sudo netstat -tuap".
I see various TCP and UDP permutations of:
- systemd-resolve: DNS caching. Listens on a localhost address.
systemd-resolved manual page
I think you shouldn't disable this.
zbyszek's "systemd-resolved: introduction to split DNS" - cupsd and cups-browsed: print server. Port 631.
Some pieces of the cups package do other things such as PDF-conversion.
Service can be configured via http://localhost:631/admin
Ubuntu's "CUPS - Print Server"
- redis-server: a database server. Uses port 6379 and only on loopback 127.0.0.1 ?
Run "sudo systemctl status redis-server" to see status of it.
I removed it and apparently nothing broke, but probably it's best to just leave it alone.
NixCraft's "How to install Redis server on Ubuntu Linux 16.04"
OSTechNix's "How To Install And Configure Redis Server In Ubuntu" - dhclient: DHCP Client. Uses port 68. I think you shouldn't disable this
unless your machine has a static IP address.
- avahi-daemon: Apple mDNS/DNS Zeroconf (AKA Rendezvous or Bonjour) service.
For easy access to (Apple ?) networked
printers and file servers on your LAN.
Wikipedia.
Uses UDP ports 32768 and 5353, maybe also 52482 and 51623 ?
- openvpn: I'm using a VPN, can't remove it.
- Tor: listening on localhost port 9050 even when I'm not using Tor Browser.
Apparently it's there in case you want to run a Tor socks proxy, or host an onion service, or run an onion node.
Did "sudo systemctl disable tor" and "sudo systemctl stop tor". Tor Browser still works afterward. While Tor Browser is running, you will see a Tor listener on port 9150.
Later re-enabled the listener service. I want to do some things with outbound onion links from apps other than Tor Browser. Edit /etc/tor/torrc to un-comment the lines:
SocksPolicy accept 192.168.0.0/16
SocksPolicy reject *
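With that listener running, other apps can reach onion (and normal) sites through Tor's SOCKS proxy on port 9050. A quick test, assuming curl is installed (if you set a SocksPolicy like the lines above, you may also need a line accepting 127.0.0.1 for this local test):
curl --silent --socks5-hostname 127.0.0.1:9050 https://check.torproject.org/ | grep -i congratulations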
- SSH:
"ps -ax | grep ssh"
ssh-agent is not a listener, it's a server that supplies keys to other apps.
- Apache:
ps -ax | grep apache
sudo ss -lptu | grep apache
sudo systemctl status apache2
This appeared after a while, apparently because I installed php-gd library !
- KDE Connect (kdeconnectd):
On KDE, for connecting to smartphones and such.
"sudo apt remove kdeconnect"
What services are started at boot time: "sudo systemctl list-units".
Also see timer-based services: "sudo systemctl list-timers".
To disable a service: "sudo systemctl disable --now SERVICENAME" or sometimes "/lib/systemd/systemd-sysv-install disable SERVICENAME".
Delightly Linux's "Speed Up Linux Boot by Disabling Services"
TecMint's "How to Stop and Disable Unwanted Services from Linux System"
- Learn a little about how network traffic is controlled in your system:
The key pieces and how you can see things:- Physical network devices (Ethernet adapter, Wi-Fi adapter).
- Logical or software network devices (loopback, localhost, VPN).
- inxi -i (network devices)
- ip -d address (network devices)
- netstat -r or route -n or ip route show (IP routing)
- sudo iptables -L -v (packet filtering and routing)
- sudo netstat -tulp (listeners)
- cat /etc/resolv.conf (DNS servers)
- resolvectl domain (DNS resolution among network interfaces)
- arp and arp -n (LAN devices talked to)
- cat /var/lib/dhcp/dhclient.leases (current DHCP leases)
- Apps: services (service --status-all), listeners (sudo netstat -tulp), and foreground apps
Inside Linux, while running a VPN and through a router, there are four kinds of IPv4 address:- LAN address (192.n.n.n).
- VPN client's WAN address (10.n.n.n in my case).
- Router's WAN address (77.n.n.n in my case).
- VPN server's WAN address (89.n.n.n in my case).
I haven't found a way yet that an app on my computer can get the Router's WAN address, either with VPN on or VPN off. But with VPN off, an app could talk to a server outside and ask it "what IP address am I coming from ?".
Other types of address:- MAC address (address of physical adapter, xx:xx:xx:xx:xx:xx, see with "inxi -i").
- IPv6 address (xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx).
- Password Store:
I use a dedicated password manager application (KeePassXC), so I remove or disable the system password store (Seahorse, KWallet [in System Settings], etc). - VPN:
IMO, use OS's built-in generic client (e.g. OpenVPN), or a FOSS client (e.g. strongSwan), not a VPN company's proprietary client. That proprietary client just would have too much access to your data.
# check with VPN off and then again with VPN on:
echo IPv4: && curl --ipv4 --get ifconfig.me && echo
echo IPv6: && curl --ipv6 --get v6.ident.me && echo
Sometimes VPN will mess with iptables rules, and then firewall software will do same. You don't want the two fighting each other. - Some things that probably aren't installed, or you don't want to mess with:
- Firewall:
sudo iptables -L -v
sudo ip6tables -L -v
sudo ufw status verbose
sudo nft list ruleset
Sometimes VPN will mess with iptables rules, and then firewall software will do same. You don't want the two fighting each other.
After a while, in Ubuntu MATE 20.04, with Windscribe VPN active through OpenVPN, I did System Settings / Firewall Configuration, and slid the "Status" slider to "on". It's "gufw". Default setting is "Incoming == Deny, Outgoing == Allow". List of rules is empty. After doing that, "sudo ufw status verbose" shows active and "deny (incoming), allow (outgoing), disabled (routed)", and "sudo ip*tables -L -v" show HUGE sets of chains. - Use iptables to log and understand and block network traffic:
See iptables section for details.
To see how much traffic is passing through each section of rules in filter table, do "sudo iptables -L -v" (can reset counters via "sudo iptables -Z").
There is "ip6tables" which is separate but mostly has the same syntax as "iptables". "sudo ip6tables -L -v"
Default logfile for iptables is /var/log/kern.log;
"sudo dmesg -T" command shows same log, with some useful coloring added.
I would like to log/detect all applications that create outbound connections. You can get instantaneous snapshots (not cumulative logs) by running "ss -tp" or "netstat -A inet -p".
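One way to get a cumulative log of new outbound connections is an iptables LOG rule (a minimal sketch; it can be very noisy, and a VPN's own rules may interfere):
sudo iptables -I OUTPUT 1 -m state --state NEW -j LOG --log-prefix "OUT-NEW: " --log-level info
sudo tail -f /var/log/kern.log | grep OUT-NEW
# remove the rule when done:
sudo iptables -D OUTPUT -m state --state NEW -j LOG --log-prefix "OUT-NEW: " --log-level info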
To see IP address and other details about each network interface, run "ip -d address".
Setting iptables rules gets more complicated because your VPN probably does iptables stuff too. Windscribe VPN adds rules and changes chain policies. I had to have a big shell script to add rules before the VPN starts, then a small script to do a couple of tweaks after VPN has started.
Do "sudo netstat -tulpn" to see what ports have listeners. (Also "sudo ss -lptu")
I mostly gave up with iptables. I think it was the wrong approach. Instead, concentrate on reducing and understanding the number of listeners you have. It doesn't matter if an incoming packet gets through iptables, as long as no process is listening on that port.
I think it would be different with a server, running only a few services. There you could allow only 10 or so open ports. - IPv6:
I say: turn it off everywhere, unless you have a good reason to use it. A lot of software doesn't work with it, or isn't tested much with it, or just converts it to IPv4. Having it enabled is likely to leave vulnerabilities.
My ISP (or their router) blocks IPv6, so I can't test it. I do get a little incoming ICMPv6 traffic from somewhere, and my system outputs a little ICMPv6 traffic. Windscribe VPN does not support IPv6, and wants to block outgoing IPv6 via iptables.
superuser's "Did you know that IPv6 may include your MAC address?"
Teknikal's_Domain's "IPv6 Is a Total Nightmare - This is Why"
- Cron:
Check what jobs are being run by cron.
Cron jobs live under a variety of /etc/cron* dirs, and also in files /etc/crontab and /etc/anacrontab, and in dir /var/spool/cron/crontabs.
# See cron log entries
grep CRON /var/log/syslog /var/log/syslog.1
# command "debian-sa1" is /usr/lib/sysstat/debian-sa1, see "man sysstat"
# See status and some logging
systemctl status cron.service
# See which cron file is producing a job you see in the log:
sudo bash
grep SOMETHING /etc/crontab /etc/cron.*/* /etc/anacrontab /var/spool/cron/crontabs/*
# crontab line format (man 5 crontab) is:
# minute hour dayofmonth month dayofweek command
# Edit jobs for current user:
crontab -e
# Edit jobs for root user:
sudo crontab -e
To change the editor used by "crontab -e", set e.g. "export VISUAL=/usr/bin/xed" in your .profile.
Paraphrased from discussion on reddit, rules (an example crontab entry follows this list):
- Don't put user commands in the default (root) cron file.
- Use the user cron file for user commands.
- Don't put GUI commands in any cron file.
- Always include complete paths for every command and data file.
- Don't put commands and arguments in the cron file, instead call a shell script.
- Cron's shell environment is very different than yours. See the cron man page.
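An example entry that follows those rules (paths and script name are hypothetical):
# in "crontab -e" for the user: run a backup script at 02:30 every day
# m   h  dom mon dow  command
30    2   *   *   *   /home/user1/bin/backup-home.sh >> /home/user1/log/backup.log 2>&1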
nixCraft's "I put a cronjob in /etc/cron.{hourly,daily,weekly,monthly} and it does not run and how can I troubleshoot it?"
Ubuntu wiki's "CronHowto"
Raj's "Linux Basics: 20 Useful Crontab Examples in Linux"
- systemd:
The name "systemd" is a bit confusing: There is- systemd the service manager.
- journald, networkd, sd-bus (all mandatory).
- systemd-resolved.service, systemd-boot, systemd-homed.service (all optional).
- consoled, systemd-logind.service, systemd-hostnamed.service, systemd-machined.service, many more (not sure if all optional).
Wikipedia's "systemd"
systemd diagram
Sahil Suri's "Introducing systemd"
ArchWiki's "systemd"
Carla Schroder's "Understanding and Using Systemd"
David Both's "Learning to love systemd"
neeasade's "systemd user services" (under $HOME/.config/systemd/user)
Andy's "Starting services only when the network is ready on Debian/systemd"
Trapped inside a tolva's "systemd targets and infrastructure layers"
alegrey91 / systemd-service-hardening
Ctrl blog's "systemd application firewalls by example"
systemd.io
freedesktop.org's "systemd System and Service Manager"
Chris Hoffman's "Meet systemd"
Debian Reference's "Chapter 3. The system initialization"
Jason Wertz's "Basics of the Linux Boot Process" (video) (2013 and pre-systemd, but interesting)
Professor Messer's "Init, Systemd, and Upstart" (video) (2013, not much systemd, but interesting)
man systemd
man udev
systemctl list-units
systemctl status UNITNAME
systemctl cat UNITNAME
systemctl status PID    # find out which unit a process belongs to
Casey Houser's "How to Use Systemd Timers as a Cron Replacement"
blog'o'less's "Using systemd timers instead of cron"
blog'o'less's "More about using systemd timers: reboot"
luqmaan's "Using systemd as a better cron"
ArchWiki's "systemd/Timers: As a cron replacement"
From discussion on stackexchange, why systemd better than cron:
Checking what your cron job really does can be kind of a mess, but all systemd timer events are carefully logged in systemd journal.
Systemd timers are systemd services with all their capabilities for controlling their resource management, IO, CPU, scheduling, etc.
There can be dependencies on other services.
Services can be started and triggered by different events like user, boot, hardware state changes or for example 5 mins after some hardware plugged in.
Easily enable/disable the whole thing with "systemctl enable/disable" and kill all the job's children with "systemctl start/stop".
Timer events can be scheduled based on the finish times of previous executions, so delays can be set between runs.
Communication with other programs is also notable; sometimes other programs need to know about the timers and the state of their tasks.
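A minimal sketch of a timer/service pair that replaces a daily cron job (unit names and script path are hypothetical); put both files in /etc/systemd/system/ (or in ~/.config/systemd/user/ for a user unit):
# backup-home.service
[Unit]
Description=Back up home directory
[Service]
Type=oneshot
ExecStart=/home/user1/bin/backup-home.sh

# backup-home.timer
[Unit]
Description=Run backup-home.service daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target

# then:
sudo systemctl daemon-reload
sudo systemctl enable --now backup-home.timer
systemctl list-timers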
Good things about systemd:
- Does things that init scripts can't do: react to events, do operations in parallel
(but init scripts can do this with startpar ?), let
user start/stop daemons, set limits to the resource usage of a service, have one
service depend on result of another (but init scripts can do this with insserv ?).
- Pushes control away from the init-project and towards the distro maintainers.
It's easier to enable/disable/add/delete services than to rewrite the init scripts.
- User is less likely to bork the system by making a mistake while
adding a new service than while editing init scripts ? More modular.
- Has "targets", which are finer-grained and more flexible than init's "run-levels".
/u/thlst's "Why did ArchLinux embrace Systemd?"
David Edmundson's "Plasma and the systemd startup"
IMO: systemd the service manager does one thing, and does it well: represent/control units of work. It satisfies many uses: managing services at boot/login/logout/shutdown times, letting the user manage services manually, managing services in reaction to events, I think daemons spawning/controlling other services. I'd like to see init go away completely, and also have cron be a systemd service (that launches other services).
Bad things about systemd:
- Added a second system log (the journal) to the system, and it's binary, not text.
- Has its own DNS resolver.
[But maybe it's a good thing: zbyszek's "systemd-resolved: introduction to split DNS"]
From someone on reddit:> Why does systemd have DNS in it ?
Because in certain situations they need to have it very early in the boot process, before other stuff is up and running. In most cases it isn't needed and thus isn't used, but when it is needed it is very useful.
That is the big thing a lot of people don't seem to get: systemd provides a lot of optional components that serve niche use-cases but are turned off by default. In most cases those aren't needed and aren't used, but they are there when someone really needs them.
> Something being turned off by default doesn't make it any easier to understand the code,
> or understand which DNS specification takes precedence over which other one.
- Runs all services in one process ? Or dispatches all from one process ? Not sure.
- Now is trying to control the definition of the home directory ?
But there's a good case to be made for it.
article1, article2 - Also has time-sync ?
- An overly-heavy solution for situations where a very simple init system would suffice
(such as a container that contains multiple processes).
V.R.'s "systemd, 10 years later: a historical and technical retrospective"
See "systemd Service" section of my Develop an Application page
- ArpON:
A network daemon to protect against ARP MITM attacks.
ArpON
- In LibreOffice, turn off macros:
See item "Libre Office: improving Macro Security" in Easy Linux tips project's "Security in Linux Mint: an explanation and some tips" - Application controls and sandboxing:
See Application Control and Security section. - Monitor what's happening in your system:
Look at some key log and status files every day or every week:
# Whatever command shows status of your VPN connection, e.g.:
sudo ipsec statusall    # if you're using IPsec, maybe with IKEv2, strongSwan
windscribe status       # for Windscribe using OpenVPN
systemctl status openvpn
curl --get ifconfig.me && echo
# Check logs for errors:
sudo journalctl -p 3 -xb
sudo systemctl --failed
# Maybe do these, or not:
sudo iptables -L -v
sudo netstat -tulp
sudo apparmor_status
firejail --list
sudo journalctl -u cron
cat /var/log/kern.log
sudo dmesg -T    # shows same log, with coloring added
sudo journalctl --pager-end
egrep -i 'error|warn' /var/log/*g | less
# Read root's CLI mailbox; some failure and security messages are sent to it
sudo bash
mail
last       # history of logins
lastlog    # last time each user logged in
sudo less /var/log/auth.log    # authentication-related activity
You can put a message into the journal, but not the kernel log, by doing "logger -p user.info this is the message". See "man logger" for more options. Maybe do this each time you make a significant change to the system ?
You can delete old entries in journalctl, freeing space, by doing "sudo journalctl --vacuum-time='2d'"
To view log files, install "glogg" application.
Radu Gheorghe's "Linux Logging Tutorial: What Are Linux Logs, How to View, Search and Centralize Them"
D-Bus:
There's a "D-Bus" for inter-application messaging.
Apps can communicate, and each has a name such as "/org/Mozilla/FireFox", and exposes methods that can be called. The piece I am missing is any convention for unknown apps to talk to each other. E.g. "can some app please convert this file from JPG to PNG for me" or "can some app please open this onion link". I'm not sure where the concept of "default browser" exists, for example. For files, is there any standard operation other than "open" ?
There is a close relationship between D-Bus and systemd: systemd units implement D-Bus interfaces, and changes in unit states emit D-Bus signals.
D-Bus is not used for clipboard; there is a separate mechanism.
Wikipedia's "D-Bus"
Freedesktop.org's "Desktop Notifications Specification"
Freedesktop.org's "D-BUS Protocol"
Koen Vervloesem's "Control Your Linux Desktop with D-Bus"
Bobbin Zachariah's "A Good Understanding of D-BUS - An IPC Mechanism in Linux"
Do "sudo dbus-monitor --system --profile" to look at activity on it ? But I don't see much happening.
Do "qdbus" to see what apps/services are registered on it (but it's all system stuff, no browsers or password managers registered on it).
"man busctl"
Install "Bustle" (Flathub's "Bustle").
qdbusviewer ?
d-feet "apt install d-feet" ?
Clipboard:
How do applications communicate to/from the clipboard ? Is there any security ?
Clipboard is a DE-and-windowing-system thing, not a kernel facility.
ArchWiki's "Clipboard"
uninformativ.de's "X11: How does 'the' clipboard work?"
Freedesktop.org Specifications
Simon Ser's "Wayland clipboard and drag & drop"
For web pages, there is a navigator.clipboard object in the browser DOM, and read/write permissions on it.
MDN's "Clipboard"
There are CLI utilities for directing command output to clipboard:
Josphat Mutai's "How To Copy and Paste Text Content from Linux Terminal"
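For example, with xclip installed (X11; on Wayland the rough equivalents are wl-copy / wl-paste from the wl-clipboard package):
sudo apt install xclip
ip -d address | xclip -selection clipboard    # copy command output to the clipboard
xclip -selection clipboard -o                 # paste clipboard contents to stdout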
X:
There is network traffic between X11 clients and server.
"Modern X servers have the MIT Shared Memory Extension and communicate with their local X clients using the local shared memory."
uninformativ.de's "Dumping X11 traffic"
Debian Reference's "Chapter 7. The X Window System"
"ls /usr/share/applications/ | sort"
Wayland:
Kristian Hogsberg's "The Wayland Protocol"
Shivam Singh Sengar's "Wayland v/s Xorg : How Are They Similar & How Are They Different"
Wayland book draft
Drew DeVault's "Wayland misconceptions debunked" (2/2019)
Linux Experiment's "Wayland: what is it, and is it ready for daily use?" (video) (12/2020)
Aishou / wayland-keylogger
probonopd's "Think twice before abandoning Xorg. Wayland breaks everything!"
To add Wayland session option to KDE: "sudo apt install plasma-wayland-session"
Put some commands in a shell script that you can run a couple of times each day, to show status of VPN, iptables, listeners, tails of log files.
A mega-command that will "tail" a whole bunch of log files at the same time and keep updating as they get updated:
"sudo find /var/log/ -type f \( -name "*.log" \) -exec tail -f "$file" {} +"
Run "System Monitor" every now and then, sort by memory used, sort by name, look for anything unusual.
To see all packages installed on your system, do "dpkg -l".
On Ubuntu, to see known security vulnerabilities, do "cvescan" (without args, it shows high/critical CVEs for which fixes are available but not yet applied to your system). - Don't expect perfect security just because you're running Linux.
You're still relying on a lot of applications and other software to be well-behaved.
From someone on reddit 6/2020, about Linux Mint: Firewall is off by default.
ufw does not switch rule zones per-connection the way firewalld can, so people either don't use it or don't use it well (they dump everything into one rule zone or forget to switch) due to the inconvenience and hassle; ironic, given that UFW is supposed to make things easy (firewalld can do this, but UFW is Canonical's, so it's what's in the distro).
Avahi zeroconf is enabled by default and not behind a home firewall rule zone (it has been exploited for UDP reflection amplification and fingerprinting; even if you do not use zeroconf services, it by itself is exploitable).
Lagging behind in updates for main attack surface apps like Firefox.
Have not checked if Warpinator is enabled and advertising by default, last I checked in the source it was not but it uses zeroconf to do so.
See "Secure because Linux" section of my "Linux Problems" page.
Easy Linux tips project's "Security in Linux Mint: an explanation and some tips"
The Empire's "An Ubuntu Hardening Guide"
lfit's "Linux workstation security checklist"
blakkheim's "Linux Security Hardening and Other Tweaks"
Madaidan's Insecurities' "Linux Hardening Guide"
Maybe likely to break things:
SK's "How To Password Protect GRUB Bootloader In Linux"
[But that doesn't protect against booting from USB drive.]
See Anti-Virus and Malware Scanners section of my "Using Linux" page.
See Application Control and Security section.
Tightening Privacy
- Location services:
To test these, best if you have separate locations set for:- Real IP address.
- VPN server.
- Location Guard add-on in each browser.
- Fake location set in GeoClue.
Location services:- GeoClue service in D-Bus.
You can remove Geoclue-2.0, but then things such as Redshift will complain (fix for Redshift, or maybe use "-l" option on command line).
Installed Geoclue-examples, but found no obvious new apps, in /usr/lib/geoclue-2.0/demos I see "where-am-i". Also found /usr/bin/geoclue-test-gui app, but it just returns all zeroes even though the GeoClue service should be running and accessing a Mozilla location server. (It shows "Error getting position from Geoclue Master: Geoclue master client has no usable Position providers" on stderr.)
Redshift also shows zeroes for location.
Configuration file is /etc/geoclue/geoclue.conf.
Tried replacing the Mozilla service URL in there with "file" URL to a local JSON file, but it failed, couldn't be fetched.
Put a "https://api.jsonbin.io/" URL in there, got "Failed to query location: Not Found" in journalctl.
Verified that the original Mozilla URL returns valid JSON, so the problem must be in the code.
Installed Gnome Maps application, but when it starts it says "no internet connection", same with VPN on or off.
For package dependencies, do "apt-cache rdepends geoclue" and "apt-cache rdepends geoclue-2.0".
At system boot, in journalctl I see (via "sudo journalctl | grep [Gg]eo[Cc]") "geoclue: Failed to connect to avahi service" (which is local DNS, sort of), but you can turn that off by changing "Fetch location from NMEA sources on local network?" in /etc/geoclue/geoclue.conf to false or undefined.
"geoclue-example.provider" is from "/usr/lib/geoclue/geoclue-example" file ?
Project home
"apt show geoclue-2.0" says I have version 2.4.7-1ubuntu1; project says newest is 2.5.1. - Mozilla Location Service.
Used by GeoClue.
You can opt out of having your Wi-Fi access point listed in MLS, but you have to modify your Wi-Fi network name (append "_nomap" to the SSID) or its visibility to do so.
- Apple, Google, Microsoft have similar services.
"http://maps.google.com/?ie=UTF8&ll=48.861426,2.338929&spn=0.011237,0.027874&z=16" (Paris)
"http://maps.google.com/?ie=UTF8&ll=36.7160858,-4.4233916&spn=0.011237,0.027874&z=16" (Malaga)
- The browser is a key point for controlling location data.
Set the preferences in each browser you use.
Also there are browser add-ons to control or fake your location, such as Location Guard - Network Time server.
Not a good idea to turn this off.
Testing your location settings:- Settings / Date & Time is getting location from Network Time server, I think.
For me it is showing VPN server location. - Test browser via
BrowserLeaks' "Geolocation API".
For me, in Firefox, it shows location set in Location Guard add-on.
For me, in Chromium, it showed VPN server location, until I installed Location Guard.
- System knowledge of online accounts:
GNOME Online Accounts (GOA). org.gnome.OnlineAccounts
GNOME Help's "Allow or disallow online accounts"
- To trace what files an app or command is using:
strace -e trace=open,openat -f YOURCOMMANDANDARGSGOHERE
Accounts
sudo bash
# To see accounts that can log in:
grep -v ':\*:' /etc/shadow | grep -v ':\!:' | grep -v ':\!\!:' | grep -v ':\!\*:'
# To see accounts that will be shown on the login page:
grep -v '/usr/sbin/nologin$' /etc/passwd | grep -v '/bin/false$' | grep -v '/bin/sync$' | grep -v '^root'
# To see accounts that can do "sudo":
grep sudo /etc/group
My understanding of accounts:
- There are "local" accounts (defined in /etc/passwd), but in a corporate environment
there could be "network" accounts that are defined in a server somewhere (e.g. LDAP).
- Generally there is a "local" home directory for each account (named "/home/USERNAME"),
but in a corporate environment the home directory could be on some server,
and just mounted onto the local mountpoint.
- Login can occur on the system desktop GUI (display and keyboard and mouse),
through a system virtual console (TTY1: Ctrl+Alt+F1 to switch to it, Ctrl+Alt+F7 to return to the GUI),
through Telnet/SSH from a network,
through remote-desktop software such as VNC or TeamViewer.
- The account you specified at installation time is a user that belongs to the "sudoers" group and thus
can use the "sudo" command to do super-user things. You can login as that user.
- There is a "root" account, but it is locked so you can't log in as root.
No password can be typed which will match the value in root's password hash field.
BUT: from Easy Linux tips project's "Security in Linux Mint: an explanation and some tips":
"In Linux Mint 18.3, the root password is unfortunately no longer set by default. This means that a malicious person with physical access to your computer, can simply boot it into Recovery mode. In the recovery menu he can then select to launch a root shell, without having to enter any password. After which your system is fully his. So, set a password for root (preferably identical to your own password)." - There may be a "guest" account, and you can log in as that user. [Disabled by default on Mint 19.]
There is no password on that account. Can it login through SSH ?
Definitely disable guest login, and you have to reboot after doing so. In Mint, run "System Settings", go to "Administration" section, click on "Login Window", click "Users" tab, probably everything should be turned off.
Abhishek Prakash's "How To Disable Guest Account In Ubuntu"
Ubuntu's "CustomizeGuestSession" - If you create more users, you can choose to add each new user to the "sudoers" group or not.
- If you are logged in as a user who belongs to the "sudoers" group, you
can do any administrative operation, and the password to use when "sudo" asks is the password for that user.
- I guess it is okay to routinely log in and do all your daily computing as
the user you specified during installation, which belongs to the "sudoers" group.
Any malware you run accidentally would have to know your password in order to do "sudo"
and then do administrative stuff, and I think sudo accepts password only from keyboard.
So it's not quite like routinely running as Administrator under Windows.
- On a single-user system, the security distinction between root and normal user is not so important.
All of the interesting personal files probably are owned by the normal user. So if an attacker can get
in as that normal user, they get all the good stuff, no need to escalate to root.
Escalating to root might let the attacker do a few more things, such as access network hardware at a low level to attack other machines on the LAN.
Escalating to root on a multi-user system is much more serious/important than on a single-user system. - From someone on reddit:
"Normally, all keyrings should get signed into automatically upon user login to the system. If suddenly you have to log in to a keyring when you never did before, what's gone wrong is that you changed the password with passwd instead of using the account manager built into the GUI. The GUI will automatically change your keyring password too. The command line won't."
Some command-line ways to list all users: "getent passwd", "compgen -u", "cat /etc/passwd".
List users with no password set: "sudo awk -F: '($2 == "") {print}' /etc/shadow"
List users with UID set to 0 (superuser): "sudo awk -F: '($3 == "0") {print}' /etc/passwd"
List info about a user: "id user1"
Set limits on users or groups: /etc/security/limits.conf
Login security can be defeated if an attacker has physical access:
Alarming article about (a hole in) account security:
Abhishek Prakash's "How to Reset Ubuntu Password in 2 Minutes" (boot into Recovery mode)
Maybe there is some way to password-protect GRUB, or maybe this doesn't work if /home is encrypted ?
SK's "How To Password Protect GRUB Bootloader In Linux"
Another way to change passwords if you have physical access: boot the machine from a Live system on USB or CD, do "sudo -i", do chroot to the main system disk, do "passwd $username".
Ask Ubuntu's "How do I reset a lost administrative password?" (boot into Recovery mode)
SK's "How To Reset Root User Password In Linux"
Not sure, but I think these methods let you log in even if the user's home is encrypted. However, with eCryptfs-style home encryption the mount passphrase is wrapped with the old login password, so resetting the password this way probably does not by itself decrypt the user's home (and with full-disk LUKS you need the disk passphrase before the system will even boot).
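A sketch of that live-system method (device, mount point, and user name are examples; adjust to your system):
# booted from a live USB session:
lsblk                      # find the installed system's root partition, e.g. /dev/sda2
sudo mount /dev/sda2 /mnt
sudo chroot /mnt
passwd user1               # set a new password for that user
exit
sudo umount /mnt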
PAM (Pluggable Authentication Modules):
Applications (including the "login manager" and sudo) can be written to be PAM-aware. Then the application doesn't have to invent its own authentication mechanism.
The authentication process can be complex and multi-step. But as far as I can tell, all it does is give a yes/no for access to the application; there's no way to have it do fine-grained control of access to permissions within the application ? To do that, you'd have to define separate users with separate permissions ?
Configuration files are in /etc/pam.d directory. Some are for applications, others for events.
"apt list | grep libpam | less"
PAM can limit resource usage via settings in /etc/security/limits.conf
PAM can enforce password strength rules via libpam-cracklib or libpam-pwquality (configured in /etc/pam.d/common-password on Debian/Ubuntu, /etc/pam.d/system-auth on RHEL-type systems)
PAM can lock an account after N failed login attempts via pam_tally2 (configured in the auth stack, e.g. /etc/pam.d/common-auth on Debian/Ubuntu)
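Example snippets (a sketch; user and group names are hypothetical):
# /etc/security/limits.conf
user1        soft   nofile   4096    # soft limit on open files
@developers  hard   nproc    200     # max processes for group "developers"

# in the PAM auth stack (e.g. /etc/pam.d/common-auth on Debian/Ubuntu):
# lock the account for 10 minutes after 5 failed attempts
auth required pam_tally2.so deny=5 unlock_time=600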
Mokhtar Ebrahim's "Configure and Use Linux-PAM"
Good info in Chapter 23 "Understanding Advanced Linux Security" of "Linux Bible" by Christopher Negus.
Old but some useful stuff: Debian Reference's "Chapter 4. Authentication"
To enable TOTP on desktop logins:
If you're going to enable this, I would save a copy of "/etc/pam.d/lightdm",
then create another user account, login to that account, and enable TOTP on that account,
to make sure everything works.
Chris Hoffman's "How to Log In To Your Linux Desktop With Google Authenticator"
Daniel Pellarini's "How To Configure Multi-Factor Authentication on Ubuntu 18.04"
nixCraft's "Secure Your Linux Desktop and SSH Login Using Two Factor Google Authenticator"
Linux Uprising's "How To Login With A USB Flash Drive Instead Of A Password On Linux Using pam_usb"
"sudo apt install libpam-google-authenticator".
"man google-authenticator".
Chris Hoffman's "How to Log In To Your Linux Desktop With Google Authenticator"
Daniel Pellarini's "How To Configure Multi-Factor Authentication on Ubuntu 18.04"
nixCraft's "Secure Your Linux Desktop and SSH Login Using Two Factor Google Authenticator"
Linux Uprising's "How To Login With A USB Flash Drive Instead Of A Password On Linux Using pam_usb"
"sudo apt install libpam-google-authenticator".
"man google-authenticator".
Types of keys and certificates:
- RSA / SSH public-private key-pair.
Usually generated by a client, which authenticates to a server some other way and then sends the public key to that server, for future use in authentication.
File id_rsa.pub is the public key and file id_rsa is the private key.
Each such file holds a single key; for multiple key-pairs, use separate pairs of files (id_rsa, id_ed25519, etc.).
~/.ssh directory. - Identity certificate.
X.509, usually generated and signed by a trusted authority.
Wikipedia's "X.509"
Personal cert file types .pem .p12 .pfx.
Authority cert file types .cer .cert .crt. Also .der ? - PGP public-private key-pair.
~/.gnupg directory.
From "man ssh":
"The idea is that each user creates a public/private key pair for authentication purposes. The server knows the public key, and only the user knows the private key. ssh implements public key authentication protocol automatically ..." and "A variation on public key authentication is available in the form of certificate authentication: instead of a set of public/private keys, signed certificates are used. This has the advantage that a single trusted certification authority can be used in place of many public/private keys."
Also relevant "man ssh-keygen".
Steve Cope's "SSL and SSL Certificates Explained For Beginners"
Keyring / GnomeKeyring / ksecretservice:
setevoy's "What is: Linux keyring, gnome-keyring, Secret Service, and D-Bus" (also Arseny Zinchenko (setevoy)'s "What is: Linux keyring, gnome-keyring, Secret Service, and D-Bus")
GNOME Keyring
Keyrings(7) man page
Arch Wiki's "GNOME/Keyring"
Nurdletech's "Gnome Keyring"
There is a Linux kernel keyring (see "man 7 keyrings"), and a GNOME Keyring (GNOME Keyring).
It is integrated with ssh, sftp, scp, PAM, Chrome/Chromium. It can be integrated with Git, GnuPG, Firefox.
swick / mozilla-gnome-keyring (extension for Firefox and Thunderbird)
From Gnome Keyring - Security FAQ:
"Gnome Keyring is integrated with PAM, so that the 'login' keyring can be unlocked when the user logs in.".
LZone's "Using Linux keyring secrets from your scripts"
On CLI, do "cat /proc/keys" to see some of the keys in the Linux kernel keyring.
On CLI, do "man keyctl".
GNOME keyring stored under ~/.local/share/keyrings
Apparently Thunderbird has its own internal keyring.
"Passwords and Keys" application (AKA "Seahorse"):
Accesses GNOME Keyring.
AKA Seahorse
Under Passwords - Logins, it seems to have a bunch of placeholder entries for web sites, and a couple of things for apps (Chrome, Skype). There's nothing (for me) under Certificates (I do have certs installed in FF, Chrome, Thunderbird, but they don't show up here), and under Secure Shell (OpenSSH = ~/.ssh). But there are several keys under PGP Keys (maybe stored under ~/.gnupg directory ?). Hover mouse over each item to see tooltips.
SSH logins:
Ubuntu's "SSH / OpenSSH / Installing Configuring Testing"
Chris Hoffman's "How to Secure SSH with Google Authenticator's Two-Factor Authentication"
Linuxaria's "Add security to your ssh daemon with PAM module"
nixCraft's "Top 20 OpenSSH Server Best Security Practices"
From Ravi Saive's "How to Setup Two-Factor Authentication (Google Authenticator) for SSH Logins":
"Important: The two-factor authentication works with password based SSH login. If you are using any private/public key SSH session, it will ignore two-factor authentication and log you in directly."
SK's "How To Configure SSH Key-based Authentication In Linux"
Alistair Ross's "How To Set Up SSH Keys"
Carla Schroder's "5 SSH Hardening Tips"
Testing your SSH from outside:
InfoByIp's "SSH server connectivity test"
Rebex SSH Check
But really you need to try to connect from an outside machine and see what happens.
Jesus Vigo's "How to join a Linux computer to an Active Directory domain"
Trusted certificate stores:
Security certificates can be stored in a number of places ?
- In some browsers. /usr/share/ca-certificates/mozilla/
- In other mini-browser-containing apps such as Thunderbird email client ? ~/.thunderbird/PROFILENAME/cert9.db and key4.db ?
- Electron apps contain the Chromium browser engine; what certificate store is used, or none ?
- Node.js comes with a built-in store of CA's ?
- Java has its own store ? Also each Java app could have its own ? /etc/ssl/certs/java/cacerts, jre/lib/security/cacerts ? "man keytool" article
- LDAP or OpenLDAP ?
- System store used by openssl (/etc/ssl):
# find the base directory:
openssl version -d
# list the certs:
ls -R `openssl version -d | sed -E 's/OPENSSLDIR: "([^"]*)"/\1/'`/cert*
# or maybe
sudo ls -lAR /etc/ssl
# or
sudo find /etc/ssl -name '*.pem' -print | xargs -I{} openssl x509 -subject -noout -in {}
sudo find /etc/ssl -name '*.crt' -print | xargs -I{} openssl x509 -subject -noout -in {}
sudo find /etc/pki -name '*.pem' -print | xargs -I{} openssl x509 -subject -noout -in {}
# but my personal certs (installed in browsers etc) don't show up in there
man update-ca-certificates
ls -l /etc/ssh
ls -lAR ~/.pki
ls -lAR /etc/pki
# while user is logged in:
sudo ls -lAR /run/user/USERIDNUMBER/keyring
From someone on Stack Exchange:
Most distros put soft-links to their certificates in a system-wide location at /etc/ssl/certs.
- Key files go into /etc/ssl/private
- System-provided actual files are located at /usr/share/ca-certificates
- Custom certificates go into /usr/local/share/ca-certificates
From someone on reddit 11/2019:
Applications that utilize the system cert store: Chrome on macOS/windows
[but 11/2020 Chrome is changing to have its own store:
article].
Safari on macOS. Edge on windows [old Edge, I assume; don't know about new Edge].
Linux support depends on the distribution. RHEL is probably better than others.
Firefox uses its own key store ...
Java applications will vary in support. It really depends on the implementer.
[Certs can be stored in a hardware device:] A Yubikey with certs provisioned acts as a pkcs#11 device which is an industry standard interface to cryptographic devices. It has good support for all applications that utilize the system cert store. There are plugins to utilize pkcs11 devices for Firefox.
Amit N. Bhagat's "Digital Certificates Explained"
Federal Public Key Infrastructure Guides' "Trust Stores"
Places passwords are stored:
GNOME networking passwords are stored in plaintext in files in /etc/NetworkManager/system-connections
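To see which Wi-Fi pre-shared keys are sitting there (the files are root-readable only; this assumes NetworkManager stored the PSK in the connection file rather than in a keyring):
sudo grep -r '^psk=' /etc/NetworkManager/system-connections/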
MEGA password discussion
MEGAchat: Technical Security Primer
libsecret-based clients via the Freedesktop.org secret storage DBus API ?
KeePassXC 2.5.x can be used as a vault service by libsecret: https://keepassxc.org/blog/2019-10-26-2.5.0-released/ KeePassXC as "secret service"
KeePassXC password manager can supply SSH keys to an SSH agent: KeePassXC and SSH.
Run "ssh-add -l" or "ssh-add -L" to see all keys available through ssh-agent.
Run "ssh-add -s filename.pkcs11" to add a digital certificate to ssh-agent.
"nmap --script ssl-cert localhost" gives me one cert used by port 25 SMTP, called "mint" or "DNS:Mint".
"nmap --script ssl-enum-ciphers localhost" gives me TLS ciphers used by port 25 SMTP, port 631 CUPS.
Security Test / Audit
Lynis
David Mytton's "80+ Linux Monitoring Tools for SysAdmins"
tcpdump:
Daniel Miessler's "A tcpdump Tutorial and Primer with Examples"
"sudo tcpdump -i lo -A | grep Host:"
iptraf
iftop
ntop
netstat: "sudo netstat -atupl"
lsof: "sudo lsof -i" to see established connections.
ss: "sudo ss -lptu".
NixCraft's "ss command: Display Linux TCP / UDP Network/Socket Information"
NixCraft's "Linux: 25 Iptables Netfilter Firewall Examples For New SysAdmins" (see "27. Testing Your Firewall")
nethogs: "sudo nethogs"
ngrep
auditd
- unix-privesc-check:
Download from pentestmonkey / unix-privesc-check
Run via:
sudo ./upc.sh --help | less
sudo ./upc.sh --color --type all
Results tend to be very repetitive. For example, okay, the 100,000 files under my home directory all have write permission for everyone in my group. Would be nice if it didn't report that 100,000 times. So:
cd unix-privesc-check-master/lib/checks/enabled/all
rm group_writable
rm privileged_writable
and run it again. It ran for almost 2 hours, didn't give any output colored red, so I guess it didn't find anything serious. - Tiger UN*X security checking system:
Savannah's "Tiger UNIX security tool - Summary"
nongnu.org's "Tiger - The Unix security audit and intrusion detection tool"
Checks lots of stuff. I ran it from Start menu, but see "man tiger" about running it from CLI. Ran for 1 hour on my laptop, and then just exited at the end, the CLI window disappeared. Left a 5.7 MB report file in /var/log/tiger directory. Definitely read the report; all kinds of information and warnings in there. But I think most of it is just the way the system is configured.
"sudo tiger -h"
After I got local CLI mail working, started getting daily reports from tiger in mail. It's running every HOUR, via cron ! Edit /etc/cron.d/tiger to change that.
CERT's "Intruder Detection Checklist"
See the "Port scanning and router testing" section of my "Computer Security and Privacy" page.
SEI's "Steps for Recovering from a UNIX or NT System Compromise" (PDF)
Miscellaneous
Throttling network bandwidth, for testing purposes:
"sudo tc qdisc add dev enp19s0 root tbf rate 32kbit latency 50ms burst 770"
"sudo tc qdisc delete dev enp19s0 root"
Brendan Gregg's "Linux Performance Tools" diagram