Light notes

Quick notes

This is a special note listing various tips that are too short for a single “light note”, since I only list two light notes per page.


How to import ‘recent’ Seamonkey and Waterfox profiles in recent Firefox releases

Locate the folder of your Seamonkey or Waterfox profile.
On Linux systems, it’s in ~/.mozilla/seamonkey and ~/.waterfox respectively.
On Windows, a random guess would be %APPDATA%\Mozilla\Seamonkey\Profiles or %APPDATA%\Waterfox\Profiles.

Check that key4.db and logins.json are present. If only key3.db is present, your installation is too old and won’t be imported correctly.
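A quick check, to run from inside the profile folder (a sketch; the messages are mine, not Firefox’s) :

```shell
# Run inside the Seamonkey/Waterfox profile folder.
# key4.db + logins.json => the profile should be importable.
# Only key3.db       => too old for a direct import.
if [ -f key4.db ] && [ -f logins.json ]; then
    echo "Profile looks importable"
else
    echo "Profile too old or incomplete"
fi
```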

If these files are present, copy the profile folder into ~/.mozilla/firefox (or %APPDATA%\Mozilla\Firefox\Profiles).
In the same folder, you should see a profiles.ini file. Back it up and edit it.

This should look like this :
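If the file was never edited before, it typically looks something like this (the flags, profile name and path below are illustrative, not your actual values) :

```ini
[General]
StartWithLastProfile=1

[Profile0]
Name=default
IsRelative=1
Path=qwerty6789.default
Default=1
```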





The ‘Path’ will be different on your installation.

Add a [ProfileN] section where N is the next Profile number.

For example, if :

  • the profile folder you just copied is named abcdef12345.default
  • you have [Profile0] and [Profile1] sections in your profiles.ini file

Then add the following
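Using the example values above, the new section would look something like this (a sketch; IsRelative=1 assumes the Path is relative to the profiles folder) :

```ini
[Profile2]
Name=toimport
IsRelative=1
Path=abcdef12345.default
```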


The Name= is not very important, but remember it : you’ll have to choose it in the migration wizard.

Once done, close every Firefox window and then start Firefox with the --migration argument :

firefox --migration

On Windows, you’ll have to open a PowerShell in the Firefox installation folder, located in Program Files.

This should open a ‘Migration window’. From this point :

  • Select Firefox
  • Click on Next >
  • Select toimport
  • Click on Next >
  • Then click on Finish

toimport is the name set in the profiles.ini just before.

Firefox should open the main page by now.
Check the “Logins and passwords” to ensure that, at least, your passwords were imported successfully.
Then check your bookmarks.

If the name toimport didn’t appear, check that you didn’t mess up the profiles.ini configuration. If that’s not the case, maybe your profile really is too old for a direct Firefox import.


standard_init_linux.go:207: exec user process caused "exec format error"

Either :

  • the image was built for a different CPU architecture than the host’s (e.g. running an ARM image on an x86 machine), or
  • the entrypoint script is missing its shebang line (e.g. #!/bin/sh).
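For the architecture case, one way to check (a sketch; /bin/sh stands in for your container’s entrypoint binary) is to read the ELF header, whose e_machine field at offset 0x12 encodes the target architecture :

```shell
# ELF magic is 7f 45 4c 46; the two bytes at offset 0x12 give the target
# architecture (3e 00 = x86-64, b7 00 = aarch64, 28 00 = 32-bit ARM).
od -A x -t x1 -N 20 /bin/sh
# Compare with the host architecture :
uname -m
```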

View the content of a running container, or post-mortem

docker export containerid -o /path/to/file.tar

/path/to/file.tar will be an uncompressed tar snapshot of the / directory of your container.

No network during builds, while using custom networks

One solution would be to use the ‘host’ network just for the build, by adding a network: host node below your build node.

Complete example :

version: '3.4'

services:
  synapse:
    image: myy/synapse:latest
    build:
      context: ./build-synapse
      network: host
    networks:
      myynet:
        ipv6_address: fc00::105

networks:
  myynet:
    external: true

In this example, the Dockerfile path is ./build-synapse/Dockerfile
Also, the network myynet is not used during the build

Let’s encrypt (letsencrypt)

requests.exceptions.SSLError: HTTPSConnectionPool(host='', port=443)

The complete error could be something like :

Max retries exceeded with url: /directory (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))

Check your /etc/hosts for an entry overriding that host.
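For instance, assuming the host in question is the Let’s Encrypt ACME v2 endpoint (acme-v02.api.letsencrypt.org), a quick grep will show any override :

```shell
# Any match here short-circuits DNS resolution for that name.
grep -n 'letsencrypt' /etc/hosts || echo "No letsencrypt override in /etc/hosts"
```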

Then compare your result of :

$ nslookup

Between your current machine and another machine.
If the results are different, you might want to try using the same DNS server as your other machine, by :

  1. Modifying /etc/resolv.conf for quick-testing and adding a nameserver W.X.Y.Z entry at the top.
  2. Re-executing your certbot script.

If that works, configure your network to use the same DNS server through the standard configuration files/utilities, since /etc/resolv.conf generally doesn’t persist after a reboot (or a DHCP lease renewal).
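As one example (an assumption about your setup), on systems running systemd-resolved, the persistent equivalent lives in /etc/systemd/resolved.conf :

```ini
# /etc/systemd/resolved.conf
# DNS= is the persistent equivalent of the temporary 'nameserver' line;
# 9.9.9.9 is a placeholder, use the DNS server that worked for you.
[Resolve]
DNS=9.9.9.9
```

Then restart the resolver with systemctl restart systemd-resolved.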

If that doesn’t work, try passing a different ACME server URL to certbot, through its --server option.

Also make sure that your firewall isn’t blocking outgoing HTTPS connections, by checking with :

curl -L


I can connect to SSH but the port is wrong and the ssh.service is disabled ?

Turns out that some distributions install an ssh.socket in /etc/systemd/system/ that not only ignores your port configuration in /etc/ssh/sshd_config, but also blocks ssh.service from running !

I’d like to say :
Just rm /etc/systemd/system/
BUT ! It turns out that I got burned by this issue after a standard Debian unstable system update and a simple reboot… which means the file can easily come back after a system update.

So… one alternative is to create a custom SSH service unit in /etc/systemd/system/ with a name like donttouchmyssh.service.
The content of /etc/systemd/system/donttouchmyssh.service should look like this :

[Unit]
Description=OpenBSD Secure Shell server
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target auditd.service

[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/usr/sbin/sshd -t
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target

The unit is based on /lib/systemd/system/ssh.service, but ours doesn’t advertise itself as ssh.service or sshd.service, meaning that ssh.socket won’t block it.

Test it by running :

systemctl start donttouchmyssh

And then try to connect on the SSH port you configured in the server /etc/ssh/sshd_config file.

If you can connect correctly, set this service to execute on startup :

systemctl enable donttouchmyssh

Reboot and retry to connect on the configured port.

If you can, then you can use your firewall to block port 22 connections !
Leave port 22 reachable from one machine, or one interface, that you can still access in case your custom service fails to start after another system update.

If you cannot, check for errors using journalctl -u donttouchmyssh and systemctl status donttouchmyssh (with sudo if required).


Save / Restore

If you don’t have an iptables rules save/restore service, here’s a quick one for IPv4 and IPv6 rules.

In these services, rules will be loaded from and saved to /etc/iptables/v4-rules and /etc/iptables/v6-rules, so prepare the environment first, like this :

mkdir /etc/iptables
iptables-save > /etc/iptables/v4-rules
ip6tables-save > /etc/iptables/v6-rules

After that, create two services for loading and restoring IPv4 and IPv6 rules. The services must be saved in /etc/systemd/system.
You can name them iptables.service and ip6tables.service for example, but the name is up to you.


For iptables.service :

[Unit]
Description=Iptables IPv4 rules save/load service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c "iptables-restore < /etc/iptables/v4-rules"
ExecStop=/bin/sh -c "iptables-save > /etc/iptables/v4-rules"

[Install]
WantedBy=multi-user.target

For ip6tables.service :

[Unit]
Description=Iptables IPv6 rules save/load service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c "ip6tables-restore < /etc/iptables/v6-rules"
ExecStop=/bin/sh -c "ip6tables-save > /etc/iptables/v6-rules"

[Install]
WantedBy=multi-user.target

Type=oneshot with RemainAfterExit=yes keeps the unit ‘active’ after loading the rules, so the save (ExecStop) only runs when the service is stopped or the system shuts down.


Test the services using :

systemctl start iptables && systemctl start ip6tables

Then check your firewall rules, using iptables -L, iptables -t nat -L, ip6tables -L and ip6tables -t nat -L.

If the rules appear OK, enable the services on startup and reboot :

systemctl enable iptables
systemctl enable ip6tables

On reboot check the rules again, just to be sure.

If the rules aren’t good

If the rules are not good, correct them, save them again to /etc/iptables/v4-rules and /etc/iptables/v6-rules using iptables-save and ip6tables-save respectively, then restart the iptables and ip6tables services like this :

systemctl restart iptables
systemctl restart ip6tables

If the problem persists, check the content of the files in /etc/iptables before and after the systemctl restart, in order to understand what’s going on.

Why is it blocking ?

If you happen to wonder why some packets are blocked in INPUT or OUTPUT, one simple way to get some clues is to set up a catch-all LOG rule :


For incoming packets :

iptables -A INPUT -j LOG --log-level 6

For outgoing packets :

iptables -A OUTPUT -j LOG --log-level 6

Other cases

You can also put your catch-all LOG rule in any rule chain, to see if the chain is traversed :

iptables -A YOUR_RULE_CHAIN -j LOG --log-level 6

If you’re afraid that some rules might send the traffic to other rule chains before it reaches the LOG rule, you can set up the LOG rule as the first rule of your chain. This will break your rule chain temporarily, though.

iptables -I YOUR_RULE_CHAIN -j LOG --log-level 6

To check the logs, you can use journalctl on SystemD systems :

journalctl -k

Or just check dmesg or /var/log/messages on other systems.

/var/log/messages requires a syslog daemon logging kernel messages to this filepath.



(config for tls_private_key_path): No such file or directory even though no_tls: True is added

So, if your server cannot start with the following error :

matrix_1  | 2019-10-24 20:27:50,458 - twisted - 171 - ERROR -  - Traceback (most recent call last):
matrix_1  | 2019-10-24 20:27:50,459 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/app/", line 263, in start
matrix_1  | 2019-10-24 20:27:50,459 - twisted - 171 - ERROR -  -     refresh_certificate(hs)
matrix_1  | 2019-10-24 20:27:50,460 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/app/", line 212, in refresh_certificate
matrix_1  | 2019-10-24 20:27:50,461 - twisted - 171 - ERROR -  -     hs.config.read_certificate_from_disk(require_cert_and_key=True)
matrix_1  | 2019-10-24 20:27:50,462 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/", line 221, in read_certificate_from_disk
matrix_1  | 2019-10-24 20:27:50,462 - twisted - 171 - ERROR -  -     self.tls_private_key = self.read_tls_private_key()
matrix_1  | 2019-10-24 20:27:50,463 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/", line 487, in read_tls_private_key
matrix_1  | 2019-10-24 20:27:50,465 - twisted - 171 - ERROR -  -     private_key_pem = self.read_file(private_key_path, "tls_private_key_path")
matrix_1  | 2019-10-24 20:27:50,467 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/", line 135, in read_file
matrix_1  | 2019-10-24 20:27:50,468 - twisted - 171 - ERROR -  -     cls.check_file(file_path, config_name)
matrix_1  | 2019-10-24 20:27:50,469 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/", line 117, in check_file
matrix_1  | 2019-10-24 20:27:50,469 - twisted - 171 - ERROR -  -     % (file_path, config_name, e.strerror)
matrix_1  | 2019-10-24 20:27:50,470 - twisted - 171 - ERROR -  - synapse.config._base.ConfigError: Error accessing file '/data/' (config for tls_private_key_path): No such file or directory

Even though you seem to have enabled no_tls: true in your /data/homeserver.yaml, check the following two things :

  1. The directory you are actually mounting has a file named homeserver.yaml.
    While this seems dumb, you have to understand that :
  2. If you set the environment variable SYNAPSE_SERVER_NAME, a temporary configuration file will be generated and will overshadow your current configuration file.
    I.e. if you set SYNAPSE_SERVER_NAME, the server won’t look for your /data/homeserver.yaml but will use an auto-generated one instead !

Migrate your SQLite DB to a PostgreSQL DB using Docker

First, ensure that your Synapse Docker /data volume is accessible from the host file system, or at least that you can access this directory to create the new configuration file and do a database copy.

Here, I’ll assume that you’re in a directory where the sub-directory data is mounted as /data in your Synapse Docker instance.

Quick and dirty

  • Create a volume for the PostgreSQL data

    docker volume create synapse_db_data

If you know what you’re doing and want to use a host directory for the database files, just replace the following synapse_db_data references with /absolute/path/to/your/host/directory, like this : -v /absolute/path/to/your/host/directory:/var/lib/postgresql/data

  • Create a temporary environment file for the PostgreSQL configuration through Docker

    cat <<EOF > postgres.env
    POSTGRES_USER=your_db_user
    POSTGRES_PASSWORD=Y0urP4ssw0rd
    POSTGRES_DB=your_db_name
    EOF
    # If you actually use Y0urP4ssw0rd as a password, you're an idiot
  • Fire the PostgreSQL server

    docker run --rm --env-file postgres.env -v synapse_db_data:/var/lib/postgresql/data --name synapsedb -d postgres

If the instance doesn’t start, remove --rm, then type docker logs synapsedb to view the logs. Afterwards, type docker container rm synapsedb to get rid of the stopped container, else you won’t be able to recreate it.

  • Copy and modify your Synapse server homeserver.yaml configuration to use PostgreSQL

    cp data/homeserver.yaml data/homeserver-pgsql.yaml

Then modify the section that looks like this :

## Database ##

database:
  # The database engine name
  name: "sqlite3"
  # Arguments to pass to the engine
  args:
    # Path to the database
    database: "/data/homeserver.db"

and make it look like this :

database:
  # The database engine name
  name: "psycopg2"
  # Arguments to pass to the engine
  args:
    user: your_db_user
    password: Y0urP4ssw0rd
    database: your_db_name
    host: synapsedb
  • Stop your current Synapse server

    docker stop your_synapse_container_tag_or_numeric_id
  • Copy your SQLite database just in case

    cp data/homeserver.db{,.bak}
  • Run the migration script with a new synapse instance

Replace myy/synapse-image:latest-intel with the reference of your Synapse image.

docker run --name synapse -v $PWD/data:/data --entrypoint synapse_port_db -d myy/synapse-image:latest-intel --sqlite-database /data/homeserver.db --postgres-config /data/homeserver-pgsql.yaml

If you haven’t built your own synapse-image, either use the official one, or build one using my guide.

  • Follow the migration

    docker logs -f synapse
  • Shutdown your PostgreSQL instance

    docker stop synapsedb
  • Launch the whole thing with a docker-compose.yml

Replace myy/synapse-image:latest-intel with the reference of your Synapse image.

( Add docker secrets commands )

version: '3'

services:
  synapse:
    image: myy/synapse:latest-intel
    build:
      context: ./build-synapse
      network: host
    volumes:
      - "./data:/data"
    networks:
      myynet:
        ipv6_address: fc00::110
  synapsedb:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_PASSWORD=Y0urP4ssw0rd
      - POSTGRES_DB=your_db_name

networks:
  myynet:
    external: true

volumes:
  pgdata:


Use a specific network for these containers

If you want to use a specific network for the generated containers, add this to the commands :

--net networkname --ip your.ip.v.4 --ip6 your::ipv6

Example with the PostgreSQL instance :

docker run --net networkname --ip your.ip.v.4 --ip6 your::ipv6 --rm --env-file postgres.env -v synapse_db_data:/var/lib/postgresql/data --name synapsedb -d postgres

If you want to use a specific network in the docker-compose.yml, you’ll have to add the following nodes under each service :

    networks:
      networkname:
        ipv4_address: your.ip.v.4
        ipv6_address: your::ipv6

And add the following at the end of the file :

networks:
  networkname:
    external: true

Example :

version: '3'

services:
  synapse:
    image: myy/synapse:latest-intel
    build:
      context: ./build-synapse
      network: host
    volumes:
      - "./data:/data"
    networks:
      networkname:
        ipv4_address: your.ip.v.4
        ipv6_address: your::ipv6
  synapsedb:
    image: postgres
    volumes:
      - synapse_pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=coincoin

networks:
  networkname:
    external: true

volumes:
  synapse_pg_data:

Check Docker compose official documentation for more details.

Rant : SystemD is too complex for Linux distributors

I could say too complex for Linux distributions, but it looks more like a lack of understanding of how SystemD works, combined with the fact that unit files get installed without asking, leading to blocking issues.

The story

Around the beginning of September, this server became inaccessible. I guess it happened during this period, since I checked my blog during August and it was still up !
Then holidays ended, I went back to work and… I only checked my blog again at the start of October.

Note that I receive emails from the hosting company I use if the servers become “inaccessible” from their point of view.
But they only check their local network.

So, I tried to reach … “Connection refused”….
Ok, let’s start a SSH shell ! “Connection refused”.
!!? What’s happening !? Was my server hijacked !? What the fuck !

I then tried to use my provider’s (Scaleway) “Web console” : Nothing.
(Spoiler alert : Turns out that Scaleway web console sucks)

Ugh… Okay… I got locked out from my server ? Is that some kind of “HA-HA ! You forgot to secure this part of your server ! pwned you, m0r0n !” ?

Let’s try a nmap -sS -v on my server !

Discovered open port 22/tcp

Wait, what !? What’s running on port 22 !? The SSH server is supposed to run on another port !

… I changed my client ~/.ssh/config to use port 22 instead of the “configured” port. Then I retried to get a SSH Shell and…
Got a shell on my server !

… ?

Alright… ps auxww… nothing unusual…
Checked dmesg, checked journalctl… Nothing unusual !

Threw a tcpdump not port 22 for kicks… nothing unusual…

Maybe the machine wasn’t hacked ? I’m starting to smell some botched system update now…

… Let’s see if something happens if we put the system back together for the moment… I mean, I’m just hosting a static blog whose content is available on a public git repository, which I can redeploy at any moment, so if it blows up, I’ll order a new server.

docker container ls … Ok, the containers are down… iptables -L … The firewall was reset ?

Fine, ran my script to reestablish the firewall, deleted all the containers, updated the docker images, deployed the new instances with docker-compose AAAND, then, I tried to reach
Success !
I got my blog back !

Updated the SSL certificates, the OCSP staples : Alright !

Then I tried to create a script to automate the blog updates as much as possible, and discovered that there are some big differences between Hugo 0.46 on my machine and Hugo 0.56 on the server, which fucked up my templates real bad.
After a few hours spent checking each Hugo release note to understand which update broke the templates, I pinpointed the issue, updated the templates and voilà : my blog was restored !

Now, I can focus on the real issue !


Checked /etc/ssh/sshd_config … It’s clearly written to listen on another port.

I don’t get it… Either sshd is executed with specific instructions to not listen on this port or there’s a rogue ssh server !

root# ps auxww | grep ssh
root      2119  0.0  0.3  14272  7572 ?        Ss   18:29   0:00 sshd: root@pts/0

… ? sshd: root@pts/0 … ? Whaaat ?

Maybe there’s an environment variable fucking with the sshd server ?

root# ps auxwwe | grep sshd
root      2119  0.0  0.3  14272  7572 ?        Ss   18:29   0:01 sshd: root@pts/0
root# systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:sshd(8)


… Why don’t I see any real sshd server with ps auxww ? Here, it’s listing my connection as a server… ??? Whut ?
I can’t see anything related to sshd “as-is”.

Maybe it’s executed from the initrd file ? Hmm… This server is certainly booted from the network so I’m not going to access the initrd easily… (Well, turns out that I could actually, but didn’t know that back then).
But even then, I should see it in the list…

Wait, if I run the ssh service with systemctl start ssh… and then try to connect on the good port… Success !


Then WHY !? Why did SystemD refuse to execute this service on startup !?

… Maybe it’s not executed on startup ?

How do I check the services ran on startup again ?

root# systemctl -t service --state=active
UNIT                                               LOAD   ACTIVE SUB     DESCRIPTION                                                                  
blk-availability.service                           loaded active exited  Availability of block devices                                                
containerd.service                                 loaded active running containerd container runtime                                                 
cron.service                                       loaded active running Regular background program processing daemon                                 
dbus.service                                       loaded active running D-Bus System Message Bus                                                     
docker.service                                     loaded active running Docker Application Container Engine                                          
exim4.service                                      loaded active running LSB: exim Mail Transport Agent                                               
getty@tty1.service                                 loaded active running Getty on tty1                                                                
getty@ttyAMA0.service                              loaded active running Getty on ttyAMA0                                                             
haveged.service                                    loaded active running Entropy daemon using the HAVEGE algorithm                                    
kmod-static-nodes.service                          loaded active exited  Create list of required static device nodes for the current kernel           
lvm2-monitor.service                               loaded active exited  Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
ntp.service                                        loaded active running Network Time Service                                                         
polkit.service                                     loaded active running Authorization Manager                                                        
rsyslog.service                                    loaded active running System Logging Service                                                       
serial-getty@ttyAMA0.service                       loaded active running Serial Getty on ttyAMA0                                                      
ssh@0-10.X.Y.Z:22-A.B.C.D:39924.service loaded active running OpenBSD Secure Shell server per-connection daemon                            
sysstat.service                                    loaded active exited  Resets System Activity Data Collector                                        
systemd-journal-flush.service                      loaded active exited  Flush Journal to Persistent Storage                                          
systemd-journald.service                           loaded active running Journal Service                                                              
systemd-logind.service                             loaded active running Login Service                                                                
systemd-modules-load.service                       loaded active exited  Load Kernel Modules                                                          
systemd-networkd-wait-online.service               loaded active exited  Wait for Network to be Configured                                            
systemd-networkd.service                           loaded active running Network Service                                                              
systemd-random-seed.service                        loaded active exited  Load/Save Random Seed                                                        
systemd-remount-fs.service                         loaded active exited  Remount Root and Kernel File Systems                                         
systemd-resolved.service                           loaded active running Network Name Resolution                                                      
systemd-sysctl.service                             loaded active exited  Apply Kernel Variables                                                       
systemd-sysusers.service                           loaded active exited  Create System Users                                                          
systemd-tmpfiles-setup-dev.service                 loaded active exited  Create Static Device Nodes in /dev                                           
systemd-tmpfiles-setup.service                     loaded active exited  Create Volatile Files and Directories                                        
systemd-udev-trigger.service                       loaded active exited  udev Coldplug all Devices                                                    
systemd-udevd.service                              loaded active running udev Kernel Device Manager                                                   
systemd-update-utmp.service                        loaded active exited  Update UTMP about System Boot/Shutdown                                       
systemd-user-sessions.service                      loaded active exited  Permit User Sessions                                                         
ufw.service                                        loaded active exited  Uncomplicated firewall                                                       
unattended-upgrades.service                        loaded active running Unattended Upgrades Shutdown                                                 
user-runtime-dir@0.service                         loaded active exited  User Runtime Directory /run/user/0                                           
user@0.service                                     loaded active running User Manager for UID 0     

The IPs were replaced by 10.X.Y.Z and A.B.C.D in this copy.

… What the fuck ? There’s a service executed for my IP, but there’s no ssh.service running the server ?

This shit is insane ! What the fuck is wrong with SystemD !?

No wait… maybe I’m blaming SystemD while it’s actually System-V generating issues.

root# grep sshd /etc/* -r
/etc/default/ssh:# Options to pass to sshd
/etc/init.d/ssh:# Provides:             sshd
/etc/init.d/ssh:test -x /usr/sbin/sshd || exit 0
/etc/init.d/ssh:( /usr/sbin/sshd -\? 2>&1 | grep -q OpenSSH ) 2>/dev/null || exit 0
/etc/init.d/ssh:    # forget it if we're trying to start, and /etc/ssh/sshd_not_to_be_run exists
/etc/init.d/ssh:    if [ -e /etc/ssh/sshd_not_to_be_run ]; then 
/etc/init.d/ssh:            log_action_msg "OpenBSD Secure Shell server not in use (/etc/ssh/sshd_not_to_be_run)" || true
/etc/init.d/ssh:    if [ ! -d /run/sshd ]; then
/etc/init.d/ssh:        mkdir /run/sshd
/etc/init.d/ssh:        chmod 0755 /run/sshd
/etc/init.d/ssh:    if [ ! -e /etc/ssh/sshd_not_to_be_run ]; then
/etc/init.d/ssh:        /usr/sbin/sshd $SSHD_OPTS -t || exit 1
/etc/init.d/ssh:        log_daemon_msg "Starting OpenBSD Secure Shell server" "sshd" || true
/etc/init.d/ssh:        if start-stop-daemon --start --quiet --oknodo --chuid 0:0 --pidfile /run/ --exec /usr/sbin/sshd -- $SSHD_OPTS; then
/etc/init.d/ssh:        log_daemon_msg "Stopping OpenBSD Secure Shell server" "sshd" || true
/etc/init.d/ssh:        if start-stop-daemon --stop --quiet --oknodo --pidfile /run/ --exec /usr/sbin/sshd; then
/etc/init.d/ssh:        log_daemon_msg "Reloading OpenBSD Secure Shell server's configuration" "sshd" || true
/etc/init.d/ssh:        if start-stop-daemon --stop --signal 1 --quiet --oknodo --pidfile /run/ --exec /usr/sbin/sshd; then
/etc/init.d/ssh:        log_daemon_msg "Restarting OpenBSD Secure Shell server" "sshd" || true
/etc/init.d/ssh:        start-stop-daemon --stop --quiet --oknodo --retry 30 --pidfile /run/ --exec /usr/sbin/sshd
/etc/init.d/ssh:        if start-stop-daemon --start --quiet --oknodo --chuid 0:0 --pidfile /run/ --exec /usr/sbin/sshd -- $SSHD_OPTS; then
/etc/init.d/ssh:        log_daemon_msg "Restarting OpenBSD Secure Shell server" "sshd" || true
/etc/init.d/ssh:        start-stop-daemon --stop --quiet --retry 30 --pidfile /run/ --exec /usr/sbin/sshd || RET="$?"
/etc/init.d/ssh:                if start-stop-daemon --start --quiet --oknodo --chuid 0:0 --pidfile /run/ --exec /usr/sbin/sshd -- $SSHD_OPTS; then
/etc/init.d/ssh:        status_of_proc -p /run/ /usr/sbin/sshd sshd && exit 0 || exit $?
grep: /etc/motd: No such file or directory
/etc/pam.d/sshd:# access limits that are hard to express in sshd_config.
/etc/ssh/sshd_config:# See the sshd_config(5) manpage for details
/etc/ssh/sshd_config:# Use these options to restrict which interfaces/protocols sshd will bind to
/etc/ssh/sshd_config.ucf-dist:# $OpenBSD: sshd_config,v 1.103 2018/04/09 20:41:22 tj Exp $
/etc/ssh/sshd_config.ucf-dist:# This is the sshd server system-wide configuration file.  See
/etc/ssh/sshd_config.ucf-dist:# sshd_config(5) for more information.
/etc/ssh/sshd_config.ucf-dist:# This sshd was compiled with PATH=/usr/bin:/bin:/usr/sbin:/sbin
/etc/ssh/sshd_config.ucf-dist:# The strategy used for options in the default sshd_config shipped with
/etc/ssh/sshd_config.ucf-dist:#PidFile /var/run/

Ha ! /etc/init.d/ssh ! Maybe it’s that stupid service that’s generating issues ! … Why is Debian mixing System-V with SystemD ? Either go System-V or go SystemD… Don’t do both, it’s irritating…

Ok, let’s check if this script is actually run ! Let’s edit it and add an echo MEOW > /tmp/stoopid after set -e.

root# /etc/init.d/ssh restart
[ ok ] Restarting ssh (via systemctl): ssh.service.
root# cat /tmp/stoopid
root# systemctl status ssh
● ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-10-25 19:48:28 UTC; 13min ago
     Docs: man:sshd(8)
  Process: 3914 ExecStartPre=/usr/sbin/sshd -t (code=exited, status=0/SUCCESS)
 Main PID: 3915 (sshd)
    Tasks: 1 (limit: 2377)
   Memory: 1016.0K
   CGroup: /system.slice/ssh.service
           └─3915 /usr/sbin/sshd -D

Oct 25 19:48:27 myy-blargh systemd[1]: Starting OpenBSD Secure Shell server...
Oct 25 19:48:28 myy-blargh sshd[3915]: Server listening on port N.
Oct 25 19:48:28 myy-blargh sshd[3915]: Server listening on :: port N.
Oct 25 19:48:28 myy-blargh systemd[1]: Started OpenBSD Secure Shell server.

The actual port was replaced by N in this copy of the logs.

Hmm… If I reload it, SystemD considers the service loaded as well… Both systems are cooperating correctly then ?

Alright, let’s reboot and see if the file is present on reboot !

After reboot :

root# cat /tmp/stoopid
cat: /tmp/stoopid: No such file or directory

… This script is not executed ?? … Maybe it’s the initrd after all ?

Turns out that Scaleway mounts the initrd file in /run/initramfs.

root# grep ssh /run/initramfs/*
Binary file /run/initramfs/bin/busybox matches
/run/initramfs/functions:start_sshd() {
/run/initramfs/functions:    run mkdir -p /etc/dropbear /root/.ssh
/run/initramfs/functions:    run chmod 700 /root/.ssh
/run/initramfs/functions:    run sh -ec "scw-metadata --cached | grep 'SSH_PUBLIC_KEYS_.*_KEY' | cut -d'=' -f 2- | tr -d \' > /root/.ssh/authorized_keys"
/run/initramfs/functions:    run sh -ec "scw-metadata --cached | grep 'TAGS_.*=AUTHORIZED_KEY' | cut -d'=' -f 3- | sed 's/_/\ /g' >> /root/.ssh/authorized_keys"
/run/initramfs/init:log_begin_msg "Checking metadata for debug sshd (dropbear)"
/run/initramfs/init:    log_success_msg "Starting a debug sshd"
/run/initramfs/init:    start_sshd
/run/initramfs/init:    ewarn "You can connect to your server with 'scw' or 'ssh'"
/run/initramfs/init:    ewarn " -- ssh root@${PUBLIC_IP_ADDRESS}"
/run/initramfs/init:    ewarn "You can connect to your server with 'ssh'"
/run/initramfs/init:    ewarn " -- ssh root@${PUBLIC_IP_ADDRESS}"
/run/initramfs/init:# Ensure sshd is killed if running
Binary file /run/initramfs/lib/aarch64-linux-gnu/ matches
Binary file /run/initramfs/usr/sbin/dropbear matches
Binary file /run/initramfs/usr/bin/dropbearkey matches

Ooooh, here you go ! Let’s edit /run/initramfs/init and check how they # Ensure sshd is killed if running :

# Ensure sshd is killed if running
if [ "$(pgrep dropbear)" != "" ]; then
    run killall dropbear
fi

Hmm :

root# ps auxww | grep drop
root      4016  0.0  0.0   5796   648 pts/0    S+   20:07   0:00 grep drop

Ok… Nothing… I don’t get it…
Let’s check for rootkits, just in case.

root# apt install chkrootkit
root# chkrootkit

No problems detected…

I’m tired… I’m tired of this shit, there’s a fucking SSH server running on my machine, I have NO idea who’s spawning it !

Oh wait ! I forgot about lsof !

lsof -t

No, wrong one… Couldn’t they use netstat syntax ?

lsof -i
systemd      1            root   52u  IPv6  17371      0t0  TCP *:22 (LISTEN)
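With hindsight, the quickest way to pin a mystery listener on a given port is to ask for it directly, instead of scrolling through the full lsof -i listing. Something like this (commands from memory, output omitted) :

```shell
# Who is listening on TCP port 22 ? -nP skips DNS and service-name lookups.
root# lsof -nP -iTCP:22 -sTCP:LISTEN

# ss, from iproute2, gives the same answer ; -p shows the owning process.
root# ss -tlnp 'sport = :22'
```

Either one would have printed “systemd” in the first column right away.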

O_O … …

O_O !

YOU’RE FUCKING KIDDING ME !? SystemD ITSELF IS LISTENING ON PORT 22 !? But the SSH service is dead ! HOW ?

A little search on the internet got me this gem :

“So, ive found out that sshd.socket was enabled and this was the cause”

root# find /etc -name "ssh*.socket"
root# cat /etc/systemd/system/
Description=OpenBSD Secure Shell server socket
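For reference, a socket-activated SSH setup on Debian-style systems boils down to a unit file like this (a reconstruction from memory, not the exact file from my server) :

```ini
# ssh.socket — SystemD itself listens on the port,
# and only spawns sshd when a connection comes in.
[Unit]
Description=OpenBSD Secure Shell server socket
Before=ssh.service
Conflicts=ssh.service

[Socket]
ListenStream=22
Accept=no

[Install]
WantedBy=sockets.target
```

Note the ListenStream=22 line : with socket activation, THAT is what decides the port. The Port directive in /etc/ssh/sshd_config is simply ignored.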




The real issue with SystemD

It’s OVERLY COMPLEX ! If you don’t remember all the commands and, most importantly, ALL THE WAYS SYSTEMD CAN START A SERVICE, YOU WILL NOT be able to understand what’s going on.

systemctl status ssh was indicating the status of the ssh.service file, not the status or presence of ssh.socket.

Understand that I ran a journalctl, searched for SSH, and saw sshd being started and receiving connections as sshd[PID], but not why, nor how it was started ! And the logs were about sshd[PID], not systemd[PID] !

So until you understand that SystemD can start services through socket activation (an overkill feature for simple servers), you will NEVER KNOW :

  • Why the SSH service is not started
  • Why there is an SSH server listening on a different port than the one set in its configuration
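The one command that would have answered both questions immediately, had I known to run it (command from memory, output paraphrased) :

```shell
# List every socket SystemD is listening on, and which service each one activates.
root# systemctl list-sockets
LISTEN       UNIT        ACTIVATES
[::]:22      ssh.socket  ssh.service
...
```

One line, and the whole mystery is solved : SystemD owns port 22 on behalf of ssh.service.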

Understand that I had a running server until the start of September, the server got rebooted for whatever reason and THEN, POOF, the SSH server started to listen on port 22.

So, the real issue with SystemD is that it’s TOO COMPLEX FOR DISTRIBUTORS !

It can do a ton of things and maybe, once you understand it perfectly, you’ll be happy to use all the bells and whistles…
If you need them, of course…

However :

  • systemctl status ssh just showed the ssh service as DEAD !
  • journalctl had some “SSH connections” entries, but didn’t show why it started an SSH service !
  • ps auxww | grep sshd made it look like SSH servers are run “on the fly”.

It’s those things combined which make me hate SystemD for server management.
If something breaks, I want AS MUCH INFORMATION AS POSSIBLE !

If SystemD starts an sshd server, systemctl status ssh or systemctl status sshd should show information about it !
I don’t give two shits about the file extension of the unit file triggering the execution of “sshd” !

That said, the issue here is not only due to SystemD’s overcomplexity. It’s also the fact that some distributors thought :

“Hey ! Let’s put a ssh.socket in /etc/systemd/system/ , so that the user will never know why his SSH server starts ignoring /etc/ssh/sshd_config !
This will be so much fun !”

The fact is : even if I remove this .socket file, it might just come back after an update ! And fuck up my system again !
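For the record, SystemD does have an answer to the “comes back after an update” problem : masking. Masking replaces the unit with a symlink to /dev/null in /etc/systemd/system/, which package updates are not supposed to touch (commands from memory) :

```shell
# Stop the socket now, and prevent ANY unit file from ever activating it again,
# even one reinstalled by a package update.
root# systemctl disable --now ssh.socket
root# systemctl mask ssh.socket
Created symlink /etc/systemd/system/ssh.socket → /dev/null.
```

Of course, knowing that masking exists is yet another thing you have to learn before you can trust your own server again.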

Do you understand the issue here ? I’M LOSING CONTROL OF THE SYSTEM ! Because distributors started to ship with an OVERCOMPLEX INIT, and started using features of this init system without understanding the consequences !

If I put CONFIGURATION DIRECTIVES in a CONFIGURATION FILE, I don’t want them to be OVERRIDDEN BY SOME RANDOM .whatever FILE used by my init system. If I want a different configuration, I’ll either edit the configuration file, or force the daemon to use another configuration file by editing the appropriate init file.

I’d appreciate it if distributions using SystemD went with a “least amount of SystemD unit files” approach on server configurations.

Look at “Clear Linux”, there’s a way to use SystemD while making things “Lean & Clean”.
Just check the /etc/ folder after installing Clear Linux :
It’s clean !
They don’t add tons of .service, .socket, .mount or .whatever extension systemd reacts on !
No, they put a clean and lean /etc directory, with only the strictly necessary files.
And it works !

That said, an init system this complex does not interest me. I’ll brush up my systemd-foo on Arch Linux, because I really need it there. But, still, I don’t give a shit about SystemD on my servers ATM.
If you run tons of microservices, maybe you do.
But me ? Nope. All my main public services run in docker containers and the only daemons and configurations I care about on my system are :

  • The SSH server
  • The firewall
  • The Docker containers

I could manage this with a busybox ‘init’ file…
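As a sketch only (paths and script names assumed, no supervision, no zombie reaping — a real setup would still exec busybox init or a tiny supervisor on top) :

```shell
#!/bin/sh
# Hypothetical rcS for a busybox-style init :
# the three things I actually care about, and nothing else.
/usr/sbin/sshd                                  # reads /etc/ssh/sshd_config, period
sh /etc/firewall.sh                             # my firewall rules (assumed script)
/usr/bin/dockerd >/var/log/docker.log 2>&1 &    # docker restarts my containers
```

Three lines. No sockets, no targets, no surprises.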

So I’ll start looking for another distro that I can deploy on my server, and which doesn’t use SystemD.
And if I can’t find one, I’ll look for one using SystemD with the least amount of unit files.

I’m done with Debian and SystemD.
And I’d like to be done playing detective to understand :

  • Why my server is not responding anymore ?
  • Why services are executing while ignoring their configurations ?
  • How can I avoid traps added by systemd updates ?