
Quick notes

This is a special note that lists various tips that are too short to get their own “light note”, since I only list two light notes per page.

Firefox

How to import ‘recent’ Seamonkey and Waterfox profiles in recent Firefox releases

Locate the folder of your Seamonkey or Waterfox profile.
On Linux systems, it’s in ~/.mozilla/seamonkey and ~/.waterfox respectively.
On Windows, a random guess would be %APPDATA%\Mozilla\Seamonkey\Profiles or %APPDATA%\Waterfox\Profiles.

Check that key4.db and logins.json are present. If only key3.db is present, your installation is too old and won’t be imported correctly.

If these files are present, copy the profile folder into ~/.mozilla/firefox (or %APPDATA%\Mozilla\Firefox\Profiles).
In the same folder, you should see a profiles.ini file. Back it up and edit it.
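
A simple copy is enough as a backup. On Linux, something like this should do (adjust the path on Windows) :

cp ~/.mozilla/firefox/profiles.ini ~/.mozilla/firefox/profiles.ini.bak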

This should look like this :

[Install4F96D1932A9F858E]
Default=rp4qlak7.default-release
Locked=1

[Profile1]
Name=default-release
IsRelative=1
Path=rp4qlak7.default-release
Default=1

[Profile0]
Name=default
IsRelative=1
Path=9ruyutsw.default

[General]
StartWithLastProfile=1
Version=2

The ‘Path’ will be different on your installation.

Add a [ProfileN] section where N is the next Profile number.

For example, if :

  • the profile folder you just copied is named abcdef12345.default
  • you have [Profile0] and [Profile1] sections in your profiles.ini file

Then add the following

[Profile2]
Name=toimport
IsRelative=1
Path=abcdef12345.default

The Name= value is not very important. Just remember it, as you’ll have to select it in the migration wizard.

Once done, close every window of Firefox and then start firefox with the --migration argument :

firefox --migration

On Windows, you’ll have to open a Powershell in the installation folder of Firefox, located in Program Files.

This should open a ‘Migration window’. From this point :

  • Select Firefox
  • Click on Next >
  • Select toimport
  • Click on Next >
  • Then click on Finish

toimport is the name set in the profiles.ini just before.

Firefox should then open its main page.
Check the “Logins and Passwords” section to ensure that, at least, your passwords were imported successfully.
Then check your bookmarks.

If the name toimport didn’t appear, check that you didn’t mess up the profiles.ini configuration. If the configuration looks correct, maybe your profile is really too old for a direct Firefox import.

Docker

standard_init_linux.go:207: exec user process caused "exec format error"

Either :

  • the image (or its entrypoint binary) was built for a CPU architecture different from the host’s (e.g. an ARM image run on an x86_64 machine), or
  • the entrypoint script lacks a proper shebang line (e.g. #!/bin/sh) on its first line.

View the content of a running container, or post-mortem

docker export -o /path/to/file.tar containerid

/path/to/file.tar will be a snapshot of the / directory of your container, stored as a plain (uncompressed) tar archive.
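
If you just want to poke around the snapshot, extracting it into a temporary directory is enough. A minimal sketch (the target directory is arbitrary) :

mkdir /tmp/container-rootfs
tar -xf /path/to/file.tar -C /tmp/container-rootfs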

No network during builds, while using custom networks

One solution would be to use the ‘host’ network just for the build, by adding a network: host node below your build node.

Complete example :

version: '3.4'

services:
  matrix:
    image: myy/synapse:latest
    build:
      context: ./build-synapse
      network: host
    networks:
      myynet:
        ipv6_address: fc00::105

networks:
  myynet:
    external: true

In this example, the Dockerfile path is ./build-synapse/Dockerfile.
Also, the network myynet is not used during the build.
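
If you build the image directly with the Docker CLI instead of docker-compose, the equivalent should be the --network flag of docker build. A sketch reusing the names from the example above :

docker build --network host -t myy/synapse:latest ./build-synapse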

Let’s encrypt (letsencrypt)

requests.exceptions.SSLError: HTTPSConnectionPool(host='acme-v02.api.letsencrypt.org', port=443)

The complete error could be something like :

Max retries exceeded with url: /directory (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))

Check your /etc/hosts for an acme-v02.api.letsencrypt.org entry that could override DNS.
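
A quick way to check (assuming the standard /etc/hosts location) :

grep letsencrypt /etc/hosts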

Then compare your result of :

$ nslookup
> acme-v02.api.letsencrypt.org

Between your current machine and another machine.
If the results are different, you might want to try using the same DNS server as your other machine, by :

  1. Modifying /etc/resolv.conf for quick-testing and adding a nameserver W.X.Y.Z entry at the top (see the sketch after this list).
  2. Re-executing your certbot script.
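
A rough sketch of step 1, reusing the W.X.Y.Z placeholder for the DNS server of your other machine :

# Quick test only : back up, then put the test resolver at the top
cp /etc/resolv.conf /etc/resolv.conf.bak
{ echo 'nameserver W.X.Y.Z'; cat /etc/resolv.conf.bak; } > /etc/resolv.conf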

If that works, configure your network to use the same DNS server, through the standard configuration files/utilities, since /etc/resolv.conf generally doesn’t persist after a reboot (or a DHCP lease renewal).

If that doesn’t work, try to use acme-v01.api.letsencrypt.org instead, by passing --server https://acme-v01.api.letsencrypt.org/directory to certbot.

Also make sure that your firewall isn’t blocking HTTPS egress connections, by checking up on https://google.com :

curl -L https://google.com

Systemd

I can connect to SSH but the port is wrong and the ssh.service is disabled ?

Turns out that some distributions install an ssh.socket in /etc/systemd/system/sockets.target.wants that not only ignores your port configuration in /etc/ssh/sshd_config but also blocks ssh.service from running !

I’d like to say :
Just rm /etc/systemd/system/sockets.target.wants/ssh.socket
BUT ! It turns out that I got burned by this issue after a standard Debian unstable system update and a simple reboot… which means that you can easily be burned by this after a system update.

So… one alternative is to create a custom SSH service unit in /etc/systemd/system/ with a name like donttouchmyssh.service.
The content of /etc/systemd/system/donttouchmyssh.service should look like this :

[Unit]
Description=OpenBSD Secure Shell server
Documentation=man:sshd(8) man:sshd_config(5)
After=network.target auditd.service
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run

[Service]
EnvironmentFile=-/etc/default/ssh
ExecStartPre=/usr/sbin/sshd -t
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/usr/sbin/sshd -t
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
RestartPreventExitStatus=255
Type=notify
RuntimeDirectory=sshd
RuntimeDirectoryMode=0755

[Install]
WantedBy=multi-user.target

The unit is based on /lib/systemd/system/ssh.service, but ours doesn’t advertise itself as ssh.service or sshd.service, meaning that ssh.socket won’t block it.

Test it by running :

systemctl start donttouchmyssh

And then try to connect on the SSH port you configured in the server /etc/ssh/sshd_config file.
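
For example, assuming you set Port 2222 in sshd_config (the user and host names are placeholders) :

ssh -p 2222 your_user@your.server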

If you can connect correctly, set this service to execute on startup :

systemctl enable donttouchmyssh

Reboot and retry to connect on the configured port.

If you can, then you can use your firewall to block port 22 connections !
Keep the port reachable from at least one machine, or one interface, that you can still use in case your custom service fails to start after another system update.
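
A possible sketch, assuming eth1 is an interface (a LAN or VPN one, for example) through which you can still reach the machine :

# Drop port 22 everywhere except on the trusted interface
iptables -A INPUT -p tcp --dport 22 ! -i eth1 -j DROP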

If you cannot, check for errors using journalctl -u donttouchmyssh and systemctl status donttouchmyssh (with sudo if required).

Iptables

Save / Restore

If you don’t have an iptables rules save/restore service, here’s a quick one for IPv4 and IPv6 rules.

In these services, rules will be loaded and saved from /etc/iptables/v4-rules and /etc/iptables/v6-rules, so prepare the environment beforehand like this :

mkdir /etc/iptables
iptables-save > /etc/iptables/v4-rules
ip6tables-save > /etc/iptables/v6-rules

After that, create two services for loading and restoring IPv4 and IPv6 rules. The services must be saved in /etc/systemd/system.
You can name them iptables.service and ip6tables.service for example, but the name is up to you.

/etc/systemd/system/iptables.service

[Unit]
After=network.service
Description=Iptables IPv4 rules save/load service

[Service]
Type=oneshot
ExecStart=/bin/sh -c "iptables-restore < /etc/iptables/v4-rules"
ExecStop=/bin/sh -c "iptables-save > /etc/iptables/v4-rules"
RemainAfterExit=yes

[Install]
WantedBy=default.target

/etc/systemd/system/ip6tables.service

[Unit]
After=network.service
Description=Iptables IPv6 rules save/load service

[Service]
Type=oneshot
ExecStart=/bin/sh -c "ip6tables-restore < /etc/iptables/v6-rules"
ExecStop=/bin/sh -c "ip6tables-save > /etc/iptables/v6-rules"
RemainAfterExit=yes

[Install]
WantedBy=default.target

Test the services using :

systemctl start iptables && systemctl start ip6tables

Then check your firewall rules, using iptables -L, iptables -t nat -L, ip6tables -L and ip6tables -t nat -L.

If the rules appear OK, enable both services on startup and reboot :

systemctl enable iptables
systemctl enable ip6tables
reboot

On reboot check the rules again, just to be sure.

If the rules aren’t good

If the rules are not good, correct them and then save them again in /etc/iptables/v4-rules and /etc/iptables/v6-rules using iptables-save and ip6tables-save respectively, then restart the iptables and ip6tables services like this :

systemctl restart iptables
systemctl restart ip6tables

Check the content of the files in /etc/iptables before and after the systemctl restart if the problem persists, in order to understand what’s going on.

Why is it blocking ?

If you happen to wonder why some packets are blocked in INPUT or OUTPUT, one simple way to get some clues is to set up a catch-all LOG rule :

INPUT

iptables -A INPUT -j LOG --log-level 6

OUTPUT

iptables -A OUTPUT -j LOG --log-level 6

Other cases

You can also put your catch-all rule in any chain, to see if the chain is traversed :

iptables -A YOUR_RULE_CHAIN -j LOG --log-level 6

If you’re afraid that some rules might send the traffic to other chains before it reaches your LOG rule, you can insert the LOG rule as the first rule of your chain. Note that such a catch-all rule will log every packet traversing the chain, so remove it once you’re done.

iptables -I YOUR_RULE_CHAIN -j LOG --log-level 6

To check the logs, you can use journalctl on SystemD systems :

journalctl -k

Or just check dmesg or /var/log/messages on other systems.

/var/log/messages requires a syslog daemon logging kernel messages to this filepath.
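
Side note : the LOG target also accepts a --log-prefix option, which makes these entries much easier to find afterwards. A sketch, where the prefix string is arbitrary :

iptables -A INPUT -j LOG --log-level 6 --log-prefix "FW-INPUT: "
journalctl -k | grep "FW-INPUT: "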

Matrix

Synapse

(config for tls_private_key_path): No such file or directory even though no_tls: True is added

So if your server cannot start, with the following error :

matrix_1  | 2019-10-24 20:27:50,458 - twisted - 171 - ERROR -  - Traceback (most recent call last):
matrix_1  | 2019-10-24 20:27:50,459 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/app/_base.py", line 263, in start
matrix_1  | 2019-10-24 20:27:50,459 - twisted - 171 - ERROR -  -     refresh_certificate(hs)
matrix_1  | 2019-10-24 20:27:50,460 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/app/_base.py", line 212, in refresh_certificate
matrix_1  | 2019-10-24 20:27:50,461 - twisted - 171 - ERROR -  -     hs.config.read_certificate_from_disk(require_cert_and_key=True)
matrix_1  | 2019-10-24 20:27:50,462 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/tls.py", line 221, in read_certificate_from_disk
matrix_1  | 2019-10-24 20:27:50,462 - twisted - 171 - ERROR -  -     self.tls_private_key = self.read_tls_private_key()
matrix_1  | 2019-10-24 20:27:50,463 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/tls.py", line 487, in read_tls_private_key
matrix_1  | 2019-10-24 20:27:50,465 - twisted - 171 - ERROR -  -     private_key_pem = self.read_file(private_key_path, "tls_private_key_path")
matrix_1  | 2019-10-24 20:27:50,467 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/_base.py", line 135, in read_file
matrix_1  | 2019-10-24 20:27:50,468 - twisted - 171 - ERROR -  -     cls.check_file(file_path, config_name)
matrix_1  | 2019-10-24 20:27:50,469 - twisted - 171 - ERROR -  -   File "/usr/local/lib/python3.7/site-packages/synapse/config/_base.py", line 117, in check_file
matrix_1  | 2019-10-24 20:27:50,469 - twisted - 171 - ERROR -  -     % (file_path, config_name, e.strerror)
matrix_1  | 2019-10-24 20:27:50,470 - twisted - 171 - ERROR -  - synapse.config._base.ConfigError: Error accessing file '/data/matrix.miouyouyou.fr.tls.key' (config for tls_private_key_path): No such file or directory

Even though you seem to have enabled no_tls: true in your /data/homeserver.yaml, check the two following things (a quick verification sketch follows the list) :

  1. The directory you are actually mounting has a file named homeserver.yaml.
    While this seems dumb, you have to understand the next point :
  2. If you set up the environment variable SYNAPSE_SERVER_NAME, this will generate a temporary configuration file that overshadows your current configuration file.
    I.e. if you set SYNAPSE_SERVER_NAME, the server won’t look for your /data/homeserver.yaml but will use an auto-generated one instead !
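
A quick way to verify both points from the host. This is just a sketch : replace matrix_1 with the actual name of your Synapse container.

# Is homeserver.yaml really there, inside the container ?
docker exec matrix_1 ls -l /data/homeserver.yaml
# Is SYNAPSE_SERVER_NAME set on the container ?
docker inspect --format '{{.Config.Env}}' matrix_1 | tr ' ' '\n' | grep SYNAPSE_SERVER_NAME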

Migrate your SQLite DB to a PostgreSQL DB using Docker

First, ensure that the /data volume of your Synapse Docker container is accessible from the host file system, or at least that you can access this directory to create the new configuration file and copy the database.

Here, I’ll assume that you’re in a directory where the sub-directory data is mounted as /data in your Synapse Docker instance.

Quick and dirty

  • Create a volume for the PostgreSQL data

    docker volume create synapse_db_data

If you know what you’re doing and want to use a host directory for the database files, just replace following synapse_db_data references with /absolute/path/to/your/host/directory, like this : -v /absolute/path/to/your/host/directory:/var/lib/postgresql/data

  • Create a temporary environment file for the PostgreSQL configuration through Docker

    cat <<EOF > docker.env
    POSTGRES_USER=your_db_user
    # If you actually use Y0urP4ssw0rd as a password, you're an idiot
    POSTGRES_PASSWORD=Y0urP4ssw0rd
    POSTGRES_DB=your_db_name
    EOF
  • Fire the PostgreSQL server

    docker run --rm --env-file docker.env -v synapse_db_data:/var/lib/postgresql/data --name synapsedb -d postgres

If the instance doesn’t start, remove --rm, then type docker logs synapsedb to view the logs, and then docker container rm synapsedb to get rid of the stopped container else you won’t be able to recreate it

  • Copy and modify your Synapse server homeserver.yaml configuration to use PostgreSQL

    cp data/homeserver.yaml data/homeserver-pgsql.yaml

Then modify the section that looks like this :

## Database ##

database:
  # The database engine name
  name: "sqlite3"
  # Arguments to pass to the engine
  args:
    # Path to the database
    database: "/data/homeserver.db"

and make it look like this :

database:
  # The database engine name
  name: "psycopg2"
  # Arguments to pass to the engine
  args:
    user: your_db_user
    password: Y0urP4ssw0rd
    database: your_db_name
    host: synapsedb
  • Stop your current Synapse server

    docker stop your_synapse_container_tag_or_numeric_id
  • Copy your SQLite database just in case

    cp data/homeserver.db{,.bak}
  • Run the migration script with a new synapse instance

Replace myy/synapse-image:latest-intel by the reference of your synapse image.

docker run --name synapse -v $PWD/data:/data --entrypoint synapse_port_db -d myy/synapse-image:latest-intel --sqlite-database /data/homeserver.db --postgres-config /data/homeserver-pgsql.yaml

If you haven’t built your own synapse-image, either use the official one, or build one using my guide.

  • Follow the migration

    docker logs -f synapse
  • Shutdown your PostgreSQL instance

    docker stop synapsedb
  • Launch the whole thing with a docker-compose.yml

Replace myy/synapse-image:latest-intel by the reference of your synapse image.

( Add docker secrets commands )

version: '3'

services:
  matrix:
    image: myy/synapse:latest-intel
    build:
      context: ./build-synapse
      network: host
    volumes:
      - "./data:/data"
    networks:
      myynet:
        ipv6_address: fc00::110
  postgresql:
    image: postgres
    # Named like this so that 'host: synapsedb' in homeserver-pgsql.yaml still resolves
    container_name: synapsedb
    volumes:
      # Reuse the volume that received the migrated data
      - synapse_db_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=your_db_user
      - POSTGRES_PASSWORD=Y0urP4ssw0rd
      - POSTGRES_DB=your_db_name
    networks:
      - myynet

networks:
  myynet:
    external: true

volumes:
  synapse_db_data:
    external: true
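
Once the file is ready, bring the stack up and follow the Synapse logs. Note that the running server reads /data/homeserver.yaml, so you may need to make it match the PostgreSQL settings of homeserver-pgsql.yaml first (e.g. by replacing one with the other) :

docker-compose up -d
docker-compose logs -f matrix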

Notes

Use a specific network for these containers

If you want to use a specific network for the generated containers, add this to the commands :

--net networkname --ip your.ip.v.4 --ip6 your::ipv6

Example with the PostgreSQL instance :

docker run --net networkname --ip your.ip.v.4 --ip6 your::ipv6 --rm --env-file docker.env -v synapse_db_data:/var/lib/postgresql/data --name synapsedb -d postgres

If you want to use a specific network in the docker-compose.yml, you’ll have to add the following nodes under each service :

networks:
  yournetworkname:
    ipv4_address: your.ip.v.4
    ipv6_address: your::ipv6

And add the following at the end of the file :

networks:
  yournetworkname:
    external: true

Example :

version: '3'

services:
  matrix:
    image: myy/synapse:latest-intel
    build:
      context: ./build-synapse
      network: host
    volumes:
      - "./data:/data"
    networks:
      yournetworkname:
        ipv4_address: your.ip.v.4
        ipv6_address: your::ipv6
  postgresql:
    image: postgres
    volumes:
      - synapse_pg_data:/var/lib/postgresql/data
    environment:
      - POSTGRES_USER=coincoin
      - POSTGRES_PASSWORD=

networks:
  yournetworkname:
    external: true

Check the official Docker Compose documentation for more details.

Gitlab runner and Docker Desktop nightmares on Windows

Various fixes for issues I encountered

Here’s an old post that I wanted to write the week after I hit all these issues, but I was taken by other projects and… 2 months later, I don’t remember the whole chronology of the events.

All I remember is that Docker for Windows was inexplicably broken and, looking at the different Github issues pages I stumbled upon, it seemed that I wasn’t alone and that this tool was clearly not tested for integration.

After a few days, I was able to get Docker for Windows working correctly on Windows, however the Gitlab runner failed miserably for different reasons.

However, what upset me was that this “Gitlab runner” tricked me into thinking that it was only using the Docker images I told it to use, when it also pulled its own “gitlab-runner” image for whatever reason, and this image was buggy, failing the whole Docker CI process.
The reason this upsets me is that, when I use Docker, I do it in order to get a reliable and reproducible environment for the whole build process.
Really, reliability and control of the environment are the two main reasons I use Docker in the first place. The fact that the Gitlab runner introduced a third-party buggy image, while never really saying anything about it, broke these two main concepts.
The use of Docker with the Gitlab runner led to an unreliable process (Who knows when that image will be buggy again ? And if it’s buggy, what can I do ?) and stripped me of the control of the environment (What is this image doing in the first place ? How can I control it ?).

Seriously, I’m okay with throwing additional boilerplate scripts at the Docker building scripts, as long as I still understand what will be executed, why I should do this and in which environment it will be done.
But subtly pulling and running some Docker images, to do I don’t know what, for I don’t know why, AND FAILING THE WHOLE CI WITH THAT just screams “kludgy and unusable build system”.
So, I gave up on the Gitlab CI. It looks cool on the outside, but once you start encountering issues with it, you suddenly lose a lot of trust in this tool after understanding the reasons for these issues, and how they are handled on the official Gitlab issues pages.

Also, I now understand that the tools that make automatic testing, CI/CD, and all other quality assurance processes simpler are the ones that have the poorest QA.

So here are the issues I encountered and documented 2 months ago.

Gitlab

How do I get the ID of my project ?

It’s displayed below the project name, on the project’s Details page (the main page of the project). Be sure to use a recent Gitlab version.

How to test the API GET requests quickly

Log in to your Gitlab server, using a cookie-accepting browser. Then go to the API endpoint you want to test.

For example, let’s say that your Gitlab server is hosted at : https://grumpy.hamsters.gitlab
Opening https://grumpy.hamsters.gitlab/api/v4/projects/1234 in the same browser will then show the description of the project with ID 1234.

Recent browsers will switch to a special console mode that displays the received JSON response in a readable manner.

/api/v4/projects/:id -> 404 Not found

Either you didn’t provide the right credentials OR your database is broken.

It took me a while to understand what was going on, though… So, if you just want to check the database requests, here’s what you can do :

  1. Do a backup, and if it’s a production server duplicate it on a dev machine !

  2. Enable Ruby on Rails ActiveRecord database request logging, by changing config.log_level = :info to config.log_level = :debug in config/environments/production.rb and restarting Gitlab (gitlab-ctl restart).
    In the official Docker image container, the file will be located at :
    /opt/gitlab/embedded/service/gitlab-rails/config/environments/production.rb. Remember to revert this setup afterwards by setting config.log_level = :debug back to config.log_level = :info again.
    You could also create another environment file and set Rails to use this new environment, but this might generate new issues I’m not aware of.

  3. Request your project information through the API again (basically redo the GET request to https://your.server/api/v4/projects/1234) and check the logs.
    You should see the SQL request printed in the logs.
    Note that I’m using the official Docker image, so I’m using docker logs -f container_name to follow the logs.
    If you’re not using the official Docker image, /var/log/gitlab/production.log might be a good place to look. Else just do a grep -r 'SELECT' /var/log and check the results

  4. Start gitlab-psql and re-execute the database request. Generally the request will be something like SELECT "projects"."id" FROM "projects" WHERE "projects"."id" = 1234 AND "projects"."pending_delete" = false LIMIT 1;.
    In my case, since the constraints were not propagated correctly, pending_delete for my project was set to NULL instead of FALSE which failed the SQL request.

Now, if you want to check the code handling API requests for projects, check these files :

  • lib/api/projects.rb
  • lib/api/helpers.rb

In the official Docker image, these files are located at /opt/gitlab/embedded/service/gitlab-rails/lib/api.

You can do some puts debugging in it, restarting Gitlab every time (gitlab-ctl restart) and checking the logs for the puts messages.
It’s ugly but it works and can help pinpoint the issue, though you’ll need some good knowledge of Ruby.
Just remember that in Ruby, functions/methods can be called without parentheses, and these can be confused with local variables…

Repair constraints and defaults on PostgreSQL

Single column

Repair NOT NULL
ALTER TABLE ONLY "table_name" ALTER COLUMN "column_name" SET NOT NULL;
Repair DEFAULT
ALTER TABLE ONLY "table_name" ALTER COLUMN "column_name" SET DEFAULT default_value;

Remember that strings, in PostgreSQL, must be single-quoted.

'a' → Good.
"a" → Error.

Set NULL values back to default
UPDATE "table_name" SET "column_name" = default_value WHERE "column_name" IS NULL;

Remember that strings, in PostgreSQL, must be single-quoted.

'a' → Good.
"a" → Error.

Repair PostgreSQL database constraints and defaults after a MySQL migration

I made a little custom Ruby script to repair the constraints and defaults, due to bad migrations…

Of course, as always, if you operate on your database : BACK IT UP !
No need for PostgreSQL commands, just copy the database folder ! Or the entire container persistent data volumes, if you’re using Docker.

Anyway, here’s the ruby script :

#!/usr/bin/env ruby

class Table
	def initialize(name)
		@table_name = name
		#puts "SELECT * from #@table_name"
	end

	def set_not_null(col_name, nullable)
		if (nullable == false)
			puts %Q[ALTER TABLE ONLY "#{@table_name}" ALTER COLUMN "#{col_name}" SET NOT NULL;] 
		end
	end

	def set_null_values_to_default(col_name, default)
		puts %Q[UPDATE "#{@table_name}" SET "#{col_name}" = #{default} WHERE "#{col_name}" IS NULL;]
	end
	
	def fix_null_when_not_null(col_name, nullable, default)
		if (nullable == false)
			set_not_null(col_name, false)
			set_null_values_to_default(col_name, default)
		end
	end

	def set_default_value(col_name, default)
		puts %Q[ALTER TABLE ONLY "#{@table_name}" ALTER COLUMN "#{col_name}" SET DEFAULT #{default};]
	end

	def repair_column(col_name, params, decent_default)
		fix_null_when_not_null(col_name, false, decent_default) if (params[:null] == false)
		set_default_value(col_name, decent_default) if (params[:default] != nil)
	end

	def datetime_with_timezone(col_name, **args)
		set_not_null(col_name, args[:null])
	end
	def datetime(col_name, **args)
		datetime_with_timezone(col_name, **args)
	end
	def date(col_name, **args)
		datetime_with_timezone(col_name, **args)
	end
	def integer(col_name, **args)
		default = (args[:default] || 0)
		repair_column(col_name, args, default)
	end
	def decimal(col_name, **args)
		default = (args[:default] || 0.0)
		repair_column(col_name, args, default)
	end
	def float(col_name, **args)
		decimal(col_name, **args)
	end
	def bigint(col_name, **args)
		integer(col_name, **args)
	end
	def text(col_name, **args)
		default = "'#{args[:default] || ""}'"
		repair_column(col_name, args, default)
	end
	def string(col_name, **args)
		text(col_name, **args)
	end
	def boolean(col_name, **args)
		default = "#{args[:default] || false}"
		repair_column(col_name, args, default)
		set_null_values_to_default(col_name, default)
	end
	def index(*args)
	end
	def binary(col_name, **args)
		set_not_null(col_name, args[:null])
	end
	def jsonb(col_name, **args)
		set_not_null(col_name, args[:null])
	end
	
end

def create_table(name, **args, &block)
	t = Table.new(name)
	yield t
end

What I did then is :

  • I pasted all the create_table blocks from the db/schema.rb file of the Gitlab version I was running (/opt/gitlab/embedded/service/gitlab-rails/db/schema.rb in the official Gitlab Docker Image) at the end of this script, saved as repair_db.rb.
  • Ran it like this :

    ruby repair_db.rb > script.psql

If you do this with Powershell, script.psql will be saved in UTF-16… So, you’ll have to convert it to UTF-8, else PostgreSQL won’t be able to parse the script.
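
If you have ‘Git Bash’ installed (recommended further down), one way to convert it, assuming the file really ended up as UTF-16 :

iconv -f UTF-16 -t UTF-8 script.psql > script-utf8.psql

Then use script-utf8.psql in the following steps.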

  • Checked that script.psql contained actual data

    cat script.psql
  • Copied the generated file (script.psql) to my Gitlab server.
    If you’re using Docker, use docker cp script.psql your_container_name:/tmp/script.psql.

  • Ran it with gitlab-psql -f /path/to/script.psql
    gitlab-psql -f /tmp/script.psql if we follow the same Docker example.

Some table names/column names were incorrect and produced errors. However, all the weird API issues that I encountered were fixed using this simple method.

Gitlab runner issues

Never find jobs

  • Check that your runner is enabled for this project (Gitlab -> Your project -> Settings -> CI/CD)
  • Check that the runner isn’t set up to answer for ONE specific tag.
    If it’s set up, in Gitlab, to respond to specific tags, either tag the pipeline in the .gitlab-ci.yml, or remove the tag restrictions in the runner settings (click on the runner name in the list).
  • Check that you can access the project description, through the API, using your credentials at least.
    If you cannot, it might be a Gitlab (database) issue.
    Just log in on your Gitlab instance, go to https://your-same.gitlab.server/api/v4/projects/:id and check that you’re not receiving a “404 Not Found” JSON error (or use curl, as sketched after this list).
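
If you prefer the command line, the same check can be done with curl and a personal access token. A sketch, where the token and the project ID are placeholders :

curl --header "PRIVATE-TOKEN: your_access_token" https://your-same.gitlab.server/api/v4/projects/1234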

unable to access ‘https://gitlab-ci-token:[MASKED]@your.server/repo/name.git/': SSL certificate problem: unable to get local issuer certificate

Two options :

  1. Edit config.toml and add this in the [[runners]] section :

    pre_clone_script = "git config --global http.\"https://gitlab.grenoble.texis\".sslVerify false"
  2. Edit config.toml and add this in the [[runners]] section :

    pre_clone_script = "git config --global http.sslCAinfo /etc/ssl/certs/mycerts/gitlab.grenoble.texis.crt"

And change volumes in the [runners.docker] subsection to this :

volumes = ["/cache", "/c/Gitlab/certs:/etc/ssl/certs/mycerts"]

SSL Certificate problem: unable to get local issuer certificate

It’s a bug with the latest gitlab-runner images.
Thanks to Grégory Maitrallain for reporting this in the gitlab-runner issues list.

That said, while he has been able to work around this issue by adding helper_image = "gitlab/gitlab-runner-helper:x86_64-91c2f0aa" in the config.toml file, I haven’t been able to use this workaround.
On my side, using this helper image made the docker script die very early…

So, if the helper_image thing doesn’t work for you, the best remaining solutions for now are to either :

  • Edit the config.toml file and add :

    pre_clone_script = "git config --global http.https://your.gitlab.com.sslVerify false"

(Replace your.gitlab.com by the DNS, or IP address, of the server you’re cloning from).

  • Use another runner, that will use webhooks instead.

Don’t bother trying the git config --global http.sslCAInfo /path/to/cert.pem solution, it won’t work. The runner already injects the SSL certificates it had to use to connect to the gitlab server.
You can check this by adding && git config --global --list in the pre_clone_script, like this :

pre_clone_script = "git config --global http.https://your.gitlab.com.sslVerify false && git config --global list"

Docker Desktop issues

Firewall is blocking

This error message is just a generic message that makes no sense most of the time.

What I did to get past this error was enabling the “Microsoft App-V client” service. Just look for “Disabled” services and check if there isn’t some “HyperV” or “Whatever-V” service that could be enabled.
If that’s the case, enable it and retry. Remember that you have to open the “Services” MMC pane with Administrator privileges if you want to start, stop or change services.

An unexpected error occurred while sharing drive

Your drive (C:, D:, whatever…) is not enabled for sharing in the Docker Desktop settings.
Right click the Docker notification icon, in the notification bar, and select Settings.

Clicking Apply & Restart in Docker Desktop does nothing. Same thing for restart.

First, let me say to any developer coding a UI :

NEVER HIDE ERRORS !
NEVER EVER DO THAT !
IF SOMETHING GOES WRONG, SHOWING NOTHING IS THE WORST THING YOU COULD DO !

I understand that there’s a UX trend about “hiding everything from the user, so that they can live in a world of dreams and rainbows, without any error message or bad thing that would create DOUBTS and FEARS !”…
But what this actually does is ENRAGING THE USER, who has to deal with an unresponsive and broken UI instead !

Seriously, here’s what happened :

  • I clicked on “Apply & Restart”
  • Saw nothing happening !

“… Did it work ? … Did it crash ? …
Should I wait a little bit … ? CAN I GET ANY INFORMATION !?”

This “DON’T SHOW THE ERRORS AND DON’T SAY ANYTHING” just makes the UI developers look like they forgot to connect the buttons to actual functions… It’s seriously stupid.
If something goes wrong, show a helpful message somehow. The same one you put in the logs if you cannot state the issue in a “user-friendly” way.

Anyway, in my case, this was due to the Server service being disabled, since I don’t want the file sharing service running when I’m not using it.
The Server service is literally named like this, and it’s a very old Microsoft service dating from Windows… 98 First Edition ? 95 ?

Now, first, in order to pinpoint the real issue, you might want to check the logs or, better, follow the logs.
See below if you don’t know how to do this.

Follow the logs with ‘Git bash’

I recommend installing ‘Git for Windows’ with its ‘Git Bash’. This MSYS2 and bash setup works very nicely on Windows (as long as you don’t start Windows-specific console software, like python or irb…)

With “Git bash” run as an administrator, do this :

tail -f $LOCALAPPDATA/Docker/log.txt

Then do the actions that don’t work on the “Docker Desktop Settings” UI, and check the bash console to see if any error messages were printed.

There might be a way to do the same thing in Powershell but, given Powershell’s propensity to auto-scroll to the end on new output, I’d highly recommend not using Powershell for following logs, or finding a way to disable this behaviour, since looking for errors will be far more difficult otherwise.

Check the logs afterwards with CMD or Powershell

So, if you’re using cmd.exe or Powershell, what you can do instead is : 1. Do the actions that do not work. 2. Check the logs.

If you’re running cmd.exe as an administrator.

cd %LOCALAPPDATA%/Docker
dir
notepad log.txt

If you’re running Powershell as an administrator.

cd $env:LOCALAPPDATA/Docker
ls
notepad log.txt

mkdir /host_mnt/c: file exists when trying to mount a volume

  • Run this in a powershell run as administrator :

    docker container prune
    docker volume prune
  • Then open the Docker Desktop Settings.

  • In Resources -> Volumes, click on “Reset Credentials”.

  • Type your administrator credentials again.

  • Click on “Apply & Restart”.

Note that you might have to do this every time the system goes into “sleep mode”… I’m not kidding… This is insane !

The story behind this (written 1 week after the incidents)

So, last week, at my job, I decided to put a Gitlab CI runner in place, on Windows machines, in order to present and show how to use automated testing and deployment, using Gitlab runners and Docker containers.

I had already tested the whole thing on another Gitlab server, from the same workplace, and on my office Linux machine. However, since most of the team develops Windows applications, a Windows CI/CD workflow made sense.

Ugh…

You know, sometimes, you get this sensation that tells you to stop wasting energy on a project, or things will turn to shit…

So, to begin with, the first Gitlab server I tested it on was one I installed myself, using Docker-compose on Synology.

I have to say that docker-compose on Synology works nicely, but for fuck’s sake, if you’re going to put a whole UI to manage Docker, put one to generate and manage docker-compose.yml files…
It will be WAY better and WAY more usable than a UI where you have to SET EACH ENVIRONMENT VARIABLE BY HAND through a horrendous form EVERY TIME YOU WANT TO CREATE A CONTAINER.
Dear Docker UI makers : use Docker before making the UI !

Anyway, I installed the whole thing and it worked beautifully, with updates basically done like this :

cd /path/to/docker-configs/gitlab &&
# Shutdown
docker-compose -f docker-compose.yml down &&
# Backup
tar cJpvf ../backups/gitlab-`date +"%Y%m%d-%H%M"`.tar.xz . &&
# Update the image
docker pull gitlab/gitlab-ce:latest &&
# Start Gitlab again
docker-compose -f docker-compose.yml up -d

And the only reason why I’m using tarballs to generate backups, is that I didn’t learn how to use volumes correctly.

So, yeah… the other Gitlab server… It was the one provided by Synology as a package at some point, that I decided to migrate to the official docker image, using a similar configuration.
The whole idea was that Synology updates were sometimes SOOO slow, that you had to wait several months to get access to new Gitlab features.

The issue, however, was that the old Gitlab server used MySQL as a database and the new one used PostgreSQL… The tutorials provided by Gitlab were either too old or unusable, so it was a pain in the ass !
Long-story short, I was able to migrate the tables and columns correctly, but not their restrictions.
Turns out that all the DEFAULT and NOT NULL constraints, and other simple things, were not ported… I learned that a bit too late. No error messages showed up during the migration.
Well… A LOT OF THEM APPEARED during the ActiveRecord migration scripts that update the old Gitlab version schemas.
And don’t you dare use a too old version, or these migration scripts won’t even run !
I love Gitlab !

I was able to fix these database issues when errors popped up here and there, but this still generated hidden issues with the API.
Turns out that Gitlab, while using Ruby on Rails, can be very silent about database issues.
The main reason being that ActiveRecord insertion requests generally leave some fields blank, hoping that the DEFAULT restrictions will kick in and fill the blanks. However, if these restrictions are not ported, you will get NULL values inserted in boolean columns, without either PostgreSQL or ActiveRecord catching the issue.
So when Gitlab, using ActiveRecord, tries to SELECT rows WHERE that column value is FALSE, PostgreSQL ignores the rows with NULL in that column, which leads to Gitlab finding nothing and returning nothing.

Now, my first issue was that the Gitlab runners would not answer to any job request set on specific projects.

Debugging issues with gitlab-runner is a REAL pain, by the way…
gitlab-runner outputs almost NOTHING, even when executed with the --debug flag. You’d expect this flag to send 10k lines of logs to the standard output, but NO ! It outputs almost NOTHING !
It doesn’t even print the REST requests it is sending !
So, when you have no clear idea on how it connects to the server, it makes debugging so much harder ! For no real reason !

So I started to use the API and… something was odd. I could access some projects descriptions with /api/v4/projects/:id, while some others would output “404 Not found”.

I first thought that it could be some permissions issues but I still feared that it might have to do with the database. However, since I saw nothing in the logs that went like “I DIDN’T EXPECT NULL VALUES HERE !“, I searched for simple reasons first.

AAAND, after wasting an entire day while checking the project permissions, checking the logs after creating new projects, and trying to sniff gitlab-runner’s traffic, I decided it was time to delve into Gitlab code, and understand why it replied with “404 Not Found” for valid projects.

So I backed up the docker-compose files and volumes data, put them on my machine, restarted an identical container and searched for the files handling the “projects” api requests.
After a few grep, I found that the main file handling the whole “projects” API logic was :
/opt/gitlab/embedded/service/gitlab-rails/lib/api/projects.rb

The next step was “Check if I can do some ‘puts’ debugging”…
Well, no, the first step was understanding how this… thing… is architectured. Good thing I have a lot of experience with Ruby. And after… guessing how the whole thing worked, I checked if I could do some puts debugging.

See, when you attack that kind of code, there might be some advanced debugging mechanics available. But being able to just log the content of variables or objects, with simple functions/methods like puts, fprintf or log, is always the best call for short-term debugging.
These functions are easy to remember and don’t require delving into dozens of manpages just to get a simple variable content on the screen.

I tried to edit projects.rb and add some puts messages.
When using logging functions for short-term debugging, the first messages you want to display are stupid messages that you can trace easily in the logs.
Even though there was no traffic on that Gitlab instance, Gitlab still outputs a good amount of logs for every operation. So if you want to check that puts messages are visible in the logs, you need dumb messages like : puts "MEOW".
I tried this, did an API request and… no MEOW… So the first reaction was “Hmm… maybe I need to restart the app. IIRC, Rails applications need to be restarted on modifications.“. And, yeah, a restart of Gitlab with gitlab-ctl restart was all that was needed.

One restart later, I checked the logs (docker logs -f gitlabce_gitlab_1) and : it MEOWED ! Yay ! I can log code results !

So the next step was to trace how the get “:id” function worked…
While I did a lot of projects in Ruby, I was a bit rusty and, while I appreciate the whole “functions calls without parentheses” in Ruby, I still find it very misleading when it is used like in this function.
Seriously, I spent 10 minutes trying to understand how “user_project” was defined. I took it for a local variable at first… Then I thought “maybe it’s more global ? But global variables are referenced with $ prefixes and… Rails projects don’t use global variables anyway…“.
AAAND after a few grep, I understood that user_project was actually a helper method, defined in lib/api/helpers/project_helpers.rb.

Before I understood this, I put some messages before and after the options definitions and while I saw the messages before, I never saw the messages after… So something went wrong meanwhile, obviously.
I then added some debugging messages to the helpers methods, which made me understand that projects.find_by(id: id) returned nothing.

Well… find_by is a generic Ruby on Rails function that sends a request to the database so… If that returned nothing, it only meant one thing : the request on the database returned nothing.
Alright, “time to find how to log ActiveRecord database requests in Ruby on Rails applications…”. With a few web searches, I found that changing config.log_level = :info to config.log_level = :debug in config/environments/production.rb did the trick.
One restart later I had the database requests and the request was something like :

SELECT "projects" WHERE "projects"."id" = 1234 AND "projects"."pending_delete" IS FALSE

Turned out that pending_delete columns were set to nil instead of false in every new project, making the whole request fail every time… UGH… By looking at the db/schema.rb it was clear that the “default: false” constraint was not setup in the database…

Since it wasn’t the first time that happened, I devised a Ruby script which took db/schema.rb create_table instructions as input, and generated the PostgreSQL ALTER TABLE and UPDATE instructions that would set up the DEFAULT and NOT NULL constraints, while updating inserted NULL values to the default values when DEFAULT constraints existed.

This fixed most of the issues and, notably, the API. Once fixed the runners were able to access the server jobs again, and started executing them and… failing only on Windows… Hmm…

A lot of Windows administrators would say “It failed on Windows ? How surprising !“…
However, this time, the hate is really misplaced…

The problems encountered (written 2 months later)

Yeah, I don’t exactly remember how it went. But basically, I hit a LOT of issues with Docker for Windows alone. This was due to me not having the “Server” service enabled (which broke the UI) and not having the “Microsoft App-V client” service enabled (which led to some firewall issue messages that were complete red herrings).
Note that this “Server” service is the Microsoft NetBIOS “Server” service, which dates from… Windows 98 ? 95 ? … It’s still there on Windows 10.

Meanwhile, what I got was :

  • The UI of Docker for Desktop not responding
  • Docker failing to launch due to some pseudo firewall issues
  • Docker for Desktop unable to mount volumes
  • Had to repair some Gitlab database issues dating from the MySQL to PostgreSQL migration.
  • Gitlab runner unable to clone git repositories from internal webservers using self signed certificates.
    And, NO, the agent is UNABLE to use SSH to clone a repository.
    WHY !?
  • No way to provide the certificates to the runner, due to some stupid bugs in the “gitlab-runner” Docker image, which is covertly used to prepare the build environment.
    The bug was not fixed for several months straight.

So, I could still clone the repositories by just disabling the git client SSL verifications… But, as I said earlier, the whole “let’s add some Docker images without telling you, and fail the whole build due to some hidden bugs in these images” just drove me mad. The way the issue was handled didn’t help either…
That just screamed Unreliable.