Gitlab runner and Docker Desktop nightmares on Windows

Various fixes for issues I encountered

Here’s an old post that I wanted to write the week after I hit all these issues, but I was taken by other projects and… 2 months later, I don’t remember the whole chronology of the events.

All I remember is that Docker for Windows was inexplicably broken and, looking at the different Github issues pages I stumbled upon, it seems that I wasn’t alone and that this tool was clearly not integration-tested.

After a few days, I was able to get Docker for Windows working correctly, however the Gitlab runner failed miserably for different reasons.

However, what upset me was that this “Gitlab runner” tricked me into thinking that it was only using the Docker images I told it to use, when it also used its own “gitlab-runner” image for whatever reason, and this image was bugged, failing the whole Docker CI process.
The reason this upsets me is that, when I use Docker, I do it in order to get a reliable and reproducible environment for the whole build process.
Really, reliability and control of the environment are the two main reasons I use Docker in the first place. The fact that the Gitlab runner introduced a third-party bugged image, while never really telling anything about it, broke these two main concepts.
The use of Docker with the Gitlab runner led to an unreliable process (Who knows when that image will be bugged again ? And if it’s bugged, what can I do ?) and stripped me of control over the environment (What is this image doing in the first place ? How can I control it ?).

Seriously, I’m okay with adding boilerplate scripts to the Docker build scripts, as long as I still understand what will be executed, why I should do it and in which environment it will be done.
But subtly pulling and running some Docker images, to do I don’t know what, I don’t know why, AND FAILING THE WHOLE CI WITH THAT just screams “kludgy and unusable build system”.
So, I gave up on the Gitlab CI. It looks cool on the outside but, once you start encountering issues with it, you suddenly lose a lot of trust in this tool, after understanding the reasons behind these issues and how they are handled on the official Gitlab issues pages.

Also, I now understand that the tools meant to make automated testing, CI/CD, and all the other quality assurance processes simpler are the ones with the poorest QA.

So here are the issues I encountered and documented 2 months ago.

Gitlab

How do I get the ID of my project ?

It’s shown below the project name, on the project Details page (the main page of the project). Be sure to use a recent Gitlab.

How to test the API GET requests quickly

Log in to your Gitlab server, using a cookie-accepting browser. Then go to the API endpoint.

For example, let’s say that your Gitlab server is hosted at : https://grumpy.hamsters.gitlab

Recent browsers will switch to a special console mode that displays the received JSON response in a readable manner.

/api/v4/projects/:id -> 404 Not found

Either you didn’t provide the right credentials OR your database is broken.

It took me a while to understand what was going on, though… So, if you just want to check the database requests, here’s what you can do :

  1. Do a backup and, if it’s a production server, duplicate it on a dev machine !

  2. Enable Ruby on Rails ActiveRecord database request logging, by changing config.log_level = :info to config.log_level = :debug in config/environments/production.rb and restarting Gitlab (gitlab-ctl restart).
    In the official Docker image container, the file is located at :
    /opt/gitlab/embedded/service/gitlab-rails/config/environments/production.rb. Remember to revert this afterwards by setting config.log_level = :debug back to config.log_level = :info.
    You could also create another environment file and set Rails to use this new environment, but this might generate new issues I’m not aware of.

  3. Request your project information through the API again (basically redo the GET request to https://your.server/api/v4/projects/1234) and check the logs.
    You should see the SQL request printed in the logs.
    Note that I’m using the official Docker image, so I’m using docker logs -f container_name to follow the logs.
    If you’re not using the official Docker image, /var/log/gitlab/production.log might be a good place to look. Else just run grep -r "SELECT" /var/log and check the results.

  4. Start gitlab-psql and re-execute the database request. Generally the request will be something like SELECT "projects"."id" FROM "projects" WHERE "projects"."id" = 1234 AND "projects"."pending_delete" = false LIMIT 1;.
    In my case, since the constraints were not propagated correctly, pending_delete for my project was set to NULL instead of FALSE which failed the SQL request.
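The log-level switch from step 2 can be scripted. Here’s a minimal sketch with sed, demonstrated on a scratch file ; in practice, point it at the real config/environments/production.rb from the paths above and restart Gitlab afterwards :

```shell
# Demo on a scratch copy ; target the real
# config/environments/production.rb in practice.
printf 'config.log_level = :info\n' > production.rb
# -i.bak edits in place and keeps a .bak backup for reverting later.
sed -i.bak 's/config.log_level = :info/config.log_level = :debug/' production.rb
cat production.rb
```

Reverting afterwards is then just a matter of restoring the .bak file.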

Now, if you want to check the code handling API requests for projects, check these files :

  • lib/api/projects.rb
  • lib/api/helpers.rb

In the official Docker image, these files are located at /opt/gitlab/embedded/service/gitlab-rails/lib/api.

You can do some puts debugging in it, restarting Gitlab every time (gitlab-ctl restart) and checking the logs for the puts messages.
It’s ugly but it works and can help pinpoint the issue, though you’ll need some good knowledge of Ruby.
Just remember that in Ruby, functions/methods can be called without parentheses, and these can be confused with local variables…
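Since a bare name like user_project can be a helper method defined elsewhere rather than a local variable, grepping for its definition is the quickest way to disambiguate. A self-contained sketch (the file content here is a simplified stand-in, not Gitlab’s actual code) :

```shell
# Fabricate a tiny tree mimicking the lib/api layout, then grep it.
mkdir -p demo/lib/api/helpers
cat > demo/lib/api/helpers/project_helpers.rb <<'EOF'
module ProjectHelpers
  # Reads like a local variable at call sites, but it's a method.
  def user_project
    @project ||= find_project!(params[:id])
  end
end
EOF
# Searching for "def <name>" finds the definition wherever it hides.
grep -rn "def user_project" demo/lib
```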

Repair constraints and defaults on PostgreSQL

Single column

Repair NOT NULL
ALTER TABLE ONLY "table_name" ALTER COLUMN "column_name" SET NOT NULL;
Repair DEFAULT
ALTER TABLE ONLY "table_name" ALTER COLUMN "column_name" SET DEFAULT default_value;

Remember that strings, in PostgreSQL, must be single-quoted.

‘a’ → Good.
“a” → Error.

Set NULL values back to default
UPDATE "table_name" SET "column_name" = default_value WHERE "column_name" IS NULL;


Repair PostgreSQL database constraints and defaults after a MySQL migration

I made a little custom Ruby script to repair the constraints and defaults, due to bad migrations…

Of course, as always, if you operate on your database : BACK IT UP !
No need for PostgreSQL commands, just copy the database folder ! Or the entire container persistent data volumes, if you’re using Docker.

Anyway, here’s the Ruby script :

#!/usr/bin/env ruby

class Table
	def initialize(name)
		@table_name = name
		#puts "SELECT * from #@table_name"
	end

	def set_not_null(col_name, nullable)
		if (nullable == false)
			puts %Q[ALTER TABLE ONLY "#{@table_name}" ALTER COLUMN "#{col_name}" SET NOT NULL;] 
		end
	end

	def set_null_values_to_default(col_name, default)
		puts %Q[UPDATE "#{@table_name}" SET "#{col_name}" = #{default} WHERE "#{col_name}" IS NULL;]
	end
	
	def fix_null_when_not_null(col_name, nullable, default)
		if (nullable == false)
			set_not_null(col_name, false)
			set_null_values_to_default(col_name, default)
		end
	end

	def set_default_value(col_name, default)
		puts %Q[ALTER TABLE ONLY "#{@table_name}" ALTER COLUMN "#{col_name}" SET DEFAULT #{default};]
	end

	def repair_column(col_name, params, decent_default)
		fix_null_when_not_null(col_name, false, decent_default) if (params[:null] == false)
		set_default_value(col_name, decent_default) if (params[:default] != nil)
	end

	def datetime_with_timezone(col_name, **args)
		set_not_null(col_name, args[:null])
	end
	def datetime(col_name, **args)
		datetime_with_timezone(col_name, **args)
	end
	def date(col_name, **args)
		datetime_with_timezone(col_name, **args)
	end
	def integer(col_name, **args)
		default = (args[:default] || 0)
		repair_column(col_name, args, default)
	end
	def decimal(col_name, **args)
		default = (args[:default] || 0.0)
		repair_column(col_name, args, default)
	end
	def float(col_name, **args)
		decimal(col_name, **args)
	end
	def bigint(col_name, **args)
		integer(col_name, **args)
	end
	def text(col_name, **args)
		default = "'#{args[:default] || ""}'"
		repair_column(col_name, args, default)
	end
	def string(col_name, **args)
		text(col_name, **args)
	end
	def boolean(col_name, **args)
		default = "#{args[:default] || false}"
		repair_column(col_name, args, default)
		set_null_values_to_default(col_name, default)
	end
	def index(*args)
	end
	def binary(col_name, **args)
		set_not_null(col_name, args[:null])
	end
	def jsonb(col_name, **args)
		set_not_null(col_name, args[:null])
	end
	
end

def create_table(name, **args, &block)
	t = Table.new(name)
	yield t
end

What I did then is :

  • I pasted all the create_table blocks from the db/schema.rb file used by my Gitlab version (/opt/gitlab/embedded/service/gitlab-rails/db/schema.rb in the official Gitlab Docker Image).
  • Ran it like this :

    ruby repair_db.rb > script.psql

If you do this with Powershell, script.psql will be saved in UTF-16… So, you’ll have to convert it to UTF-8, else PostgreSQL won’t be able to parse the script.

  • Checked that script.psql contained actual data

    cat script.psql
  • Copied the generated file (script.psql) to my Gitlab server.
    If you’re using Docker, use docker cp script.psql your_container_name:/tmp/script.psql.

  • Ran it with gitlab-psql -f /path/to/script.psql
    gitlab-psql -f /tmp/script.psql if we follow the same Docker example.

Some table names/column names were incorrect and produced errors. However, all the weird API issues that I encountered were fixed using this simple method.
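The UTF-16 issue mentioned above can be fixed from Git Bash with iconv ; here a fabricated UTF-16 file stands in for the PowerShell output :

```shell
# Simulate what PowerShell's redirection produces : a UTF-16 file.
printf 'SELECT 1;\n' | iconv -f UTF-8 -t UTF-16 > script.psql
# Convert it back to UTF-8 so PostgreSQL can parse it.
iconv -f UTF-16 -t UTF-8 script.psql > script.utf8.psql
mv script.utf8.psql script.psql
cat script.psql
```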

Gitlab runner issues

Never find jobs

  • Check that your runner is enabled for this project (Gitlab → Your project → Settings → CI/CD)
  • Check that the runner isn’t set up to answer for ONE specific tag.
    If it’s set up, in Gitlab, to respond to specific tags, either tag the pipeline in the .gitlab-ci.yml, or remove the tag restrictions in the runner settings (click on the runner name in the list).
  • Check that you can at least access the project description through the API, using your credentials.
    If you cannot, it might be a Gitlab (database) issue.
    Just log in to your Gitlab instance, go to https://your-same.gitlab.server/api/v4/projects/id and check that you’re not receiving a “404 Not Found” JSON error.

unable to access ‘https://gitlab-ci-token:[MASKED]@your.server/repo/name.git/': SSL certificate problem: unable to get local issuer certificate

Two options :

  1. Edit config.toml and add this in the [[runners]] section :

    pre_clone_script = "git config --global http.\"https://gitlab.grenoble.texis\".sslVerify false"
  2. Edit config.toml and add this in the [[runners]] section :

    pre_clone_script = "git config --global http.sslCAinfo /etc/ssl/certs/mycerts/gitlab.grenoble.texis.crt"

And change volumes in the [runners.docker] subsection to this :

volumes = ["/cache", "/c/Gitlab/certs:/etc/ssl/certs/mycerts"]

SSL Certificate problem: unable to get local issuer certificate

It’s a bug with the latest gitlab-runner images.
Thanks to Grégory Maitrallain for reporting this in the gitlab-runner issues list.

That said, while he was able to work around this issue by adding helper_image = "gitlab/gitlab-runner-helper:x86_64-91c2f0aa" in the config.toml file, I haven’t been able to use this workaround.
On my side, using this helper image makes the docker script die very early…

So, if the helper_image trick doesn’t work for you, the best remaining solutions for now are to either :

  • Edit the config.toml file and add :

    pre_clone_script = "git config --global http.https://your.gitlab.com.sslVerify false"

(Replace your.gitlab.com by the DNS name, or IP address, of the server you’re cloning from).

  • Use another runner that uses webhooks instead.

Don’t bother trying the git config --global http.sslCAInfo /path/to/cert.pem solution, it won’t work. The runner already injects the SSL certificates it had to use to connect to the gitlab server.
You can check this by adding && git config --global --list in the pre_clone_script, like this :

pre_clone_script = "git config --global http.https://your.gitlab.com.sslVerify false && git config --global list"

Docker Desktop issues

Firewall is blocking

This error message is just a generic message that makes no sense most of the time.

What I did to get past this error was enable the “Microsoft App-V client” service. Just look for “Disabled” services and check if there isn’t some “Hyper-V” or “Whatever-V” service that could be enabled.
If that’s the case, enable it and retry. Remember that you can open the “Services” MMC pane using Administrator privileges if you want to start, stop or change services.

An unexpected error occurred while sharing drive

Your drive (*C:*, *D:*, Whatever…) is not enabled for sharing in Docker Desktop settings.
Right click the Docker notification icon, in the notification bar, and select Settings.

Clicking Apply & Restart in Docker Desktop does nothing. Same thing for restart.

First, let me say this to any developer coding a UI :

NEVER HIDE ERRORS !
NEVER EVER DO THAT !
IF SOMETHING GOES WRONG, SHOWING NOTHING IS THE WORST THING YOU COULD DO !

I understand that there’s a UX trend about “hiding everything from the user, so that he can live in a world of dreams and rainbows, without any error message or bad thing that would create DOUBTS and FEARS !“…
But what this actually does is ENRAGE THE USER, who has to deal with an unresponsive and broken UI instead !

Seriously, here’s what happened :

  • I clicked on “Apply & Restart”
  • Saw nothing happening !

”… Did it work ? … Did it crash ? …
Should I wait a little bit … ? CAN I GET ANY INFORMATION !?”

This “DON’T SHOW THE ERRORS AND DON’T SAY ANYTHING” approach just makes the UI developers look like they forgot to connect the buttons to actual functions… It’s seriously stupid.
If something goes wrong, show a helpful message somehow. The same one you put in the logs, if you cannot state the issue in a “user-friendly” way.

Anyway, in my case, this was due to the Server service being disabled, since I don’t want the file hosting service running when I’m not using it.
The Server service is really named like this, and it’s a very old Microsoft service dating from Windows… 98 First Edition ? 95 ?

Now, first, in order to pinpoint the real issue, you might want to check the logs or, better, follow the logs.
See below if you don’t know how to do this.

Follow the logs with ‘Git bash’

I recommend installing ‘Git for Windows’ with its ‘Git Bash’. This MSYS2 and bash setup works very nicely on Windows (as long as you don’t start Windows-specific console software, like python or irb…)

With “Git bash” run as an administrator, do this :

tail -f $LOCALAPPDATA/Docker/log.txt

Then do the actions that don’t work in the “Docker Desktop Settings” UI, and check back in the bash console to see if any error messages were printed.

There might be a way to do the same thing in Powershell but, given Powershell’s propensity to auto-scroll to the end on new output, I’d highly recommend not using it for following logs, or finding a way to disable this behaviour, since looking for errors will be far more difficult.

Check the logs afterwards with CMD or Powershell

So, if you’re using cmd.exe or Powershell, what you can do instead is : 1. Do the actions that do not work. 2. Check the logs.

If you’re running cmd.exe as an administrator.

cd %LOCALAPPDATA%/Docker
dir
notepad log.txt

If you’re running Powershell as an administrator.

cd $env:LOCALAPPDATA/Docker
ls
notepad log.txt

mkdir /host_mnt/c: file exists when trying to mount a volume

  • Run this in a powershell run as administrator :

    docker container prune
    docker volume prune
  • Then open the Docker Desktop Settings.

  • In Resources -> Volumes, click on “Reset Credentials”.

  • Type your administrator credentials again.

  • Click on “Apply & Restart”.

Note that you might have to do this every time the system goes into “sleep mode”… I’m not kidding… This is insane !

The story behind this (written 1 week after the incidents)

So, last week, at my job, I decided to put a Gitlab CI runner in place, on Windows machines, in order to show how to use automated testing and deployment, using Gitlab runners and Docker containers.

I had already tested the whole thing on another Gitlab server, from the same workplace, and my office Linux machine. However, since most of the team develops Windows applications, a Windows CI/CD workflow made sense.

Ugh…

You know, sometimes, you get this sensation that tells you to stop wasting energy on a project, or things will turn to shit…

So, to begin with, the first Gitlab server I tested it on was one I installed myself, using Docker-compose on Synology.

I have to say that docker-compose on Synology works nicely, but for fuck’s sake, if you’re going to put a whole UI to manage Docker, put one to generate and manage docker-compose.yml files…
It would be WAY better and WAY more usable than a UI where you have to SET EACH ENVIRONMENT VARIABLE BY HAND through a horrendous form EVERY TIME YOU WANT TO CREATE A CONTAINER.
Dear Docker UI makers : use Docker before making the UI !

Anyway, I installed the whole thing and it worked beautifully, with updates basically done like this :

cd /path/to/docker-configs/gitlab &&
# Shutdown
docker-compose -f docker-compose.yml down &&
# Backup
tar cJpvf ../backups/gitlab-`date +"%Y%m%d-%H%M"`.tar.xz . &&
# Update the image
docker pull gitlab/gitlab-ce:latest &&
# Start Gitlab again
docker-compose -f docker-compose.yml up -d

And the only reason why I’m using tarballs to generate backups is that I didn’t learn how to use volumes correctly.

So, yeah… the other Gitlab server… It was the one provided by Synology as a package at some point, that I decided to migrate to the official docker image, using a similar configuration.
The whole idea was that Synology updates were sometimes SOOO slow, that you had to wait several months to get access to new Gitlab features.

The issue, however, was that the old Gitlab server used MySQL as its database and the new one used PostgreSQL… The tutorials provided by Gitlab were either too old or unusable, so it was a pain in the ass !
Long-story short, I was able to migrate the tables and columns correctly, but not their restrictions.
Turns out that all the DEFAULT and NOT NULL constraints, and other simple things, were not ported… That, I learned a bit too late. No error messages spewed out during the migration.
Well… A LOT OF THEM HAPPENED during the ActiveRecord migration scripts that update the old Gitlab version schemas.
And don’t you dare use a too old version, or these migration scripts won’t even run !
I love Gitlab !

I was able to fix these database issues when errors popped up here and there, but this still generated hidden issues with the API.
Turns out that Gitlab, while using Ruby on Rails, can be very silent about database issues.
The main reason being that ActiveRecord insertion requests generally leave some fields blank, hoping that the DEFAULT constraints will kick in and fill the blanks. However, if these constraints are not ported, you will get NULL values inserted in boolean columns, without either PostgreSQL or ActiveRecord catching the issue.
So when Gitlab, using ActiveRecord, tries to SELECT rows WHERE that column value is FALSE, PostgreSQL ignores the rows with NULL in that column, which leads to Gitlab finding nothing and returning nothing.
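That NULL behaviour is just standard SQL three-valued logic, and you can reproduce it outside Gitlab. A minimal sketch using sqlite3 as a stand-in for PostgreSQL (0 playing the role of FALSE) :

```shell
# Row 2 has NULL in pending_delete : the WHERE clause silently skips it,
# exactly like PostgreSQL skipped the broken Gitlab projects.
sqlite3 demo.db <<'EOF'
CREATE TABLE projects (id INTEGER, pending_delete BOOLEAN);
INSERT INTO projects VALUES (1, 0), (2, NULL);
SELECT id FROM projects WHERE pending_delete = 0;
EOF
```

Only the row with an actual 0 comes back ; the NULL row is neither matched nor reported as an error.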

Now, my first issue was that the Gitlab runners would not answer any job request set on specific projects.

Debugging issues with gitlab-runner is a REAL pain, by the way…
gitlab-runner outputs almost NOTHING, even when executed with the --debug flag. You’d expect this flag to send 10k of logs to the standard output, but NO ! It outputs almost NOTHING !
It doesn’t even print the REST requests it is sending !
So, when you have no clear idea on how it connects to the server, it makes debugging so much harder ! For no real reason !

So I started to use the API and… something was odd. I could access some project descriptions with /api/v4/projects/:id, while some others would output “404 Not found”.

I first thought that it could be some permissions issue, but I still feared that it might have to do with the database. However, since I saw nothing in the logs that went like “I DIDN’T EXPECT NULL VALUES HERE !“, I searched for simple reasons first.

AAAND, after wasting an entire day checking the project permissions, checking the logs after creating new projects, and trying to sniff gitlab-runner’s traffic, I decided it was time to delve into Gitlab’s code, and understand why it replied with “404 Not Found” for valid projects.

So I backed up the docker-compose files and volume data, put them on my machine, restarted an identical container and searched for the files handling the “projects” API requests.
After a few greps, I found that the main file handling the whole “projects” API logic was :
/opt/gitlab/embedded/service/gitlab-rails/lib/api/projects.rb

The next step was “Check if I can do some ‘puts’ debugging”…
Well, no, the first step was understanding how this… thing… is architected. Good thing I have a lot of experience with Ruby. And after… guessing how the whole thing worked, I checked if I could do some puts debugging.

See, when you attack that kind of code, there might be some advanced debugging mechanics available. But being able to just log the content of variables or objects, with simple functions/methods like puts, fprintf or log, is always the best call for short-term debugging.
These functions are easy to remember and don’t require delving into dozens of manpages just to get a simple variable’s content on the screen.

I tried to edit projects.rb and add some puts messages.
When using logging functions for short-term debugging, the first messages you want to display are stupid messages that you can trace easily in the logs.
Even though there was no traffic on that Gitlab instance, Gitlab still outputs a good amount of logs for every operation. So if you want to check that your puts messages show up in the logs, you need dumb messages like : puts "MEOW".
I tried this, did an API request and… no MEOW… So my first reaction was “Hmm… maybe I need to restart the app. IIRC, Rails applications need to be restarted on modifications.“. And, yeah, a restart of Gitlab with gitlab-ctl restart was all that was needed.

One restart later, I checked the logs (docker logs -f gitlabce_gitlab_1) and : it MEOWED ! Yay ! I can log code results !

So the next step was to trace how the get “:id” function worked…
While I did a lot of projects in Ruby, I was a bit rusty and, while I appreciate the whole “function calls without parentheses” thing in Ruby, I still find it very misleading when it is used like in this function.
Seriously, I spent 10 minutes trying to understand how “user_project” was defined. I took it for a local variable at first… Then I thought “maybe it’s more global ? But global variables are referenced with $ prefixes and… Rails projects don’t use global variables anyway…“.
AAAND after a few greps, I understood that user_project was actually a helper method, defined in lib/api/helpers/project_helpers.rb.

Before I understood this, I put some messages before and after the options definitions and, while I saw the messages before, I never saw the messages after… So something went wrong in between, obviously.
I then added some debugging messages to the helper methods, which made me understand that projects.find_by(id: id) returned nothing.

Well… find_by is a generic Ruby on Rails method that sends a request to the database so… if that returned nothing, it only meant one thing : the request to the database returned nothing.
Alright, “time to find how to log ActiveRecord database requests in Ruby on Rails applications…“. With a few web searches, I found that changing config.log_level = :info to config.log_level = :debug in config/environments/production.rb did the trick.
One restart later I had the database requests and the request was something like :

SELECT "projects" WHERE "projects"."id" = 1234 AND "projects"."pending_delete" IS FALSE

Turned out that the pending_delete column was set to nil instead of false in every new project, making the whole request fail every time… UGH… By looking at db/schema.rb, it was clear that the “default: false” constraint was not set up in the database…

Since it wasn’t the first time that had happened, I devised a Ruby script which took the db/schema.rb create_table instructions as input, and generated PostgreSQL ALTER TABLE instructions that would set up the DEFAULT and NOT NULL constraints, while updating inserted NULL values to the default values when DEFAULT constraints existed.

This fixed most of the issues and, notably, the API. Once fixed, the runners were able to access the server jobs again, and started executing them and… failing only on Windows… Hmm…

A lot of Windows administrators would say “It failed on Windows ? How surprising !“…
However, this time, the hate is really misplaced…

The problems encountered (written 2 months later)

Yeah, I don’t exactly remember how it went. But basically, I hit a LOT of issues with Docker for Windows alone. This was due to me not having the “Server” service enabled (which broke the UI) and not having the “Microsoft App-V client” service enabled (which led to some firewall issue messages that were complete red herrings).
Note that this “Server” service is the Microsoft NetBIOS “Server” service, which dates from… Windows 98 ? 95 ? … It’s still there on Windows 10.

Meanwhile, what I got was :

  • The UI of Docker for Desktop not responding
  • Docker failing to launch due to some pseudo firewall issues
  • Docker for Desktop unable to mount volumes
  • Had to repair some Gitlab database issues dating from the MySQL to PostgreSQL migration.
  • Gitlab runner unable to clone git repositories from internal webservers using self-signed certificates.
    And, NO, the agent is UNABLE to use SSH to clone a repository.
    WHY !?
  • No way to provide the certificates the runner should use, due to some stupid bugs in the “gitlab-runner” Docker image, which is covertly used to prepare the build environment.
    The bug was not fixed for several months straight.

So, I could still clone the repositories by just disabling the git client SSL verifications… But, as I said earlier, the whole “let’s add some Docker images without telling you, and fail the whole build due to some hidden bugs in these images” thing just drove me mad. Seeing how the issue was handled, too…
That just screamed Unreliable.

Let's Encrypt certificates and OCSP staples with HAProxy

The quick setup

Generate OCSP staples

#!/bin/bash

# To change

DOMAIN_NAME=miouyouyou.fr
# Be careful, sometimes you need to append -XXXX
# Check which letsencrypt folder is the latest one updated with `ls -td /etc/letsencrypt/live/$DOMAIN_NAME* | head -1`
LETSENCRYPT_FOLDER=/etc/letsencrypt/live/$DOMAIN_NAME
HAPROXY_SSL_FOLDER=/srv/docker-files/LoadBalancer/haproxy/ssl

# These should be good
LETSENCRYPT_FULLCHAIN_CERT=$LETSENCRYPT_FOLDER/fullchain.pem
LETSENCRYPT_PRIVKEY_CERT=$LETSENCRYPT_FOLDER/privkey.pem
LETSENCRYPT_ISSUER_CERT=$LETSENCRYPT_FOLDER/chain.pem

HAPROXY_SSL_CERT=$HAPROXY_SSL_FOLDER/$DOMAIN_NAME.fullchain.pem
HAPROXY_SSL_CERT_OCSP=$HAPROXY_SSL_CERT.ocsp

# Uncomment to regenerate the real fullchain.pem SSL Certificate
# cat $LETSENCRYPT_FULLCHAIN_CERT $LETSENCRYPT_PRIVKEY_CERT > $HAPROXY_SSL_CERT

openssl ocsp -issuer $LETSENCRYPT_ISSUER_CERT -cert $HAPROXY_SSL_CERT -respout $HAPROXY_SSL_CERT_OCSP -noverify -no_nonce -url http://ocsp.int-x3.letsencrypt.org
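Side note : the responder URL is hardcoded above, but certificates carry their OCSP responder in the Authority Information Access extension, so openssl can extract it for you. Demonstrated on a throwaway self-signed certificate with a fake responder URL (requires openssl 1.1.1+ for -addext) :

```shell
# Generate a throwaway certificate embedding a fake OCSP responder URL.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=demo" -days 1 \
  -addext "authorityInfoAccess = OCSP;URI:http://ocsp.example.test"
# Ask openssl for the responder URL embedded in the certificate.
openssl x509 -noout -ocsp_uri -in demo.crt
```

On a real Let’s Encrypt certificate, this prints the responder URL to pass to -url.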

Part of the HAProxy configuration

frontend blogfront

        bind 172.50.3.3:80
        bind 172.50.3.3:443 ssl crt /etc/ssl/mine/miouyouyou.fr.fullchain.pem ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3 alpn h2,http/1.1
        bind [fc00::103]:80
        bind [fc00::103]:443 ssl crt /etc/ssl/mine/miouyouyou.fr.fullchain.pem ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3 alpn h2,http/1.1
        mode http
        
        # ...

        default_backend blogback

backend blogback
        mode http
        balance roundrobin
        http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload;" if { ssl_fc }
        server web01 [fc00::102]:80 check

To check that OCSP staples are provided to the clients, use this command :

echo QUIT | openssl s_client -connect miouyouyou.fr:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update'

Taken from : https://www.digitalocean.com/community/tutorials/how-to-configure-ocsp-stapling-on-apache-and-nginx

/etc/ssl/mine/miouyouyou.fr.fullchain.pem should be replaced by a pathname leading to your SSL cert file.
This fullchain is the concatenation of the fullchain.pem and privkey.pem, as stated in the OCSP script :

cat $LETSENCRYPT_FULLCHAIN_CERT $LETSENCRYPT_PRIVKEY_CERT > $HAPROXY_SSL_CERT

This is required by HAProxy.
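A quick sanity check on that concatenated file : HAProxy needs both the certificate chain and the private key in it. Sketched on a throwaway self-signed pair :

```shell
# Build a demo certificate/key pair and concatenate them
# the same way the OCSP script does for the real files.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo.key -out demo.crt \
  -subj "/CN=demo" -days 1
cat demo.crt demo.key > combined.pem
# Both PEM blocks must be present for HAProxy to accept the file.
grep -c "BEGIN CERTIFICATE" combined.pem
grep -q "PRIVATE KEY" combined.pem && echo "private key present"
```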

In case you’re wondering, on my setup, the public_address:80 is NAT’ed to 172.50.3.3 and [fc00::103], depending on the IP protocol used. The complete configuration is below.
[fc00::102] is the address of the HTTP server actually serving the content.
Yes you can listen on IPv4 addresses and redirect to IPv6 internal addresses, with HAProxy.

References

The long version

Greetings everyone !

It’s been a while since I had a little time for myself !

Bah, who am I kidding, trying to empty the used oil of an old useless fryer led me to spill 1 gallon of used frying oil on the floor of my living room…
Gotta love those days, where you wonder why the fuck renowned supermarkets agree to sell fryers that have a lid attached with cheap plastic brackets !
When trying to empty the oil of this rectangular fryer, I tried to take it diagonally, so I put one hand on one of the handles, and lifted the fryer by the lid with the other hand… AND THE LID BRACKETS BROKE OFF INSTANTLY, DETACHING THE LID FROM THE MAIN PART ! Leading the fryer to fall over on the broken side and onto the ground, spilling the entire content on the floor…
This is so fucking dangerous ! If that oil was still boiling hot, I would be at the hospital ‘at best’ !

Fuck cheap plastic constructions, and fuck anyone selling that kind of shit.
These people should receive a complete business ban for selling such dangerous contraptions. It costs NOTHING to put solid brackets that don’t break off this easily.
Now, in the first place, any big autonomous fryer should come with a detachable bowl that can be easily lifted and emptied when it’s full…

So yeah, I was able to get most of the oil out using paper towels, but the wooden floor is still greasy, so I’m trying to sponge up the remainder with smectite clay powder.
Since it takes some time, I thought I’d finish this blog post about how to setup HAProxy with Let’s Encrypt SSL certificates, and provide OCSP staples, ALPN and HSTS in bonus !

So, the whole idea is that :

  • You setup your domain names to point to your server.
  • You set up a quick webserver (load-balancer or not… It just has to be accessible through your domain names, on ports 80 and 443) for Let’s Encrypt Certbot
  • You invoke certbot certonly and set it up so that it writes the challenge files into your webserver root folder.
    These challenges are basically text files that tell the Let’s Encrypt bots “Yes ! It’s really my domain and my server ! I’m not a fraud !”
  • You concatenate Let’s Encrypt fullchain.pem and privkey.pem, and use the new file as the SSL certificate for HAProxy.

    cat /etc/letsencrypt/live/yourdomainname/fullchain.pem /etc/letsencrypt/live/yourdomainname/privkey.pem > /your/haproxy/ssl/fullchain.pem
  • Once you got your Let’s Encrypt SSL certificates, you configure HAProxy. Here’s how I setup mine. The main part is on the frontend section :

    global
        daemon
        # 256 maximum simultaneous connections...
        # It's a static website so, I don't think I'll
        # reach that level for the moment.
        maxconn 256
        # As stated in the documentation
        # ---
        # Sets the maximum size of the Diffie-Hellman parameters used for generating
        # the ephemeral/temporary Diffie-Hellman key in case of DHE key exchange.
        # values greater than 1024 bits are not supported by Java 7 and earlier clients.
        # ---
        # I don't care about Java 7 clients.
        tune.ssl.default-dh-param 4096
        log /dev/log    local0
        # Use only decent algorithms.
        ssl-default-bind-options no-tls-tickets
        ssl-default-bind-ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK:!DSS:!SRP:!LOW
    
    defaults
        mode http
        # You got 5 seconds to connect
        timeout connect 5s
        # And 10 seconds to answer
        timeout client 10s
        timeout server 10s
        # This isn't used in this configuration.
        # Tarpit might be the worst option if
        # you're dealing against Slowloris attacks.
        timeout tarpit 1m
        log    global
    
    frontend blogfront
    
        bind 172.50.3.3:80
        # Only accept TLSv1.0 to TLSv1.3 connections.
        # Support ALPN and tell the client we can use HTTP/2.
        # This is one of the rare places where you can tell the
        # client to use HTTP/2 and boost the connection a little.
        # Fall back to HTTP 1.1 if required...
        bind 172.50.3.3:443 ssl crt /etc/ssl/mine/miouyouyou.fr.fullchain.pem ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3 alpn h2,http/1.1
        # You REALLY want to use brackets with IPv6...
        # Without brackets, that would read fc00::103:80,
        # which is a valid address, completely different from fc00::103.
        # Whoever thought that ':' was a good separator for IPv6 is a fucking idiot.
        bind [fc00::103]:80
        bind [fc00::103]:443 ssl crt /etc/ssl/mine/miouyouyou.fr.fullchain.pem ssl-min-ver TLSv1.0 ssl-max-ver TLSv1.3 alpn h2,http/1.1
        mode http
        option httplog
    
        # Don't try stupid methods
        acl valid_method  method GET OPTIONS HEAD
        # Don't try the IP address alone.
        acl valid_domains hdr_dom(Host) -i miouyouyou.fr blog.miouyouyou.fr
        # I don't use PHP, so don't even bother checking for PHP security holes.
        acl php_file      path_end .php
        # Deny bots that don't comply
        http-request deny if !valid_method OR !valid_domains OR php_file OR HTTP_1.0
    
        # It's a static website, so you don't need 'Content-Length' in request headers.
        acl have_payload  hdr_val(content-length) gt 0
        # Deny bots that don't comply
        http-request deny if have_payload
    
        default_backend blogback
    
    backend blogback
        mode http
        balance roundrobin
        # Setup HSTS : https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Strict-Transport-Security
        http-response set-header Strict-Transport-Security "max-age=16000000; includeSubDomains; preload" if { ssl_fc }
        server web01 [fc00::102]:80 check
  • And then, you restart HAProxy.
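    Before reloading, it’s worth running HAProxy’s built-in configuration check to catch syntax errors ; a quick sketch, assuming the config path used by the docker-compose setup described further down (adjust to your own layout) :

    ```shell
    # Validate the configuration without starting the proxy (-c).
    haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
    # Or, when HAProxy runs inside a Docker container :
    docker exec haproxy_container_id_or_name haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg
    ```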

  • For the moment, no OCSP staple will be sent. That’s okay, we’ll generate one now :

    #!/bin/bash
    
    # To change
    
    DOMAIN_NAME=miouyouyou.fr
    # Be careful, sometimes you need to append -XXXX
    # Check which letsencrypt folder is the latest one updated with `ls -td /etc/letsencrypt/live/$DOMAIN_NAME* | head -1`
    LETSENCRYPT_FOLDER=/etc/letsencrypt/live/$DOMAIN_NAME
    HAPROXY_SSL_FOLDER=/srv/docker-files/LoadBalancer/haproxy/ssl
    
    # These should be good
    LETSENCRYPT_FULLCHAIN_CERT=$LETSENCRYPT_FOLDER/fullchain.pem
    LETSENCRYPT_PRIVKEY_CERT=$LETSENCRYPT_FOLDER/privkey.pem
    LETSENCRYPT_ISSUER_CERT=$LETSENCRYPT_FOLDER/chain.pem
    
    HAPROXY_SSL_CERT=$HAPROXY_SSL_FOLDER/$DOMAIN_NAME.fullchain.pem
    HAPROXY_SSL_CERT_OCSP=$HAPROXY_SSL_CERT.ocsp
    
    # Uncomment to regenerate the real fullchain.pem SSL Certificate
    # cat $LETSENCRYPT_FULLCHAIN_CERT $LETSENCRYPT_PRIVKEY_CERT > $HAPROXY_SSL_CERT
    
    openssl ocsp -issuer $LETSENCRYPT_ISSUER_CERT -cert $HAPROXY_SSL_CERT -respout $HAPROXY_SSL_CERT_OCSP -noverify -no_nonce -url http://ocsp.int-x3.letsencrypt.org
  • Restart HAProxy again (killall -SIGHUP haproxy or docker kill -s HUP haproxy_container_id_or_name) and check that OCSP staples are provided with :

    echo QUIT | openssl s_client -connect your_domain_name.ext:443 -status 2> /dev/null | grep -A 17 'OCSP response:' | grep -B 17 'Next Update'
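Note that OCSP responses have a limited validity period (about a week with Let’s Encrypt), so the staple file should be regenerated regularly. A sketch of a cron entry, assuming the generation script above was saved as /usr/local/bin/refresh-ocsp.sh (hypothetical path and file name) :

```shell
# /etc/cron.d/refresh-ocsp (hypothetical file)
# Regenerate the OCSP staple every day at 04:30,
# then tell HAProxy to pick it up by reloading it.
30 4 * * * root /usr/local/bin/refresh-ocsp.sh && killall -SIGHUP haproxy
```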

Taken from : https://www.digitalocean.com/community/tutorials/how-to-configure-ocsp-stapling-on-apache-and-nginx

Additional notes

HAProxy through Docker

Now, you might wonder : “Why is he using /etc/ssl/mine/ in the HAProxy configuration, but /srv/docker-files/LoadBalancer/haproxy/ssl when generating the certificates and the OCSP staples ?”

The answer is that HAProxy is running inside a Docker container, with the following docker-compose.yml configuration :

version: '3'

services:
  haproxy:
    image: haproxy:latest # The official HAProxy image
    volumes:
      - "./haproxy/config:/usr/local/etc/haproxy:ro"
      - "./haproxy/ssl:/etc/ssl/mine:ro"
      - "/dev/log:/dev/log"
    networks:
      myynet:
        ipv6_address: fc00::103
        ipv4_address: 172.50.3.3

networks:
  myynet:
    external: true

Which is stored in /srv/docker-files/LoadBalancer/ and run with docker-compose up -d.
So, the host folder /srv/docker-files/LoadBalancer/haproxy/ssl is mounted as /etc/ssl/mine inside the Docker container.

Also, the myynet network has the following ranges : 172.50.3.0/24 and fc00::100/120, and is created using the docker network commands.
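For reference, such a network could be created like this — a sketch using the ranges above, with everything else left to Docker’s defaults :

```shell
# Create an IPv4+IPv6 network matching the ranges
# used in the docker-compose.yml above.
docker network create \
    --ipv6 \
    --subnet 172.50.3.0/24 \
    --subnet fc00::100/120 \
    myynet
```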

So, I hope this helped you understand how to set up HAProxy with an SSL certificate from Let’s Encrypt, and provide OCSP staples.
Also, if you look at the HAProxy configuration, you’ll see how to set up ALPN and HSTS as well, which should earn you some nice SSL street cred, while speeding up HTTPS connections a little.