
Personal Server with Docker Compose

Over the last couple of weeks I updated the server that this blog and a bunch of other stuff runs on. I decided to move (nearly) everything to Docker.

My previous setup was much more traditional. My server runs Debian Linux, and a single Apache instance provided the different virtual hosts. The different services used separate PHP-FPM pools, each running as an individual user.

My goal with the Docker setup was to make it easier to upgrade, change or even move individual services without affecting the others. This way I could for example run different PHP versions for different services.

However, I still wanted to keep everything as simple as possible. I am the only admin and I need to understand what's going on. A lightweight Kubernetes setup with K3s would have been an alternative, but the learning curve seemed much too steep. I am familiar with Docker and Docker Compose, so that's what I chose to use.

General Structure

My goal was to have everything under one common root. I chose /srv for that. The basic layout looks like this:

srv/
├── service1/
│   ├── conf/
│   ├── volumes/
│   │   ├── appdata
│   │   └── otherdata
│   ├── .env
│   └── docker-compose.yaml
├── service2/
│   └── ...
├── ...
└── docker-compose.yaml

Each service (or “stack”, if you're coming from Portainer) has its own directory with a docker-compose.yaml defining the setup with all the needed containers.

The conf and the volumes directories both hold data that is bind mounted into the containers. The main difference is that conf holds data to configure the service and is checked into my infrastructure git repository, while volumes is for actual application data.

All secrets for a service are managed in a .env file, which of course is not checked into the repo.
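Docker Compose automatically loads a .env file sitting next to the docker-compose.yaml and uses it for variable interpolation. A minimal sketch (the variable and service names here are made up):

# .env (not in git)
DB_ROOT_PASSWORD=supersecret

# docker-compose.yaml
services:
  db:
    image: mariadb:11
    environment:
      # resolved from the .env file at compose time
      MARIADB_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}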

For a bit of added security, I run many of the services as dedicated users. This way their data is also isolated on the host machine.
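In Compose that is just the user key on the service; a sketch with a made-up UID/GID:

services:
  app:
    image: example/app:stable
    # run as a dedicated host user instead of root (the UID/GID are placeholders)
    user: "990:990"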

Instead of managing all services individually, I pull their setups into a single docker-compose.yaml at the top level using the include mechanism.
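The include element is available since Compose 2.20. With the layout above, the top-level file is little more than a list of the per-service Compose files:

# /srv/docker-compose.yaml
include:
  - service1/docker-compose.yaml
  - service2/docker-compose.yaml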

Below is a description of what I currently run and how it's set up. Check out the Git repository for the details.

One detail that is maybe not immediately obvious is my DNS setup. I use splitbrain.net for everything infrastructure related, e.g. the Traefik dashboard. I have a wildcard entry for *.splitbrain.net pointing to the server, so I can easily spin up new services without mucking with DNS.

My actual user facing services usually have their own domains or run under the splitbrain.org domain.

Traefik

Traefik is the heart of the setup and took me the longest to configure the way I want it. It is a reverse proxy that I use as the central ingress for all web services. When you look at the individual services, you will notice that they do not expose any ports; instead, Traefik routes HTTPS requests directly to their internal ports.

To make that work, Traefik and all web services need to use the same Docker network. Contrary to what many tutorials make you believe, you do not have to create the network outside of your Compose setup and mark it as external. All you need to do is give it a fixed name. Use the same name in all your services and they will all share the same network:

networks:
  traefik:
    name: traefik
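A service in another Compose file then simply joins that network by declaring the same name; a sketch using the traefik/whoami test image:

services:
  whoami:
    image: traefik/whoami
    networks:
      - traefik

networks:
  traefik:
    name: traefik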

Traefik also handles TLS certificate management by getting certificates from Let's Encrypt. I set up multiple certificate resolvers: a standard HTTP-01 challenge for most things, and DNS-01 challenges with specific DNS providers. The latter are needed for wildcard certificates, as used by my DokuWiki farm.
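In Traefik's static configuration this looks roughly like the following sketch; the resolver names, mail address and DNS provider are placeholders:

certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.org
      storage: /letsencrypt/acme.json
      httpChallenge:
        entryPoint: web
  letsencrypt-dns:
    acme:
      email: admin@example.org
      storage: /letsencrypt/acme-dns.json
      dnsChallenge:
        provider: hetzner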

I also centrally configured a couple of middlewares that can easily be reused by any of the services.
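These live in a dynamic configuration file picked up by Traefik's file provider (hence the @file suffix in the labels below). As an example, one plausible definition of the redirect-to-www middleware:

http:
  middlewares:
    redirect-to-www:
      redirectRegex:
        regex: "^https?://splitbrain\\.org/(.*)"
        replacement: "https://www.splitbrain.org/${1}"
        permanent: true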

Each individual service that should be routed through Traefik is then simply configured by labels and Traefik will automatically pick it up:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.splitbrain.rule=Host(`www.splitbrain.org`) || Host(`splitbrain.org`)"
  - "traefik.http.routers.splitbrain.tls.certresolver=letsencrypt"
  - "traefik.http.routers.splitbrain.middlewares=redirect-to-www@file"

GoAccess and Logs

Since all web traffic is now routed through Traefik, it makes sense to log it there as well. To get a quick overview of how everything is working, GoAccess is a pretty nice log analyzer.

It can be set up for real-time analysis, so that's what I did. The setup consists of two containers: one runs goaccess (and its WebSocket endpoint), the other serves the HTML report it generates.

While Traefik can rotate its internal logs, this feature seems not to exist for the access log, so I whipped up my own simple mechanism: a shell script that moves the current file away and sends a USR1 signal to PID 1 (which is traefik itself). It is called weekly via Chadburn (see below), so I have two weeks of logs at most.

Another Chadburn job updates the MaxMind city database monthly.
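As an example, the weekly rotation hangs off the traefik container as Chadburn labels (the mechanism is explained in the next section); the job name and script path here are made up:

labels:
  - "chadburn.enabled=true"
  - "chadburn.job-exec.rotate-access-log.schedule=@weekly"
  - "chadburn.job-exec.rotate-access-log.command=/conf/rotate-access-log.sh"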

Mails

At some point I want to move my mail server configuration into a container as well. But for now I kept it as is, running on the host itself. To make Postfix work as a relay for my Docker containers, I had to add the Docker network range to the mynetworks setting in /etc/postfix/main.cf:

mynetworks = 127.0.0.0/8, 172.16.0.0/12

In the containers, I can now use 172.17.0.1 as the SMTP server.

Note that for many Go-based tools I had to disable certificate checking, because they didn't trust the certificate my mail server uses.

Chadburn

Some services need periodic actions to be executed inside their container. Some containers come with their own scheduling solution, but not all of them. However, it's easy to run docker exec to execute arbitrary commands inside a container.

I could use the host's cron to do just that. But the more elegant solution is a container that reads other containers' labels and schedules the tasks accordingly.

I first stumbled on Ofelia, which promises to do just that. However, I found its fork Chadburn even better: unlike Ofelia, it automatically reschedules jobs when labels change. There is no need to restart Chadburn when I add a new container label or change an existing one.
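Chadburn itself is just one more Compose service watching the Docker socket; a minimal sketch:

services:
  chadburn:
    image: premoweb/chadburn:latest
    command: daemon
    volumes:
      # needed to watch labels and run docker exec in other containers
      - /var/run/docker.sock:/var/run/docker.sock:ro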

Borgmatic

Previously, I used rsync to create a backup copy in a Hetzner storage box. This isn't ideal, of course, since it is just a copy: it doesn't allow you to roll back to an earlier state.

So when I redid everything, it was also time to rethink the backup strategy. I decided to use Borg through the Borgmatic container, which automates daily backups and lets me set a data retention policy.

It took me a couple of tries to configure everything as I wanted: I don't keep a local copy of the backup archives, and in addition to the /srv directory I also back up the host's /etc directory, just to be sure I have everything needed to rebuild the server from scratch.
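Borgmatic is configured through a YAML file; a rough sketch of what such a configuration can look like (the repository URL and retention numbers are placeholders):

source_directories:
  - /srv
  - /etc

repositories:
  - path: ssh://u123456@u123456.your-storagebox.de:23/./backups
    label: storagebox

keep_daily: 7
keep_weekly: 4
keep_monthly: 6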

Watchtower

Watchtower auto-updates all the containers I am using. Since I mostly use generic tags like :stable or :v3, or :latest where needed, I don't have to worry about updating the services manually.
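The whole service boils down to one container with access to the Docker socket; a minimal sketch:

services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # needed to inspect, pull and restart the other containers
      - /var/run/docker.sock:/var/run/docker.sock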

Static Sites

I have a couple of static HTML sites. For those I tried something new. The aptly named Static Web Server image is just 8MB in size and does everything I need.
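Serving a site with it is a two-liner; a sketch assuming the content lives in the service's volumes directory (the path is made up):

services:
  site:
    image: joseluisq/static-web-server:2
    volumes:
      # the image serves /public by default
      - ./volumes/site:/public:ro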

DokuWiki

For DokuWiki I use the official Docker image. Easy.

Custom Stuff

In the future I want to deploy everything using its own Docker image.

For indieblog.page, I already set up an action that builds an image, publishes it to GitHub's registry and then uses Watchtower's API to trigger an immediate update. Works great.
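Watchtower exposes an HTTP API for this when started with --http-api-update. The final workflow step then looks roughly like this sketch; the URL and secret name are made up:

# last step of the GitHub Actions workflow
- name: Trigger update via Watchtower
  run: |
    curl -fsS -H "Authorization: Bearer ${{ secrets.WATCHTOWER_TOKEN }}" \
      https://watchtower.splitbrain.net/v1/update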

For other PHP stuff that doesn't have its own image yet, I am using the chialab/docker-php image. It's basically the official PHP image, but with a bunch of pre-compiled extensions, so I don't need to build images myself.

Metrics

Many modern containers and applications export metrics that can be consumed by time series databases like Prometheus, which can then be visualized and monitored via Grafana. I have started to look into that, but I find Grafana in particular not very intuitive to use. At some point I will get around to setting this up and might write a separate post about it.

BTW, I wonder if DokuWiki should provide a /metrics endpoint. Not sure what it would export, though.

Summary

I really like the new setup. It separates the different services in a satisfying way.

In an ideal world I would have started from a completely new server with a minimal host system (likely based on Alpine), but working on top of my existing system was more practical. I still have to clean up a bit on the host and uninstall everything I don't need anymore.

Updating the underlying Debian system should be less exciting in the future.

The new system should hopefully make it easier to run new stuff in the future as well. My own Mastodon instance for example…

Tags:
docker, selfhost, splitbrain, sitenews