
Rsyslog and Docker

A few years ago I decided to write my own hadret.rsyslog Ansible role. I had two main goals in mind while writing it: first, I wanted turn-key remote logging support, and second, I wanted a fairly straightforward handler for Docker container logs. In this post I'd like to expand a bit on the latter.

Docker ships with the json-file log driver by default, but there are many more to choose from: supported logging drivers. I tend to default to journald. It used to be the case that only json-file and journald let you both ship logs elsewhere and still read them with docker logs; that limitation is gone as of Docker 20.10. While I'm not a fan of the systemd+journald combo in the Linux world, it's essentially a standard these days. I would go for syslog directly, as that's where I'm going to end up eventually either way, but it requires exposing a tcp or udp socket on the host machine and I'd rather not do that. (For a remote/central syslog setup though, there's no better way than syslog; let me know on Mastodon if I should write a follow-up post targeting this use case.)

Using journald allows for fairly easy interception via Rsyslog for further mingling. My general idea was to have a /var/log/docker folder and then have each container write its logs to a separate .log file named after the container itself (for example alertmanager.log). There are many ways to switch a container to a certain log driver: directly when starting it with docker run --log-driver=journald ..., in the docker-compose.yml file (see the sketch below), or by using my hadret.containers Ansible role 😉 But probably the best way is to just set it globally, so that every single container that spins up ends up with journald assigned to it.
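For the docker-compose.yml route, the per-service logging section would look roughly like this (a sketch; the service and image names are just placeholders):

services:
  alertmanager:
    image: prom/alertmanager
    logging:
      driver: journald
      options:
        tag: "docker/{{.Name}}"

The same options can be passed per container on the command line too, e.g. docker run --log-driver=journald --log-opt tag="docker/{{.Name}}" ..., if compose isn't in the picture.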

To check what's currently set globally:

docker info --format '{{.LoggingDriver}}'
json-file

To check what's currently set for the given container (named priceless_haslett in the example below):

docker inspect priceless_haslett | grep -A 5 LogConfig
            "LogConfig": {
                "Type": "json-file",
                "Config": {
                    "max-size": "100m"
                }
            },

To change it globally to journald, the /etc/docker/daemon.json file needs to be edited (it may not exist yet, in which case it has to be created). Please note that it has to be valid JSON, otherwise the Docker daemon will fail to start. Here's the example I'm going to play with further:

{
  "log-driver": "journald",
  "log-opts": {
    "tag": "docker/{{.Name}}"
  }
}
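
Since a malformed daemon.json will prevent the Docker daemon from starting, it doesn't hurt to sanity-check the file before touching the daemon; assuming jq is available, that's a one-liner:

jq . /etc/docker/daemon.json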

I assume the log-driver part is self-explanatory, but what's up with log-opts? Each log driver allows for additional settings that are passed using log-opts. In the example above I'm making use of the log tag option with {{.Name}}, so that my syslog log lines look like this:

2023-07-11T13:58:22.375926+02:00 metrics docker/alertmanager[2101616]: ts=2023-07-11T11:58:22.374Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=22.00658976s

I'm hardcoding the docker/ part, which I'm later catching and mingling with Rsyslog.

With the setup above in place, the Docker daemon needs to be restarted to load the new log driver value: systemctl restart docker. To confirm the global change:

docker info --format '{{.LoggingDriver}}'
journald

This, however, won't affect already running containers; the journald log driver only applies to newly spawned ones. There's no way around it and each container has to be restarted. If docker-compose is in use, it's a rather straightforward process: docker-compose up -d --force-recreate. To confirm the new value for the container(s):

docker inspect alertmanager | grep -A 5 LogConfig
            "LogConfig": {
                "Type": "journald",
                "Config": {
                    "tag": "docker/{{.Name}}"
                }
            },

This concludes the Docker part; let's move on to Rsyslog now.

Debian and Debian-based distributions ship with Rsyslog by default. These days journald is also set to forward to syslog by default; this can be checked like so:

systemd-analyze cat-config systemd/journald.conf | grep -Ev "^#|^$"
[Journal]
[Journal]
ForwardToSyslog=yes

That output is a bit truncated, but the first [Journal] comes from /etc/systemd/journald.conf (everything there is commented out, so nothing gets printed for it) and the second one comes from /usr/lib/systemd/journald.conf.d/syslog.conf, which is the file that sets up the forwarding. Make sure this is enabled, as otherwise logs from containers may not land in syslog.

Once that's set, it's ready for some testing. I tend to drop config files under /etc/rsyslog.d with meaningful names and priorities. In my setup there's only a single file under /etc/rsyslog.d, handling some cloud-init stuff I don't really care about. It's named 21-cloudinit.conf, so I'm dropping my Docker stuff in /etc/rsyslog.d/22-docker.conf:

if $programname == 'docker' then {
  action(type="omfile" file="/var/log/docker_all.log")
  stop
}

This is going to catch logs from all the containers and smash them into /var/log/docker_all.log. I tend to do this whenever I want to ensure all the logs are caught as expected and that their format is reasonable. Once the config file is dropped, be sure to restart Rsyslog: systemctl restart rsyslog. After that, and assuming everything clicked as expected, some/all container logs should start popping up in the docker_all.log file.
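
A quick way to verify is to spin up a throwaway container and see whether its output shows up in the file (a sketch; any image that prints something will do):

docker run --rm alpine echo "hello from the catch-all test"
tail -n 5 /var/log/docker_all.log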

I know, it's already cool, but let's turn it up to eleven. As mentioned above, the Docker part has log-opts set with "tag": "docker/{{.Name}}", which I'm about to "catch" now in Rsyslog. Once it's caught, I can mingle it and store it in a dedicated file per container. Let's start with creating the path:

mkdir -m 0755 /var/log/docker
chgrp adm /var/log/docker

Wonderful. /etc/rsyslog.d/22-docker.conf needs some updating now:

template(name="DockerLogFileName" type="list") {
   constant(value="/var/log/docker/")
   property(name="syslogtag" securepath="replace" regex.expression="docker/\\(.*\\)\\[" regex.submatch="1")
   constant(value=".log")
}

# if $programname == 'docker' then {
#   action(type="omfile" file="/var/log/docker/all.log")
# }

if $programname == "docker" then {
  if $syslogtag contains "docker/" then {
    ?DockerLogFileName
  } else {
    action(type="omfile" file="/var/log/docker/no_tag.log")
  }
  stop
}

The first part is the definition of the template, which makes use of a concept called dynamic filenames; as the name implies, it allows the filename to differ depending on the intercepted message. There are three parts to that template: the first and third are static and define the path for the logs (/var/log/docker) and the file extension (.log) respectively. The second is a regular expression that grabs the syslogtag value (docker/{{.Name}}) and extracts the {{.Name}} part of it to serve as the filename. With all of this combined, for an example container named "alertmanager" one ends up with its logs in /var/log/docker/alertmanager.log.

The second part defines the flow of writing logs to the appropriate paths. First, it checks $programname to ensure the logs come from Docker. It then moves on to check $syslogtag to ensure it contains docker/, since that's what the template relies on for the name extraction. If that's the case, the template is applied (the ?DockerLogFileName part). If for whatever reason the docker/ part is missing from the tag, the logs land in /var/log/docker/no_tag.log instead (it's really just a failsafe and the whole docker/ check could be omitted if so desired).
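
Before restarting Rsyslog with the updated config, it's worth letting it validate the syntax first; rsyslogd ships with a config check mode for exactly that:

rsyslogd -N1
systemctl restart rsyslog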

That's roughly it; here's an example tree view from one of my machines:

tree /var/log/docker
/var/log/docker
├── alertmanager.log
├── forwardly-go.log
├── karma.log
├── snips.log
├── traefik.log
└── uptime-kuma.log

In the Rsyslog config above I left in (albeit commented out) an adapted version of the "catch all" example from earlier on. Sometimes it might be desirable to have both: separate per-container logs as well as all of them in a single file (just keep in mind that this doubles the space required to store them). Last but not least, having all these logs under a single path (/var/log/docker) allows for setting up some simple log rotation; here's an example /etc/logrotate.d/containers:

/var/log/docker/*.log {
    daily
    rotate 7
    copytruncate
    compress
    delaycompress
    notifempty
    missingok
}
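
Logrotate can do a dry run against a specific config file, which is a handy way to confirm the new entry is picked up without actually rotating anything:

logrotate -d /etc/logrotate.d/containers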

Thanks for reading, and should you have any questions or suggestions, feel free to reach out on Mastodon.