I'm not huge on containers, but I can see and appreciate their value in rolling things out fast for testing purposes. I have my own server(s) running here and there and I use Ansible for handling pretty much everything on them. Until not long ago, Docker containers were among the notable exceptions to that rule. But then I finally discovered[1] the docker_container module. The only thing I was still missing was a better way of defining multiple containers.[2]

Sure enough, it's possible to specify multiple containers in a single playbook, but this wasn't what I was after. I wanted to be able to specify an array of definitions that would then be executed on a host or given host group(s). Something like this:

containers:
  - name: cadvisor
    image: "google/cadvisor:latest"
    state: started
    restart_policy: always
    privileged: true
    log_driver: journald
    log_options:
      tag: docker/cadvisor
    published_ports:
      - "127.0.0.1:8080:8080"
    devices:
      - "/dev/zfs:/dev/zfs"
    volumes:
      - "/:/rootfs:ro"
      - "/var/run:/var/run:rw"
      - "/sys:/sys:ro"
      - "/var/lib/docker/:/var/lib/docker:ro"

  - name: alertmanager
    image: "prom/alertmanager:{{ am_version }}"
    state: started
    restart_policy: always
    log_driver: journald
    log_options:
      tag: docker/alertmanager
    ports:
      - "127.0.0.1:{{ am_port }}:{{ am_port }}"
    volumes:
      - "{{ am_path }}/config/alertmanager.yml:/etc/alertmanager/alertmanager.yml"
      - "{{ am_path }}/data:/data"

  - name: postgres_exporter
    image: "wrouesnel/postgres_exporter"
    state: started
    restart_policy: always
    log_driver: journald
    log_options:
      tag: docker/postgres_exporter
    ports:
      - "127.0.0.1:9187:9187"
    env:
      DATA_SOURCE_NAME: "postgresql://{{ exporter_db_user }}:{{ exporter_db_pass }}@localhost:5432/postgres?sslmode=disable"

It's so much easier this way to specify multiple containers, the connections between them, etc. It's also much easier to specify them once in one host group and then reuse them in another with no or only slight changes (to variables, for example).
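
This is not necessarily how the role implements it, but roughly how such a list maps onto the docker_container module: a single task looping over the containers variable. The default/omit fallbacks below are my own additions to cover keys that a given definition may not set.

# A minimal sketch, assuming the list is called "containers":
- name: Manage containers defined in the "containers" variable
  docker_container:
    name: "{{ item.name }}"
    image: "{{ item.image }}"
    state: "{{ item.state | default('started') }}"
    restart_policy: "{{ item.restart_policy | default(omit) }}"
    privileged: "{{ item.privileged | default(false) }}"
    log_driver: "{{ item.log_driver | default(omit) }}"
    log_options: "{{ item.log_options | default(omit) }}"
    # "ports" is an alias for "published_ports", so accept either key
    published_ports: "{{ item.published_ports | default(item.ports | default(omit)) }}"
    devices: "{{ item.devices | default(omit) }}"
    volumes: "{{ item.volumes | default(omit) }}"
    env: "{{ item.env | default(omit) }}"
  loop: "{{ containers | default([]) }}"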

Additionally, this role also supports the docker_login module. This makes it possible to specify an array of Docker registries, with credentials, to log into before rolling out containers from, for example, a private registry:

registries:
  - username: gitlab+deploy-token-1
    password: $TOKEN
    registry_url: registry.some.url
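
Again, not necessarily the role's exact task, but a minimal sketch of how such a list could be fed to docker_login (no_log is my addition, to keep credentials out of the task output):

- name: Log in to the defined registries
  docker_login:
    registry_url: "{{ item.registry_url | default(omit) }}"
    username: "{{ item.username }}"
    password: "{{ item.password }}"
  loop: "{{ registries | default([]) }}"
  no_log: true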

It's important to note that this role does not handle Docker installation. It expects Docker to be already installed, and the docker Python module needs to be present as well. Here's an example playbook that handles this:

- hosts: all

  vars:
    pip_install_packages:
      - name: docker

  roles:
    - geerlingguy.pip
    - geerlingguy.docker
    - hadret.containers
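
To make the reuse point from earlier concrete, here's a sketch of how the container definitions and their variables could live in group variables; the file name and values below are hypothetical:

# group_vars/monitoring.yml (hypothetical file name and values)
am_version: "v0.21.0"
am_port: "9093"
am_path: "/srv/alertmanager"

containers:
  - name: alertmanager
    image: "prom/alertmanager:{{ am_version }}"
    state: started
    restart_policy: always
    ports:
      - "127.0.0.1:{{ am_port }}:{{ am_port }}"
    volumes:
      - "{{ am_path }}/config/alertmanager.yml:/etc/alertmanager/alertmanager.yml"
      - "{{ am_path }}/data:/data"

Another hosts group can then carry the same containers list (or share it via group_vars/all) and override only am_version, am_port or am_path.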

This role is available on Ansible Galaxy and its source can be found on GitHub -- bug reports should also land there.


  1. Took me a while... ↩︎

  2. docker-compose style. Kind of. ↩︎