Single container approach
As we discussed in the first post of this Docker series, when you define and run a container, all the software required to run your Magnolia server is already installed and configured. Launching a Magnolia app is just a matter of building your image once and then running it as many times as you wish with the required params.
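As a refresher, a minimal sketch of that single-container flow could look like this (the image tag magnolia-cms and the port mapping are placeholder assumptions, not the exact values from the previous post):

# Build the image once from the Dockerfile in the current folder
docker build -t magnolia-cms .

# Run it as many times as you wish with the required params
docker run -d --name mgnlauthor -p 8080:8080 magnolia-cms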
Multi-container approach
Running containers helps you with the logistics of deploying, configuring, and managing different versions of the same software stack. But when your app consists of multiple containers that depend on each other, taking care of the loading order and the specific configs needed for each one can get tough and messy.
In the case of Magnolia, you need at least two different containers running: one for the author and one for the public instance. And if you also want to run Magnolia on top of a DB, you would need two additional containers, one for each instance's DB. All those containers share credentials, networks, and volumes, and have a specific loading order, e.g. the DB has to start before the web app server.
This is where docker-compose can help: everything is declared in one place and is easily reproducible whenever you need a new setup for your app.
Docker Compose
Docker Compose is a separate tool that gets installed along with Docker. It starts up multiple containers at the same time and automatically connects them together, handling networking, healthchecks, and volume management. It essentially works like the Docker CLI, but lets you declare many commands in one place instead of issuing them one at a time.
With Docker Compose you write a YAML file defining how you would like your multi-container application to be structured. This YAML file is then used to automate the launch of the containers as defined.
Let's create a docker-compose.yaml file step by step, recreating the setup from the previous post: one Magnolia author and one public instance, each attached to its own Postgres DB.
Services
In order to define the configuration and params needed to run the containers, you need to provide a services element. Let's take a look at the Postgres container as the first service:
version: '3.7'
services:
  mgnlauthor-postgres:
    image: postgres:12
    restart: unless-stopped
    healthcheck:
      test: pg_isready -U magnolia || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - "~/docker/pgdata-author:/var/lib/postgresql/data"
    networks:
      - mgnlnet
    environment:
      POSTGRES_USER: "magnolia"
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?password empty}
      POSTGRES_DB: "magnolia"
      PGDATA: "/var/lib/postgresql/data/pgdata"
  mgnlpublic-postgres: ...
Reviewing the options provided:
- version: the service (YAML) definition format; this file is compatible with docker-compose version 3.7.
- mgnlauthor-postgres: the name of the container in the network. This is what the Magnolia author instance will use to connect to this DB.
- image: the name of the image to be pulled from the Docker registry.
- restart: the restart policy of the container. With unless-stopped, Docker restarts the container if it dies for any reason, unless it was explicitly stopped.
- healthcheck: a test command run periodically to check the container status: exit code 0 means healthy, exit code 1 unhealthy/failed. The interval, timeout, and number of retries can be configured as sub-options. For PostgreSQL we use the pg_isready utility.
- volumes: a host-mounted volume to be used inside the container.
- networks: the network where this container is going to be registered as mgnlauthor-postgres.
- environment: all the environment variables needed by the container to run; in this case the database credentials and the PGDATA folder.
Note: POSTGRES_PASSWORD is provided as a Compose environment variable, which must be defined in a .env file in the same folder as the docker-compose.yaml file.
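A minimal .env file could look like this (the values are placeholders, use your own secrets):

# .env - read automatically by docker-compose from the working folder
POSTGRES_PASSWORD=changeme
DB_PASSWORD=changeme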
Now let's continue with the Magnolia author container:
  mgnlauthor:
    build:
      context: ./
      args:
        MGNL_AUTHOR: "true"
    image: ebguilbert/magnolia-cms-postgres:6.2.1-author
    restart: unless-stopped
    depends_on:
      - mgnlauthor-postgres
    volumes:
      - mgnl:/opt/magnolia
    networks:
      - mgnlnet
    ports:
      - 8080:8080
    environment:
      DB_ADDRESS: "mgnlauthor-postgres"
      DB_PORT: "5432"
      DB_SCHEMA: "magnolia"
      DB_USERNAME: "magnolia"
      DB_PASSWORD: ${DB_PASSWORD:?DB_PASSWORD must be set!}
    healthcheck:
      test: curl -f http://localhost:8080/.rest/status || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  mgnlpublic: ...
The options are very similar to the Postgres service, but a couple are worth mentioning:
- build: this option lets you build your own local image if you don't have one registered in a public Docker registry. You can define the context where to look for the Dockerfile and provide the build args.
- depends_on: a very important option, since it lets you control the order in which the containers start. In this case we want the DB to be started before the Magnolia author container. Note that depends_on only controls startup order; it doesn't wait for the DB to actually be ready, which is another reason the healthchecks allow several retries.
- volumes: a named volume, managed by docker-compose; more on this in the next section.
- ports: publishes container port 8080 on host port 8080.
- environment: credentials as env variables. Note the password must match the POSTGRES_PASSWORD value used in the Postgres service; here it comes from a separate DB_PASSWORD variable in the same .env file.
- healthcheck: the test command for Magnolia is a REST endpoint we can invoke with curl. The interval is 1 minute with 5 retries, since the REST endpoint might not be available right away and the check may need to run several times before it succeeds. You can inspect the resulting health status as shown below.
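Once the stack is running, you can query the health status Docker computed from these checks; a quick sketch (replace mgnlauthor with the actual container name reported by docker-compose ps):

# Prints starting, healthy or unhealthy for the given container
docker inspect --format='{{.State.Health.Status}}' mgnlauthor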
Networks
Docker Compose handles the creation and deletion of networks every time you start up or shut down the setup defined in the compose file.
For Magnolia we only need one network:
networks:
  mgnlnet:
    name: mgnlnet
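Once the setup is up, you can verify the network and see which containers have joined it:

# Shows the network config and the containers attached to mgnlnet
docker network inspect mgnlnet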
Volumes
Docker Compose also handles the creation of named volumes and, optionally, their removal when they are no longer needed.
We want one named volume for each Magnolia container (author and public):
volumes:
  mgnl:
    name: mgnl
  mgnlp1:
    name: mgnlp1
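If you ever want to start from scratch, the named volumes can be removed along with the containers and networks. Careful: this deletes the Magnolia repository data.

# down -v also removes the named volumes declared in the compose file
docker-compose -f "docker-compose.yaml" down -v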
docker-compose.yaml
The whole file, including all services, networks, and volumes for the Magnolia author and public instances, looks like this:
version: '3.7'
services:
  mgnlauthor-postgres:
    image: postgres:12
    restart: unless-stopped
    healthcheck:
      test: pg_isready -U magnolia || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - "~/docker/pgdata-author:/var/lib/postgresql/data"
    networks:
      - mgnlnet
    environment:
      POSTGRES_USER: "magnolia"
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?password empty}
      POSTGRES_DB: "magnolia"
      PGDATA: "/var/lib/postgresql/data/pgdata"
  mgnlpublic-postgres:
    image: postgres:12
    restart: unless-stopped
    healthcheck:
      test: pg_isready -U magnolia || exit 1
      interval: 10s
      timeout: 5s
      retries: 5
    volumes:
      - "~/docker/pgdata-public:/var/lib/postgresql/data"
    networks:
      - mgnlnet
    environment:
      POSTGRES_USER: "magnolia"
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:?password empty}
      POSTGRES_DB: "magnolia"
      PGDATA: "/var/lib/postgresql/data/pgdata"
  mgnlauthor:
    build:
      context: ./
      args:
        MGNL_AUTHOR: "true"
    image: ebguilbert/magnolia-cms-postgres:6.2.1-author
    restart: unless-stopped
    depends_on:
      - mgnlauthor-postgres
    volumes:
      - mgnl:/opt/magnolia
    networks:
      - mgnlnet
    ports:
      - 8080:8080
    environment:
      DB_ADDRESS: "mgnlauthor-postgres"
      DB_PORT: "5432"
      DB_SCHEMA: "magnolia"
      DB_USERNAME: "magnolia"
      DB_PASSWORD: ${DB_PASSWORD:?DB_PASSWORD must be set!}
    healthcheck:
      test: curl -f http://localhost:8080/.rest/status || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
  mgnlpublic:
    build:
      context: ./
      args:
        MGNL_AUTHOR: "false"
    image: ebguilbert/magnolia-cms-postgres:6.2.1-public
    restart: unless-stopped
    depends_on:
      - mgnlpublic-postgres
    volumes:
      - mgnlp1:/opt/magnolia
    networks:
      - mgnlnet
    ports:
      - 8090:8080
    environment:
      DB_ADDRESS: "mgnlpublic-postgres"
      DB_PORT: "5432"
      DB_SCHEMA: "magnolia"
      DB_USERNAME: "magnolia"
      DB_PASSWORD: ${DB_PASSWORD:?DB_PASSWORD must be set!}
    healthcheck:
      test: curl -f http://localhost:8080/.rest/status || exit 1
      interval: 1m
      timeout: 10s
      retries: 5
networks:
  mgnlnet:
    name: mgnlnet
volumes:
  mgnl:
    name: mgnl
  mgnlp1:
    name: mgnlp1
As you can see, the file is self-explanatory: it clearly defines which services are needed, their dependency order, and what each one requires.
Docker compose up and down
One of the docker-compose features I like the most is the possibility to build and run everything at once with a single command:
docker-compose -f "docker-compose.yaml" up -d
The above runs in detached mode, so the containers' output is not streamed to the terminal.
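If you still want to follow the output, you can stream the logs of one service (or all of them by omitting the service name):

# Follow the author instance logs; Ctrl+C stops following, not the containers
docker-compose logs -f mgnlauthor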
If you also need to build the local images (e.g. the first time you run the setup), you can add --build:
docker-compose -f "docker-compose.yaml" up -d --build
To check the status of your containers you can always use the ps command:
docker-compose ps
And finally, you can shut everything down, including the removal of containers and networks, with the following command:
docker-compose -f "docker-compose.yaml" down
Docker Swarm or Kubernetes
The next and final step is the orchestration of these containers: managing the auto-scaling and recovery of containers across clusters of nodes. For this, other tools like Docker Swarm or Kubernetes are needed.
The good news is that docker-compose files are compatible with Docker Swarm, so only a few more steps are needed. For Kubernetes, the docker-compose file can't be reused as is, but the structure and concepts are very similar, so the migration shouldn't take much effort.
Edit: there's a tool called kompose which translates docker-compose files into Kubernetes-compatible manifests, ready to be used by Kubernetes :)
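For example, assuming kompose is installed, a single command generates the Kubernetes manifests:

# Generates deployment/service manifests from the compose file
kompose convert -f docker-compose.yaml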