Continuing our advance toward Kubernetes: the last article was Docker Swarm in Action, and after getting to know Swarm it is time to get familiar with Docker Compose. Docker Swarm forms a cluster of Docker hosts; as long as the Manager node is told what service to deploy, it automatically finds the appropriate nodes to run the containers. Compose is an entirely different concept: it organizes multiple associated containers into a whole for deployment, such as a load-balancer container, several web containers, and a Redis cache container bundled together.
As the name Compose suggests, this is where the concept of container orchestration was officially established; Kubernetes, which we will learn later, is a tool for even higher-level organization, operation, and management of containers. Because Compose orchestrates the containers, it can start multiple associated containers with a single command instead of starting each one separately.
Regarding the installation of Docker Compose: Docker Desktop under Mac OS X comes with docker-compose. Since Docker Compose is written in Python, we can also install it with pip install docker-compose, and use the docker-compose command after installation.
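For example (a minimal sketch; the pip route assumes a working Python and pip on the PATH):

```sh
# Install Docker Compose via pip (skip this if Docker Desktop already bundles it)
$ pip install docker-compose

# Verify that the docker-compose command is available
$ docker-compose --version
```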
The differences between a plain Docker container, Swarm, and Compose are shown in this picture: a Compose application consists of multiple containers, and it can be deployed as a whole either to a single Docker host or to a Swarm cluster.
Let's experience the functions of Docker Compose by designing a service with the following containers:
- A front-end load balancer, played by HAProxy, which forwards requests to the two web containers behind it
- Two web containers, demonstrated with Python Flask
- A Redis cache container, which the web containers use to read and write cached data
The files involved in defining a Docker Compose application are Dockerfile and docker-compose.yml. We need to create a Dockerfile for each custom Docker container; if a container uses a stock Docker image directly, no Dockerfile is required. In other words, a Compose project can contain one or more Dockerfiles. Here is an example directory structure of a Compose project:
```
myapp/
  - Dockerfile
  - docker-compose.yml
```
If we need multiple Dockerfiles, each Dockerfile must be placed in its own directory. In that case, the directory structure is adjusted as follows (plus several auxiliary files that will be used later):
```
myapp/
  - proxy/
    - Dockerfile
    - haproxy.cfg
  - web/
    - Dockerfile
    - app.py
    - requirements.txt
  - docker-compose.yml
```
Next, let's look at the content of each file:
myapp/proxy/Dockerfile
```dockerfile
FROM haproxy:2.1.3
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
```
myapp/proxy/haproxy.cfg
```
frontend myweb
    bind *:80
    default_backend realserver
backend realserver
    balance roundrobin
    server web1 web_a:5000 check
    server web2 web_b:5000 check
```
This is the simplest possible haproxy.cfg configuration file. It just forwards requests to web_a and web_b in round-robin fashion. We will see web_a and web_b defined in the docker-compose.yml file later.
myapp/web/Dockerfile
```dockerfile
FROM python:3.7-alpine
ADD app.py requirements.txt ./
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
```
myapp/web/requirements.txt
```
flask
redis
```
myapp/web/app.py
```python
import redis
import socket
from flask import Flask

# 'redis' resolves to the redis service defined in docker-compose.yml
cache = redis.Redis('redis')
app = Flask(__name__)


def hit_count():
    return cache.incr('hits')


@app.route('/')
def index():
    count = hit_count()
    return 'Served by <b>{}</b>, count: <b>{}</b>\n'.format(socket.gethostname(), count)


if __name__ == '__main__':
    app.run(host='0.0.0.0')
```
A simple Flask app demonstrates the web application: the access count is kept in Redis, and the response carries the hostname of the container that handled the request.
myapp/docker-compose.yml
```yaml
version: '3.7'
services:
  web_a:
    build: ./web
    ports:
      - 5000
  web_b:
    build: ./web
    ports:
      - 5000
  proxy:
    build: ./proxy
    ports:
      - "80:80"
  redis:
    image: "redis:alpine"
```
We build two Docker images ourselves, while redis:alpine is used as an image directly. The ports and port mappings are defined here, so there is no need for an EXPOSE instruction in the Dockerfiles. Two web services, web_a and web_b, will be started, and the HAProxy in front will forward requests to these two containers.
Now we can go to the myapp directory in a terminal and execute the command:

```sh
$ docker-compose up -d
```
If this is the first run, the command also builds the Docker images myapp_proxy, myapp_web_a, and myapp_web_b before starting the containers; if those images already exist locally, the containers are started directly. After startup completes, run docker ps to check:
```
$ docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS                     NAMES
e5d918e0ae2a   myapp_proxy    "/docker-entrypoint.…"   About a minute ago   Up About a minute   0.0.0.0:80->80/tcp        myapp_proxy_1
56819534cfa6   myapp_web_a    "python app.py"          About a minute ago   Up About a minute   0.0.0.0:32777->5000/tcp   myapp_web_a_1
a2125855e55a   myapp_web_b    "python app.py"          About a minute ago   Up About a minute   0.0.0.0:32778->5000/tcp   myapp_web_b_1
f216a7b5f95b   redis:alpine   "docker-entrypoint.s…"   About a minute ago   Up About a minute   6379/tcp                  myapp_redis_1
```
All four containers are started, one HAProxy, two web, and one Redis, which is exactly what we need. Let's verify the behavior of the whole service now:
```
$ curl http://localhost
Served by <b>a2125855e55a</b>, count: <b>1</b>
$ curl http://localhost
Served by <b>56819534cfa6</b>, count: <b>2</b>
$ curl http://localhost
Served by <b>a2125855e55a</b>, count: <b>3</b>
```
Load balancing, the web services, and Redis are working together as intended.
If we run docker kill 56 to kill myapp_web_a, it will not recover automatically. But running docker-compose up -d again simply brings myapp_web_a back without affecting the other containers:
```
$ docker-compose up -d
Starting myapp_web_a_1 ...
myapp_redis_1 is up-to-date
Starting myapp_web_a_1 ... done
```
Please run docker-compose -h to learn the detailed usage of this command. Many of its subcommands are similar to those of the docker command; the following are specific to docker-compose (a couple of usage sketches follow the list):

- docker-compose images: display the Docker images used by the current compose project
- docker-compose logs: display the running logs of the current compose project
- docker-compose down: stop all containers involved in the compose project
- docker-compose kill: kill all containers involved in the compose project, without having to docker kill them one by one
- docker-compose restart: restart the entire compose service (all of its containers)
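For example, two of these in action (the service names are the ones from our docker-compose.yml):

```sh
# Tail the logs of just the two web services (-f follows the output)
$ docker-compose logs -f web_a web_b

# Stop and remove all containers of this project in one go
$ docker-compose down
```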
There is a simpler way to support HAProxy plus a dynamic number of web containers. Modify the previous docker-compose.yml file as follows:
```yaml
version: '3.7'
services:
  web:
    build: ./web
    ports:
      - 5000
  proxy:
    image: dockercloud/haproxy
    links:
      - web
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  redis:
    image: "redis:alpine"
```
The proxy now uses the dockercloud/haproxy image, so we can remove the myapp/proxy folder along with its Dockerfile and haproxy.cfg. (Mounting /var/run/docker.sock is what lets this image watch Docker for linked web containers coming and going and reconfigure itself accordingly.)
Now start it with the same docker-compose up -d command, then check the containers with docker ps:
```
$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS         PORTS                                   NAMES
3ce8fcd986ac   dockercloud/haproxy   "/sbin/tini -- docke…"   6 minutes ago    Up 4 minutes   443/tcp, 0.0.0.0:80->80/tcp, 1936/tcp   myapp_proxy_1
c73be3d3f795   myapp_web             "python app.py"          6 minutes ago    Up 4 minutes   0.0.0.0:32793->5000/tcp                 myapp_web_1
0fabf6c6e81f   redis:alpine          "docker-entrypoint.s…"   16 minutes ago   Up 4 minutes   6379/tcp                                myapp_redis_1
```
There is only one web container now, and it can be scaled dynamically using docker-compose up --scale web=3 -d:
```
$ docker-compose up --scale web=3 -d
Starting myapp_web_1 ...
Starting myapp_web_1 ... done
Creating myapp_web_2 ... done
Creating myapp_web_3 ... done
myapp_proxy_1 is up-to-date
$ docker ps
CONTAINER ID   IMAGE                 COMMAND                  CREATED          STATUS         PORTS                                   NAMES
e3b4db8db92d   myapp_web             "python app.py"          7 seconds ago    Up 5 seconds   0.0.0.0:32796->5000/tcp                 myapp_web_2
f1ba850865e0   myapp_web             "python app.py"          7 seconds ago    Up 5 seconds   0.0.0.0:32795->5000/tcp                 myapp_web_3
3ce8fcd986ac   dockercloud/haproxy   "/sbin/tini -- docke…"   8 minutes ago    Up 6 minutes   443/tcp, 0.0.0.0:80->80/tcp, 1936/tcp   myapp_proxy_1
c73be3d3f795   myapp_web             "python app.py"          8 minutes ago    Up 6 minutes   0.0.0.0:32793->5000/tcp                 myapp_web_1
0fabf6c6e81f   redis:alpine          "docker-entrypoint.s…"   17 minutes ago   Up 6 minutes   6379/tcp                                myapp_redis_1
```
```
$ curl http://localhost
Served by <b>c73be3d3f795</b>, count: <b>20</b>
$ curl http://localhost
Served by <b>e3b4db8db92d</b>, count: <b>21</b>
$ curl http://localhost
Served by <b>f1ba850865e0</b>, count: <b>22</b>
```
The three requests were processed by three different web containers.
Note:

- The docker-compose scale web=2 command is deprecated; run docker-compose up --scale web=2 -d instead.
- In docker-compose.yml, the deploy configuration only takes effect in Swarm mode: deploy with its replicas works with the docker stack deploy command and is ignored by docker-compose up and docker-compose run. A sketch of such a section follows.
About running Compose on a Swarm cluster

It seems a bit involved; see the official Use Compose with Swarm. We need to use docker stack to deploy a Compose file. We won't do the actual operation for now, and it is unclear how common such combined scenarios are in real production environments. The key is to understand how the containers are distributed among the Swarm nodes after deployment: is the Compose application still kept as a whole, or are its containers extracted and deployed individually across the Swarm cluster?
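For what it's worth, a minimal sketch of the commands involved (assuming a Swarm has already been initialized with docker swarm init):

```sh
# Deploy the services in docker-compose.yml as a stack named "myapp"
$ docker stack deploy -c docker-compose.yml myapp

# Inspect how the stack's tasks are spread across the Swarm nodes
$ docker stack services myapp
$ docker stack ps myapp
```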
A picture in Deploy Docker Compose (v3) to Swarm (mode) Cluster answers this: Swarm extracts the containers from a Compose file and deploys each container across the Swarm cluster, rather than treating the Compose application as a whole.
Links:
- How to use Docker Compose to run complex multi container apps on your Raspberry Pi
- Docker Compose multi-container deployment (5)
- Introduction to Docker-Compose Installation and Use
Permalink: https://yanbin.blog/docker-compose-in-practice/, from Yanbin Blog (隔叶黄莺)
[Copyright notice] This article is licensed under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).