Docker Swarm In Action

Before diving into Kubernetes, I want to learn something about the earlier Docker Swarm, even though it is largely no longer used today. Swarm turns a group of Docker hosts into a cluster. The cluster has at least one Manager, plus zero or more Workers. The main idea is that a Docker service (container) is submitted to the Swarm cluster manager, which finds a suitable host in the cluster to run the container. Both Managers and Workers can run Docker containers, but only Managers can execute management commands.


The following two pictures show the difference between running Docker containers on a single host and on a Swarm cluster.

[Figures: run all containers on one host => run containers across a Swarm cluster]

Docker has shipped with Swarm mode since version 1.12. This article is essentially a walk-through of building a cluster with Docker's Swarm mode. We use three Vagrant virtual machines for this experiment: one Swarm Manager node and two Worker nodes.

  1. ubuntu-master: 172.28.128.6
  2. ubuntu-worker1: 172.28.128.7
  3. ubuntu-worker2: 172.28.128.8

The Vagrantfile used is:

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/disco64"
  config.vm.hostname = "ubuntu-master"    # workers: ubuntu-worker1, ubuntu-worker2
  config.vm.network "private_network", type: "dhcp"
end

Alternatively, we can configure all three virtual machines in a single Vagrantfile; see my previous blog post, Introduce Vagrant and common usages.
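As a rough sketch of that approach (assuming the same box and DHCP private network as above, with the three hostnames from the list), a single multi-machine Vagrantfile could look like:

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/disco64"
  # Define all three nodes in one file
  ["ubuntu-master", "ubuntu-worker1", "ubuntu-worker2"].each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", type: "dhcp"
    end
  end
end
```

With this layout, `vagrant up` brings up all three machines and `vagrant ssh ubuntu-worker1` enters a specific one.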

After vagrant up, enter each machine with vagrant ssh, switch to the root user with sudo su -, and install Docker with snap install docker or apt install docker.io; then the docker swarm command is available.

Create Swarm cluster

There must be at least one Manager node in a Swarm cluster

Create Manager node

Execute on the ubuntu-master machine
root@ubuntu-master:~# docker swarm init --advertise-addr 172.28.128.6
Swarm initialized: current node (sjfg7nljuyt54yapffvosrzmh) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3wnd3z2xp49j6vyf6nt8kiuz38bjgjbbs90kz8x48z6dhyr7rc-c2cwfzg2jzhi3qz12wicn89a0 172.28.128.6:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

--advertise-addr specifies the address Swarm nodes use to communicate with each other, so port 2377 on the master must be open. The output above also shows the command for joining workers to the Swarm cluster. At any time, the following two commands on the master print the complete commands for adding new workers or managers:
root@ubuntu-master:~# docker swarm join-token worker
......
root@ubuntu-master:~# docker swarm join-token manager
......

We can execute docker info and docker node ls to view information about the Swarm cluster.
root@ubuntu-master:~# docker info
......
Swarm: active
 NodeID: sjfg7nljuyt54yapffvosrzmh
 Is Manager: true
 ClusterID: r2cr7km655esw58olbc6gyncn
 Managers: 1
 Nodes: 1
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
......

root@ubuntu-master:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
sjfg7nljuyt54yapffvosrzmh *   ubuntu-master       Ready               Active              Leader              18.09.9

Join Worker node to Swarm cluster

Execute the same command on both ubuntu-worker1 and ubuntu-worker2 nodes
root@ubuntu-worker1:~# docker swarm join --token SWMTKN-1-3wnd3z2xp49j6vyf6nt8kiuz38bjgjbbs90kz8x48z6dhyr7rc-c2cwfzg2jzhi3qz12wicn89a0 172.28.128.6:2377
This node joined a swarm as a worker.

After that, go back to the manager node and run docker info and docker node ls to check:
root@ubuntu-master:~# docker info
......
Swarm: active
 NodeID: sjfg7nljuyt54yapffvosrzmh
 Is Manager: true
 ClusterID: r2cr7km655esw58olbc6gyncn
 Managers: 1
 Nodes: 3
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
......
root@ubuntu-master:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
sjfg7nljuyt54yapffvosrzmh *   ubuntu-master       Ready               Active              Leader              18.09.9
q82ndq11h1mie6lwmog5pfymu     ubuntu-worker1      Ready               Active                                  18.09.7
xuzc43qrrdirkmabivdzg8ruc     ubuntu-worker2      Ready               Active                                  18.09.9
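If you ever want to script a quick health check on top of this tabular output, standard text tools are enough. A minimal sketch, using a heredoc copy of the sample output above in place of a live docker node ls call:

```shell
#!/usr/bin/env bash
# Count nodes reporting Ready. The heredoc mimics the `docker node ls`
# output shown above; in a real script, pipe the live command instead.
node_ls_output() {
cat <<'EOF'
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
sjfg7nljuyt54yapffvosrzmh *   ubuntu-master       Ready               Active              Leader              18.09.9
q82ndq11h1mie6lwmog5pfymu     ubuntu-worker1      Ready               Active                                  18.09.7
xuzc43qrrdirkmabivdzg8ruc     ubuntu-worker2      Ready               Active                                  18.09.9
EOF
}

# Skip the header line, count rows containing "Ready"
ready=$(node_ls_output | awk 'NR > 1 && /Ready/ { n++ } END { print n + 0 }')
echo "Ready nodes: $ready"
```

Running it prints `Ready nodes: 3` for the cluster above.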

On a worker node, docker info shows slightly different information, and the docker node command cannot be executed on worker nodes.

Deploy the service in the Swarm cluster

Now that we have a Swarm cluster consisting of one manager and two workers, we can start running Docker containers (services) in this cluster.

Unlike starting a container with docker run, to deploy services to Swarm cluster nodes we must run the docker service command on a manager node.
root@ubuntu-master:~# docker service create --replicas 2 --name helloworld alpine ping docker.com
15aptyokhcp42qa47rfmhgx82
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
  1. --replicas specifies how many instances of the service to start
  2. --name, same as --name in docker run, specifies the service name
  3. alpine is the Docker image name
  4. ping docker.com is the command executed by the service; since ping on Linux never stops, we can observe the running status later

View service information in Swarm

docker service ls views the service list:
root@ubuntu-master:~# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
15aptyokhcp4        helloworld          replicated          2/2                 alpine:latest

Run docker service inspect to view information about a service, similar to how docker inspect views information about a container.
root@ubuntu-master:~# docker service inspect --pretty helloworld

ID:             15aptyokhcp42qa47rfmhgx82
Name:           helloworld
Service Mode:   Replicated
 Replicas:      2
Placement:
UpdateConfig:
 Parallelism:   1
......
ContainerSpec:
 Image:         alpine:latest@sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
 Args:          ping docker.com
 Init:          false

Similar to docker ps, the docker service ps <SERVICE-ID> command lists the service's tasks, including which node each container is located on.
root@ubuntu-master:~# docker service ps 15
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
ay8qybtmtpqo        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 6 minutes ago
ou1gltvexpkc        helloworld.2        alpine:latest       ubuntu-master       Running             Running 6 minutes ago

Notice that we specified --replicas 2 to start two copies of the service, so one container is running on ubuntu-worker1 and another on ubuntu-master. We can also verify this on each node with docker ps.
root@ubuntu-master:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
16eb4dbfc88b        alpine:latest       "ping docker.com"   7 minutes ago       Up 7 minutes                            helloworld.2.ou1gltvexpkcbna6rttbjbosl
root@ubuntu-worker1:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
5b4367423853        alpine:latest       "ping docker.com"   7 minutes ago       Up 7 minutes                            helloworld.1.ay8qybtmtpqotpokauyde7ob0
root@ubuntu-worker2:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

By deploying a service, Docker Swarm distributes the containers across the nodes for us, so we don't need to pick a specific node ourselves and run a command like docker run --name helloworld.x alpine ping docker.com on it.

Dynamic scaling of services in the Swarm cluster

The advantage of having a cluster is that we can dynamically scale the services in it. Scaling beyond the number of cluster nodes is not a problem either; it simply means running multiple tasks on one node.

root@ubuntu-master:~# docker service scale helloworld=5
helloworld scaled to 5
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged

helloworld=5 here is equivalent to specifying --replicas 5 when creating the service.

Again, look at the list of service node assignments
root@ubuntu-master:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
ay8qybtmtpqo        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 14 minutes ago
ou1gltvexpkc        helloworld.2        alpine:latest       ubuntu-master       Running             Running 14 minutes ago
8xhmp5ehboo6        helloworld.3        alpine:latest       ubuntu-worker2      Running             Running 47 seconds ago
kagiwq7sh105        helloworld.4        alpine:latest       ubuntu-worker2      Running             Running 47 seconds ago
ijmgo4udbe1d        helloworld.5        alpine:latest       ubuntu-worker1      Running             Running 47 seconds ago
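Swarm's default "spread" strategy tries to balance tasks evenly across nodes. As a toy illustration (this is not the real scheduler, and which nodes receive the extra tasks may differ), simple round-robin assignment of 5 replicas over 3 nodes yields the same 2/2/1 split:

```shell
#!/usr/bin/env bash
# Toy round-robin placement of replicas over nodes, showing why scaling
# to 5 on a 3-node cluster leaves two nodes running two tasks each.
nodes=(ubuntu-master ubuntu-worker1 ubuntu-worker2)
replicas=5
for ((i = 0; i < replicas; i++)); do
  node=${nodes[i % ${#nodes[@]}]}
  echo "helloworld.$((i + 1)) -> $node"
done
```

The output assigns two tasks each to the first two nodes and one to the third; the real scheduler additionally weighs node availability and running task counts.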

Two of the nodes are running two tasks each. Let's try decreasing the number:
root@ubuntu-master:~# docker service scale helloworld=1
helloworld scaled to 1
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

Look at the list of services again
root@ubuntu-master:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
ay8qybtmtpqo        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 17 minutes ago

Every time we change the scale with docker service scale helloworld=<NUM>, we can go to a particular node and run docker ps to check the containers running there.

By now we have experienced some of the benefits of Swarm clusters, but real application scenarios look somewhat different. In practice, nodes are added and removed dynamically: a manager node starts first, newly started nodes automatically join as workers, and nodes that die are automatically removed from the Swarm cluster.

Note: a Swarm cluster can have more than one manager node.

The following sections cover managing services in the Swarm cluster.

Delete services in the Swarm cluster

Run docker service rm helloworld to delete a service.

Update the services in the Swarm cluster

For example, to update the image version used by a service, the basic command is docker service update --image alpine:3.10 helloworld. The following is a complete demonstration of updating the image from alpine:latest to alpine:3.10:
root@ubuntu-master:~# docker service create --replicas 3 --name helloworld alpine ping docker.com
pz8ifs41o90fren4hc6dgc1gz
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
root@ubuntu-master:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
qhmkkife291l        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running about a minute ago
ia6bpvky5fdd        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running about a minute ago
i6ux7j6jpwqi        helloworld.3        alpine:latest       ubuntu-master       Running             Running about a minute ago
root@ubuntu-master:~# docker service update --image alpine:3.10 helloworld
helloworld
overall progress: 3 out of 3 tasks                                     // here we can watch the rolling update in progress
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
root@ubuntu-master:~# docker service ps helloworld                     // shows the update from latest to 3.10
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
lgqlxpkdjyl8        helloworld.1        alpine:3.10         ubuntu-worker1      Running             Running 14 seconds ago
qhmkkife291l         \_ helloworld.1    alpine:latest       ubuntu-worker1      Shutdown            Shutdown 15 seconds ago
5qj7hf6dm75x        helloworld.2        alpine:3.10         ubuntu-worker2      Running             Running 26 seconds ago
ia6bpvky5fdd         \_ helloworld.2    alpine:latest       ubuntu-worker2      Shutdown            Shutdown 27 seconds ago
p1aibrthhs4k        helloworld.3        alpine:3.10         ubuntu-master       Running             Running 38 seconds ago
i6ux7j6jpwqi         \_ helloworld.3    alpine:latest       ubuntu-master       Shutdown            Shutdown 39 seconds ago

To avoid interrupting the service as much as possible during an update, Swarm stops, updates, and restarts the tasks one by one.

This is Docker Swarm's rolling update strategy for service deployment.

Offline and online nodes in Swarm

In a Swarm cluster running a service, we can take a node offline (drain it), and the tasks on it will be moved to other available nodes. A drained node can be brought back online, but its previous tasks will not be moved back; it will only receive tasks again when the service is scaled or new services are created. See the full demo below:
root@ubuntu-master:~# docker service create --replicas 3 --name helloworld alpine ping docker.com
zn19c80zdpehq40dwz25lgton
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
root@ubuntu-master:~# docker service ps helloworld          // each node runs one container
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
r36yh4yfyrjj        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 13 seconds ago
2fyzydfs2n8w        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running 13 seconds ago
smqv1mfh1lba        helloworld.3        alpine:latest       ubuntu-master       Running             Running 13 seconds ago
root@ubuntu-master:~# docker node update --availability drain ubuntu-worker1
ubuntu-worker1
root@ubuntu-master:~# docker service ps helloworld          // ubuntu-worker1 offline, its container moved to ubuntu-master
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
5z10pj4z9ldq        helloworld.1        alpine:latest       ubuntu-master       Running             Running 1 second ago
r36yh4yfyrjj         \_ helloworld.1    alpine:latest       ubuntu-worker1      Shutdown            Shutdown 2 seconds ago
2fyzydfs2n8w        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running 50 seconds ago
smqv1mfh1lba        helloworld.3        alpine:latest       ubuntu-master       Running             Running 50 seconds ago
root@ubuntu-master:~# docker node update --availability active ubuntu-worker1
ubuntu-worker1
root@ubuntu-master:~# docker service ps helloworld         // ubuntu-worker1 online again, task assignment unchanged
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
5z10pj4z9ldq        helloworld.1        alpine:latest       ubuntu-master       Running             Running 56 seconds ago
r36yh4yfyrjj         \_ helloworld.1    alpine:latest       ubuntu-worker1      Shutdown            Shutdown 56 seconds ago
2fyzydfs2n8w        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running about a minute ago
smqv1mfh1lba        helloworld.3        alpine:latest       ubuntu-master       Running             Running about a minute ago

Manage Swarm cluster

Finally, here are several commands for managing the Swarm cluster itself.

Worker leaves the Swarm cluster
root@ubuntu-worker1:~# docker swarm leave
Node left the swarm.

After leaving, run docker node ls on the manager to see that the node's status is now Down:
root@ubuntu-master:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
sjfg7nljuyt54yapffvosrzmh *   ubuntu-master       Ready               Active              Leader              18.09.9
q82ndq11h1mie6lwmog5pfymu     ubuntu-worker1      Down                Active                                  18.09.7
xuzc43qrrdirkmabivdzg8ruc     ubuntu-worker2      Down                Active                                  18.09.9

Notice that after workers leave the Swarm cluster, Swarm re-assigns (rebalances) the containers according to --replicas=<NUM>. If all workers leave, all services are deployed on the manager node. To also remove a node from the docker node ls list, we run:

root@ubuntu-master:~# docker node rm ubuntu-worker1
ubuntu-worker1
root@ubuntu-master:~# docker node rm --force ubuntu-worker2
ubuntu-worker2

Add the --force parameter if the node cannot be deleted otherwise.

Finally, if all manager nodes leave, the whole Swarm cluster is destroyed.

root@ubuntu-master:~# docker swarm leave --force
Node left the swarm.
root@ubuntu-master:~# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.

The Swarm cluster network has also been deleted.

That covers the basics of Docker Swarm. If there is a problem creating the Swarm cluster, check that, besides port 2377 mentioned earlier, the following two ports are not blocked by a firewall:
  1. Port 7946 (TCP/UDP), used for communication between cluster nodes
  2. Port 4789 (UDP), used for overlay network traffic
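On Ubuntu with ufw, for example, opening these ports on every node might look like the following sketch (assuming ufw is installed and enabled; adapt to your firewall of choice):

```shell
# Sketch: open Swarm's ports with ufw (run as root on each node).
ufw allow 2377/tcp   # cluster management traffic (manager nodes)
ufw allow 7946/tcp   # node-to-node communication
ufw allow 7946/udp
ufw allow 4789/udp   # overlay network (VXLAN) traffic
```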

Next, I will move on to Docker Compose, which can even be deployed to a Swarm cluster. The ultimate goal is to understand Kubernetes.

Permalink: https://yanbin.blog/english-docker-swarm-in-action/, from Yanbin's Blog
[Copyright notice] This article is licensed under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).