Before officially diving into Kubernetes, I want to learn something about the earlier Docker Swarm, even though it is rarely used nowadays. Swarm turns a group of Docker hosts into a cluster. A cluster has at least one Manager (possibly more), plus zero or more Workers. The main idea is that a Docker service (container) is submitted to the Swarm cluster manager, which finds suitable hosts in the cluster to run the containers. Both Managers and Workers can run Docker containers, but only Managers can execute management commands.
The following two pictures show the difference between running Docker containers on a single host and on a Swarm cluster:

*(figure: run all containers on one host => run containers together across the Swarm cluster)*
Docker has shipped with Swarm mode since version 1.12. This article is essentially a walkthrough of using Docker's Swarm mode to build a cluster. We use three Vagrant virtual machines for this experiment: one Swarm Manager node and two Worker nodes.
- ubuntu-master: 172.28.128.6
- ubuntu-worker1: 172.28.128.7
- ubuntu-worker2: 172.28.128.8
The Vagrantfile used is
```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/disco64"
  config.vm.hostname = "ubuntu-master"    # workers: ubuntu-worker1, ubuntu-worker2
  config.vm.network "private_network", type: "dhcp"
end
```
Alternatively, we can configure all three virtual machines in one single Vagrantfile; see my previous blog post Introduce Vagrant and common usages.
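As a sketch of what that single-file setup could look like, here is a hypothetical multi-machine Vagrantfile (the box and hostnames follow this article's setup; adjust to taste):

```ruby
# Hypothetical single-file, multi-machine configuration for the three nodes.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/disco64"

  ["ubuntu-master", "ubuntu-worker1", "ubuntu-worker2"].each do |name|
    # Each `define` block creates one VM; `vagrant up` brings up all three.
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.network "private_network", type: "dhcp"
    end
  end
end
```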
After vagrant up, enter each machine with vagrant ssh, switch to the root user with sudo su -, and install Docker with snap install docker or apt install docker.io; then the docker swarm command is available.
Create Swarm cluster
There must be at least one Manager node in a Swarm cluster
Create Manager node
Execute on the ubuntu-master machine
```
root@ubuntu-master:~# docker swarm init --advertise-addr 172.28.128.6
Swarm initialized: current node (sjfg7nljuyt54yapffvosrzmh) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-3wnd3z2xp49j6vyf6nt8kiuz38bjgjbbs90kz8x48z6dhyr7rc-c2cwfzg2jzhi3qz12wicn89a0 172.28.128.6:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
--advertise-addr specifies the address other Swarm nodes use to communicate with this manager, so port 2377 on the master must be open. The output above also shows the command for adding workers to the Swarm cluster. You can re-print the complete join commands for workers or new managers at any time with the following two commands on the master:
```
root@ubuntu-master:~# docker swarm join-token worker
......
root@ubuntu-master:~# docker swarm join-token manager
......
```
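Since the join token can be printed at any time, the worker joins can also be scripted from the manager. A hypothetical sketch (it assumes passwordless SSH from the manager to the workers, which is not set up in this article):

```shell
# -q prints only the token, which makes the join scriptable.
TOKEN=$(docker swarm join-token -q worker)
for node in ubuntu-worker1 ubuntu-worker2; do
  ssh root@"$node" "docker swarm join --token $TOKEN 172.28.128.6:2377"
done
```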
We can execute docker info and docker node ls to view information about the Swarm cluster:
```
root@ubuntu-master:~# docker info
......
Swarm: active
 NodeID: sjfg7nljuyt54yapffvosrzmh
 Is Manager: true
 ClusterID: r2cr7km655esw58olbc6gyncn
 Managers: 1
 Nodes: 1
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
......
root@ubuntu-master:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
sjfg7nljuyt54yapffvosrzmh *   ubuntu-master       Ready               Active              Leader              18.09.9
```
Join Worker node to Swarm cluster
Execute the same command on both ubuntu-worker1 and ubuntu-worker2 nodes
```
root@ubuntu-worker1:~# docker swarm join --token SWMTKN-1-3wnd3z2xp49j6vyf6nt8kiuz38bjgjbbs90kz8x48z6dhyr7rc-c2cwfzg2jzhi3qz12wicn89a0 172.28.128.6:2377
This node joined a swarm as a worker.
```
After that, go back to the manager node and run docker info and docker node ls to check:
```
root@ubuntu-master:~# docker info
......
Swarm: active
 NodeID: sjfg7nljuyt54yapffvosrzmh
 Is Manager: true
 ClusterID: r2cr7km655esw58olbc6gyncn
 Managers: 1
 Nodes: 3
 Default Address Pool: 10.0.0.0/8
 SubnetSize: 24
 Orchestration:
  Task History Retention Limit: 5
......
root@ubuntu-master:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
sjfg7nljuyt54yapffvosrzmh *   ubuntu-master       Ready               Active              Leader              18.09.9
q82ndq11h1mie6lwmog5pfymu     ubuntu-worker1      Ready               Active                                  18.09.7
xuzc43qrrdirkmabivdzg8ruc     ubuntu-worker2      Ready               Active                                  18.09.9
```
On a worker node, docker info shows slightly different information, and the docker node commands cannot be executed on worker nodes at all.
Deploy the service in the Swarm cluster
Now that we have a Swarm cluster consisting of one manager and two workers, we can start to run our Docker containers (services) in this cluster.
Unlike starting a container with docker run, to deploy services to Swarm cluster nodes we must run the docker service command on a manager node:
```
root@ubuntu-master:~# docker service create --replicas 2 --name helloworld alpine ping docker.com
15aptyokhcp42qa47rfmhgx82
overall progress: 2 out of 2 tasks
1/2: running   [==================================================>]
2/2: running   [==================================================>]
verify: Service converged
```
- `--replicas` specifies how many instances the service starts with
- `--name` specifies the service name, like `--name` in `docker run`
- `alpine` is the Docker image name
- `ping docker.com` is the command the service executes; since `ping` on Linux never stops, we can observe the running state later
View service information in Swarm
docker service ls shows the list of services:
```
root@ubuntu-master:~# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
15aptyokhcp4        helloworld          replicated          2/2                 alpine:latest
```
Run docker service inspect to view the service details, just as docker inspect shows the details of a container:
```
root@ubuntu-master:~# docker service inspect --pretty helloworld

ID:             15aptyokhcp42qa47rfmhgx82
Name:           helloworld
Service Mode:   Replicated
 Replicas:      2
Placement:
UpdateConfig:
 Parallelism:   1
......
ContainerSpec:
 Image:         alpine:latest@sha256:ab00606a42621fb68f2ed6ad3c88be54397f981a7b70a79db3d1172b11c4367d
 Args:          ping docker.com
 Init:          false
```
Similar to docker ps, the docker service ps <SERVICE-ID> command lists the service's tasks, including which node each container is running on:
```
root@ubuntu-master:~# docker service ps 15
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
ay8qybtmtpqo        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 6 minutes ago
ou1gltvexpkc        helloworld.2        alpine:latest       ubuntu-master       Running             Running 6 minutes ago
```
Notice that since we specified --replicas 2 to start two instances of the service, one container runs on ubuntu-worker1 and another on ubuntu-master. We can also verify this on each node with docker ps:
```
root@ubuntu-master:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
16eb4dbfc88b        alpine:latest       "ping docker.com"   7 minutes ago       Up 7 minutes                            helloworld.2.ou1gltvexpkcbna6rttbjbosl
```
```
root@ubuntu-worker1:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
5b4367423853        alpine:latest       "ping docker.com"   7 minutes ago       Up 7 minutes                            helloworld.1.ay8qybtmtpqotpokauyde7ob0
```
```
root@ubuntu-worker2:~# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
```
Docker Swarm distributes containers to individual nodes for us when we deploy a service, so we don't need to pick a specific node and run a command like docker run --name helloworld.x alpine ping docker.com ourselves.
Dynamic scaling of services in the Swarm cluster
The advantage of having a cluster is that we can dynamically scale the services in it. It's not a problem if the scale exceeds the number of cluster nodes; a node simply runs multiple tasks of the service.
```
root@ubuntu-master:~# docker service scale helloworld=5
helloworld scaled to 5
overall progress: 5 out of 5 tasks
1/5: running   [==================================================>]
2/5: running   [==================================================>]
3/5: running   [==================================================>]
4/5: running   [==================================================>]
5/5: running   [==================================================>]
verify: Service converged
```
docker service scale helloworld=5 is equivalent to specifying --replicas 5 when creating the service.
Again, look at the list of service node assignments
```
root@ubuntu-master:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
ay8qybtmtpqo        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 14 minutes ago
ou1gltvexpkc        helloworld.2        alpine:latest       ubuntu-master       Running             Running 14 minutes ago
8xhmp5ehboo6        helloworld.3        alpine:latest       ubuntu-worker2      Running             Running 47 seconds ago
kagiwq7sh105        helloworld.4        alpine:latest       ubuntu-worker2      Running             Running 47 seconds ago
ijmgo4udbe1d        helloworld.5        alpine:latest       ubuntu-worker1      Running             Running 47 seconds ago
```
Two of the nodes are running two tasks each. Let's try decreasing the number:
```
root@ubuntu-master:~# docker service scale helloworld=1
helloworld scaled to 1
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged
```
Look at the list of services again
```
root@ubuntu-master:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
ay8qybtmtpqo        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 17 minutes ago
```
Every time we change the scale with docker service scale helloworld=<NUM>, we can go to a particular node and run docker ps to check the containers running there.
By now we have truly experienced some of the benefits of Swarm clusters, but real application scenarios are a bit different: in practice, nodes are added and removed dynamically. A manager node starts first, a newly started node automatically joins as a worker, and a node that dies is automatically removed from the Swarm cluster.
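One way to approximate that "workers join automatically" behaviour is a small first-boot script on each new node. This is a hypothetical sketch: MANAGER_ADDR would come from your provisioning system, and it assumes the node can SSH to the already-running manager to fetch the token.

```shell
#!/bin/sh
# Hypothetical first-boot script for a freshly provisioned node:
# fetch the worker join token from the manager and join the Swarm.
MANAGER_ADDR=172.28.128.6
TOKEN=$(ssh root@"$MANAGER_ADDR" docker swarm join-token -q worker)
docker swarm join --token "$TOKEN" "$MANAGER_ADDR:2377"
```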
Note: a Swarm cluster can have more than one manager node.
The following covers the management of services in the Swarm cluster.
Delete services in the Swarm cluster
Run docker service rm helloworld to delete a service.
Update the services in the Swarm cluster
For example, to update the image version used by the service, the basic command is docker service update --image alpine:3.10 helloworld. The following is a complete demonstration of updating the image from alpine:latest to alpine:3.10:
```
root@ubuntu-master:~# docker service create --replicas 3 --name helloworld alpine ping docker.com
pz8ifs41o90fren4hc6dgc1gz
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
root@ubuntu-master:~# docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
qhmkkife291l        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running about a minute ago
ia6bpvky5fdd        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running about a minute ago
i6ux7j6jpwqi        helloworld.3        alpine:latest       ubuntu-master       Running             Running about a minute ago
root@ubuntu-master:~# docker service update --image alpine:3.10 helloworld
helloworld
overall progress: 3 out of 3 tasks                                  // here you can watch the rolling update progress
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
root@ubuntu-master:~# docker service ps helloworld                  // shows the update from latest to 3.10
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR               PORTS
lgqlxpkdjyl8        helloworld.1        alpine:3.10         ubuntu-worker1      Running             Running 14 seconds ago
qhmkkife291l         \_ helloworld.1    alpine:latest       ubuntu-worker1      Shutdown            Shutdown 15 seconds ago
5qj7hf6dm75x        helloworld.2        alpine:3.10         ubuntu-worker2      Running             Running 26 seconds ago
ia6bpvky5fdd         \_ helloworld.2    alpine:latest       ubuntu-worker2      Shutdown            Shutdown 27 seconds ago
p1aibrthhs4k        helloworld.3        alpine:3.10         ubuntu-master       Running             Running 38 seconds ago
i6ux7j6jpwqi         \_ helloworld.3    alpine:latest       ubuntu-master       Shutdown            Shutdown 39 seconds ago
```
To avoid interrupting the service as much as possible during the update, Swarm stops, updates, and restarts the tasks one by one.
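This rolling behaviour can also be tuned explicitly: both docker service create and docker service update accept --update-parallelism and --update-delay. For example:

```shell
# Roll out a new image one task at a time (the default parallelism),
# pausing 10 seconds between tasks so most replicas keep running
# throughout the update.
docker service update \
  --update-parallelism 1 \
  --update-delay 10s \
  --image alpine:3.10 \
  helloworld
```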
This is Docker Swarm's rolling-update deployment strategy.
Offline and online nodes in Swarm
For a Swarm cluster running a service, we can take a node offline (drain it), and the tasks on it will be moved to other available nodes. A drained node can go online again, but its previous tasks will not be moved back; it only receives new tasks when services are scaled up or created. See the full demo below:
```
root@ubuntu-master:~# docker service create --replicas 3 --name helloworld alpine ping docker.com
zn19c80zdpehq40dwz25lgton
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged
root@ubuntu-master:~# docker service ps helloworld          // each node runs one container
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
r36yh4yfyrjj        helloworld.1        alpine:latest       ubuntu-worker1      Running             Running 13 seconds ago
2fyzydfs2n8w        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running 13 seconds ago
smqv1mfh1lba        helloworld.3        alpine:latest       ubuntu-master       Running             Running 13 seconds ago
root@ubuntu-master:~# docker node update --availability drain ubuntu-worker1
ubuntu-worker1
root@ubuntu-master:~# docker service ps helloworld          // ubuntu-worker1 offline, its container moved to the ubuntu-master node
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
5z10pj4z9ldq        helloworld.1        alpine:latest       ubuntu-master       Running             Running 1 second ago
r36yh4yfyrjj         \_ helloworld.1    alpine:latest       ubuntu-worker1      Shutdown            Shutdown 2 seconds ago
2fyzydfs2n8w        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running 50 seconds ago
smqv1mfh1lba        helloworld.3        alpine:latest       ubuntu-master       Running             Running 50 seconds ago
root@ubuntu-master:~# docker node update --availability active ubuntu-worker1
ubuntu-worker1
root@ubuntu-master:~# docker service ps helloworld          // ubuntu-worker1 online again, task assignment unchanged
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
5z10pj4z9ldq        helloworld.1        alpine:latest       ubuntu-master       Running             Running 56 seconds ago
r36yh4yfyrjj         \_ helloworld.1    alpine:latest       ubuntu-worker1      Shutdown            Shutdown 56 seconds ago
2fyzydfs2n8w        helloworld.2        alpine:latest       ubuntu-worker2      Running             Running about a minute ago
smqv1mfh1lba        helloworld.3        alpine:latest       ubuntu-master       Running             Running about a minute ago
```
Manage Swarm cluster
Finally, there are several commands for managing the Swarm cluster
Worker leaves the Swarm cluster
```
root@ubuntu-worker1:~# docker swarm leave
Node left the swarm.
```
After leaving, running docker node ls on the manager shows the node status as Down:
```
root@ubuntu-master:~# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
sjfg7nljuyt54yapffvosrzmh *   ubuntu-master       Ready               Active              Leader              18.09.9
q82ndq11h1mie6lwmog5pfymu     ubuntu-worker1      Down                Active                                  18.09.7
xuzc43qrrdirkmabivdzg8ruc     ubuntu-worker2      Down                Active                                  18.09.9
```
Scaling the service again with docker service scale helloworld=<NUM> triggers re-assignment (rebalancing) of the tasks. If all workers have left, all services will be deployed on the Manager node. To remove departed nodes from the docker node ls list:

```
root@ubuntu-master:~# docker node rm ubuntu-worker1
ubuntu-worker1
root@ubuntu-master:~# docker node rm --force ubuntu-worker2
ubuntu-worker2
```
Add the --force parameter if a node can't be deleted otherwise.
Finally, if all manager nodes leave, the whole Swarm cluster is dissolved.
```
root@ubuntu-master:~# docker swarm leave --force
Node left the swarm.
root@ubuntu-master:~# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
```
The Swarm cluster network has also been deleted.
That's the basic knowledge of Docker Swarm. If there is a problem creating the Swarm cluster, check that the following ports are not blocked by a firewall:

- Port 2377 (TCP), used for cluster management communication
- Port 7946 (TCP and UDP), used for communication between cluster nodes
- Port 4789 (UDP), used for overlay network traffic
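On Ubuntu with ufw enabled, opening the ports Swarm needs could look like this (a sketch; run as root on every node, and adjust to your firewall setup):

```shell
# Cluster management traffic (manager nodes)
ufw allow 2377/tcp
# Node-to-node communication
ufw allow 7946/tcp
ufw allow 7946/udp
# Overlay network (VXLAN) traffic
ufw allow 4789/udp
```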
Next, I will move on to Docker Compose; a Compose file can even be deployed to a Swarm cluster. The ultimate goal is still to understand Kubernetes.
Permalink: https://yanbin.blog/english-docker-swarm-in-action/, from Yanbin Blog (隔叶黄莺)
[Copyright notice] This article is licensed under Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).