Docker Swarm In Action

Before officially moving on to Kubernetes, I want to learn something about the earlier Docker Swarm, even though it is rarely used nowadays. Swarm turns a group of Docker hosts into a cluster. A cluster has at least one Manager (possibly more), plus zero or more Workers. The main idea is that a Docker service (container) is submitted to the Swarm cluster manager, which finds a suitable host in the cluster to run the container. Both Managers and Workers can run Docker containers, but only Managers can execute management commands.

The following two pictures show the difference between running Docker containers on a single host and on a Swarm cluster.

[Figures: run all containers on one host vs. run containers across the cluster]

Docker has come with Swarm mode since version 1.12, and this article is essentially about using Docker's Swarm mode to build a cluster. We use three Vagrant virtual machines for this experiment: one Swarm manager node and two worker nodes.

  1. ubuntu-master: 172.28.128.6
  2. ubuntu-worker1: 172.28.128.7
  3. ubuntu-worker2: 172.28.128.8

The Vagrantfile used for each machine is a simple one.
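
A minimal sketch of such a Vagrantfile for the master node (the box name and memory size are assumptions; the workers differ only in hostname and IP):

    # Vagrantfile for ubuntu-master; workers use their own hostname and IP
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/focal64"                        # assumed Ubuntu box
      config.vm.hostname = "ubuntu-master"
      config.vm.network "private_network", ip: "172.28.128.6"
      config.vm.provider "virtualbox" do |vb|
        vb.memory = 1024                                      # assumed memory size
      end
    end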

Alternatively, we can configure all three virtual machines in one single Vagrantfile; please see my previous blog post Introduce Vagrant and common usages.

After vagrant up, use vagrant ssh to enter the machine and sudo su - to switch to the root user, then install Docker with snap install docker or apt install docker.io. After that, the docker swarm command is available.
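
A typical session on each of the three machines, following the steps above:

    vagrant up                # bring the VM up
    vagrant ssh               # log into the VM
    sudo su -                 # switch to the root user
    snap install docker       # or: apt install docker.io
    docker swarm --help       # the swarm subcommand is now available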

Create Swarm cluster

There must be at least one Manager node in a Swarm cluster

Create Manager node

Execute on the ubuntu-master machine
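
The initialization command is roughly as follows; its output includes a ready-made docker swarm join command for workers:

    docker swarm init --advertise-addr 172.28.128.6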

--advertise-addr indicates how Swarm nodes communicate with each other, so port 2377 on the master must be open. The output of docker swarm init also shows the command for adding workers to the Swarm cluster. The following two commands on the master print the complete commands for adding a worker or a new manager at any time.
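
Those two commands are:

    docker swarm join-token worker     # print the full join command for a new worker
    docker swarm join-token manager    # print the join command for adding another manager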

We can execute docker info and docker node ls to view information about the Swarm cluster.

Join Worker node to Swarm cluster

Execute the same command on both ubuntu-worker1 and ubuntu-worker2 nodes
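
The join command is the one printed by the master; the token below is only a placeholder:

    docker swarm join --token SWMTKN-1-<token-from-the-master> 172.28.128.6:2377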

After that, go back to the manager node and run docker info and docker node ls to check the cluster state.

On a worker node, docker info shows slightly different information, and the docker node command cannot be executed on worker nodes.

Deploy the service in the Swarm cluster

Now that we have a Swarm cluster consisting of one manager and two workers, we can start running our Docker containers (services) in this cluster.

Unlike starting a container with docker run, to deploy services to Swarm cluster nodes we must run the docker service command on a manager node. The options used are explained below, and the full command follows the list.

  1. --replicas specifies how many instances of the service to start
  2. --name same as --name in docker run; it specifies the service name
  3. alpine is the Docker image name
  4. ping docker.com is the command executed by the service; since ping on Linux never stops, we can observe the running status later
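
Putting the options together, the full command run on the manager node looks like this:

    docker service create --replicas=2 --name helloworld alpine ping docker.com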

View service information in Swarm

Run docker service ls to view the list of services.

Run docker service inspect to view service information, similar to how docker inspect views information about a container.

Similar to docker ps, the docker service ps <SERVICE-ID> command lists the service's tasks, including which node each container is located on.
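
To summarize, the inspection commands used here are:

    docker service ls                    # list services in the cluster
    docker service inspect helloworld    # service details (add --pretty for a readable format)
    docker service ps helloworld         # list the service's containers and the nodes they run on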

Notice that we specified --replicas=2 to start two copies of the service, so we can see one container running on ubuntu-worker1 and another running on ubuntu-master. We can also verify this on each node with docker ps.

Docker Swarm distributes containers to individual nodes for us when we deploy a service, so we don't need to pick a specific node and run a command like docker run --name helloworld.x alpine ping docker.com on it ourselves.

Dynamic scaling of services in the Swarm cluster

The advantage of having a cluster is that we can dynamically scale the services in it. It is not a problem if the scale exceeds the number of cluster nodes; that simply means one node runs multiple containers of the service.

Scaling with docker service scale helloworld=5 is equivalent to specifying --replicas=5 when creating the service.
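
Scaling the service up to five replicas:

    docker service scale helloworld=5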

Look again at the list of service-to-node assignments

Two of the nodes are now running two containers each. Let's try decreasing the number.
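
For example, scaling back down (the exact target number in the original demo is an assumption):

    docker service scale helloworld=3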

Look at the list of services again

Every time we make a change with docker service scale helloworld=<NUM>, we can go to a particular node and run docker ps to check the containers running on that node.

By now we have experienced some real benefits of a Swarm cluster, but actual application scenarios are somewhat different. In practice, nodes are added and removed dynamically: a manager node must start first, newly started nodes are automatically added as workers, and nodes are automatically removed from the Swarm cluster when they die.

Note: a Swarm cluster can have more than one manager node.

The following covers managing services in the Swarm cluster.

Delete services in the Swarm cluster

Run docker service rm helloworld to delete the service.

Update the services in the Swarm cluster

For example, to update the image version used by the service, the basic command is docker service update --image alpine:3.10 helloworld. The following is a complete demonstration of updating the image from alpine:latest to alpine:3.10.
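
A sketch of that demonstration, assuming the helloworld service is currently running alpine:latest:

    docker service update --image alpine:3.10 helloworld
    docker service ps helloworld    # old containers are shut down and new ones start with alpine:3.10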

To interrupt the service as little as possible during the update, Swarm stops, updates, and restarts the containers one by one.

This is the Docker Swarm service deployment strategy.

Offline and online nodes in Swarm

For a Swarm cluster running a service, we can take a node offline, and the service containers on it will be transferred to other available nodes. An offline node can go online again, but its previous containers will not be transferred back; a newly online node is only assigned containers when the service is scaled or new services are created. See the full demo below.
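
The offline/online switch is done with docker node update on the manager; taking ubuntu-worker1 as the example node:

    docker node update --availability drain ubuntu-worker1     # take the node offline; its containers move to other nodes
    docker node update --availability active ubuntu-worker1    # bring the node back online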

Manage Swarm cluster

Finally, there are several commands for managing the Swarm cluster

Worker leaves the Swarm cluster

After it leaves, run docker node ls on the manager and the node's status shows as Down.

Add the --force parameter if the node can't be deleted.
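
A sketch of the sequence, using ubuntu-worker1 as the example node:

    # On the worker that is leaving
    docker swarm leave

    # On the manager
    docker node ls                           # the departed node now shows status Down
    docker node rm ubuntu-worker1            # remove the stale node entry
    docker node rm --force ubuntu-worker1    # if the plain rm is refused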

Finally, once all manager nodes have left, the whole Swarm cluster is gone.
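
A manager leaves with the --force flag; when the last manager leaves, the cluster is gone:

    docker swarm leave --force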

The Swarm cluster network has also been deleted.

That's the basic knowledge of Docker Swarm. If there is a problem creating the Swarm cluster, please check that the following two ports are not blocked by a firewall (a ufw example follows the list):

  1. Port 7946, used for communication between cluster nodes
  2. Port 4789, used for overlay network traffic
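
For example, with ufw (whether ufw manages the firewall on your machines is an assumption); port 2377 for cluster management is included as well:

    ufw allow 2377/tcp     # cluster management (swarm init / join)
    ufw allow 7946/tcp     # node-to-node communication
    ufw allow 7946/udp     # node-to-node communication
    ufw allow 4789/udp     # overlay network (VXLAN) traffic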

Next, I will move on to Docker Compose; Compose files can even be deployed to a Swarm cluster. The ultimate goal is still to understand Kubernetes.

Permalink: https://yanbin.blog/english-docker-swarm-in-action/, from Yanbin Blog (隔叶黄莺)

[Copyright] This article is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
