One of the key updates that ships alongside Docker 1.6 is Docker Swarm 0.2.0. Docker Swarm addresses a fundamental limitation of Docker: containers can only run on a single Docker host. Docker Swarm is native clustering for Docker; it turns a pool of Docker hosts into a single, virtual host.
This Tech Tip will show how to create a cluster across multiple hosts with Docker Swarm.
A good introduction to Docker Swarm is the Container Camp talk by @aluzzardi and @vieux:
Key Components of Docker Swarm
Swarm Manager: Docker Swarm has a Master or Manager, a pre-defined Docker host that is the single point for all administration. Currently only a single manager instance is allowed in the cluster. This is a SPOF for high-availability architectures; additional managers will be allowed in a future version of Swarm with #598.
Swarm Nodes: Containers are deployed on nodes, which are additional Docker hosts. Each Swarm node must be accessible by the manager, and each node must listen on the same network interface (TCP port). Each node runs a node agent that registers the node's Docker daemon, monitors it, and updates the discovery backend with the node's status. The containers run on a node.
Scheduler Strategy: Different scheduler strategies (binpack, spread, and random) can be applied to pick the best node to run your container. The default strategy is spread, which favors the node with the least number of running containers. There are also multiple kinds of filters, such as constraint and affinity filters. Together these allow for a decent scheduling algorithm (see the sketch after this list).
Node Discovery Service: By default, Swarm uses a hosted discovery service, based on Docker Hub, that uses tokens to discover the nodes that are part of a cluster. However, etcd, consul, and zookeeper can also be used for service discovery. This is particularly useful if there is no access to the Internet, or if you are running the setup in a closed network. A new discovery backend can be created as explained here. It would be useful to have the hosted discovery service available inside the firewall; #660 will discuss this.
Standard Docker API: Docker Swarm serves the standard Docker API, so any tool that talks to a single Docker host will seamlessly scale to multiple hosts. That means if you have shell scripts that use the Docker CLI to configure multiple Docker hosts, the same CLI can now talk to the Swarm cluster, and Docker Swarm will act as a proxy and run the commands on the cluster.
There are lots of other concepts but these are the main ones.
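To make these moving parts concrete, here is a minimal, hand-rolled sketch of the same components using the swarm image directly. The IP addresses, ports, the consul URL, the node name, and the nginx image are placeholder assumptions; the Docker Machine workflow used in the rest of this post wires all of this up for you:

# Generate a cluster id (token) via the hosted discovery service on Docker Hub
TOKEN=$(docker run swarm create)

# On the pre-defined manager host: start the Swarm manager.
# --strategy selects the scheduler strategy (spread is the default;
# binpack and random are the alternatives). The Swarm API is then
# reachable on port 3376 of the manager host.
docker run -d -p 3376:2375 swarm manage -H tcp://0.0.0.0:2375 --strategy spread token://$TOKEN

# On each node: start the node agent so it registers the local Docker daemon
# with the discovery backend (replace the address with the node's own IP:port)
docker run -d swarm join --addr=192.168.99.101:2375 token://$TOKEN

# Instead of token://, another discovery backend can be used,
# e.g. consul://192.168.99.100:8500/swarm for a closed network.

# With the Docker client pointed at the manager, filters can steer scheduling,
# e.g. a constraint that pins a container to a named node:
docker -H tcp://192.168.99.100:3376 run -d -e constraint:node==swarm-node-01 nginx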
TL;DR Here is a simple script that will create a boilerplate cluster with a master and two nodes:
echo "Creating cluster ..."
TOKEN=`docker run swarm create`
echo "Got the token " $TOKEN
echo "Creating Swarm master ..."
docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://$TOKEN swarm-master
echo "Creating Swarm node 01 ..."
docker-machine create -d virtualbox --swarm --swarm-discovery token://$TOKEN swarm-node-01
echo "Creating Swarm node 02 ..."
docker-machine create -d virtualbox --swarm --swarm-discovery token://$TOKEN swarm-node-02
Let's dig into the details now!
Create Swarm Cluster
Create a Swarm cluster as:
~> docker run swarm create
Unable to find image 'swarm:latest' locally
latest: Pulling from swarm
dc2cace3cb9c: Pull complete
dc2cace3cb9c: Download complete
132dac22a0c2: Download complete
c578e06f7812: Download complete
1dfbc304fc7f: Download complete
9b5a856b703d: Download complete
282cd8d4f06e: Download complete
96b8c18d1208: Download complete
511136ea3c5a: Download complete
Status: Downloaded newer image for swarm:latest
117c8d19ba140d7ba3259aae9012e22f
This command returns a token, which is the unique cluster id. It will be used when creating the master and nodes later. As mentioned earlier, this cluster id is returned by the hosted discovery service on Docker Hub.
Make sure to note this cluster id now, as there is no means to list it later. #661 should fix this.
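Since the token cannot be listed later, one simple workaround (a sketch only; the file name is just an example) is to capture it in a shell variable and save a copy at creation time:

~> TOKEN=$(docker run swarm create)
~> echo $TOKEN | tee ~/swarm-cluster-id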
Create Swarm Master
Swarm is fully integrated with Docker Machine, which is the easiest way to get started on OS X.
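As a quick sanity check before creating any machines, verify that the client-side tooling is installed (any versions recent enough for Docker 1.6 and Swarm 0.2.0 should do):

~> docker --version
~> docker-machine --version
~> VBoxManage --version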
- Create Swarm master as:
~> docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://117c8d19ba140d7ba3259aae9012e22f swarm-master
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0005] Starting VirtualBox VM...
INFO[0006] Waiting for VM to start...
INFO[0060] "swarm-master" has been created and is now the active machine.
INFO[0060] To point your Docker client at it, run this in your shell: eval "$(docker-machine env swarm-master)"

--swarm configures the machine with Swarm, and --swarm-master configures the created machine to be the Swarm master. Make sure to replace the cluster id after token:// with the one obtained in the previous step. Swarm master creation talks to the hosted service on Docker Hub and records that a master has been created in the cluster. There should be an option to make an existing machine the Swarm master; this is reported as #1017.
- List all the running machines as:
~> docker-machine ls
NAME           ACTIVE   DRIVER       STATE     URL                         SWARM
swarm-master   *        virtualbox   Running   tcp://192.168.99.108:2376   swarm-master (master)
Notice how swarm-master is marked as master. It seems the cluster name is derived from the master's name; there should be an option to specify the cluster name, likely during cluster creation. This is reported as #1018.
- Connect to this newly created master and find some more information about it:
~> eval "$(docker-machine env swarm-master)"
~> docker info
Containers: 2
Images: 8
Storage Driver: aufs
 Root Dir: /mnt/sda1/var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 12
 Dirperm1 Supported: true
Execution Driver: native-0.2
Kernel Version: 3.18.11-tinycore64
Operating System: Boot2Docker 1.6.0 (TCL 5.4); master : a270c71 - Thu Apr 16 19:50:36 UTC 2015
CPUs: 8
Total Memory: 999.4 MiB
Name: swarm-master
ID: UAEO:HLG6:2XOF:QQH7:GTGW:XW6K:ZILW:RY57:JSEY:2PHI:4OHE:QMVW
Debug mode (server): true
Debug mode (client): false
Fds: 25
Goroutines: 38
System Time: Thu Apr 23 02:15:55 UTC 2015
EventsListeners: 1
Init SHA1: 9145575052383dbf64cede3bac278606472e027c
Init Path: /usr/local/bin/docker
Docker Root Dir: /mnt/sda1/var/lib/docker
Username: arungupta
Registry: [https://index.docker.io/v1/]
Labels:
 provider=virtualbox
Create Swarm Nodes
- Create a Swarm node as:
~> docker-machine create -d virtualbox --swarm --swarm-discovery token://117c8d19ba140d7ba3259aae9012e22f swarm-node-01
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0006] Starting VirtualBox VM...
INFO[0006] Waiting for VM to start...
INFO[0070] "swarm-node-01" has been created and is now the active machine.
INFO[0070] To point your Docker client at it, run this in your shell: eval "$(docker-machine env swarm-node-01)"
Once again, node creation talks to the hosted service at Docker Hub and joins the previously created cluster. This is specified by passing --swarm-discovery token://... with the cluster id obtained earlier.
- Create another Swarm node as:
~> docker-machine create -d virtualbox --swarm --swarm-discovery token://117c8d19ba140d7ba3259aae9012e22f swarm-node-02
INFO[0000] Creating SSH key...
INFO[0000] Creating VirtualBox VM...
INFO[0006] Starting VirtualBox VM...
INFO[0006] Waiting for VM to start...
INFO[0061] "swarm-node-02" has been created and is now the active machine.
INFO[0061] To point your Docker client at it, run this in your shell: eval "$(docker-machine env swarm-node-02)"
- List all the existing Docker machines:
~> docker-machine ls
NAME            ACTIVE   DRIVER       STATE     URL                         SWARM
mydocker                 virtualbox   Running   tcp://192.168.99.107:2376
swarm-master             virtualbox   Running   tcp://192.168.99.108:2376   swarm-master (master)
swarm-node-01            virtualbox   Running   tcp://192.168.99.109:2376   swarm-master
swarm-node-02   *        virtualbox   Running   tcp://192.168.99.110:2376   swarm-master
The machines that are part of the cluster have the cluster's name in the SWARM column; it is blank otherwise. For example, mydocker is a standalone machine, whereas all the other machines are part of the swarm-master cluster. The Swarm master is also identified by (master) in the SWARM column.
- Connect to the Swarm cluster and find some information about it:
~> eval "$(docker-machine env --swarm swarm-master)"
~> docker info
Containers: 4
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3
 swarm-master: 192.168.99.108:2376
  └ Containers: 2
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 1.025 GiB
 swarm-node-01: 192.168.99.109:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 1.025 GiB
 swarm-node-02: 192.168.99.110:2376
  └ Containers: 1
  └ Reserved CPUs: 0 / 8
  └ Reserved Memory: 0 B / 1.025 GiB
There are 3 nodes – one Swarm master and 2 Swarm nodes. A total of 4 containers are running in this cluster – one Swarm agent on the master and on each node, plus an additional swarm-agent-master running on the master. This can be verified by connecting to the master and listing all the containers:

~> eval "$(docker-machine env swarm-master)"
~> docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                              NAMES
458c58f93a2b        swarm:latest        "/swarm join --addr    19 minutes ago      Up 19 minutes       2375/tcp                           swarm-agent
0c80a04859ba        swarm:latest        "/swarm manage --tls   19 minutes ago      Up 19 minutes       2375/tcp, 0.0.0.0:3376->3376/tcp   swarm-agent-master

- Configure the Docker client to connect to the Swarm cluster and check the list of running containers:
~> eval "$(docker-machine env --swarm swarm-master)"
~> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
No application containers are running in the cluster, as expected.
- List the nodes in the cluster as:
~> docker run swarm list token://117c8d19ba140d7ba3259aae9012e22f
192.168.99.108:2376
192.168.99.109:2376
192.168.99.110:2376
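Because Swarm exposes the standard Docker API, the cluster can also be reached without docker-machine env by pointing any Docker client at the Swarm endpoint on the master (TCP port 3376, as seen in the docker ps output above). A sketch, assuming the master IP from the listing and Docker Machine's default certificate location:

# Show the environment that `env --swarm` would export; DOCKER_HOST points
# at the Swarm endpoint (tcp://192.168.99.108:3376) instead of a single daemon
~> docker-machine env --swarm swarm-master

# The same endpoint can be used with any Docker client directly
# (certificate paths below assume Docker Machine's default storage location)
~> docker --tlsverify \
     --tlscacert=$HOME/.docker/machine/machines/swarm-master/ca.pem \
     --tlscert=$HOME/.docker/machine/machines/swarm-master/cert.pem \
     --tlskey=$HOME/.docker/machine/machines/swarm-master/key.pem \
     -H tcp://192.168.99.108:3376 info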
A subsequent blog will show how to run multiple containers across hosts on this cluster, and also look into different scheduling strategies.
Scaling Docker with Swarm has good details.
Swarm is not fully integrated with Docker Compose yet. But what would be really cool is being able to specify all the Docker Machine descriptions in docker-compose.yml, in addition to the containers. Then docker-compose up -d would set up the cluster and run the containers in that cluster.