Intro to Docker Swarm Mode and Azure — Part 1

Published 11/20/2018 08:50 AM | Updated 12/03/2018 07:44 AM
This article originally appeared on June 20, 2017.

I was asked to do some research on how to deploy a Java application in Azure. Although you could fire up and configure your own virtual machine or create a Tomcat App Service, the option that intrigued me the most was Docker for Azure. Based on my experience, it makes the most sense to introduce Docker swarm mode first and then dive into Docker for Azure.
 
NOTE: If you’re completely new to Docker, read this article first.
 

What is swarm mode?

 
 
[Figure: Example diagram of a swarm cluster]

Swarm mode is Docker's native container management and orchestration tool. It lets admins and software developers create and manage multiple Docker containers running across one or more nodes. Because swarm mode is built into the Docker engine, you can drive it through the standard Docker API and CLI and use familiar tooling such as Docker Compose files. In theory, a swarm can handle up to 30,000 containers across clusters of up to 1,000 nodes without performance issues.
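As a quick illustration of that Compose integration, a Compose file can be deployed onto a running swarm as a "stack." The example below is only a sketch; the nginx image, the "web" service, and the "demo" stack name are purely illustrative.

A minimal docker-compose.yml:

      version: "3"
      services:
        web:
          image: nginx:alpine
          deploy:
            replicas: 2

Deploying it to the swarm from a manager node, then listing the services it created:

      > docker stack deploy -c docker-compose.yml demo
      > docker stack services demo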
 

Basic terminology


 
 
[Figure: How services work]

Node — a machine running the Docker engine. In most cases, this is a VM, but it can be a standalone server.
 
Service — the definition of the tasks to execute on the nodes. When creating a service, you specify what container image to use and any commands you wish to run on it.
 
Task — the Docker container and the commands to run inside the container. With swarm mode, you can have one or more tasks (e.g., multiple web server containers) to distribute the load of work being done.
 
Swarm — a cluster of Docker nodes. When initializing a swarm, you'll create one or more "Manager" nodes and one or more "Worker" nodes. Managers are responsible for distributing tasks within the cluster based on which nodes are available, while workers receive and run tasks.
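Once a swarm is up and running (we'll set one up in the next section), each of these concepts maps directly to a CLI command you can run from a manager node; the "HelloWorld" service name below is just an example:

      > docker node ls                 (lists every node in the swarm and its role)
      > docker service ls              (lists services and how many replicas, i.e., tasks, each is running)
      > docker service ps HelloWorld   (lists a service's tasks and the node each task runs on)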
 

Setting up swarm mode

 
  1. Install Docker for Windows. This install package has everything you need to run Docker and swarm mode commands via the command-line interface (CLI).
  2. Create a virtual switch. This is needed for your VMs to be created properly and for your nodes to be able to talk to one another.
    1. Open the Hyper-V Manager. You should be able to find this by opening your Start menu and typing “Hyper-V.”
    2. In the Actions pane on the right, click Virtual Switch Manager… In the dialog that opens, select New virtual network switch and choose External.
    3. Enter a name for the switch. For Connection type, choose the network adapter you use, and check Allow management operating system to share this network adapter.
  3. Create your VMs. We’ll be creating three VMs (one manager and two worker nodes).
    1. Open a Windows command prompt and run it as administrator. (This prevents Hyper-V precheck errors during VM creation.) To create a VM using Hyper-V, run the docker-machine command in the following format: > docker-machine create -d hyperv --hyperv-virtual-switch "<virtual-switch-name>" <vm-name>
    2. For this example, I ran the following three commands:
      > docker-machine create -d hyperv --hyperv-virtual-switch "PrimaryVirtualSwitch" manager1
      > docker-machine create -d hyperv --hyperv-virtual-switch "PrimaryVirtualSwitch" worker1
      > docker-machine create -d hyperv --hyperv-virtual-switch "PrimaryVirtualSwitch" worker2
  4. Initialize your swarm.
    1. Figure out the IP address allocated to each VM by running the following command:
      > docker-machine ip <name-of-vm>
    2. SSH into the manager node and initialize swarm mode on the manager by running the following:
      > docker-machine ssh <name-of-manager-vm>
      > docker swarm init --advertise-addr <manager-ip>
    3. If successful, you should see output confirming that the swarm was initialized, along with a docker swarm join command for adding worker nodes (see the sample output after this list).
    4. Now that the manager node is up and running, you can join your worker nodes to the manager. SSH into each worker node, just like you did into the manager, and then join them to the swarm by running the command generated from the docker swarm init command.
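For reference, a successful docker swarm init produces output roughly like the following; the node ID, token, and IP address shown here are placeholders, and yours will differ:

      Swarm initialized: current node (dxn1zf6l61qsb1josjja83ngz) is now a manager.

      To add a worker to this swarm, run the following command:

          docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377

      To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Copy the docker swarm join command from your own output and run it on both worker1 and worker2.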
 

Deploying services to your swarm


At this point, you should have three nodes created and talking to one another. Let’s start deploying some things and see how swarm mode works.
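You can confirm this from the manager node by running docker node ls. The output should look roughly like the following; the IDs are placeholders, and the asterisk marks the node you're currently connected to:

      > docker node ls
      ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
      1a2b3c4d5e6f7g8h9i0j1k2l3 *  manager1   Ready    Active         Leader
      4m5n6o7p8q9r0s1t2u3v4w5x6    worker1    Ready    Active
      7y8z9a0b1c2d3e4f5g6h7i8j9    worker2    Ready    Active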
 
  1. Deploying “HelloWorld”
    1. SSH back into the manager node. This needs to be done because swarm mode-specific commands can only be run on a manager node: > docker-machine ssh manager1
    2. Once in, you can deploy your first service using the following command: > docker service create --replicas 1 --name HelloWorld alpine ping docker.com  Let's examine this command to understand what's going on.
      1. docker service create — the main command to create a new service
      2. --replicas <number> — tells the Docker swarm how many tasks to create. In our example above, we only want one task. The default is one.
      3. --name <string> — gives the service a name. In our example above, the service is named "HelloWorld." If this is not specified, a random name is assigned to the service.
      4. alpine — the name of the Docker image we want to use. Alpine is a very lightweight flavor of Linux.
      5. ping docker.com — the command we want to run inside the alpine container after it starts
    3. You can check to see if your service(s) deployed successfully: > docker service ls
  2. Tinkering with your service
    1. If you want to inspect your running service to get more details about it: > docker service inspect --pretty HelloWorld  (Replace "HelloWorld" with whatever name you gave your service.)
    2. At this point, we only have one task running in our service. Let's scale the HelloWorld service up to four tasks: > docker service scale HelloWorld=4  If you run "docker service ls," you should now see 4/4 replicas instead of 1/1. So what exactly just happened?
      1. The HelloWorld service is defined as an alpine container running “ping docker.com.”
      2. The scale command tells the manager node to create three more tasks. In other words, it will create three more alpine containers running “ping docker.com.”
      3. With Docker swarm mode, the manager decides which nodes are the most available and tries to evenly distribute the additional tasks across manager and worker nodes.
      4. Running the following command will show which nodes are running which tasks: > docker service ps HelloWorld
    3. Let's say you only want three tasks to run instead of four. Just run the same command but with a 3 instead. Swarm mode will shut down and remove one of the running containers so that only three tasks remain.
  3. Fault tolerance. You should have tasks running across some or all of your nodes. What happens if one of the tasks stops working?
    1. At the current command prompt, stay SSH’d into the manager node.
    2. Open another command prompt (remember to do so with admin rights) and SSH into one of the worker nodes that is currently running a task.
    3. Once in, run "docker ps" to see the IDs and names of the containers running on that node. Force the removal of one of those containers by running: > docker rm -f <container-id>
    4. Go back to the other command prompt window (the one still connected to the manager node) and run "docker service ps HelloWorld." You should see output similar to the sample shown after this list.
    5. So, what just happened?  
      1. You force stopped/removed the container on the worker node.
      2. Because you told the swarm manager to have a certain number of tasks always up and running, it will redeploy another container to replace the one that was just killed.
      3. It will again try to determine the best node on which to host the replacement container. It's fairly easy to spot the redeployed task because it will have been up and running for a shorter amount of time than the originally deployed/scaled tasks.
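For reference, after forcing a container's removal, docker service ps HelloWorld will show something roughly like the following; the task IDs, timings, and node assignments here are placeholders. Note the old task marked as Failed and the replacement task that has only been running for a short time:

      > docker service ps HelloWorld
      ID             NAME              IMAGE           NODE       DESIRED STATE   CURRENT STATE            ERROR
      2x4kd0v1s6pq   HelloWorld.1      alpine:latest   manager1   Running         Running 12 minutes ago
      8b7nq3w5t2rj   HelloWorld.2      alpine:latest   worker1    Running         Running 12 minutes ago
      9c1mp6x8u4sk   HelloWorld.3      alpine:latest   worker2    Running         Running 40 seconds ago
      5d3lo2y7v9tm    \_ HelloWorld.3  alpine:latest   worker2    Shutdown        Failed 45 seconds ago    "task: non-zero exit (137)"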
 
This concludes the basics on how to set up Docker swarm mode and how it works. In part 2, we’ll go over how we can do the same thing in Azure.
