Full CI/CD pipeline to deploy a multi-container application on Azure Container Service with Docker Swarm, using Visual Studio Team Services


Azure Container Service (ACS) allows you to deploy and manage containers using the Docker Swarm, Mesosphere DC/OS or Kubernetes orchestrators. Recently, the ACS team open-sourced ACS Engine. It is now very easy to deploy any of these three orchestrators on Azure, using the portal, an Azure Resource Manager template or the Azure CLI.

They have also released a preview of the Azure Container Registry, an implementation of the open source Docker Registry that runs as a service on Azure and is fully compatible with Docker Swarm, Kubernetes and DC/OS. It is a private registry that lets you store Docker images for enterprise applications instead of having to use the public Docker Hub, for example.

In this blog post I will detail how I used ACS with Docker Swarm, an Azure Container Registry and Visual Studio Team Services to continuously deliver a multi-container application written in .NET Core. Tooling has also recently been released to create a similar CI/CD pipeline with VSTS when using Mesosphere DC/OS.

The application source code is hosted on GitHub and is pretty simple:

image

As you can see, it’s a “kind of e-shop” web application composed of three APIs (Products, Recommendations and Ratings) and a front end that makes AJAX calls to these APIs. The APIs do nothing more than return their version and the name of the host they are running on; in this case, that is the id of the container they are running in.

My goal is to ensure that as soon as I commit a change on GitHub, the application is redeployed on Azure Container Service. Here is the workflow that I am going to describe:

CICD Docker

  1. I develop a new feature using Visual Studio Code (or any IDE) and commit the changes to GitHub
  2. GitHub triggers a build in Visual Studio Team Services
  3. Visual Studio Team Services gets the latest version of the sources and builds all the images that compose my application
  4. Visual Studio Team Services pushes each image to the Azure Container Registry
  5. Visual Studio Team Services triggers a new release
  6. The release runs some commands over SSH on the ACS cluster master node
  7. Docker Swarm on ACS pulls the latest version of the images
  8. The new version of the application is deployed using docker-compose

Now that you have the big picture, let’s dive deep into this CI/CD pipeline!

Deploy a Docker Swarm cluster with ACS and an Azure Container Registry

I have already blogged several times about how you can use the Azure portal to deploy a Docker Swarm cluster with Azure Container Service, and about how you can do it using the Azure CLI.

While the cluster deployment is pending, you can create a new Azure Container Registry. To do that, go to the Azure Portal and search for Azure Container Registry:

image

Click on the “Container Registry (preview)” entry and then click on the Create button in the right pane that opens. The only things that you need to configure are:

  • The name of your registry, which makes it available at: yourname.azurecr.io
  • The subscription, resource group and location where you want to deploy this new registry (you can deploy it in the same resource group as the Azure Container Service cluster, but it is not mandatory)
  • Whether you want to create an admin user with a password that has rights to push/pull images to the registry. Enable this option (you can also do it later). Alternatively, you can use an Active Directory service principal to authenticate to the registry.
  • The storage account where you want to store the Docker images. If you keep the default option, a new one is created in the same resource group

Then, just click the Create button. Once done, you can access your brand-new Azure Container Registry and get the URL and access key (the admin username and password) that allow you to connect to the registry:

image

You can test the registry using the docker login command:

image
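For example, from any machine with the Docker CLI installed, the login looks like the following sketch. The registry name and username are placeholders for the values shown on the Access keys blade; the snippet only prints the command, so substitute your real credentials and run the commented `docker login` line yourself:

```shell
# Placeholders: substitute the registry name and the admin credentials
# shown on the Access keys blade of your Azure Container Registry.
REGISTRY=yourname.azurecr.io
REGISTRY_USER=yourname

# The actual command (commented out here) would be:
# docker login "$REGISTRY" -u "$REGISTRY_USER" -p "<your-password>"
echo "docker login $REGISTRY -u $REGISTRY_USER"
```

If the credentials are valid, `docker login` answers with “Login Succeeded”.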

You now have an Azure Container Service and an Azure Container Registry. The next step is to define the CI/CD pipeline in Visual Studio Team Services.

Configure Visual Studio Team Services

The first thing you need to do to build and deploy “dockerized” applications on Linux using Visual Studio Team Services is to get the Docker integration task from the Marketplace and configure a Linux build agent. All the steps are detailed on this page.

Once done, you can declare the different external services that will be needed during the build and release processes: the Azure Container Registry, the GitHub account and an SSH connection to the Docker Swarm cluster.

This can be done in the Services entry of the settings menu, in your team project:

image

Add three different endpoints, one for each type of external service:

image

Create the build definition

Now it’s possible to create a new build definition, linked to the GitHub repository, that uses the Docker build task to build each image of the application and push it to the Azure Container Registry:

image

image

I have already detailed all these steps in a previous article. But one point needs more attention: when you are working with a private registry and not with the Docker Hub, you need to prefix the name of your image with the URL of your registry, here yourname.azurecr.io.
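Concretely, an image must carry the registry prefix in its fully qualified name before it can be pushed. A sketch (the registry and repository names are placeholder examples, and in VSTS the tag would come from the `$(Build.BuildNumber)` variable):

```shell
# Placeholders: registry prefix and repository name are examples.
REGISTRY=yourname.azurecr.io
BUILD_NUMBER=42                 # in VSTS, this value comes from $(Build.BuildNumber)

# Fully qualified image name: <registry>/<repository>:<tag>
IMAGE="$REGISTRY/myshop/front:$BUILD_NUMBER"
echo "$IMAGE"                   # yourname.azurecr.io/myshop/front:42

# The Docker build task then runs the equivalent of:
# docker build -t "$IMAGE" .
# docker push "$IMAGE"
```

Without the prefix, `docker push` would try to publish the image to the public Docker Hub instead of your private registry.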

On the capture, Azure Container Registry is the external service endpoint that I configured earlier to point to the registry created above.

Last, but not least, I have added two more tasks to the build workflow that were not in my previous post.

A command line task executes a simple bash command that replaces a token in the docker-compose.yml file, updating it with the latest version of the images that have just been built and pushed to the registry (in practice, the number of the running VSTS build, taken from the $(Build.BuildNumber) variable):

image

image
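The bash command behind this task boils down to a sed substitution. A minimal sketch, assuming the compose file uses a literal `BuildNumber` token as the image tag (the token name and file layout are my own illustration):

```shell
BUILD_BUILDNUMBER=42   # injected by VSTS from the $(Build.BuildNumber) variable

# A compose file that uses "BuildNumber" as a token for the image tag:
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  front:
    image: yourname.azurecr.io/myshop/front:BuildNumber
EOF

# Replace the token with the actual build number, in place:
sed -i "s/BuildNumber/$BUILD_BUILDNUMBER/g" docker-compose.yml

cat docker-compose.yml
```

After the substitution, the compose file references `myshop/front:42`, i.e. exactly the images that this build just pushed.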

And a task that copies the updated compose file as an output artefact of the build to make it available later in the release process:

image

Now, when a build completes, all the images are pushed to the Azure Container Registry and an updated version of the docker-compose.yml file is available as an output:

image

That being done, it’s time to define a new release to deploy the application on the Docker Swarm cluster!

Create the release definition

Visual Studio Team Services lets you manage releases across environments. It is possible to enable continuous deployment to make sure that your application is deployed to your different environments (dev, QA, pre-production, production…) in a smooth way. In this case, I have a single environment: the Docker Swarm cluster deployed with Azure Container Service:

image

Before detailing the tasks that run within the release, there are two important things to explain. First, I linked the release to the build detailed above to make the artefact (the docker-compose.yml file) available in the release steps:

image

Then, I configured continuous deployment in the Triggers tab to make sure a new release is launched as soon as a build completes successfully:

image

The release itself is really simple in this case. It is composed of two tasks. The first uses SCP over SSH to copy the docker-compose.yml file to a master node of the Docker Swarm cluster:

image

The second task executes a bash command on the master node, also using SSH:

image

The command that is executed is actually quite simple:

docker login -u $(docker.username) -p $(docker.password) $(docker.registry) && export DOCKER_HOST=:2375 && cd deploy && docker-compose pull && docker-compose stop && docker-compose rm -f && docker-compose up -d

It uses the Docker CLI on the Swarm master node to:

  1. Log in to the Azure Container Registry
  2. Set the DOCKER_HOST variable to connect to the port that Docker Swarm is listening on
  3. Go into the deploy directory that contains the compose file copied by the SCP task
  4. Run docker-compose pull to pull the new Docker images from the Azure Container Registry to the Swarm nodes
  5. Run docker-compose stop and docker-compose rm -f to stop and remove the previously running containers
  6. Run docker-compose up -d to start the application with the new images that have been pulled
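For readability, the same sequence can be kept as a small script on the master node instead of a one-liner. A sketch; the `DOCKER_*` environment variable names are my own, not VSTS built-ins, and must be set (or substituted) before running:

```shell
# Generate deploy.sh; the single-quoted heredoc keeps the $VARIABLES literal
# so they are resolved when the script runs on the master node.
cat > deploy.sh <<'EOF'
#!/bin/bash
set -e                                  # abort on the first failing step
docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD" "$DOCKER_REGISTRY"
export DOCKER_HOST=:2375                # talk to Docker Swarm, not the local daemon
cd deploy
docker-compose pull                     # fetch the freshly pushed images
docker-compose stop                     # stop the previous containers...
docker-compose rm -f                    # ...and remove them
docker-compose up -d                    # start the new version in the background
EOF
chmod +x deploy.sh
```

Note that every step is chained with `set -e` semantics, so a failed pull will not tear down the running application.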

As you can see on the capture above, the Fail on STDERR checkbox is unchecked. This is an important detail that you do not want to miss! docker-compose prints some diagnostic information, such as notices that containers are stopping or being deleted, on the standard error output. If you leave the checkbox checked, Visual Studio Team Services will consider that errors occurred during the release, even when everything goes well.

Wrapping up

In this article I explained how you can continuously deliver a multi-container application using Azure Container Service and Azure Container Registry with Visual Studio Team Services. Now, as soon as a change is committed to GitHub, a build is triggered in Visual Studio Team Services; it runs on a Linux agent that builds and pushes the new images to the Azure Container Registry, updates the compose file with the latest version of the images and copies this file as an output artefact. Then Visual Studio Team Services triggers a new release that uses SCP to copy the new compose file to the Docker Swarm master node and then uses a bash script to deploy the new application with the docker-compose CLI.

One improvement I am considering is to split each service that composes the application into its own GitHub repository, so I can have one build per project (and per Docker image) and one release per image. That way, I will be able to update a single service without redeploying the whole application.

If you have any question about this blog post, do not hesitate to contact me directly on Twitter.

Hope this helps,

Julien

Comments (14)

  1. Great write up. Thanks!

  2. Thomas says:

    Great article – everything works now til the last deploy command starts. I got an error with the docker login shell command. Without the –attachable option for the overlay network I get an error
    “2017-04-23T20:39:52.2151137Z ##[error]Creating network “********_myshop” with driver “overlay”
    When I add overlay –attachable I got an error
    2017-04-23T20:48:34.1005759Z ##[error]legacy plugin: plugin not found

    Any ideas?

    Regards

  3. Hi Julien – great article – many thanks.

    I could set up evrything except the last cmd-shell script (docker-login) in the deploy Pipeline.
    When I use your template I get an VSTS log “error Creating network “********_myshop with driver “overlay”.

    When I add –attachable to the Driver Definition in docker-compose.yml, as mentioned in some other articles, I get an error ##[error]legacy plugin: plugin not found, when the script Comes to docker-compose up -d.

    Maybe a Problem of changed docker functionalities – do you have an idea whats going wrong?

    Best regards

    1. Hi Thomas, thank you for your feedback. This is strange. When did you create the Swarm cluster and how? (Azure CLI, ACS Engine, Azure Portal?)

      1. Hi Julien, created all via Azure Portal some days ago ACS with docker Swarm and Container Registry

        Thomas

  4. Nilesh Gadhiya says:

    2017-04-24T19:12:15.6634033Z docker login XXXXX.azurecr.io -u USERNAME -p PASSWORD && export DOCKER_HOST=:2375 && docker-compose pull && docker-compose stop & docker-compose rm -f && docker-compose up -d
    2017-04-24T19:12:16.4602786Z Login Succeeded
    2017-04-24T19:12:16.4602786Z
    2017-04-24T19:12:16.4759035Z No stopped containers
    2017-04-24T19:12:16.4759035Z
    2017-04-24T19:12:16.9134122Z
    2017-04-24T19:12:16.9134122Z ##[error]Pulling shop (RegistryUrl/myshop/front:115)…
    2017-04-24T19:12:16.9134122Z
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9290373Z ##[error]Creating network “deploy_myshop” with driver “overlay”
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9290373Z ##[error]invalid reference format: repository name must be lowercase
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9290373Z ##[error]datastore for scope “global” is not initialized
    2017-04-24T19:12:16.9290373Z
    2017-04-24T19:12:16.9602794Z ##[error]Command cd deploy
    2017-04-24T19:12:17.0072429Z ##[section]Finishing: Run shell commands on remote machine

    1. Hi, it seems that you have not replaced the RegistryUrl variable in the docker-compose file. You should put your registry prefix there, or automate the replacement of this variable as explained in the article.

      1. Nilesh Gadhiya says:

        2017-04-25T19:04:02.5754439Z No stopped containers
        2017-04-25T19:04:02.5754439Z
        2017-04-25T19:04:02.8254100Z
        2017-04-25T19:04:02.8254100Z ##[error]Pulling shop (dockrepo.azurecr.io/myshop/front:117)…
        2017-04-25T19:04:02.8254100Z
        2017-04-25T19:04:02.9973118Z
        2017-04-25T19:04:02.9973118Z ##[error]Creating network “deploy_myshop” with driver “overlay”
        2017-04-25T19:04:02.9973118Z
        2017-04-25T19:04:03.0129724Z
        2017-04-25T19:04:03.0129724Z ##[error]datastore for scope “global” is not initialized
        2017-04-25T19:04:03.0129724Z
        2017-04-25T19:04:04.7317075Z ##[error]Command cd deploy

        1. Hi Nilesh, hi Julien

          the scope global error is, what I get without doing first a docker swarm init on the Manager node, after that I got the error in my mail – it seem to be a Problem with the classic swarm mode and the new swarm mode with the changing docker Releases

          Regards Thomas

          1. Hello Thomas, ACS does not support Swarm Mode right now. If you want use Swarm Mode you have to use ACS Engine. See https://github.com/Azure/acs-demos/blob/master/training/swarm/deploy-acs-engine.md

          2. ok, i see, but without the swarm init I also get the ##[error]datastore for scope “global” is not initialized like Nilesh, but my RegistryURLs are correct
