How to integrate a new Azure Container Service cluster into an existing virtual network using ACS Engine

When I discuss Azure Container Service with customers, one of the questions they ask most frequently is “is it possible to deploy a cluster into an existing virtual network?”. Since the ACS Engine was released and open sourced on GitHub a few weeks ago, I am really happy to be able to answer “yes, and it is really easy!”.

What is ACS Engine?

ACS Engine, short for Azure Container Service Engine, is a CLI tool that generates Azure Resource Manager templates to deploy Docker-enabled clusters on Microsoft Azure. It works with all the orchestrators supported by ACS: Docker Swarm, Mesosphere DC/OS and Kubernetes. Using ACS Engine to deploy a cluster instead of the Microsoft Azure portal unlocks some really cool options, such as declaring multiple agent pools with different VM sizes, public or private, and of course integrating the cluster into a custom virtual network, which is the topic I will focus on in this post.

How to install ACS Engine

ACS Engine is written in Go and is available for Windows, Linux and macOS. To use it on your machine, you just have to get the source code and build it. Everything is documented on this page:

Once you have a built copy of ACS Engine, you are ready to go!

Create the ACS Engine template

ACS Engine takes a JSON template as input and generates the ARM template and ARM parameters files as output.

Depending on the orchestrator you want to use, the number of agent pools, the machine sizes and so on, this input template may differ from the one I detail here. I have chosen to illustrate the custom virtual network integration feature with a Docker Swarm cluster, but everything works the same way for Kubernetes and DC/OS. You will find samples for those orchestrators in the ACS Engine GitHub repository.

I assume that you have already deployed a virtual network that contains two subnets:

- one for the master nodes

- one for the agent nodes
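If you have not created that network yet, here is a minimal sketch using the Azure CLI v2. The network name SwarmVNET and the address ranges are assumptions for illustration; the subnet names match the ones used in the template below:

```shell
# Assumed names and address ranges -- adjust them to your environment.
az network vnet create \
  --resource-group REPLACE_WITH_RESOURCE_GROUPE_NAME \
  --name SwarmVNET \
  --address-prefix 10.0.0.0/16 \
  --subnet-name SwarmMaster \
  --subnet-prefix 10.0.0.0/24

# Add a second subnet for the agent nodes.
az network vnet subnet create \
  --resource-group REPLACE_WITH_RESOURCE_GROUPE_NAME \
  --vnet-name SwarmVNET \
  --name SwarmNode \
  --address-prefix 10.0.1.0/24
```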

Here is the JSON template that I will use with the acsengine.exe CLI tool:

{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Swarm"
    },
    "masterProfile": {
      "count": 3,
      "dnsPrefix": "swarmcustommgmt",
      "vmSize": "Standard_D2_v2",
      "vnetSubnetId": "/subscriptions/REPLACE_WITH_SUB_ID/resourceGroups/REPLACE_WITH_RESOURCE_GROUPE_NAME/providers/Microsoft.Network/virtualNetworks/REPLACE_WITH_VNET_NAME/subnets/SwarmMaster",
      "firstConsecutiveStaticIP": ""
    },
    "agentPoolProfiles": [
      {
        "name": "agentprivate",
        "count": 2,
        "vmSize": "Standard_D3_v2",
        "vnetSubnetId": "/subscriptions/REPLACE_WITH_SUB_ID/resourceGroups/REPLACE_WITH_RESOURCE_GROUPE_NAME/providers/Microsoft.Network/virtualNetworks/REPLACE_WITH_VNET_NAME/subnets/SwarmNode"
      },
      {
        "name": "agentpublic",
        "count": 2,
        "vmSize": "Standard_D2_v2",
        "dnsPrefix": "swarmpublicagent",
        "vnetSubnetId": "/subscriptions/REPLACE_WITH_SUB_ID/resourceGroups/REPLACE_WITH_RESOURCE_GROUPE_NAME/providers/Microsoft.Network/virtualNetworks/REPLACE_WITH_VNET_NAME/subnets/SwarmNode",
        "ports": [ 80, 443 ]
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          { "keyData": "REPLACE_WITH_YOUR_SSH_PUBLIC_KEY" }
        ]
      }
    }
  }
}

As you can see, this file is really close to the Azure Resource Manager JSON format. It contains a collection of properties that define the cluster you are going to create. You set the orchestrator you want to use with the orchestratorType property. Then you define the profile and number of virtual machines for the Docker Swarm masters (here, 3 Standard_D2_v2 virtual machines) and the different agent pools you want to create.

In this case, I create two agent pools, each composed of two virtual machines: one private (not connected to any Azure load balancer or public IP) and one public, with two load-balancing rules on HTTP ports 80 and 443.

Each agent pool definition and the master profile definition contains a vnetSubnetId property that holds the identifier of the subnet into which you want to deploy those nodes.
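If you do not want to build that resource ID by hand, you can retrieve it with the Azure CLI v2; SwarmVNET is an assumed virtual network name here:

```shell
# Print the full resource ID of the master subnet,
# ready to paste into the "vnetSubnetId" property.
az network vnet subnet show \
  --resource-group REPLACE_WITH_RESOURCE_GROUPE_NAME \
  --vnet-name SwarmVNET \
  --name SwarmMaster \
  --query id --output tsv
```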

Finally, you need to specify the SSH public key that will be used to secure the connection to your cluster.
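If you do not have a key pair yet, you can generate one with ssh-keygen; the file name acs_cluster_key is just an example, and the contents of the .pub file are what goes into the keyData field:

```shell
# Generate a 2048-bit RSA key pair without a passphrase (for brevity only;
# consider protecting the key with a passphrase in practice).
ssh-keygen -t rsa -b 2048 -f acs_cluster_key -N "" -q

# The public key, to paste into "keyData" in the template:
cat acs_cluster_key.pub
```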

Once you have adjusted this JSON template to fit your needs, there is only one command to execute:
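The exact invocation depends on your build; assuming the template above is saved as clusterdefinition.json and you are using the Windows build of the tool, it would look something like this:

```shell
# clusterdefinition.json is an assumed file name for the template above.
# ACS Engine reads the cluster definition and writes the ARM artifacts
# (typically into an _output directory).
acsengine.exe clusterdefinition.json
```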


This command will generate two JSON files:

- azuredeploy.json

- azuredeploy.parameters.json


These two files can now be used to deploy the cluster, using PowerShell:

New-AzureRmResourceGroupDeployment -Name CustomVNETSwarmDeployment -ResourceGroupName REPLACE_WITH_RESOURCE_GROUPE_NAME -TemplateFile .\azuredeploy.json -TemplateParameterFile .\azuredeploy.parameters.json

Or using the Azure CLI:

azure group deployment create -f "azuredeploy.json" -e "azuredeploy.parameters.json" -g REPLACE_WITH_RESOURCE_GROUPE_NAME -n CustomVNETSwarmDeployment

And voilà! Your cluster will be deployed into the existing virtual network.

Enjoy!


Comments (13)

  1. Savithra says:

I created my Azure Container Service cluster like this and everything got deployed successfully, but there is no Azure Container Service in my resource group. Would that be a problem for future management of the cluster?

1. Hi, this is the normal behavior. When you use ACS Engine, you deploy an ARM template and do not go through the Container Service resource provider, so there is no ACS entry in the resource group.

      1. Savithra says:

        Oh, didn’t know that. Thank you for the wonderful article btw. 🙂

  2. Kieran says:


Thanks for the post. The only issue I can see is that apiVersion: “vlabs” isn’t available in all regions. Is there a listing of the regions where it is available?

  3. Kieran says:

    Update: Never mind, sleep deprivation is kicking in. Found my issue!

4. That was a useful article. It led me to also document an approach to doing this here: and a complete working project, fully (mostly) automated with Terraform, here:

1. Nice post! Thanks for sharing 🙂

  5. Alexander Alten-Lorenz says:

I used it for Swarm as well as Kubernetes, but in both cases the slaves weren’t able to communicate with the master(s). For example with Swarm, when I did a ‘docker node ls’ I got: “Error response from daemon: This node is not a swarm manager. Use “docker swarm init” or “docker swarm join” to connect this node to swarm and try again.”
    From the Docker Azure documentation I read that a service principal needs to be created (which I did), but that didn’t fix the issue.
    Kubernetes simply times out: “kubectl get nodes
    Unable to connect to the server: dial tcp i/o timeout”

    Any ideas?

1. Hello, for the Swarm part, how did you deploy the cluster? Swarm mode is only available through ACS Engine, so if you used the portal or the Azure CLI, you are running legacy Swarm, which does not work with the docker node / docker service commands.
      For the Kubernetes part, try using the Azure CLI to get the configuration (az acs kubernetes get-credentials) to make sure you have the right config / environment variable set before running kubectl get nodes.

      1. Alexander Alten-Lorenz says:

I used exactly your example for Swarm (“orchestratorType”: “Swarm”).
        I guess that’s something with the images in Azure:
        docker swarm join \
        > --token SWMTKN-1-5v1b44erob0w4nzpe96lbmxr1wu5jsnbc37zzsdct9znvvriwe-ak81rxsz54x3p30fgh3lgah3x \
        Error response from daemon: --cluster-store and --cluster-advertise daemon configurations are incompatible with swarm mode
        root@swarm-agentpublic-41239491000000:~# docker -v
        Docker version 17.04.0-ce, build 4845c56

  6. Pablo Lopez says:

    Hi Julien.

Not very fluent with azure-cli, but it seems that I got something wrong:
    az: error: unrecognized arguments: -f azuredeploy.json -e azuredeploy.parameters.json

    1. Hi Pablo,

Try: az group deployment create --help

      With v2 of the Azure CLI you need to replace -f with --template-file and -e with --parameters (and don’t miss the @ before the path of the parameters file)

