Kirk Evans Blog

.NET From a Markup Perspective

Autoscaling Azure Cloud Services

This post shows how to autoscale Azure Cloud Services using a Service Bus queue.

Background

My role as an Architect for the Azure COE requires me to have many different types of discussions with customers who are designing highly scalable systems.  A common discussion point is “elastic scale”, the ability to automatically scale up or down to meet resource demands.  One of the keys to building scalable cloud solutions is understanding queue-centric patterns.  Designing an architecture with multiple points that can scale independently ultimately protects against failures while enabling high availability.

In my previous post, I discussed Autoscaling Azure Virtual Machines where I showed how to leverage a Service Bus queue to determine if a solution requires more resources to keep up with demand.  In that post, we had to pre-provision resources, manually deploy our code solution to each machine, and then use PowerShell to enable a startup task.  This post shows how to create an Azure cloud service that contains a worker role, and we will see how to automatically scale the worker role based on the number of messages in a queue.

The previous post showed that we had to pre-provision virtual machines, and that the autoscale service simply turned them on and off.  This post will demonstrate that autoscaling a cloud service means creating and destroying the backing virtual machines.  Because data is not persisted on the local machine, we use Azure Storage to export diagnostics information, providing persistent storage that survives instance recycling.

image

Create the Cloud Service

In Visual Studio, create a new Cloud Service.  I named mine “Processor”.

image

The next screen enables you to choose from several template types.  I chose a Worker Role with Service Bus Queue as it will generate most of the code that I need.  I name the new role “ProcessorRole”.

image

Two projects are added to my solution.  The first project, ProcessorRole, is a class library that contains the implementation for my worker role and contains the Service Bus boilerplate code.  The second project, Processor, contains the information required to deploy my cloud service.

image

Some Code

The code for my cloud service is very straightforward, but definitely does not follow best practices.  When the worker role starts, we output Trace messages and then listen for incoming messages from the Service Bus queue.  When a message is received, we output a Trace message and then wait 3 seconds. 

Note: This code is different from the code generated by Visual Studio, which uses a ManualResetEvent to keep the Run method from returning.  Do not take the code below as a best practice, but rather as an admittedly lazy example used to demonstrate autoscaling based on the messages in a queue backing up. 

WorkerRole.cs
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using System;
using System.Diagnostics;
using System.Net;

namespace ProcessorRole
{
    public class WorkerRole : RoleEntryPoint
    {
        // The name of your queue
        const string _queueName = "myqueue";

        // QueueClient is thread-safe. It is recommended that you cache it
        // rather than recreating it on every request.
        QueueClient _client;

        public override void Run()
        {
            Trace.WriteLine("Starting processing of messages");

            while (true)
            {
                // Not a best practice to use Receive synchronously.
                // Done here as an easy way to pause the thread;
                // in production you'd use _client.OnMessage or
                // _client.ReceiveAsync.
                var message = _client.Receive();
                if (null != message)
                {
                    Trace.WriteLine("Received " + message.MessageId + " : " + message.GetBody<string>());
                    message.Complete();
                }

                // Also a terrible practice; use a ManualResetEvent
                // instead. This is shown only to control the time
                // between receive operations.
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(3));
            }
        }

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            // Initialize the connection to the Service Bus queue
            var connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
            _client = QueueClient.CreateFromConnectionString(connectionString, _queueName);

            return base.OnStart();
        }

        public override void OnStop()
        {
            // Close the connection to the Service Bus queue
            _client.Close();
            base.OnStop();
        }
    }
}

Again, I apologize for the code sample that does not follow best practices. 
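For contrast, here is a sketch of the pump-based receive pattern that the Visual Studio template generates.  OnMessage and OnMessageOptions are part of the same Service Bus client library used above; the _completedEvent field, and signaling it from OnStop, are assumptions that mirror the generated template rather than code from this post.

```csharp
// Sketch: the message-pump alternative to the synchronous Receive loop.
// _completedEvent mirrors the Visual Studio template (an assumption here).
ManualResetEvent _completedEvent = new ManualResetEvent(false);

public override void Run()
{
    var options = new OnMessageOptions
    {
        AutoComplete = false,   // we call Complete() ourselves after processing
        MaxConcurrentCalls = 1
    };

    // OnMessage starts an internal message pump; the callback fires per message.
    _client.OnMessage(message =>
    {
        Trace.WriteLine("Received " + message.MessageId + " : " + message.GetBody<string>());
        message.Complete();
    }, options);

    // Block this thread until OnStop signals shutdown, instead of a busy loop.
    _completedEvent.WaitOne();
}

public override void OnStop()
{
    _client.Close();
    _completedEvent.Set();
    base.OnStop();
}
```

This keeps the Run method from returning (which would recycle the role instance) without sleeping between receive operations.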

The Configuration

In contrast to the previous approach, where we pre-provisioned virtual machines and copied the code to each one, Cloud Services will provision new virtual machine instances and then deploy the code from the Azure fabric controller to the cloud service.  If the role is upgraded or maintenance is performed on the host, the underlying virtual machine is destroyed.  This is a fundamental difference between roles and persistent VMs.  This means we no longer RDP into each instance and make changes manually; we incorporate any desired changes into the deployment package itself.  The package we are creating uses desired state configuration to tell the Azure fabric controller how to provision the new role instance. 

As an example, you might add the Azure Role instance to a virtual network as I showed in the post Deploy Azure Roles Joined to a VNet Using Eclipse, where I edited the .cscfg file to indicate the virtual network and subnet to add the role to.  Using Visual Studio, you can adjust various settings such as the number of instances, the size of each virtual machine, and the diagnostics settings:

image
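As a sketch, the instance count and connection string for this sample live in the .cscfg service configuration file, roughly like the following (the connection-string value is an illustrative placeholder):

```xml
<ServiceConfiguration serviceName="Processor"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="ProcessorRole">
    <!-- The autoscale service treats this as the baseline instance count -->
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="Microsoft.ServiceBus.ConnectionString"
               value="[your Service Bus connection string]" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```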

For a more detailed look at handling multiple configurations, debugging locally, and managing connection strings, see Developing and Deploying Microsoft Azure Cloud Services Using Visual Studio.

Deploying the Cloud Service

Right-click on the deployment project and choose Publish.  You are prompted to create a cloud service and a storage account for the deployment.

image

The next screen provides more settings: whether the code is being deployed to Staging or Production, whether it is a Debug or Release build, and which configuration to use.  You can also enable Remote Desktop for each of the roles.  I enable that and provide the username and password used to log into each role.

image

Click Publish and you can watch the status in the Microsoft Azure Activity Log pane.  Notice that the output shows we are uploading a package:

image

Once deployed, we can see the services are running in Visual Studio:

image

We can also see the deployed services in the management portal:

image

Managing Autoscale

Just like we did in the article Autoscaling Azure Virtual Machines, we will use the autoscale service to scale our cloud service based on queue length.  Go to the management portal, open the cloud service that you just created, and click the Dashboard tab.  Then click the “Configure Autoscale” link:

image

On that screen you will see that we have a minimum of 2 instances because we specified 2 instances in the deployment package. 

image

Click on the Queue option, and we can now scale between 1 and 350 instances! 

image

OK, that’s a little much… let’s go with a minimum of 2 instances, maximum of 5 instances, and scale 1 instance at a time over 5 minutes based on messages in my Service Bus queue.

image

Click save, and within seconds our configuration is saved.

Testing it Out

I wrote a quick Console application that will send messages to the queue once per second.  The receiver only processes messages once every 3 seconds, so we should quickly have more messages in queue than 2 instances can handle, forcing an autoscale event to occur.

Sender
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;
using System;

namespace Sender
{
    class Program
    {
        static void Main(string[] args)
        {
            string connectionString =
                CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

            QueueClient client =
                QueueClient.CreateFromConnectionString(connectionString, "myqueue");

            int i = 0;
            while (true)
            {
                var message = new BrokeredMessage("Test " + i);
                client.Send(message);

                Console.WriteLine("Sent message: {0} {1}",
                    message.MessageId,
                    message.GetBody<string>());

                // Sleep for 1 second
                System.Threading.Thread.Sleep(TimeSpan.FromSeconds(1));
                i++;
            }
        }
    }
}

If we let the Sender program run for a while, it sends messages to the queue faster than the receivers can process them.  When I go to the portal and look at the Service Bus queue, I can see that the queue length is now 61 after running for a short duration.

image
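The queue length the portal displays can also be read from code, which is roughly the metric the autoscale service observes.  The sketch below uses NamespaceManager from the same Service Bus library as the rest of this post; the connection-string setting name matches the one used in the worker role.

```csharp
// Sketch: reading the queue length programmatically, similar to what the
// portal shows. Requires the same Service Bus connection string used above.
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;
using System;

class QueueLengthCheck
{
    static void Main()
    {
        string connectionString =
            CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");

        // NamespaceManager exposes queue metadata, including message counts.
        var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
        QueueDescription queue = namespaceManager.GetQueue("myqueue");

        Console.WriteLine("Messages in queue: " + queue.MessageCount);
    }
}
```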

Hit refresh, and we see it is continuing to increase.  Next, go to the Azure Storage Account used for deployment and look at the WADLogsTable:

image

Double-click and you will see that the roles are processing the messages, just not faster than the Sender program is sending them.

image

After a few minutes, the autoscale service sees that there are more messages in the queue than the threshold we configured; our current role instances cannot keep up with demand, so a new virtual machine instance is created.

image

This is very different from when we used virtual machines.  When using cloud services, the roles are created and destroyed as necessary.  I then stop the Sender program, and the number of queued messages quickly falls, as our current number of instances can handle the demand:

image

After a few more minutes, we can see that the autoscale service has destroyed the newly created virtual machine according to our autoscale rules.

image

This is a good thing, as the virtual machine was automatically created according to our autoscale rules as well.  This highlights the importance of not simply using Remote Desktop to connect to a cloud service role instance to configure something: those settings must be applied within the deployment package itself.
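For example, software or settings that must be present on every instance are typically applied through a startup task declared in the ServiceDefinition.csdef file, so the fabric controller re-applies them whenever an instance is recreated.  The setup.cmd name below is an illustrative placeholder:

```xml
<!-- ServiceDefinition.csdef fragment (illustrative) -->
<WorkerRole name="ProcessorRole" vmsize="Small">
  <Startup>
    <!-- Runs on every newly provisioned instance, so the configuration
         survives instances being created and destroyed by autoscale -->
    <Task commandLine="setup.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WorkerRole>
```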

Monitoring

If we go to the operation logs we can see the deployment operation (note: some values are redacted by me):

Operation Log
<SubscriptionOperation xmlns="http://schemas.microsoft.com/windowsazure"
                       xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
  <OperationId>3c8f4a65-64c4-7c6a-86db-672972e504b9</OperationId>
  <OperationObjectId>/REDACTED/services/hostedservices/kirkeautoscaledemo/deployments/REDACTED</OperationObjectId>
  <OperationName>ChangeDeploymentConfigurationBySlot</OperationName>
  <OperationParameters xmlns:d2p1="http://schemas.datacontract.org/2004/07/Microsoft.WindowsAzure.ServiceManagement">
    <OperationParameter>
      <d2p1:Name>subscriptionID</d2p1:Name>
      <d2p1:Value>REDACTED</d2p1:Value>
    </OperationParameter>
    <OperationParameter>
      <d2p1:Name>serviceName</d2p1:Name>
      <d2p1:Value>kirkeautoscaledemo</d2p1:Value>
    </OperationParameter>
    <OperationParameter>
      <d2p1:Name>deploymentSlot</d2p1:Name>
      <d2p1:Value>Production</d2p1:Value>
    </OperationParameter>
    <OperationParameter>
      <d2p1:Name>input</d2p1:Name>
      <d2p1:Value>&lt;?xml version="1.0" encoding="utf-16"?&gt;
        <ChangeConfiguration xmlns:i="http://www.w3.org/2001/XMLSchema-instance"
                             xmlns="http://schemas.microsoft.com/windowsazure">
          <Configuration>REDACTED
      </d2p1:Value>
    </OperationParameter>
  </OperationParameters>
  <OperationCaller>
    <UsedServiceManagementApi>true</UsedServiceManagementApi>
    <UserEmailAddress>Unknown</UserEmailAddress>
    <ClientIP>REDACTED</ClientIP>
  </OperationCaller>
  <OperationStatus>
    <ID>REDACTED</ID>
    <Status>InProgress</Status>
  </OperationStatus>
  <OperationStartedTime>2015-02-22T13:38:12Z</OperationStartedTime>
  <OperationKind>UpdateDeploymentOperation</OperationKind>
</SubscriptionOperation>


For More Information

Developing and Deploying Microsoft Azure Cloud Services Using Visual Studio

Autoscaling Azure Virtual Machines