Azure Service Bus Performance: HTTP Batch Send


One of the easiest ways to increase throughput when using Azure Service Bus is the client-side batching feature in the Service Bus protocol (as you can read in our recommendations for increasing throughput). This feature reduces the number of protocol transmissions by batching many messages into one. Note that the batch is still persisted as individual messages on the service.

This feature is enabled by default whenever you use the .NET client for Service Bus. This is a guide on how to do something similar when you are using the REST API.

Why would you want to do batching?

Batching reduces the number of transmissions by merging information from multiple messages into a single batch. This reduces the number of connections established as well as network bandwidth, by reducing the number of packet headers sent over the network. Since the batch is persisted as individual messages, you don't need to change your receive/processing logic.

A TCP header is 20 bytes and an IPv4 header is another 20 bytes, so each packet carries roughly 40 bytes of protocol overhead. When two packets are merged, one set of headers is eliminated. Depending on the message size, batching can eliminate a significant percentage of the bandwidth requirements.

There is also a latency cost to sending and receiving each message, as well as a CPU cost in the Service Bus gateway for each call. Reducing the gateway's CPU usage is in your best interest, because the gateway will then be able to process more messages.
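As a back-of-the-envelope illustration, the savings can be computed directly. The 40-byte header and 200-byte message sizes below are illustrative assumptions, not measured Service Bus figures:

```python
# Rough illustration of header-overhead savings from batching.
# HEADER_BYTES and MESSAGE_BYTES are illustrative assumptions.
HEADER_BYTES = 40      # TCP (20) + IPv4 (20) header per packet
MESSAGE_BYTES = 200    # example payload size

def bytes_on_wire(num_messages: int, batch_size: int) -> int:
    """Total bytes sent when messages are grouped into batches of batch_size."""
    packets = -(-num_messages // batch_size)  # ceiling division
    return num_messages * MESSAGE_BYTES + packets * HEADER_BYTES

unbatched = bytes_on_wire(100, 1)   # 100 packets -> 100 headers
batched = bytes_on_wire(100, 10)    # 10 packets  -> 10 headers
print(unbatched, batched)           # 24000 20400
print(f"{1 - batched / unbatched:.1%}")  # 15.0%
```

With these example numbers, batches of 10 save 15% of the bandwidth; the smaller the message relative to the headers, the larger the saving.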

How to batch?

In some cases batching can be done by capturing state information from all the entities in the local process and merging it into a single batch. In this scheme the aggregation does not affect the quality of the experience, because it does not artificially delay the transmission of the data: it is easy to generate one batch that contains the data for all entities.

In other cases we are actually reducing the frequency of update transmission. Here batching will artificially delay the transmission of update packets.

In general there are three transmission policies you can follow:

1) Timeout-based transmission policy

  • Create an empty message batch
  • While the timeout has not expired
  • ------- Add each generated message to the batch
  • Send the batch (if not empty)

+ The advantage is that since you send the batch at regular intervals, you guarantee that messages are never delayed for too long.

- The disadvantage is that if you don't generate more than one message in that interval, you get no space savings even though you still pay the delay penalty.
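The policy above can be sketched as follows (a minimal Python illustration; `generate` and `send` are hypothetical callbacks you would supply):

```python
import time

def timeout_batcher(generate, send, timeout_s=0.02):
    """Timeout-based policy: collect messages until the timeout expires,
    then send whatever accumulated (if anything).
    `generate` returns the next pending message, or None if there is none."""
    batch = []
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        msg = generate()
        if msg is not None:
            batch.append(msg)
    if batch:
        send(batch)
```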

2) Quorum-based transmission policy

  • Create an empty message batch
  • While the number of messages is less than the quorum
  • ------- Add each generated message to the batch (or wait until another is generated)
  • Send the batch

+ The advantage is that you always get message reduction, because you always send a fixed number of messages per batch.

- The disadvantage is that a single message might be delayed indefinitely.
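A corresponding sketch of the quorum policy (again a minimal Python illustration with hypothetical `generate`/`send` callbacks; note that `generate` blocks, which is exactly how a single message can end up waiting indefinitely):

```python
def quorum_batcher(generate, send, quorum=3):
    """Quorum-based policy: block until `quorum` messages have been
    produced, then send the full batch."""
    batch = []
    while len(batch) < quorum:
        batch.append(generate())  # blocks until the next message exists
    send(batch)
```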

3) Hybrid approach (used by the .NET client)

  • Create an empty message batch
  • While the timeout has not expired AND the number of messages is less than the quorum
  • ------- Add each generated message to the batch
  • Send the batch (if not empty)

This hybrid approach adapts to the rate of message generation: at a slow rate the timeout guarantees that the batch is sent, and at a rapid rate the quorum ensures that you get the full batching advantage for each batch.
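The hybrid policy can be sketched by combining the two conditions. In this minimal Python illustration messages arrive on a queue, and the 20 ms timeout and 250-message quorum defaults are just placeholders:

```python
import queue
import time

def hybrid_batcher(msg_queue, send, timeout_s=0.02, quorum=250):
    """Hybrid policy: send as soon as either the timeout expires or the
    batch reaches `quorum` messages, whichever happens first."""
    batch = []
    deadline = time.monotonic() + timeout_s
    while len(batch) < quorum:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # timeout expired
        try:
            batch.append(msg_queue.get(timeout=remaining))
        except queue.Empty:
            break  # timed out waiting for the next message
    if batch:
        send(batch)
```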

Batching messages with the REST API for Service Bus

As mentioned before, the Service Bus .NET client already has batching enabled. The default send interval is 20 ms and the default batch size is 250 KB.

Batching was not available over REST, because the user sends one message at a time whenever they need to. I worked on the feature that enabled batch support for REST. Here is how you can do it:

1) You will create/manage the batches yourself

2) The message body will need to be in JSON format.

Sending a message to the Queue

As you can read in this blog post by Will Perry, sending a message to a queue is straightforward:

string messageBody = "Hello World!";
string sendAddress = serviceAddress + queueName + "/Messages";

var webRequest = (HttpWebRequest)WebRequest.Create(sendAddress);
webRequest.Method = "POST";
webRequest.ContentType = "text/plain";

using (var requestStream = webRequest.GetRequestStream())
{
    var bodyBytes = Encoding.UTF8.GetBytes(messageBody);
    requestStream.Write(bodyBytes, 0, bodyBytes.Length);
}

Sending a batch of messages is not that different:

string messageBody = "[{\"Body\":\"Message1\"},{\"Body\":\"Message2\"},{\"Body\":\"Message3\"}]";
string sendAddress = serviceAddress + queueName + "/Messages";

var webRequest = (HttpWebRequest)WebRequest.Create(sendAddress);
webRequest.Method = "POST";
webRequest.ContentType = "application/vnd.microsoft.servicebus.json";

using (var requestStream = webRequest.GetRequestStream())
{
    var bodyBytes = Encoding.UTF8.GetBytes(messageBody);
    requestStream.Write(bodyBytes, 0, bodyBytes.Length);
}

The difference is that the message body needs to be a valid JSON payload and needs to follow a convention that describes each message.
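For illustration, this convention is easy to produce with any JSON serializer. Here is a Python sketch that reproduces the payload used above:

```python
import json

def build_batch_payload(bodies):
    """Build the JSON batch payload for the Service Bus REST API:
    a JSON array with one {"Body": ...} object per message."""
    return json.dumps([{"Body": b} for b in bodies], separators=(",", ":"))

payload = build_batch_payload(["Message1", "Message2", "Message3"])
print(payload)  # [{"Body":"Message1"},{"Body":"Message2"},{"Body":"Message3"}]
```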

This is the structure of each message:

public class HttpBrokeredMessage
{
    public string Body { get; set; }
    public Dictionary<string, object> BrokerProperties { get; set; }
    public bool IsBodyBase64 { get; set; }
    public Dictionary<string, object> UserProperties { get; set; }
}

As you can see, there is support for BrokerProperties, UserProperties, and both text and binary bodies.

Binary Support

Here is a sample binary payload (note that BrokerProperties and UserProperties are optional):

[{"Body":"SGVsbG8=","IsBodyBase64":true,"BrokerProperties":null,"UserProperties":null}]

 Here is how you would create the value of the Body property:

string objectEncodedBytes;
using (var ms = new MemoryStream())
{
    var bf = new BinaryFormatter();
    bf.Serialize(ms, serializableObject);
    objectEncodedBytes = Convert.ToBase64String(ms.ToArray());
}

As you can see above, if you plan on sending a binary object you need to convert it to Base64 and also set the IsBodyBase64 property of the message to true.
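The Base64 convention itself is language-agnostic; for example, the sample value from the binary payload above round-trips with Python's standard library:

```python
import base64

# "SGVsbG8=" from the sample payload decodes back to the original bytes.
decoded = base64.b64decode("SGVsbG8=")
print(decoded)  # b'Hello'

# Encoding bytes for the Body property is the same operation in reverse.
body = base64.b64encode(b"Hello").decode("ascii")
print(body)  # SGVsbG8=
```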

JSON

In the sample code you can see options for creating the JSON payload. You can handcraft it yourself or use a library like Json.NET to do it for you (see attached code).

For example, given the HttpBrokeredMessage class defined above, you could serialize your object like this:

var jsonObject = new SerializationHelper.HttpBrokeredMessage() { Body = objectEncodedBytes, IsBodyBase64 = true };
var batch = new[] { jsonObject };
return JsonConvert.SerializeObject(batch);

Code

The attached code contains:

Program – main class; replace the constants with your subscription data

BatchMessageHelper – a helper class that converts strings or objects into the JSON payload

RestHelper – a helper class that lets you easily generate REST calls, verify the result, and output errors received from Service Bus

SerializationHelper – converts from bytes to BrokeredMessage, plus serializable samples

The code for this article can be found here: https://github.com/krolth/Service-Bus-Http-Batch-Send/

Conclusion

If you are sending lots of messages to Service Bus using the REST protocol, you should definitely consider batching them. Since this is a change to the send path only, you can continue consuming your messages just as you have in the past.

In our performance lab we measure both throughput and latency. Throughput improved by close to 10x using batched REST sends combined with the Service Bus client on the receive side, essentially matching the throughput of pure .NET send/receive clients.

