Configuring WAD via the diagnostics.wadcfg Config File

Azure SDK 1.3 added the ability to control Windows Azure Diagnostics (WAD) via a config file.  The MSDN documentation covering diagnostics.wadcfg explains that the capability was added to support the VM role.  The documentation also says to continue configuring WAD via code in OnStart for the other role types.

I instead recommend using diagnostics.wadcfg for all role types to perform the majority of the configuration, and configuring via code only when required, such as when using a custom performance counter.  This allows WAD to capture diagnostics before OnStart executes, and a config file is easier to maintain than code.
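For the cases that still require code, such as a custom performance counter, one SDK 1.3-era pattern is to read the current configuration (which reflects the diagnostics.wadcfg settings), add to it, and write it back, rather than starting from the hard-coded defaults.  The sketch below follows that pattern; the counter category "MyCategory", the counter name, and the sample rates are hypothetical placeholders, not values from any particular project.

```csharp
// Sketch: adding a custom performance counter from OnStart while keeping
// the settings that came from diagnostics.wadcfg (SDK 1.3-era API).
// "\MyCategory\My Custom Counter" is a hypothetical counter name.
var account = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue(
        "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));

var manager = account.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId,
    RoleEnvironment.CurrentRoleInstance.Role.Name,
    RoleEnvironment.CurrentRoleInstance.Id);

var config = manager.GetCurrentConfiguration();
config.PerformanceCounters.DataSources.Add(
    new PerformanceCounterConfiguration
    {
        CounterSpecifier = @"\MyCategory\My Custom Counter",
        SampleRate = TimeSpan.FromSeconds(30)
    });
config.PerformanceCounters.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

manager.SetCurrentConfiguration(config);
```

Everything else stays in diagnostics.wadcfg; the code touches only the one setting that cannot be expressed in the config file.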

The documentation discusses the location Azure reads diagnostics.wadcfg from; each role type uses a different location.  What isn’t explained is how to add diagnostics.wadcfg to your Visual Studio solution such that Visual Studio packages the file into the correct location.  Others have blogged about this topic in sufficient detail so I’ll just say that for a web role, the hardest of the three, add an XML file to the root of your web project called diagnostics.wadcfg then change its properties so that “Build Action = Content” and “Copy to Output Directory = Copy always”.  “Copy if newer” should work too but I personally prefer “Copy always”.
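For reference, a minimal diagnostics.wadcfg might look like the sketch below.  The root element and namespace follow the SDK 1.3-era schema; the quota and transfer values are illustrative only, not recommendations.

```xml
<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <Logs bufferQuotaInMB="1024"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Error" />
</DiagnosticMonitorConfiguration>
```

The Directories, PerformanceCounters, and WindowsEventLog elements from the MSDN sample all nest inside this same DiagnosticMonitorConfiguration root.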


The sample config XML in the MSDN documentation demonstrates most of the configuration capabilities but WAD fails if the sample is copied as-is into your project.  As shown below, the sample XML specifies paths and local storage names which may not exist.  Comment out the entire <DataSources> element to get the configuration working.  In my next blog post I’ll show how to add configuration settings for custom logs.

   <Directories bufferQuotaInMB="1024"
      scheduledTransferPeriod="PT1M">

      <!-- These three elements specify the special directories
           that are set up for the log types -->
      <CrashDumps container="wad-crash-dumps" directoryQuotaInMB="256" />
      <FailedRequestLogs container="wad-frq" directoryQuotaInMB="256" />
      <IISLogs container="wad-iis" directoryQuotaInMB="256" />

      <!-- For regular directories the DataSources element is used -->
      <DataSources>
         <DirectoryConfiguration container="wad-panther" directoryQuotaInMB="128">
            <!-- Absolute specifies an absolute path with optional environment expansion -->
            <Absolute expandEnvironment="true" path="%SystemRoot%\system32\sysprep\Panther" />
         </DirectoryConfiguration>
         <DirectoryConfiguration container="wad-custom" directoryQuotaInMB="128">
            <!-- LocalResource specifies a path relative to a local
                 resource defined in the service definition -->
            <LocalResource name="MyLoggingLocalResource" relativePath="logs" />
         </DirectoryConfiguration>
      </DataSources>
   </Directories>

WAD automatically maps the wad-crash-dumps, wad-frq, and wad-iis containers to special folders which only exist in web and worker roles.  For VM roles comment out the CrashDumps, FailedRequestLogs, and IISLogs elements.

Finally, there are the various “QuotaInMB” settings.  WAD automatically allocates 4096 MB of local storage named DiagnosticStore.  WAD fails if overallQuotaInMB is set higher than the allocated local storage, or if the various “QuotaInMB” values add up to within about 750 MB of overallQuotaInMB.  Either:

  • Decrease some of the “QuotaInMB” values until the config works.

  • Add a LocalStorage setting named DiagnosticStore to ServiceDefinition.csdef and increase overallQuotaInMB.
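The second option can be sketched as follows; the 8192 MB size is an illustrative value, not a recommendation.

```xml
<!-- ServiceDefinition.csdef, inside the WebRole/WorkerRole element:
     override the default 4096 MB DiagnosticStore allocation -->
<LocalResources>
  <LocalStorage name="DiagnosticStore" sizeInMB="8192" cleanOnRoleRecycle="false" />
</LocalResources>
```

With the larger allocation in place, overallQuotaInMB in diagnostics.wadcfg can be raised to match (here, up to 8192), keeping the roughly 750 MB of headroom described above between it and the sum of the individual quotas.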

It isn’t documented, but the megabytes allocated to DiagnosticStore are a hard limit which WAD can’t exceed.  The various WAD quotas are soft limits that control when WAD starts deleting old data.  WAD can exceed the quotas for a brief period of time while performing the delete.


Comments (1)

  1. In Windows Azure SDK 1.3 we have introduced the concept of startup tasks that allow us to run commands to configure the role instance, install additional components, and so on. However, this functionality requires that all pre-requisite components are part of the Azure solution package. In practice this has the following limitations:

     • If you add or modify a pre-requisite component, you need to regenerate the Azure solution package.
     • You will have to pay the bandwidth charge of transferring the regenerated solution package (perhaps 100s of MB) even though you actually want to update just one small component.
     • The time to update the entire role instance is increased by the time it takes to transfer the solution package to the Azure datacenter.
     • You cannot update an individual component; rather, you update the entire package.

     Below I describe an alternative approach. It is based on the idea of leveraging blob storage to store the pre-requisite components. Decoupling the pre-requisite components (in most cases they have no relationship with the Azure role implementation) has a number of benefits:

     • You do not need to touch the Azure solution package; simply upload a new component to the blob container with a tool like Windows Azure MMC.
     • You can update an individual component.
     • You only pay a bandwidth cost for the component you are uploading, not the entire package.
     • Your time to update the role instance is shorter because you are not transferring the entire solution package.

     Here is how the solution is put together: when the Azure role starts, it downloads the components from a blob container that is defined in the .cscfg configuration file:

     <Setting name="DeploymentContainer" value="contoso" />

    The components are downloaded to a local disk of the Azure role. Sufficient disk space is reserved as a LocalResource disk defined in the .csdef definition file of the solution; in my case I reserve 2 GB of disk space:

    <LocalStorage name="TempLocalStore" cleanOnRoleRecycle="false" sizeInMB="2048" />
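The download step itself is not shown in this comment; a minimal sketch using the SDK 1.3-era Microsoft.WindowsAzure.StorageClient API might look like the following. The "DataConnectionString" setting name and the flat (non-nested) container layout are assumptions for illustration.

```csharp
// Sketch: download every blob in the deployment container to the
// TempLocalStore local resource disk. "DataConnectionString" is an
// assumed setting name holding the storage account connection string.
private void DownloadComponents()
{
    var account = CloudStorageAccount.Parse(
        RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
    var client = account.CreateCloudBlobClient();

    var container = client.GetContainerReference(
        RoleEnvironment.GetConfigurationSettingValue("DeploymentContainer"));

    string localPath = RoleEnvironment.GetLocalResource("TempLocalStore").RootPath;

    foreach (var item in container.ListBlobs())
    {
        // Assumes a flat container: every item is a blob, not a directory.
        var blob = container.GetBlobReference(item.Uri.ToString());
        string fileName = Path.GetFileName(item.Uri.LocalPath);
        blob.DownloadToFile(Path.Combine(localPath, fileName));
    }
}
```

Once downloaded, the installers can be executed from the local resource path, which is also where the TEMP/TMP redirection described next points.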

    Frequently, the pre-requisite components are installers (self-extracting executables or .msi files). In most cases they will use some form of temporary storage to extract the temporary files. Most installers allow you to specify the location for temporary files, but in the case of legacy or undocumented third-party components you may not have this option. Frequently, the default location would be the directory indicated by the %TEMP% or %TMP% environment variables. There is a 100 MB limit on the size of the TEMP target directory that is documented in the Windows Azure general troubleshooting MSDN documentation.

    To avoid this issue I implemented the mapping of the TEMP/TMP environment variables as indicated in this document. These variables point to the local disk we reserved above.

    private void MapTempEnvVariable()
    {
        string customTempLocalResourcePath =
            RoleEnvironment.GetLocalResource("TempLocalStore").RootPath;
        Environment.SetEnvironmentVariable("TMP", customTempLocalResourcePath);
        Environment.SetEnvironmentVariable("TEMP", customTempLocalResourcePath);
    }

    The OnStart() method of the role (in my case it is a “mixed role”: a Web Role that also implements a Run() method) starts the download procedure on a different thread and then blocks on a wait handle.

    public class WebRole : RoleEntryPoint
    {
        private readonly EventWaitHandle statusCheckWaitHandle = new ManualResetEvent(false);
        private volatile bool busy = true;
        private const int ThreadPollTimeInMilliseconds = 1500;

        // Periodically check if the handle is signalled
        private void WaitForHandle(WaitHandle handle)
        {
            while (!handle.WaitOne(ThreadPollTimeInMilliseconds))
            {
                Trace.WriteLine("Waiting to complete configuration for this role.");
            }
        }

        […]

        public override bool OnStart()
        {
            // Start another thread that will carry out configuration
            var startThread = new Thread(OnStartInternal);
            startThread.Start();
            […]
        }
    }
