More Support for EventSource and strongly typed logging: The Semantic Logging Application Block

If you have been following my blog at all, you have seen my articles about System.Diagnostics.Tracing.EventSource, a class introduced in V4.5 of the .NET Runtime for production logging.   This class replaces the System.Diagnostics.TraceSource class, and we strongly encourage people to consider using EventSource instead of TraceSource for any future work (you can route events from one ‘stream’ to the other, so you can transition gradually if you need to).   We like to believe EventSource is the ‘ultimate’ logging API for .NET, in that you should be able to do pretty much any logging you desire with it, and we should never have to change it in incompatible ways.   We believe this simply because of the ‘shape’ of a logging statement; here is a prototypical one:

  • myEventSource.MyEvent(eventArg1, eventArg2, …)

Basically at the call site you specify

  1. The EventSource (myEventSource)
  2. The Name of the Event being raised as a method (MyEvent)
  3. Any ‘payload’ arguments you wish to log as part of logging that event. 
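Concretely, such a call site comes from an EventSource class that you define yourself. Here is a minimal sketch (the class name, event name, and arguments are illustrative, not from any real library):

```csharp
using System.Diagnostics.Tracing;

// A minimal EventSource sketch; MyEventSource and MyEvent are illustrative names.
[EventSource(Name = "MyCompany-MyEventSource")]
sealed class MyEventSource : EventSource
{
    public static readonly MyEventSource Log = new MyEventSource();

    // Each event is a strongly typed method; the payload keeps its types and names.
    public void MyEvent(string myArg1, int myArg2)
    {
        WriteEvent(1, myArg1, myArg2);
    }
}

// At the call site you supply only the source, the event name, and the payload:
// MyEventSource.Log.MyEvent("request-started", 42);
```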

Notice I did not specify any formatting strings, logging levels, or other metadata at the call site, just what ‘has to be there’.   What may not be quite so obvious is that all the event arguments are passed without loss of type information (the MyEvent method is strongly typed, not a ‘params object[]’).   Unlike ‘printf’ or ‘string.Format’ logging, we did not ‘stringify’ anything and lose information; it is all passed to the logging method.   This information is preserved in the EventSource ‘pipe’, which means that when you access a particular event you can do so in a strongly typed way, accessing each payload argument with a property accessor, as this snippet of code demonstrates: 

MySourceParser.MyEvent += delegate(MyEventTraceEvent data) {

    // MyArg1 and MyArg2 are the names of the parameters of the ‘MyEvent’ method.
    Console.WriteLine("MyEvent: Arg1 = {0}  Arg2 = {1}", data.MyArg1, data.MyArg2);
};


Thus you really can get to the point where your logging is what you want: it is as if the data of a method call in the program being monitored passes through the logging pipeline (serialization) and pops out as a strongly typed structure in whatever automation processes the log files.    Basically you get a ‘full fidelity’ (without loss of type information or ‘metadata’ like payload names) end-to-end pipeline for logging information.   This is the ideal in event logging, and EventSource is in a position to deliver it.

We are not completely there yet, as there are missing pieces.  However, far more of it is actually in place than people realize; they just don’t know about it.  I am trying to fix that with my blog, but there is a lot to tell, and even that is a work in progress.    In this blog entry I am here to tell you about one of the big pieces of this overarching ‘strongly typed eventing story’ that is falling into place:

The Semantic Logging Application Block

Microsoft has a team called the ‘Patterns and Practices’ team whose job it is to illustrate good, proven practices for using Microsoft technologies.   As part of their work they write guidance and build samples of real applications, but they also write utility libraries that ‘flesh out’ Microsoft technologies that currently provide only ‘the basics’.   This team recognized that the strongly typed pipeline EventSource provides is a great foundation, but only a foundation, and that they could provide the next layer of ‘value add’ libraries.  

This is what the Semantic Logging Application Block is.    ‘Semantic Logging’ is their term for strongly typed logging.   They like the term because it conveys the fact that the logging happens at a more structural level, much closer to logging the semantics of the program (since you pass strongly typed fields without losing the types or the names of the event or its fields), than classic string-based logging does.    Their ‘Embracing Semantic Logging’ article gives a great summary of why they like semantic logging and why you should too. 

As you might expect, I heartily endorse their philosophy on logging, and their efforts to ‘flesh out’ the EventSource foundation, and make it as useful as possible to as many as possible.   If you are not already a convert to EventSource, I strongly encourage you to read ‘Embracing Semantic Logging’. 

For those of you who are already ‘on board’ with strongly typed logging, what can Semantic Logging Application Block do for you?    

  1. EventListeners that send your EventSource data to various places, like flat files, Azure storage, the Windows event log, a database, etc. 
  2. An EventListener host/service that can process events from any process on the system and do application-specific monitoring / rollups. 
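For the first of these, an in-process sketch might look like the following (this assumes the SLAB CTP's ObservableEventListener and its flat-file sink; check the documentation linked below for the exact names, and MyEventSource here is a hypothetical EventSource):

```csharp
using System.Diagnostics.Tracing;
using Microsoft.Practices.EnterpriseLibrary.SemanticLogging;

// Hook a SLAB listener up to a hypothetical MyEventSource and send
// its events to a flat file (one of several SLAB 'sinks').
class Program
{
    static void Main()
    {
        var listener = new ObservableEventListener();
        listener.LogToFlatFile("app-log.txt");
        listener.EnableEvents(MyEventSource.Log, EventLevel.Informational);

        // ... run the application; events now flow to the flat file ...

        listener.Dispose();   // flush and release the sink
    }
}
```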

 For more complete details, see the PDF documentation on their ‘Download’ tab.  Here are the links for the current release (but they are likely to break in the future):

  1. SemanticLogging-DevelopersGuide-draft-CTP.pdf
  2. SemanticLogging-ReferenceDocs-draft-CTP.pdf

So, if you are doing logging on the .NET platform, you should be using EventSource.   If you are looking around for reusable code you can leverage, take a look at the Semantic Logging Application Block. 



Comments (7)

  1. I like the notion of strongly typed events but it seems to be a bit at odds with a pluggable logging architecture.  With a typical ILog interface you have a set number of methods to log in an admittedly unstructured way but the interface is pretty much fixed.  With the strongly typed (EventSource) approach you either A) skip interfaces and use EventSource directly (tight coupling) and lose the benefits of a "pluggable" approach or B) you design an interface with strongly typed methods that map 1-to-1 with some EventSource that gets plugged in.  The problem with approach B is that for every new Write method you have to update the interface.  Has anybody made EventSource work in a pluggable logging architecture? Perhaps the pluggable aspect is how ETW is pluggable?  

    Is it also the recommended practice that for every single new trace event, the developer adds a strongly typed method to the EventSource?  That seems like it might get old after a while and perhaps messy as folks remove code and corresponding EventSource method calls.  As those EventSource methods become dead code, you can't really remove them without affecting the event id value, right?  So you're left carrying around the dead code.

  2. Strong typing is a tool, and EventSource (strong typing for events) is also a tool, and should be used as appropriate to the needs of your application.    The key to strong typing is that it makes the contract between the creator and consumer of the events much more explicit, and generally this is a very good thing, but it does have versioning and sharing issues (as you point out).  These are not unlike the versioning and sharing issues of strong typing in general.   Proponents of dynamic languages make arguments much along the lines you describe to say that you should always use dynamically typed languages.

    Now those of us who like our strong typing realize that you don't have to be all (every event has its own event schema) or nothing (totally unstructured: everything is a string).   For example, you can have very strongly typed events for things that are built in (and probably frequent), but you can also have an event that takes an enumeration or even a string that represents its 'kind' and arguments (which may be specific types or may just be strings).   This allows you to build 'generic' events that provide some structure (every event has a programmatic 'kind' with arguments), but not complete structure (e.g., not every event kind has a distinct EventSource event).

    The key here is the needs of the consumer of the logging events.  If the only thing that will look at the events is a human, then frankly a 'Message(string)' event is enough (since all you will do is display the event).   The next step might be 'Message(string kind, string message)', which allows each message to have a programmatic 'kind' (which is a string) that lets some triage code sort the messages by 'kind'.   Now maybe some class of messages has a double value, so you might have 'LogValue(string kind, double value)'.   Notice these are 'generic', but they are tailored to the needs of the consumer and are significantly better than taking totally unstructured data (strings) and having to parse it.  

    Finally, note that you can have 'all of the above': you can have 'Message(string)' (so that developers can log ad-hoc things without needing to create a new template), as well as 'LogValue(string kind, double value)' that many components can use to log value updates in a structured way, as well as 'MyComponentHasAParticularProblem(int arg1, int arg2, ….)' that is very specific.   EventSource does not care, but unlike unstructured logging mechanisms it ALLOWS you to specify a more structured event if you so desire.

    Finally, you don't need dead code to 'reserve' event IDs.   You can specify them explicitly with an attribute on the method (in fact we encourage this, since as you point out people tend to insert and delete methods and not realize that this renumbers things).   The 'auto numbering' feature is really a way to get you going quickly, but in long-lived systems we strongly encourage you to be explicit about ID assignment.  
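    To make that concrete, here is a sketch of explicit ID assignment combined with the 'generic plus specific' events discussed above (all names are illustrative):

```csharp
using System.Diagnostics.Tracing;

[EventSource(Name = "MyCompany-MyComponent")]
sealed class MyComponentEventSource : EventSource
{
    public static readonly MyComponentEventSource Log = new MyComponentEventSource();

    // Explicit IDs via [Event]: inserting or deleting methods no longer renumbers events.
    [Event(1)]
    public void Message(string message) { WriteEvent(1, message); }

    // A 'generic' event: some structure (a programmatic kind plus a value), but reusable.
    [Event(2)]
    public void LogValue(string kind, double value) { WriteEvent(2, kind, value); }

    // A very specific, fully structured event alongside the generic ones.
    [Event(3)]
    public void RequestFailed(int requestId, int errorCode) { WriteEvent(3, requestId, errorCode); }
}
```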

  3. I should also note that versioning is really orthogonal to whether you use strongly typed events or not.  If the consumer needs detailed knowledge of a particular event, then it will need it whether the event was logged as a string (and gets parsed) or as a very specific event in EventSource.    Thus if you need that detailed information in the consumer, you have taken on a versioning burden, and frankly strong typing helps make that more explicit.  

    Conversely, if you are willing to break consumers of the EventSource, you are free to reuse event IDs.   Frankly, you can change everything about the EventSource in that case (this is typically what actually happens while your logging is 'pre-release', because you don't have any automation relying on the structure of your logs).

    It is actually a pretty reasonable strategy to just have a 'Message(string)' event that you use for anything that JUST humans consume.   As time goes on, you might make contracts and thus have strongly typed events for things that your processing logic needs.  These get added on an as-needed basis.      

  4. Finally EventSource is not at odds with a pluggable architecture.   In the limit you can pipe the data anywhere and you can throw away the types at any point you like (however it seems better if you design your 'pipe' so that the types can be preserved through it to the ultimate consumer of the events).   You ultimately get to decide that.

    The only real point of EventSource is that we believe types ARE important so we want the CAPABILITY of capturing that for cases where it is valuable.  You can always 'dumb it down' if you don't need that capability.

  5. Thanks!  I'm still trying to wrap my head around this approach to logging, and this extra info helps.

  6. sssssssssssssssssss says:


  7. Ohad Schneider says:

    SLAB is great but unfortunately it has two significant issues when used in Azure:

    (1) The out-of-process host/service cannot run on an Azure Website. Neither the deployment script nor a WebJob is allowed to access ETW sessions. This is very unfortunate, considering how well the somewhat obsolete System.Diagnostics.Trace is integrated, with easy configuration for both blob and table storage (including retention policy and verbosity threshold). A SLAB service website configuration would be a big win.

    (2) There is no ETW viewer that I know of that supports Azure Tables, so once your logs are written to the table, your only way of viewing them is a generic table viewer. For filtering you can get around this using (paid) tools such as ClumsyLeaf's TableXplorer, but for activity ID correlation you're completely out of luck, to the best of my knowledge. A PerfView extension that reads SLAB Azure Table logs would be huge.