Spawned Contexts – Replicator, While, State, EventHandlers, and CAG

Ever wonder why this.delayActivity1.TimeoutDuration sometimes doesn’t change the timeout duration?  How come this.callExternalMethodActivity1.ParameterBindings["(ReturnValue)"] isn’t giving you the value you expect in some scenarios?  How is it possible that sometimes this.GetActivityByName("foo") does not equal my sender for one of foo’s events? 

The answer to all of these questions is: spawned contexts.

One of the most powerful and most easily misunderstood concepts in Windows Workflow Foundation (WF) is that of new contexts executing cloned activities.  Before we even define a spawned context, let’s go back to the beginning …

The Beginning

One of the defining qualities of the activity state transition diagram is that there are no transitions from Closed back to Executing.  You can only get to Executing through an Initialized activity and you can only get to Initialized from a brand new instance. 

How do we handle looping activities then?  Replicator, While, and CAG all give the impression of executing the same activity (or set of activities) multiple times.  The answer is by creating a new context and cloning the template activity (explained later).

ActivityExecutionContext Interlude

First, let’s fully understand the ActivityExecutionContext.  This object is passed to all scheduled calls either as a specific parameter (Execute, Cancel, HandleFault) or as the sender (QueueItemAvailable handler, StatusChanged handler).  The ActivityExecutionContext provides the activity writer with an interface for controlling activity execution (hence the first two words in the name) while giving the runtime enough control to enforce the rules of the WF engine.

But what about the “Context” part of the name?  A context in WF is a sphere of execution.  There is a root activity for each context and only activities which exist in that context can be executed in that context.  In short, a context is a mechanism used by the runtime to determine on which set of activities to enforce rules.

One important note is that the ActivityExecutionContext is merely a short-lived expression of the underlying context – every scheduled call to an activity method gets a new instance of the ActivityExecutionContext object which has been configured specifically for that activity.  You’ll notice, however, that the Guid associated with the context does not change.
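
To make this concrete, here is a minimal sketch of a custom activity showing the two places the ActivityExecutionContext typically shows up: as the parameter to Execute and as the sender of a QueueItemAvailable handler.  (The activity and queue names are invented for illustration.)

using System;
using System.Workflow.ComponentModel;
using System.Workflow.Runtime;

// Hypothetical activity that waits for one item on a queue and then completes.
public class WaitForDataActivity : Activity
{
    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // The AEC hands us runtime services; here we create a queue and subscribe to it.
        WorkflowQueuingService queuingService = executionContext.GetService<WorkflowQueuingService>();
        WorkflowQueue queue = queuingService.CreateWorkflowQueue("DataQueue", true);
        queue.QueueItemAvailable += this.OnQueueItemAvailable;
        return ActivityExecutionStatus.Executing;
    }

    private void OnQueueItemAvailable(object sender, QueueEventArgs e)
    {
        // The sender is a fresh ActivityExecutionContext object configured for this activity,
        // but it represents the same underlying context (same Guid) as the one passed to Execute.
        ActivityExecutionContext context = (ActivityExecutionContext)sender;
        context.GetService<WorkflowQueuingService>().DeleteWorkflowQueue("DataQueue");
        context.CloseActivity();
    }
}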

Cloning Activities

Whenever a single activity needs to be executed multiple times it must be cloned.  Contexts are the mechanism provided to the activity writer for making this happen.  The code looks like this:

ActivityExecutionContext childContext = currentContext.ExecutionContextManager.CreateExecutionContext(childActivity);

This code will cause a new context, childContext, to be created with a root activity which is a clone of childActivity.  Note that this is a deep clone, so if childActivity is a composite activity then its entire tree is cloned as well.  Consider that we have a custom activity called WorkflowRoot which clones its only child activity using the above code (a fuller code sketch of this lifecycle appears after the questions below).  Visually, we now have the following tree of contexts:

|      WorkflowRoot (1)
|         childActivity (1)
|            grandChildActivity (1)
– childContext
         childActivity (2)
            grandChildActivity (2)


Looking at the above diagram there are several questions which come up.  Let’s try to deal with a couple of easy ones first:

  • Are changes to childActivity(1) or childActivity(2) reflected in the other instance?
    No.  Once cloned, these instances have no connection.  Changes to the template will affect future clones and changes to the clone will affect its own execution, but changes to one will not affect the other.

  • What is the return value of childActivity(2).Parent?
    WorkflowRoot(1).  The activities inside a new context do not know that they are not part of the rest of the tree.  The Parent property of the context’s root activity still points to the original parent.  It is only when walking down the tree that the contexts are noticeable.  For example, WorkflowRoot(1).Activities[0] will always return childActivity(1) and never childActivity(2).  Said another way, childActivity(2).Parent.Activities[0] == childActivity(1).  This is strange at first glance, but it soon becomes natural.
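
To see the whole lifecycle in code, here is a rough sketch of how a composite activity like the WorkflowRoot above might drive its single child through a spawned context: create the context, execute the clone, and complete the context once the clone closes.  (Cancellation and fault handling are omitted from this sketch.)

using System;
using System.Workflow.ComponentModel;

// Simplified composite that runs its only child in a spawned context.
public class WorkflowRoot : CompositeActivity
{
    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        Activity template = this.EnabledActivities[0];

        // Spawn a new context whose root activity is a clone of the template.
        ActivityExecutionContext childContext =
            executionContext.ExecutionContextManager.CreateExecutionContext(template);

        childContext.Activity.Closed += this.OnChildClosed;
        childContext.ExecuteActivity(childContext.Activity);

        return ActivityExecutionStatus.Executing;
    }

    private void OnChildClosed(object sender, ActivityExecutionStatusChangedEventArgs e)
    {
        ActivityExecutionContext context = (ActivityExecutionContext)sender;
        e.Activity.Closed -= this.OnChildClosed;

        // Retire the spawned context; the clone can never be executed again.
        ActivityExecutionContext childContext =
            context.ExecutionContextManager.GetExecutionContext(e.Activity);
        context.ExecutionContextManager.CompleteExecutionContext(childContext);

        context.CloseActivity();
    }
}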


While Loop with Delay

Consider the following workflow: a WhileActivity whose body is a single DelayActivity named delayActivity1.


Not very useful, I’ll admit, but it is handy for this demonstration.  Now, if we have implemented this as a code-only workflow, we’ll probably have some field defined on our root called delayActivity1.  Let’s say that we want to change the delay amount each time through the loop, so we subscribe to the InitializeTimeoutDuration event with the following code (the WRONG CODE):

this.delayActivity1.TimeoutDuration = TimeSpan.FromSeconds(iterationCount);

Assuming iterationCount is a variable that is incremented each time through the loop, we expect to see: delay 1 second, delay 2 seconds, delay 3 seconds, etc.  This, however, is not what we see.  Instead we get: delay 0 seconds, delay 1 second, delay 2 seconds, etc. 

The reason is that the WhileActivity is spawning a new context when it executes the child.  So, each iteration looks like this:

|      WhileActivity(1)
|         Delay(1)
– childContext
      Delay(1 + iterationCount)

this.delayActivity1 ALWAYS refers to Delay(1) and therefore we are updating the template every time InitializeTimeoutDuration is called.  That means we are always one timeout amount behind … Delay(2) is about to execute with TimeoutDuration set to 0 seconds and we update the template to 1 second.  Delay(3) is created with a 1 second timeout because it is just a clone of the template at that point in time.

Some new code for InitializeTimeoutDuration (the RIGHT CODE):

((DelayActivity)sender).TimeoutDuration = TimeSpan.FromSeconds(iterationCount);

This time we will see the following: delay 1 second, delay 2 seconds, delay 3 seconds, etc.  Here we have updated the cloned value instead of the template.  Note that for ALL events subscribed to in code-beside, the sender will be the actual instance of the activity which is currently running. 

Therefore, the sender above will always be the right one even if the delay is not in a context-spawning activity.  If you want to avoid issues, learn to access activity properties in a context-safe way (like using the sender objects) so that you do it right when it counts.  If the delay weren’t in a context-spawning activity then the WRONG CODE and the RIGHT CODE would be equivalent, but if the delay is in a context-spawning activity then the WRONG CODE will never work.
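
For reference, the RIGHT CODE inside a complete code-beside handler might look something like this (the handler name is whatever you wired up in the designer, and iterationCount is the field described above):

private void delayActivity1_InitializeTimeoutDuration(object sender, EventArgs e)
{
    // Always go through sender: it is the clone that is about to execute,
    // not the template that this.delayActivity1 refers to.
    DelayActivity currentDelay = (DelayActivity)sender;
    currentDelay.TimeoutDuration = TimeSpan.FromSeconds(iterationCount);
}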

Replicator and GetActivityByName


The workflow here (a ReplicatorActivity whose child is a SequenceActivity containing a CallExternalMethodActivity named createTask1 followed by a HandleExternalEventActivity) is a common pattern for replicated user tasks.  The CallExternalMethodActivity notifies the user of the task and the HandleExternalEventActivity gets an event when the task is complete.  Let’s say that we’re going to assign 3 tasks for UserA, UserB, and UserC so our replicator will initialize itself with the collection {"UserA", "UserB", "UserC"}.  Assuming that the user name is the correlation parameter, our ChildInitialized handler might look like:

CallExternalMethodActivity act = this.GetActivityByName("createTask1") as CallExternalMethodActivity;
act.ParameterBindings["userName"].Value = e.InstanceData;

The code above will not work as expected.  Let’s look at why by examining the contexts created:

|      Replicator (1)
|         Sequence(1)
|            CallExternalMethodActivity(1)
|            HandleExternalEventActivity(1)
– childContext1 (e.InstanceData = "UserA")
|      Sequence(2)
|         CallExternalMethodActivity(2)
|         HandleExternalEventActivity(2)
– childContext2 (e.InstanceData = "UserB")
|      Sequence(3)
|         CallExternalMethodActivity(3)
|         HandleExternalEventActivity(3)
– childContext3 (e.InstanceData = "UserC")
|      Sequence(4)
|         CallExternalMethodActivity(4)
|         HandleExternalEventActivity(4)

“this” in our code snippet refers to the root workflow which exists in the RootContext.  When we call GetActivityByName and pass the CallExternalMethodActivity’s name we will get the instance that is in the root context – CallExternalMethodActivity(1).  What we want is the one in the current context so the code should look like:

CallExternalMethodActivity act = e.Activity.GetActivityByName("createTask1", true) as CallExternalMethodActivity;
act.ParameterBindings["userName"].Value = e.InstanceData;

Note the two changes – first we use e.Activity instead of this.  e.Activity is the clone of the replicator’s template (Sequence(2-4)).  Second, we have passed the parameter true to GetActivityByName.  This tells the method to look only in the context of the activity on which it was called.  This keeps the method from walking into other parts of the tree and returning the RootContext instance.
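
Putting that together, the complete ChildInitialized handler might look something like this (a sketch; the handler and activity names follow the examples above):

private void replicatorActivity1_ChildInitialized(object sender, ReplicatorChildEventArgs e)
{
    // e.Activity is the root of the freshly spawned context (the cloned Sequence).
    // Passing true restricts GetActivityByName to that clone's tree, so we get the
    // clone of createTask1 rather than the template in the RootContext.
    CallExternalMethodActivity act =
        e.Activity.GetActivityByName("createTask1", true) as CallExternalMethodActivity;
    act.ParameterBindings["userName"].Value = e.InstanceData;
}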


Hopefully this post eases some confusion around contexts and doesn’t make it worse.  Please post comments if you want clarifications on anything written above or if you want more information about one topic or another.  I will write a separate entry at some point to discuss how to manage contexts you create in custom activities.

Comments (28)

  1. SKovour says:

    First of all I have to appreciate the pain you have taken to explain execution contexts.  Your blog helped me to understand the significance of the execution context and the difference between the template activity and the activity in the context (before reading this blog, I could not understand why I was getting an exception from ExecuteActivity with a message that my activity is not in the context).  However, now I am more in a dilemma, as I am not sure WWF capabilities can help me to solve the problem I am planning to resolve.

    I am doing a feasibility study to see whether WWF can solve our workflow issue in the domain of medical imaging applications.  We would like to model several clinical image processing blocks as activities, as elements in a pipeline pattern.  Specifically, we are wondering whether you can modify a sequence activity, for example, to behave like a pipeline element.  One of the challenging requirements (with respect to using WWF) is that properties of pipeline elements can be modified by the hosting application, and in that case the pipeline, starting from the segment whose properties were modified, needs to be re-executed.  And this can happen several times in the course of a workflow.  I am not really sure how I would model this control behaviour in a custom activity derived from a composite activity.  

    Any help on this is highly appreciated.  I am not sure whether Microsoft officially supports WWF through technical consultancy.  


  2. ntalbert says:


    Your scenario is an interesting one and it is definitely one which can be enabled by Windows Workflow Foundation (WF).  While I’m not going to provide an exact solution for your scenario, I will get you started with some potential architectures for getting the behavior you desire.

    The Root Activity
    The pipeline container activity needs to support the more generic concept of being able to interrupt the current work to arbitrarily go back and start over from some point.  This, surprisingly enough, should be pretty easy to implement.

    First, the activity could create a queue at execution time to which it will subscribe for QueueItemAvailable.  The handler will expect that the item passed on the queue is the name of the next activity that should be executed.  The logic for the QueueItemAvailable handler might look something like this (pseudo-code):

    // QueueItemAvailable handler; q is the queue created at execution time and the
    // item on the queue is the name of the activity to restart from.
    ActivityExecutionContext context = (ActivityExecutionContext)sender;
    string nextName = (string)q.Dequeue();

    Debug.Assert(context.ExecutionContextManager.ExecutionContexts.Count <= 1, "Should only have at most one child executing.");

    if (context.ExecutionContextManager.ExecutionContexts.Count > 0)
    {
        ActivityExecutionContext childContext = context.ExecutionContextManager.ExecutionContexts[0];
        if (childContext.Activity.ExecutionStatus == ActivityExecutionStatus.Executing)
        {
            // Cancel the running child; remember nextName so the Closed handler restarts from it.
            childContext.CancelActivity(childContext.Activity);
        }
    }
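
    For completeness, the Execute logic that creates the queue and starts the first pipeline component might look roughly like this (also pseudo-code; the queue name matches the host example further down):

    protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
    {
        // Create the "StartRework" queue and listen for rework requests from the host.
        WorkflowQueuingService queuingService = executionContext.GetService<WorkflowQueuingService>();
        WorkflowQueue q = queuingService.CreateWorkflowQueue("StartRework", true);
        q.QueueItemAvailable += this.OnQueueItemAvailable;

        // Kick off the first pipeline component in its own spawned context.
        ActivityExecutionContext childContext = executionContext.ExecutionContextManager.CreateExecutionContext(this.EnabledActivities[0]);
        childContext.Activity.Closed += OnActivityClosed;
        childContext.ExecuteActivity(childContext.Activity);
        this.SetNextIndex(1);

        return ActivityExecutionStatus.Executing;
    }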


    The logic for the Activity.Closed handler might look like:

    ActivityExecutionContext context = (ActivityExecutionContext)sender;

    // Complete the context for the closed activity
    context.ExecutionContextManager.CompleteExecutionContext(context.ExecutionContextManager.GetExecutionContext(e.Activity));

    if (NextIndexToExecute >= this.EnabledActivities.Count)
    {
        context.CloseActivity();   // nothing left in the pipeline
        return;
    }

    ActivityExecutionContext childContext = context.ExecutionContextManager.CreateExecutionContext(this.EnabledActivities[NextIndexToExecute]);
    childContext.Activity.Closed += OnActivityClosed;
    childContext.ExecuteActivity(childContext.Activity);

    this.SetNextIndex(NextIndexToExecute + 1);

    Now, there are probably some logic bugs in the above code that need to be ironed out (is there a race between an event coming in and an activity closing which could cause the “rework” not to occur?), but the idea is that some external event can cause cancellation and an activity’s execution logic can be written to have a dynamically decided “next child to execute”.

    Updating the Pipeline Components
    WF already has a mechanism for adding and removing components at runtime (the WorkflowChanges dynamic update capabilities).  You could simply piggyback on these and have a change to a pipeline component be a remove followed by an add.  This would require a little more engineering in some places (you’d have to cancel the child first if you were trying to remove an executing one), but would probably be one of the easier approaches.  Note that this will also have a performance implication because every dynamic change is persisted with the workflow’s instance.  That is to say that if you remove 30 activities and add 30 activities then your persistence footprint will now include a record for each removal and each addition.
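
    As a rough sketch of that approach (the pipeline and component names here are made up), the host-side change might look like:

    // Swap out one pipeline component using WF dynamic update.
    WorkflowInstance wi = wr.GetWorkflow(wfGuid);
    WorkflowChanges changes = new WorkflowChanges(wi.GetWorkflowDefinition());

    CompositeActivity pipeline = (CompositeActivity)changes.TransientWorkflow.GetActivityByName("pipelineContainer1");
    Activity oldComponent = changes.TransientWorkflow.GetActivityByName("smoothingStep1");
    pipeline.Activities.Remove(oldComponent);

    SmoothingStepActivity newComponent = new SmoothingStepActivity();
    newComponent.Name = "smoothingStep1";
    pipeline.Activities.Add(newComponent);

    wi.ApplyWorkflowChanges(changes);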

    An alternate solution is to implement a generic “property update” capability either on the Root activity (the pipeline container described above) or on the base pipeline component class.  This could be implemented as a queue onto which the host places some path to be updated and some value to assign.  Because of requirements around the state of an activity and event delivery (events are only delivered to non-Initialized, non-Closed activities), you’d probably want to implement this on the root.  So, for example, the queue item available handler for this queue might look like:

    // PropertyUpdate is a simple class carrying ActivityName, PropertyName, and Value
    PropertyUpdate update = (PropertyUpdate)q.Dequeue();
    Activity act = this.Activities[update.ActivityName];
    PropertyInfo pInfo = act.GetType().GetProperty(update.PropertyName);
    pInfo.SetValue(act, update.Value, null);

    Example Host Usage
    WorkflowInstance wi = wr.GetWorkflow(wfGuid);
    wi.EnqueueItem("UpdateQueue", new PropertyUpdate("SomeActivity", "SomeProperty", 5), null, null);
    wi.EnqueueItem("StartRework", "SomeActivity", null, null);

    Note that if the pipeline container activity is the one listening for UpdateQueue then it can be smart enough to start the rework from the earliest activity that is updated.  You’ll also probably want to be able to “batch” updates seeing as you might change one or more components at a time (and several properties on each) but don’t necessarily want that to translate into the work being restarted that many times.

    This is just a quick, first guess solution, but with a little work I think it could get you well along the path.  I’m sure that there are requirements of yours which will conflict with some of the examples given above, but the point is that you can do pretty much anything you can imagine with WF.

  3. Brian Noyes’ post "Understanding Windows Workflow and its complexities" has me thinking.

    I know a few…

  4. Howard Richards says:

    Am I hitting something to do with ExecutionContext here? I am trying to subclass StateActivity. I created an event that I raise during the first EXECUTE method – but if the workflow event code modifies DependencyProperties, the values don’t seem to be visible to the subclass code?


    Protected Overrides Function Execute(ByVal executionContext As System.Workflow.ComponentModel.ActivityExecutionContext) As System.Workflow.ComponentModel.ActivityExecutionStatus

        'raise event
        MyBase.[RaiseEvent](StateSubclass.InitializeTask, Me, EventArgs.Empty)

        'cannot access values set by event handler here?
        'Console.WriteLine("Value is " & Me.someProperty)

        Return MyBase.Execute(executionContext)
    End Function

  5. ntalbert says:

    Yes, you most likely are hitting something to do with execution contexts.  My guess would be that your event handler for InitializeTask is NOT accessing the activity by casting the sender argument, but instead is accessing some field on the workflow which represents the activity.

    If that is the case (you are NOT using sender) then the problem you are seeing is that the handler is changing the template activity and not the current instance.  State machine starts each state in a new execution context, so the template activity is never actually executed.
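
    In other words, the handler should look something like this (C# syntax, but the idea is the same in VB; the names come from your snippet):

    private void OnInitializeTask(object sender, EventArgs e)
    {
        // sender is the instance running in the spawned context, not the field on the
        // workflow class, so changes made here are visible back in Execute.
        StateSubclass currentState = (StateSubclass)sender;
        currentState.someProperty = "new value";
    }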

    Let me know if you are still having issues.

  6. The second in my series of alternate execution patterns ( part 1 ) I recently worked with a customer

  7. john.muller says:

    Hi. Very nice post, thanks.

    I have been using the AEC and was wondering about performance.  As a result I thought it would be a good idea to execute any ‘first’ branch within the default AEC and any subsequent branches within new child AECs.  Am I correct in assuming that I gain anything from this solution (aside from a bit more complicated logic to manage these AECs)?

    My initial assumption was that creating new child AECs incurs a bit of a performance hit – firstly, the actual code to create a new instance has to incur a tiny performance hit, and secondly persisting these workflows plus their associated AECs has to incur a performance hit (I think).

    So, I guess the question is two-fold :).  Is my assumption correct, and do I gain anything using the aforementioned solution.


  8. ntalbert says:

    john.muller, you will be unable to do what you are attempting.  The activity execution state machine terminates in the Closed state.  This means that once you execute an activity it cannot ever be executed again.  This includes trying to execute it in a new context.

    By spawning a new context with an activity which has not yet executed, you are cloning an Initialized activity.  Attempting to do so with a Closed activity would simply give you a clone of a Closed activity, so we throw an exception preventing you from doing this.

    Note that creating contexts is definitely a costly operation.  If you need to squeeze every last bit of perf out then unrolling loops will give you a noticeable gain.

  9. lcorneliussen says:


    How can I access the right activity from within a declarative condition?  And how can I do the same inside the Execute method of an activity?

    I would like to make the conditions react directly to activity properties, because my workflow is just an XML definition and its base class should only have a really limited set of properties.

    It would be awesome if you still watch the comments on this blog.


    Lars (originally from Germany, but now in Mexico)

  10. ilohraphael says:

    Hello Nate,

    I was wondering if you’ve got any pointers on how to use the replicator activity in a state machine workflow.  I’ve tried using the WssTaskActivity but that keeps throwing some exceptions about the number of contained activities that implement IEventActivity.

    Any ideas will be welcomed.



  11. ntalbert says:

    Lars, unfortunately I can’t be much help for your scenario as I’m wholly unversed in declarative workflows and pretty unfamiliar with the syntax for our rules.  That said, the WF forums are a great place for these types of questions.  The forums are very active these days:

    Sorry I couldn’t be more help directly … and sorry about the late reply.

  12. ntalbert says:

    Raphael, the short version is that you cannot use replicator in a state machine in the manner that you are trying to use it.  State machine is very strict about being composed only of States.  Each state, in turn, either contains more states or events.  Events MUST start with IEventActivities and must not contain IEventActivities anywhere else in their flow.

    The benefit of these restrictions is that the state machine can guarantee that all events in a state are atomic (you will never get part of the event’s work done while another event is running), it can guarantee mutual exclusion between the events in a state (only one will fire at a time), and it can guarantee that events are largely CPU bound.

    Replicator is designed to be used with sequential style workflows and allows for easy management of multiple, similar, tasks such as the WssTaskActivity.  If you want to do the same thing in StateMachine then you need to approach the problem from a completely different standpoint.  Instead of replicating the task that everyone should do, you need to view it as a state where you create all of the tasks, a state where you wait for all of the tasks to complete (or be updated, etc), and finally a state where you do your final set of work.

  13. You say that we cannot use the Replicator in a state machine workflow.  Instead, you say we have to program in a completely different manner.  Please provide a link to an example to back up what you say.  I’ll give $500 to the first person who provides a real example of a state machine workflow that creates an unknown-until-run-time set of parallel tasks based upon InfoPath task forms, all of which have to finish before the state of the workflow can move forward to the next state.  I have a feeling that my money is pretty safe.

  14. ntalbert says:

    Frederick, I’m not terribly familiar with the WssTaskActivity (and the other Sharepoint activities) and I was definitely speaking from a theoretical point of view – when you want to replicate things which have some sort of receive in them inside of a state machine you have to separate the receive into its own event handler.

    For questions of how to make use of WssTaskActivity in scenarios where the list of users is dynamic I’ll redirect you to the Sharepoint Workflow forums:

  15. We are about to start a great day showing technologies like .NET 3.5, Visual Studio 2008, and SharePoint

  16. Hi,

    We are developing an application using Windows Workflow and Informix V7.x.  I have got my batch service and persistence service working perfectly.  But we would like to have a feature wherein, if a workflow instance and the application objects associated with that workflow are being processed by a user, then other users should not be able to modify them.

    We are using ASP.NET 2.0 for implementing this application; the runtime is created as an application-level object and the persistence service is created and initialized at application start.

    Also, we would like to have the locking based on application IDs rather than an arbitrary GUID.

    What do you suggest ?

    1) Make some modification to the persistence service (if so, please tell us what modification is needed) and get this done

    2) Handle the application as part of application objects retrieval and persistence.

    Please help us in this regard

  17. ntalbert says:

    heman, that type of integration scenario is a bit out of scope of this blog.  I’d suggest you post the same information on the MSDN WF forum.  These forums are actively monitored by the product team and should be the quickest route to a solution.

  18. says:

    That was an awesome explanation – thanks!

    I’m impressed you were doing this 4 years ago!