Any Questions?


Is there anything about Windows Workflow Foundation (WF) that you would like to know and feel is an appropriate topic for this blog?  If so, please add a comment to this post.  I will use the comments as seeds for future postings.  Even if someone has already posted your specific question, please add your own comment as well so that I can do a better job prioritizing.


Note that posting here does not in any way guarantee that I will respond, but I will try to do the best that I can.  Another good resource for general WF questions is http://www.windowsworkflow.net.  This site has articles and samples of its own as well as links to other blogs and related MSDN forums.


Comments (30)

  1. Quango says:

    I am someone who designed my own workflow system for a previous business a few years ago, and I am now hoping to use WWF rather than build my own. I’ll tell you how I am thinking of using WWF and see if that helps.

    My first perception is that a lot of the examples concentrate on building a workflow process. A higher-level overview seems to be missing (or perhaps I missed it?) – what are all the ‘building blocks’, how do the workflow assemblies, runtime, persistence, tracking, etc. all work together, and how might they interact with existing applications and databases? Obviously there may be many possible combinations as things are so flexible, but an example of how this might work would be useful.

    For example, my app has a data-access layer with business logic, based on SQL 2005, with the front-end in ASP.NET 2.0. I have some back end processing running on the server as NT services. I assume I will need to have a service running the workflow runtime as my workflow ‘engine’, and persist to a database.

    My first test use of WWF in this application is to be a credit control workflow. When a customer’s account is unpaid beyond the due date, it will need to initiate a credit control workflow process. As this is largely event-driven I am building it as a state machine WF.

    The first state, DetermineAction, is transitory: it will decide if the nice reminder letter (slightly overdue) goes out, or the firm-demand letter goes out, before setting a new WaitForAWeek state.  If the debt is very large, it may decide to skip the letters and go straight to the RequiresPhoneCall state. I guess that this might be a good place to use Rules, but I have not got into those much yet.

    At any point in the workflow the customer might pay up, so we need to have a workflow-level event handler (is that the right name?) that completes the WF, regardless of the current state. That seems to be possible by having event activities at the top of the workflow, although I’ve not seen it written about (I discovered it by accident!).

    The WaitForAWeek state will have a design-time delay of seven days, after which it reverts to DetermineAction state again. If we have sent the first letter, we go to the second. If the second has been sent, we go to RequiresPhoneCall state. However I might need to make the delay dynamic based on the size of the debt, so larger debts are chased more quickly – have not seen how the DelayActivity could be made dynamic rather than static so far.

    The RequiresPhoneCall state is where I need to figure out how ‘assignment’ of workflow activities works in WWF.  I have not yet figured out how to locate all workflows in a current state but I assume it can be done.

    Actual assignment seems to be something that’s up to the developer to implement, presumably either in the workflow by storing the assignment there, or by storing the assignment in the app along with the instanceID. I don’t know how efficient trying to query a set of workflows would be (especially if they have been persisted to the db?) – what’s the best way?

    When someone chases the debt they may need to see what the workflow has done – is a state history something provided, or something we developers need to store?

    As we expand workflow into other areas it’s likely one person would need to deal with lots of different activities, so I would want to have a way of presenting all the activities to the user. This is allied to the earlier issue of where I store assignment: presumably it would be best done by me in my database. However if I do that, and some other activity has ended the workflow (e.g. the customer has paid), my system will be out of date.

    Hope that’s a useful example of some of the questions I am encountering as I try to figure out WWF. That said, it feels to me that it could be the next killer app for Microsoft.

  2. ntalbert says:

    Since there was a lot of ground covered in your post I’m going to just quickly (and somewhat superficially) answer the questions that I think are less pressing for you before diving into the questions at the end.  

    1. We have a few samples which show somewhat end to end usage of Windows Workflow Foundation (WF), but nothing at the scale of your project nor in the same problem space as your project.  The larger samples are found in the Applications directory under Samples. 
    2. The handlers inside a state machine are called Event Handlers and not Activity Handlers.
    3. Any Event Handler defined on a parent state is available to substates – so if your root state has an Event Handler then this handler will be available in ALL states.
    4. Delays can be set dynamically.  I have not done this in a state machine, but there should be two ways to get the results you want.  First, in the state initialization handler you can access the delay activity in a CodeActivity by walking the tree and set the TimeoutDuration property.  Second, you can subscribe to the InitializeTimeoutDuration event, cast the sender to DelayActivity, and set the TimeoutDuration on that object.
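    As a rough sketch of the second approach, here is what the InitializeTimeoutDuration handler might look like in the workflow’s code-beside class. The handler and helper names are mine, and the debt thresholds are purely illustrative – the key points are casting the sender and setting TimeoutDuration before the timer starts.

    ```csharp
    using System;
    using System.Workflow.Activities;

    // Hypothetical handler wired to a DelayActivity's InitializeTimeoutDuration event.
    private void waitDelay_InitializeTimeoutDuration(object sender, EventArgs e)
    {
        // Cast the sender rather than using a workflow field -- the sender is
        // the activity instance that is actually about to execute.
        DelayActivity delay = (DelayActivity)sender;
        delay.TimeoutDuration = GetDelayForDebt(this.DebtAmount);
    }

    // Illustrative policy: larger debts are chased more quickly.
    private TimeSpan GetDelayForDebt(decimal debt)
    {
        return debt > 10000m ? TimeSpan.FromDays(2) : TimeSpan.FromDays(7);
    }
    ```
    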

    Now, for the meatier questions …

    Assignment of Workflow Tasks

    In your post you mention that you have not found a way to assign “workflow activities”.  To clarify, WF uses the term activities only to refer to the entities which make up a workflow.  Activities do not have any notion of ownership or assignment because this largely has no meaning for things like ParallelActivity, CodeActivity, or DelayActivity.  When we think about ownership and assignment we are usually picturing external entities which are informally called tasks.  For the purposes of this blog we’ll consider a task to be any work item that must be performed outside the scope of the workflow but must be tracked by the workflow.

    That said, you hit the nail on the head with your comment about actual assignment being something that is up to the developer to implement.  This is true for tracking the task internal to the workflow as well.  We provide mechanisms, like the ExternalDataExchangeService, for communicating data between the workflow instance and the host (and vice versa), but we do not inherently support the notion of a piece of work that someone owns.  You could track this as a field on your workflow, as a field on a custom activity, as an entry in a separate database, or in any other manner.

    At the bottom of this post I will touch on one possible architecture, but first let me say that the next release of SharePoint Services will enable workflow task management.  I don’t really have any information on this, but if you search online I’m sure you will come across something that lets you know what they are up to.

    Querying Workflow State

    You asked how hard it is to query a workflow for its current state.  Workflow state really falls into two categories – 1) which State is your StateMachineWorkflow currently in and 2) is the workflow instance running, blocked, suspended, etc.  I am assuming that you are looking for (1) and unfortunately that goes into the realm of my lack of knowledge about state machines.  I do know that it is possible using the StateMachineWorkflowInstance class to find out the state of a loaded workflow and I also know that this information is not readily extracted from the persistence database, but I do not know the implications of trying to determine the state of an arbitrary number of unloaded workflows.  That being the case, I’ll cover a possible architecture which could work below.

    State History

    Our tracking service provides you with a nice way of determining the history of your workflow instance.  Out of the box we ship a tracking service which stores the data in a SQL database and comes with a few helper methods for querying the tracking data.  By specifying a custom profile you can get as much (every execution status change of every activity with extraction of data) or as little (only transitions to new state machine states) tracking data as you like.  There should be some samples which cover the use of tracking.
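    As a hedged sketch, querying that history with the SqlTrackingQuery helper class might look like this (the connection string and instance ID are assumed to come from your host):

    ```csharp
    using System;
    using System.Workflow.Runtime.Tracking;

    // Illustrative only: read the activity event history for one instance
    // from the out-of-box SQL tracking database.
    SqlTrackingQuery query = new SqlTrackingQuery(connectionString);
    SqlTrackingWorkflowInstance tracked;
    if (query.TryGetWorkflow(instanceId, out tracked))
    {
        foreach (ActivityTrackingRecord record in tracked.ActivityEvents)
        {
            Console.WriteLine("{0} -> {1}", record.QualifiedName, record.ExecutionStatus);
        }
    }
    ```
    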

    Possible Architecture

    Since your workflow instance cannot be easily and performantly queried to determine the owner of a PhoneCall task, it is best to store this data separately.  Additionally, since your scenario seems to have a 1:1 correspondence between tasks to be performed and workflow instances it makes sense to store the instance ID with the rest of the task data as opposed to storing task data in the workflow instance.  For example, picture the following implementation of the phone call state:

    StateInitializer
      CreateTask // Creates the task in the database
      AssignTask // Kicks off logic to assign the task an owner
      SetState(WaitForTaskComplete)

    WaitForTaskComplete
      TaskCompleted // HandleExternalEventActivity which is notified on task completion
      DeleteTask // Clean up the task in the database
      SetState(DetermineAction)

    Outside of the workflow you would create a database (let’s call it the “application database”) which stored the task data (workflow instance ID, owner, link to customer data) and the CreateTask activity (or set of activities) would populate this database.  This could either be done by connecting to SQL directly in the activity or by creating an ExternalDataExchange interface and performing the work in a local service.  The latter has the distinct advantage of separating implementation from your workflow and is the recommended technique.
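    A local service contract along these lines might look like the following sketch. All of the names here are hypothetical (they are not part of WF) – the WF pieces are the ExternalDataExchange attribute, which marks the interface for use with the ExternalDataExchangeService, and ExternalDataEventArgs, which event payloads must derive from.

    ```csharp
    using System;
    using System.Workflow.Activities;

    // Hypothetical contract for the task local service described above.
    // CallExternalMethodActivity would invoke the methods from the workflow;
    // a HandleExternalEventActivity would listen for TaskCompleted.
    [ExternalDataExchange]
    public interface ITaskService
    {
        void CreateTask(Guid instanceId, string customerId);
        void AssignTask(Guid instanceId);
        void DeleteTask(Guid instanceId);
        event EventHandler<ExternalDataEventArgs> TaskCompleted;
    }
    ```
    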

    TaskCompleted, when raised, would cause the task to be deleted from the database (or perhaps simply updated with a resolution).  You would also have the same DeleteTask logic in your root level handler which is responsible for exiting the workflow if the customer pays the bill.

    The last piece would be your task owner interface.  Whether web or windows based, this GUI could simply query the application database for any tasks assigned to this owner.  On completion of a task the host would need to raise the appropriate event for the appropriate workflow instance.

    I hope this helps get you going and answers some of your questions.  This was all pretty high-level, so if you want clarification on anything, just ask.

  3. Howard Richards says:

    Many thanks for the very comprehensive reply! – and congratulations to the WF team on a really excellent product. I think "killer app" is an appropriate epithet.

    Point 3 about "root" Event Handlers (is that the right term?) is what I guessed, so it’s good to hear confirmation – I would suggest that if someone does a state machine sample they add one of these to illustrate its usage. For example, in my app I need a "CustomerHasPaidDebt" Event Handler in all tasks to stop the process, so these are ideal.

    I did finally find a bit of an ‘overview’ of WF in the help file with Beta 2 – I had only been searching online when I posted.

    Assignment: your points on assignment again are what I suspected – that the persistence store should not be treated as if it’s a queryable database, e.g. find all workflows of type X in state Y where assigneduser = Z.

    I think assignment is going to be a FAQ as most workflows (certainly most state machine workflows) will use it in some way, and it might be an idea to provide a sample approach (the helpdesk app seems like a logical one to extend to show this, although it does not use a state machine).

    Assignment held in the app raises the issue of concurrency, since the workflow might move on to a new state by the time the user tries to action the task. I think the best route is, as you suggest, to let the workflow handle assignment (in stateInitialization perhaps) and also deassignment (in stateFinalization), notifying the app via the interface that it needs to create or delete the task for assignment tracking purposes.

  4. bryant says:

    Hello,

    I just created my first WF workflow and I’m curious if what I did is correct. I wanted to do a latitude/longitude lookup using a web service after the user enters a location string. The user clicks the update button and I fire off my workflow which reads/writes to the user profile table. In order for this to happen in the background I added the DefaultWorkflowSchedulerService to my service in my web.config. Everything seems to be working fine during testing, but I’m wondering if this is a valid use of WF?

    Thanks!

  5. ntalbert says:

    First of all, let me apologize for my late reply.  I was out of the office for a while and then, as usually happens, slammed with work.

    Bryant, this is one of those good/bad scenarios.  Windows Workflow Foundation has no qualms about using the DefaultWorkflowSchedulerService in an ASP.NET hosted scenario and, as you’ve seen in your testing, everything will work just fine.  The problem is that ASP.NET does NOT like people using up all its threads.

    Our ManualWorkflowSchedulerService was written specifically for the ASP.NET scenario to make sure that we were running on the minimal number of threads.  The call to RunWorkflow is an explicit “gifting” of the current thread (the request thread in the IIS case) to run as much of a specific workflow instance as possible.  When you introduce the DefaultWorkflowSchedulerService you are, by default, allowing as many as 4 threads to run workflow.

    Is that a bad thing?  In most cases, no.  Don’t quote me on these numbers, but .NET supports something like 25 threads per processor per process.  If you are doing anything with Transactions, then at least one of those threads needs to be “available” to avoid deadlock in some cases.  The number of threads ASP.NET requires depends on the situation – if you are running a single web service in an isolated AppPool then I can’t imagine ever running into an issue.

    Last but not least, let’s go to the “official word” – this is the statement that covers everyone’s butt and will hopefully steer you to a design that is free from danger.  Ask yourself why you need to be doing the processing in the background?  Is all processing tied to a request?  Does the processing take very long?  Is new processing ever driven by a “delay” in the workflow?

    If you are doing a lot of processing then it is often better to model this as a windows service.  Using Windows Communication Foundation (WCF) you can directly expose your windows service as a consumable web service.  Alternately you can use WCF or .NET Remoting (last generation technology at this point) to communicate between your current web service and your windows service.

    If you aren’t doing much processing and it is all tied to the request, then consider using the ManualScheduler.  While this will add additional latency to your responses, it can be considered a constant value as opposed to the unpredictable response times that you’ll get when you’ve got an unknown number of active workflow instances vying for processor time.

    In general, try to avoid using the DefaultWorkflowSchedulerService in ASP.NET/IIS.  If you do use it, use it with caution and expect that there may be a few unknown problems which pop up during stress.
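    For reference, the “gifting” pattern with the ManualWorkflowSchedulerService looks roughly like this. This is a sketch: the workflow type name and the Application-state lookup are assumptions about your particular host.

    ```csharp
    using System.Workflow.Runtime;
    using System.Workflow.Runtime.Hosting;

    // Run one workflow instance synchronously on the current (request) thread.
    WorkflowRuntime runtime = (WorkflowRuntime)Application["WorkflowRuntime"];
    ManualWorkflowSchedulerService scheduler =
        runtime.GetService<ManualWorkflowSchedulerService>();

    WorkflowInstance instance = runtime.CreateWorkflow(typeof(LatLongLookupWorkflow));
    instance.Start();

    // Donate this thread to the instance until it idles or completes.
    scheduler.RunWorkflow(instance.InstanceId);
    ```
    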

  6. eertl says:

    Is there any way to use delay in a state machine workflow?

  7. ntalbert says:

    eertl,

    Short Answer:
    The DelayActivity can only be used as the first executing activity within an EventDrivenActivity inside a StateMachineWorkflow.  The “timer” is considered to have started when the state is entered.  If the state is reentered then the timer is reset.

    Long Answer:
    First a little background on the StateMachineWorkflow.  The direct children of the workflow are StateActivity objects and each StateActivity can be composed of sub states as well as EventDrivenActivity objects.

    An EventDrivenActivity, by definition, must start with an activity which implements the IEventActivity interface.  There are three out of box activities implementing this interface: DelayActivity, HandleExternalEventActivity, and WebServiceInputActivity.  

    The IEventActivity defines a protocol by which a parent (in this case the state machine) can subscribe for an event on behalf of the child.  Allowing the parent to subscribe for the event enables the parent to control execution based on the event’s arrival.  The StateMachineWorkflow uses this to manage execution of the children in such a way that only children which are valid (in the current state) and for which an event has arrived are executed.  ListenActivity and EventHandlersActivity work in a similar way.

    So, this brings us back to your question:  The StateMachineWorkflow does NOT allow IEventActivity implementors to exist in the body of an EventDrivenActivity at any depth EXCEPT as the first executing child.  The reason is that, by definition, an IEventActivity is blocking.  The StateMachineWorkflow’s execution relies on the fact that the execution of an EventDriven is a short-lived process which is simply used to transition the workflow to another state or ready the workflow to receive a new event.

    If IEventActivity objects were allowed in the middle of an EventDrivenActivity then the workflow would spend an indeterminate amount of time BETWEEN transitions.  It would essentially be waiting on an event in a half-state and this would break the model.  Therefore, DelayActivity can only be used as the first activity of an EventDrivenActivity inside a StateMachineWorkflow.  Note that this restriction does not apply to sequential workflows.
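    To make the rule concrete, here is a rough sketch of the only legal placement, built in code. The state and activity names are illustrative; the types all come from System.Workflow.Activities.

    ```csharp
    using System;
    using System.Workflow.Activities;

    // WaitForAWeek: a state whose only transition fires when the delay elapses.
    StateActivity wait = new StateActivity();
    wait.Name = "WaitForAWeek";

    EventDrivenActivity timeout = new EventDrivenActivity();
    timeout.Name = "TimeoutElapsed";

    // The DelayActivity implements IEventActivity; as the first child, the
    // state machine subscribes to its timer when the state is entered.
    DelayActivity delay = new DelayActivity();
    delay.TimeoutDuration = TimeSpan.FromDays(7);

    // Non-blocking follow-up work is allowed after the event activity.
    SetStateActivity toDetermine = new SetStateActivity();
    toDetermine.TargetStateName = "DetermineAction";

    timeout.Activities.Add(delay);        // first child: legal
    timeout.Activities.Add(toDetermine);  // transition once the timer fires
    wait.Activities.Add(timeout);
    ```
    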

  8. matra says:

    Hi,

    Can you answer some of the questions posted here:

    http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=452829&SiteID=1

    Thanks.

  9. prideu2 says:

    Hi there! Great blog you have.

    I have a strange workflow to implement with WF.

    First, there’re 4 states, and 4 approvers of course for each state. The strange thing is that when the user initiates the workflow, he/she must specify the user for each of the first two states, the others are fixed.

    So it would be:

            First State       Approver defined by user (creator)
            2nd State         Approver defined by user (creator)
            3rd State         Fixed
            4th State         Fixed

    Do you have any idea of how this apparently simple workflow could be implemented?

    thanks in advance!

    Victor

  10. ntalbert says:

    prideu2, I’m not sure where you are running into trouble.  First, just to clarify, is there a single approver per state making a total of 4 approvers per workflow, or are there 4 approvers per state?  One sentence of your post contradicts the table you’ve created.

    Second, are you wondering how the workflow creator could specify parameters (the approvers) at creation time?  Are you wondering how to have the workflow creator specify the approvers when states 1 and 2 are entered?  Are you wondering how to set data on your activities at runtime?

    Let me know and I’ll try to help.

    Nate

  11. AndyBurns says:

    I’m trying to build a workflow modification form for MOSS that allows a user to reassign workflow (InfoPath form) tasks to other users (in case another user is away, etc.). The ECM Sample Starter Kit has an example that uses an InfoPath Modification form to do just that. However, it is sequential, and I’m using a state machine workflow.

    Looking at the code for the example, I can see that it has an ‘EnableWorkflowModification’ step to, well, enable a modification form. I looked this up on the MSDN site, and found that the pages I saw (http://msdn2.microsoft.com/en-us/library/ms550177.aspx and http://msdn2.microsoft.com/en-us/library/ms480794.aspx) discuss the ‘scope’ of the enabled form.

    However, scopes don’t seem to make much sense in the context of a state machine workflow. The modification form would need to be enabled for a state, not a scope. There are no ‘IEventActivity’ steps to block the progress of the workflow while it is in a scope. Thus, the ‘rest’ condition of the workflow is in a StateActivity, waiting for an event – and it is only in this condition that a user would have a chance of actually using a modification form.

    Could you clarify how I’m meant to use Modification forms in SharePoint 2007 state machine workflow? Or is this too specific to SharePoint?

  12. ntalbert says:

    AndyBurns, sorry that it took me so long to get back to you, but you should talk to the SharePoint people about this.  This is far too specific to their usage of our product for me to be any help.  Again, sorry for the delay.

  13. AndyBurns says:

    Just to confirm Quango’s comment, I just checked, and it does not appear to be possible to dynamically set a DelayActivity’s timeout duration in state machine workflow.

  14. ntalbert says:

    AndyBurns,

    From my response to Quango:

    Delays can be set dynamically.  I have not done this in a state machine, but there should be two ways to get the results you want.  First, in the state initialization handler you can access the delay activity using a CodeActivity by walking the tree and set the TimeoutDuration property.  Second, you can subscribe to the InitializeTimeoutDuration event, cast the sender to DelayActivity, and set the TimeoutDuration on that object.

  15. Howard Richards says:

    I tried to mimic this behaviour when I created my own subclass activity. The problem was that after the ‘user’ code (in the event handler) set a dependency property value, the subclass was not able to see the property – the event was raised in Execute(). Can you explain this?

  16. ntalbert says:

    Howard,

    Did you see the response I posted to your question on the Spawned Contexts thread?  Here is a copy of it:

    Yes, you most likely are hitting something to do with execution contexts.  My guess would be that your event handler for InitializeTask is NOT accessing the activity by casting the sender argument, but instead is accessing some field on the workflow which represents the activity.

    If that is the case (you are NOT using sender) then the problem you are seeing is that the handler is changing the template activity and not the current instance.  State machine starts each state in a new execution context, so the template activity is never actually executed.
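    In code, the difference is roughly this (a minimal sketch; waitDelay is a hypothetical field name for the activity):

    ```csharp
    using System;
    using System.Workflow.Activities;

    void waitDelay_InitializeTimeoutDuration(object sender, EventArgs e)
    {
        // Wrong: this mutates the template activity, which never actually
        // executes once the state machine spawns a new execution context.
        // this.waitDelay.TimeoutDuration = TimeSpan.FromDays(2);

        // Right: the sender is the live activity for this execution context.
        DelayActivity delay = (DelayActivity)sender;
        delay.TimeoutDuration = TimeSpan.FromDays(2);
    }
    ```
    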

    Let me know if you are still having issues.

  17. Howard Richards says:

    Thanks for the reminder…

    Ugh. Oh dear. If that is the only way to do it, then I quit. It’s too easy to forget/miss this when writing in workflow – it’s just yet another counter-intuitive approach that WF imposes. Uninstall WF and 3.0..

    As you can see from my original post in March (‘Quango’ at the top) I’ve been looking at workflow for a LONG time and overcome many of the conceptual problems.  I even managed to overcome the horrific minefield that is ‘HandleExternalEventActivity’ with its Heath-Robinson string of interfaces, classes and services to send a single message, by stuffing workflow inside a webservice and writing webmethod wrappers.

    But in the end I found that using WF would generate more effort and coding than it acutally provides in benefits. There are just too many problems at every turn once you try to implement it in a real-world situation and I know I’m not alone in this feeling!

    WF is too complicated. Its like being required to have a degree in mechanical engineering to be allowed to drive a car. It’s not the direction .NET has been taking us since 2000 – simpler, higher-level and more powerful classes.

    What worries me is that if experience developers like myself are giving up on it in the beta phase, does this system stand any chance of making it to 2.0 ?

  18. mconstantin says:

    I have an activity property whose type is like a metadata hierarchy: the root is a parent whose "value" is a collection of metadata items; each item is an integer metadata, string metadata, etc. or another parent metadata. An integer metadata has a Name, Value, Default (value) properties, etc. an so on.

    The problem is when I want to bind say an integer property in another activity to an item inside this hierarchy. The binding editor only shows the "root" and item[0]. This metadata hierarchy is actually populated at design time, e.g. the root has a collection of 5 items, so I’d like to see item[0], item[1], etc. and bind to the Value property of 3rd item (like activity1.Metadata.Value[3].Value).

    The second situation is the reverse of this: I’d like to bind say activity1.Metadata.Value[3].Value in the hierarchy to an integer property of another activity. But there is only one dependency property (of type MetadataProperty) in my activity, i.e. item 3 does not have a corresponding (bindable) dependency property.

    Questions:

    1. can the bindable editor be replaced with a custom one?

    2. what are the formats for the Path property of the ActivityBind? seems that could be like Myproperty.SubProp1, but does it support an expression like this: Metadata.Value[3].Value? besides, the ActivityBind class is sealed: I cannot enhance it, or even change its serialization, can I?

    3. How does the property grid add the "blue dot" for property binding? is it tied in having a dependency property with the name of the property and "Property" appended to it?

  19. Bill Bassler says:

    I have a scenario where I need to create a set of workflows in the context of a transaction. For example this is a high-level description of what I need to do. The use of the wording "Transaction" is meant only to specify that all operations must either succeed or fail.

    Start a "Transaction"

    Write  "Batch Header" info into a database table (A batch ID, Time. number of Items included in Batch etc)

     For each Item key submitted in

       Create an approval process workflow

           Write Item mapping ID, status info into the database. Mainly for application data grid binding, status

           display etc.

       EndFor each

    End "Transaction"

    The issue is that I want all submitted items to be "committed to the supporting database tables and the all the workflows to also be created, intialized and persisted. If any "exceptional" issues are detected during the course of this process I want ALL the data and workflow related instances to rollback to a pre-attempt state. i.e. no data or workflows. What is the most robust way to acheive this?

  20. ntalbert says:

    mconstantin, allow me to apologize for never responding to your question.  In the interest of having a complete archive, my response would have been: I’m not that familiar with the designer or the binding mechanism (I’m more of a core runtime guy) so I’d suggest the MSDN WF forums for questions of that nature.  These forums are actively watched by members of the team and turn around time is pretty good.

  21. ntalbert says:

    Bill Bassler, this is an interesting (and not uncommon) scenario that you bring up.  WF in NetFx 3.0 has very limited support for what we’ll call BYOT (bring your own transaction) but luckily that support intersects with what you are interested in doing.

    It’s been a whlie since I’ve worked with this stuff, so some of my details might be a bit off, but it should go something like this …

    The WorkflowInstance.Unload API does not suppress the ambient transaction.  This means that if you call Unload with Transaction.Current set then the SqlWorkflowPersistenceService will use that transaction to save the instance into the database.

    This is a rather dangerous feature, however, because it can be difficult to actually save a workflow in the desired state given that it could be running at any arbitrary point on any arbitrary thread.  Unless, of course, you are using the ManualSchedulerService.

    So, the proper way to set this up is to configure the WorkflowRuntime with the SqlWorkflowPersistenceService and the ManualSchedulerService.  Create as many instances as you need, call Unload under the same transaction, and then commit the transaction.

    One other caveat here is about the state of the instance in the database.  If you want the SqlWorkflowPersistenceService to automatically pick up and run your newly created workflow then they need to be in the Executing state and they need to be NotBlocked.  When you first Create a workflow instance this is not the case.  I believe that you can simply call Start on your newly created instances in order to both change the instance’s state as well as schedule the first work item if your desire is to have another machine watching the same DB to automatically pick up and run the workflows.

  22. Bill Bassler says:

    I’m not getting the data rollback results that I would expect.  I do not observe a rollback of the persistence data in the WF InstanceState table when an exception is thrown from the hosting application in the scope of a transaction. I do see a rollback of the hosting app’s data.  I would expect no corresponding row to be persisted in the InstanceState table, matching the state of the database prior to the transaction. All tables are in the same database.

    Another question is about "call Unload with Transaction.Current set". Is this something that I need to do explicitly? Maybe this is the problem?  Please see the prototype code below. Thanks in advance.

    WorkflowRuntime workflowRuntime = Application["WorkflowRuntime"] as WorkflowRuntime;

    ExternalDataExchangeService exchangeService = new ExternalDataExchangeService();
    workflowRuntime.AddService(exchangeService);

    ManualWorkflowSchedulerService scheduler = workflowRuntime.GetService(typeof(ManualWorkflowSchedulerService)) as ManualWorkflowSchedulerService;

    workflowRuntime.WorkflowCompleted += new EventHandler<WorkflowCompletedEventArgs>(workflowRuntime_WorkflowCompleted);
    workflowRuntime.StartRuntime();

    WorkflowInstance instance;

    using (TransactionScope scope = new TransactionScope())
    {
        // Persist some host app data.
        Case currentCase = new Case();
        currentCase.InitiationDate = DateTime.Now;
        currentCase.Save();

        instance = workflowRuntime.CreateWorkflow(typeof(SharedWorkflows.Workflow));
        instance.Unload();

        // Simulate an exception being thrown in the hosting app.
        int div = 0;
        int i = 1 / div;

        scope.Complete();
    }

    // Attempt to execute the workflow synchronously on our thread.
    scheduler.RunWorkflow(instance.InstanceId);

  23. ntalbert says:

    Bill Bassler, I’m not sure why you are seeing the behavior you mention above.  At the end of this post please find the code for a console app which demonstrates the pattern I’m suggesting.  I’ve run this on my machine and with the throw commented out I get four rows in the database.  With the throw in place I get zero rows in the InstanceState table.

    TransactionScope sets up Transaction.Current for you so there is nothing special that needs to be done there.  Another place to check is your connection string.  I believe (but don’t quote me on this) that there are settings which disable the usage of System.Transactions transactions for SQL connections.

    static void Main(string[] args)
    {
        WorkflowRuntime runtime = new WorkflowRuntime();
        SqlWorkflowPersistenceService persistence = new SqlWorkflowPersistenceService(@"Data Source=localhost\sqlexpress;Initial Catalog=WFTemp;Integrated Security=SSPI");
        ManualWorkflowSchedulerService scheduler = new ManualWorkflowSchedulerService();
        runtime.AddService(persistence);
        runtime.AddService(scheduler);

        try
        {
            using (TransactionScope scope = new TransactionScope())
            {
                for (int i = 0; i < 4; i++)
                {
                    WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
                    instance.Start();
                    instance.Unload();
                }

                // throw new ApplicationException();
                scope.Complete();
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("Caught exception: " + e);
        }

        Console.WriteLine("Done");
        Console.ReadKey();
    }

    class MyWorkflow : SequenceActivity
    {
        public MyWorkflow()
        {
            this.CanModifyActivities = true;

            CodeActivity code1 = new CodeActivity();
            code1.ExecuteCode += new EventHandler(code1_ExecuteCode);
            this.Activities.Add(code1);

            CodeActivity code2 = new CodeActivity();
            code2.ExecuteCode += new EventHandler(code2_ExecuteCode);
            this.Activities.Add(code2);

            this.CanModifyActivities = false;
        }

        void code1_ExecuteCode(object sender, EventArgs e)
        {
            Console.WriteLine("Code1");
        }

        void code2_ExecuteCode(object sender, EventArgs e)
        {
            Console.WriteLine("Code2");
        }
    }

  24. Bill Bassler says:

    After some further investigation I find that the third-party data access framework’s transaction scope object (esTransactionScope, from EntitySpaces), which is supposed to be a wrapper of the TransactionScope object, is the culprit. Once I changed to use the ADO.NET TransactionScope, which EntitySpaces also supports, I get the appropriate rollback.

    The next problem that I see is that a new connection appears to be created for each workflow instance that’s created. The pool is exhausted at 100 by default. The transaction scope must encapsulate the header record and any instance mapping table inserts, so there only needs to be one connection, not one for each workflow creation. Can this be achieved? Do I need something like the SharedConnectionWorkflowCommitBatchService to address this? Help please.

    using (TransactionScope scope = new TransactionScope())
    {
        // Simulate updates to job header tables in hosting app.
        Case currentCase = new Case();
        currentCase.InitiationDate = DateTime.Now;
        currentCase.CardTransactionID = 1000;
        currentCase.InitiationUserID = 1;
        currentCase.Save();

        // Create batch of instances.
        for (int i = 0; i < 250; i++)
        {
            WorkflowInstance instance = runtime.CreateWorkflow(typeof(Workflow1));
            instance.Unload();
            System.Diagnostics.Debug.WriteLine("Case: " + i.ToString());
        }

        //throw new ApplicationException();
        scope.Complete();
    }

  25. ntalbert says:

    And Bill, now I must direct you to the MSDN workflow forums (http://forums.microsoft.com/MSDN/ShowForum.aspx?ForumID=122&SiteID=1).  I’m not familiar enough with how to finesse the SQL persistence service to reuse connections across instances but someone on that forum should be able to help (or tell you if it is even possible).  This might actually be something that you can control through the connection string.

    Just in case you get a "that can’t be done" answer from the forums: worst case, you’ll need to create your own persistence provider.  Because of the nature of your scenario it should be relatively simple to create a "BatchSaveWorkflowsPersistenceService" which simply implements the SaveWorkflowInstanceState method and calls the appropriate stored procedure provided by SqlWorkflowPersistenceService.  Your implementation can reuse the connection to its heart’s content.
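    For illustration, a skeleton of such a provider might look like the sketch below. The class name, the InsertInstanceState stored-procedure call, and its parameter names are assumptions for this sketch; check the SQL persistence scripts that ship with WF for the real stored-procedure signature before relying on any of it.

    ```csharp
    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Workflow.ComponentModel;
    using System.Workflow.Runtime;
    using System.Workflow.Runtime.Hosting;

    // Sketch of a minimal persistence provider for the "create many
    // instances, unload them all in one transaction" scenario.
    class BatchSaveWorkflowsPersistenceService : WorkflowPersistenceService
    {
        private readonly string connectionString;

        public BatchSaveWorkflowsPersistenceService(string connectionString)
        {
            this.connectionString = connectionString;
        }

        protected override void SaveWorkflowInstanceState(Activity rootActivity, bool unlock)
        {
            // Serialize the instance the same way the built-in service does.
            byte[] state = WorkflowPersistenceService.GetDefaultSerializedForm(rootActivity);
            Guid instanceId = WorkflowEnvironment.WorkflowInstanceId;

            // The connection enlists in the ambient System.Transactions
            // transaction; since every call inside the batch uses the same
            // connection string and the same transaction, the pool can hand
            // back the same physical connection instead of opening a new
            // one per instance.
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand("InsertInstanceState", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                // Parameter names below are assumptions, not the verified
                // signature of the shipped stored procedure.
                command.Parameters.AddWithValue("@uidInstanceID", instanceId);
                command.Parameters.AddWithValue("@state", state);
                // ... remaining parameters (status, unlocked, etc.) omitted.
                connection.Open();
                command.ExecuteNonQuery();
            }
        }

        // The remaining abstract members must exist for the class to
        // compile. A create-then-unload batch never exercises them, but a
        // real provider needs LoadWorkflowInstanceState to resume instances.
        protected override Activity LoadWorkflowInstanceState(Guid instanceId)
        { throw new NotImplementedException(); }
        protected override void SaveCompletedContextActivity(Activity activity)
        { throw new NotImplementedException(); }
        protected override Activity LoadCompletedContextActivity(Guid scopeId, Activity outerActivity)
        { throw new NotImplementedException(); }
        protected override void UnlockWorkflowInstanceState(Activity rootActivity)
        { throw new NotImplementedException(); }
        protected override bool UnloadOnIdle(Activity activity)
        { return false; }
    }
    ```

    Registering it is the same as for the built-in service: construct it with your connection string and pass it to runtime.AddService before starting the runtime.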
