volatile and MemoryBarrier()…

One thing I love about my job is that I learn something new all the time.  Today I got a little bit smarter about volatile.  One of the devs on the Indigo team was asking about the double check locking pattern. Today the Design Guidelines doc says:


public sealed class Singleton {

   private Singleton() {}

   private static volatile Singleton value;

   private static object syncRoot = new Object();

   public static Singleton Value {
      get {
         if (Singleton.value == null) {
            lock (syncRoot) {
               if (Singleton.value == null) {
                  Singleton.value = new Singleton();
               }
            }
         }
         return Singleton.value;
      }
   }
}

He wanted to know if “volatile” was really needed. Turns out the answer is “sorta”. Vance, a dev lead on the CLR JIT team, explained that the issue is around the CLR memory model… Essentially, the memory model allows non-volatile reads/writes to be reordered as long as that change cannot be noticed from the point of view of a single thread. The issue is, of course, that there is often more than one thread (the finalizer thread, worker threads, threadpool threads, etc.). volatile essentially prevents that optimization. As a side note, notice that some other folks have a little problem in this space. A major mitigation here is that x86 chips don’t take advantage of this opportunity… but it will theoretically cause problems on IA64. As I was writing this I noticed that Vance already did a very good write-up a while ago…
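
To make the hazard concrete, here is a tiny illustrative sketch (hypothetical code, not from the guidelines doc). Nothing orders the two plain writes relative to the two plain reads, so a reader may observe the flag without the data:

using System;

class ReorderingHazard {
    static int data;       // plain field: the write below may be reordered
    static bool ready;     // plain field: may become visible before 'data'

    static void Writer() {
        data = 42;         // (1)
        ready = true;      // (2) another thread may observe (2) before (1)
    }

    static void Reader() {
        if (ready) {
            // On a weak-model machine this can legally print 0: the read of
            // 'data' is not ordered with respect to the read of 'ready'.
            Console.WriteLine(data);
        }
    }
}

Marking ready volatile (or putting a barrier between the two writes and between the two reads) rules out the stale read.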


That part I knew… what was news to me is that there is a better way than volatile, and that is an explicit memory barrier before accessing the data member. We have an API for that: System.Threading.Thread.MemoryBarrier(). This can be more efficient than using volatile, because a volatile field requires all accesses to be barriers, and that affects some performance optimizations.


So, here is the “fixed” double check locking example:


public sealed class Singleton {

   private Singleton() {}

   private static Singleton value;

   private static object syncRoot = new Object();

   public static Singleton Value {
      get {
         if (Singleton.value == null) {
            lock (syncRoot) {
               if (Singleton.value == null) {
                  Singleton newVal = new Singleton();

                  // Ensure all writes used to construct the new value have been flushed.
                  System.Threading.Thread.MemoryBarrier();

                  Singleton.value = newVal;         // publish the new value
               }
            }
         }
         return Singleton.value;
      }
   }
}

I have not completely internalized this yet, but my bet is it is still better to just make “value” volatile to ensure code correctness at a (possibly) minor perf cost.





Update: Vance has some new information about how this works in 2.0…



Update 2: Even more great information from Joe



Update 3: This gets even better with 3.5, again from Joe!



Comments (76)

  1. Pavel Lebedinsky says:

    I prefer the version that explicitly uses memory barrier. With volatile it can be difficult to figure out why exactly it was needed, because it affects all accesses to the variable.

    It’s also worth noting that .NET volatile is nothing like C/C++ volatile. The latter doesn’t make any memory visibility guarantees and is generally useless for multithreaded programming (though I heard about plans to make C++ volatile act more like .NET volatile in Whidbey).

    Finally, if you find all this memory visibility stuff mind-boggling (and if you don’t you probably haven’t thought enough about it), remember that you can always play it safe and use lock() or other high level synchronization primitives to guard access to shared data. This will take care of everything. It’s only when you start writing your own synchronization code that you need to worry about memory barriers.
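
    (For illustration, the “play it safe” version Pavel describes might look like the sketch below; not from the original post:)

    public sealed class Singleton {
        private static Singleton value;
        private static readonly object syncRoot = new object();

        private Singleton() {}

        public static Singleton Value {
            get {
                // Taking the lock on every access costs a little, but the
                // lock's acquire/release semantics make all of the memory
                // model subtleties discussed here simply go away.
                lock (syncRoot) {
                    if (value == null)
                        value = new Singleton();
                    return value;
                }
            }
        }
    }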

  2. David Levine says:

    This link describes the memory barrier issue in great detail.


  3. Bart Jacobs says:

    Could it be that your "fixed" example is broken? Should there not also be a memory barrier prior to the first read of Singleton.value (in the condition of the if statement)? Otherwise, it might be that the field values of the new object are read from cache, right?

  4. Ken Brubaker says:

An msdn article (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/singletondespatt.asp) on singletons says that double-checking is built-in with C#. Which is correct?

  5. Ken, both are correct. If all you need to get an instance of the class is to write "new Singleton()" – go with the framework feature. However, this is not always the case. For example, you may need to check configuration to decide which class to instantiate. Then you do the double-checking by hand.
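
    (For reference, the “built-in” approach the MSDN article refers to is static initialization; a minimal sketch:)

    public sealed class Singleton {
        // The CLR runs the static field initializer exactly once, before
        // first use, and makes the result visible to all threads - no
        // volatile, lock, or barrier needed in user code.
        private static readonly Singleton value = new Singleton();

        // An explicit static constructor disables the beforefieldinit
        // optimization, keeping initialization as lazy as possible.
        static Singleton() {}

        private Singleton() {}

        public static Singleton Value {
            get { return value; }
        }
    }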

  6. Ned says:

I know that http://www.cs.umd.edu/~pugh/java/memoryModel/DoubleCheckedLocking.html has dignitaries explaining why the Double-checked locking pattern is unsafe. I also know that it talks about ‘memory barriers’ making it safe. Unfortunately, I lack the understanding to tell if your article addresses the issues raised in that "Declaration" of unsafety. Can anyone help?



  7. I have a somewhat unique perspective on this question because I was an operating system programmer on one of the first large-scale multiprocessors (Sequent) in the 1980s, and now I’m an application programmer.

    So. Chris Brumme wrote all about this in the context of the CLR, too:


    As a person who has sat on both sides of this issue, I agree totally with Chris’ comments about this subject, and disagree with Vance’s. The CLR specifies a memory model that is a poor tradeoff for the real world. Your job is to enable reliable applications, not prevent them.

    Don’t drink that performance improvement Kool-aid that the hardware guys are serving. CPU performance is not the limiting factor on apps these days: programmer productivity and reliability are far more important.

    Any environment in which double-check locking doesn’t work in the natural way is simply broken. The CLR team should specify a default memory consistency model which is as strong as existing x86 implementations. If you want to allow that model to be broken by a select few people using #pragmas or similar kinds of hints, fine. Just don’t inflict that complexity on the rest of us who are busy trying to use the CLR to add business value in the real world.

    Jeff Berkowitz

    pdxjjb at hotmail.com

  8. Brian Pattinson says:

    Two comments:

Rather than placing a complete MemoryBarrier, is it not sufficient to place a memory write barrier? All you are trying to enforce is that the writes used to initialize the singleton are not moved after the write that sets the singleton reference.

Regarding Jeff Berkowitz’s comment, I agree with the goal of making it simple and reliable for the average user. But if you are not trying to wring out the last ounce of performance, the "lock everything" approach accomplishes that goal. The cost of acquiring an uncontended lock is not that high. I view the double check locking approach in the same light I would view a pragma-based approach – if you care enough about performance to use it, you had better understand the ramifications.

    Both good posts though – this is an area many are confused about.



  9. Vance Morrison says:

To answer Brian’s question: yes, only a write barrier (not a full read-write memory barrier) is needed. This is in fact what is suggested in


    However, this API has not made its way into the BCL yet, so you have to do a full memory barrier at the present time. Since this happens on the rare code path this is not a big deal in this case.

To comment on Jeff Berkowitz’s issue, I can say two things:

1) First, I completely agree with his tradeoff (you just want straightforward things to work without any tricky issues). This argues for NOT using double check locking in the vast majority of cases (only in those places where you know you need the scaling). Thus you might see it in libraries, but you would expect to see it only VERY rarely in actual applications. As mentioned in the article above, simply placing a lock around the whole method is simple and will always work without these subtleties.

2) For the reasons Jeff mentions (people program to the simpler model whether we tell them about it or not), when the CLR comes out on weak memory model machines, we are very likely by default going to support the strong x86 model. Only by opting in to the weaker model will you have to worry about this.

Having said this, I am not comfortable telling people ‘you can skip the memory barrier, because the runtime will make it right’. Multiprocessors are already common, and the memory model is a MAJOR issue for scaling. In 10 years we could easily be regretting the decision above.

Thus I prefer to actually tell people: if you want simplicity, just put locks around the whole thing. If that is not good enough, you really should step up and write the code properly for a weak model. (There really are not that many lock-free patterns like the one above, and if you follow the recipes, you should be fine.)
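
    (As an aside: a one-sided write barrier did later make it into the BCL. On .NET 4.5 and newer, System.Threading.Volatile.Write gives release semantics on the publishing store. A sketch of how the publish could then be written — the read-side questions debated below still apply:)

    using System.Threading;

    public sealed class Singleton {
        private static Singleton value;
        private static readonly object syncRoot = new object();

        private Singleton() {}

        public static Singleton Value {
            get {
                if (value == null) {
                    lock (syncRoot) {
                        if (value == null) {
                            Singleton newVal = new Singleton();
                            // Release semantics: the constructor's writes cannot
                            // be reordered to after this store of the reference.
                            Volatile.Write(ref value, newVal);
                        }
                    }
                }
                return value;
            }
        }
    }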

  10. Alexei Zakharov says:

    I think the implementation without using volatile is missing one memory barrier. According to


    memory barriers are required for both read and write code paths. The read path extracted from the code is:

    if ( Singleton.value == null ) // false
    {
        // not executed
    }

    return Singleton.value;

    There is no memory barrier on this path. In the CLR memory model as described in Chris Brumme’s blog (http://blogs.msdn.com/cbrumme/archive/2003/05/17/51445.aspx), only volatile loads are considered "acquire", but normal loads can be reordered.

    The correct implementation will be:

    public sealed class Singleton {

        private Singleton() {}

        private static Singleton value;

        private static object sync = new object();

        public static Singleton Value {
            get {
                Singleton temp = Singleton.value;
                System.Threading.Thread.MemoryBarrier(); // this is important
                if ( temp == null ) {
                    lock ( sync ) {
                        if ( Singleton.value == null ) {
                            temp = new Singleton();
                            System.Threading.Thread.MemoryBarrier();
                            Singleton.value = temp;
                        }
                    }
                }
                return Singleton.value;
            }
        }
    }

    Let me expand on the performance of the two implementations of the double checked locking pattern. Obviously we want to make the read path faster and don’t care about the write path because the write path is taken only once. The read path extracted from the code is:

    // using volatile (Singleton.value is volatile)

    get {
        if ( Singleton.value == null ) {
            // ... not taken
        }
        return Singleton.value;
    }

    // using memory barriers

    get {
        Singleton temp = Singleton.value;
        System.Threading.Thread.MemoryBarrier();
        if ( temp == null ) {
            // ... not taken
        }
        return Singleton.value;
    }


    The volatile load in the first version has acquire semantics and is equivalent to the non-volatile load plus the memory barrier in the second version. There are two volatile loads in the first version and only one memory barrier in the second. So I expect the code with memory barriers to perform faster than the code that uses volatile. But like any performance speculation, this has to be taken with a grain of salt: I haven’t done any measurements here.
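
    (To move past speculation, a rough timing harness along these lines could be used against whichever Singleton variant is compiled in; the loop count is arbitrary and the JIT may still hoist reads, so treat the numbers as only a rough comparison:)

    using System;
    using System.Diagnostics;

    static class ReadPathBenchmark {
        static void Main() {
            const int iterations = 100000000;   // arbitrary

            // Touch the property once so the instance exists and the getter
            // is JIT-compiled before timing starts.
            Singleton s = Singleton.Value;

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++) {
                s = Singleton.Value;            // hot read path only
            }
            sw.Stop();

            Console.WriteLine("{0} reads in {1} ms", iterations, sw.ElapsedMilliseconds);
            GC.KeepAlive(s);                    // keep the loop from being optimized away
        }
    }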

  11. Pavel Lebedinsky says:

    I agree with Alexei and Bart in that a read barrier is also needed on the read path.

    Here’s another example from MSDN that is broken in this regard:


    The fact that this code is wrong was confirmed by Neill Clift in a comp.programming.threads post:


    The explanation provided by MSDN is also wrong by the way – CPU caches have nothing to do with this problem.

  12. JD says:

    Don’t use Double-check locking. You’ll get it wrong.

Prove otherwise by coming up with an example that isn’t wrong, put money on it, and someone will disprove it.

  13. Among other things, you need to understand weak memory models.

  14. John Doty says:

    Alexei, I don’t understand the need for a memory barrier in the "read path".

    We are making the following writes:

    1. Write member variables in new Singleton constructor

    2. Write new Singleton to temp

    3. Write temp to Singleton.value

    We are making the following reads:

    A. Read Singleton.value

    B. Read member of Singleton.value

    The problem we have is that a thread executing the reads may see the writes in the wrong order, that is, it might see 3 before it sees 1. But the effect of putting a full memory barrier means that 3 cannot move above 1 in any respect. So if A sees the results of 3, then B *must* see the results of 1.

  15. John Doty says:

    More specifically… you are correct if the memory barrier between 2 and 3 is only a release barrier. In that case, there is nothing to keep 3 from moving up to before A, and that’s why you need an acquire barrier between A and B.

    But, to be entirely accurate, what you need is an acquire barrier between A and 3. Putting a full memory barrier between 2 and 3 is certainly sufficient.

  16. Niclas Lindgren says:

Just a question about Pavel Lebedinsky’s link


    I am no expert in the area, but is this really a broken article? I believe it is merely missing one assumption, namely that the call to "CacheComputedValue" is only made by the high-priority thread.

    Thus both threads are not doing "CacheComputedValue", which means that on a single CPU this code is perfectly fine, since the low-priority thread will not have a read race with it.

    On a multi-CPU machine, however, you suddenly have this read race, and/or cache problem, whichever you prefer to call it. And suddenly your code is not working (no surprise, of course, but still). The solution is also valid, since Interlocked would enforce a memory barrier, thus effectively publishing the hashed value before publishing the state that says it is computed. Do not confuse this with a singleton, where multiple threads also want to create an instance of something, not only access it.

    To comment on Alexei and Bart about the memory barrier before reading A: I do not see the use for it, since you have a full memory barrier after the creation of the object and before the publishing (release store) of the pointer to the object. This ensures that whenever other threads get the pointer to the object, it will be a fully constructed object, since you cannot move either reads or writes through a full barrier and you have the read of the pointer synchronized.

    Indeed, this might give you a few threads entering the synchronized path just to find that the object now has a valid pointer, since leaving a lock is a release action and so ensures that, and also ensures that the reading done at the return of the function will not move above the lock (nor through the MemoryBarrier, of course). I might of course have missed something?

    The only reason I can think of is that the value of the pointer is cached during the first check, and is then later reused for the second check, making it appear null twice; however, the lock should prevent this, as it is a release/store operation propagating the value through all caches (and the compiler should know better than to reuse the register when there is a lock involved, since it is a release/store operation).

    If this is the case, then it is effectively prevented by a synchronized read barrier before reading the pointer in the first place.

    What is it I do not see?

  17. Pavel Lebedinsky says:

    I might be totally wrong about this, but here’s how I understand it.

    > 1. Write member variables in new Singleton constructor

    > 2. Write new Singleton to temp

    > 3. Write temp to Singleton.value

    > A. Read Singleton.value

    > B. Read member of Singleton.value

    If there is no memory barrier between the last two memory accesses, B can be fetched before A.

    So if you observe reads and writes to main memory, you might see the following sequence:

    B (the member read is satisfied early, returning stale data)

    1

    2

    3

    A (reads the now non-null Singleton.value)

    and that would be a problem.

  18. Niclas Lindgren says:


    B should never be able to get ahead of A; if it could, the correctness of the program could never be proven, not even in a single-thread, single-CPU environment, since a read of a memory location should never be able to move ahead of a write to the same location.

    If a thread does the read of A but uses its cached NULL value, it should be refreshed by the fact that we are leaving (and entering, but leaving is a release) a lock, so the cached value will be updated and the return value will be the B reading, which now will not race with the writes 1, 2 and 3 anymore, as it did before.


  19. I would like to see Alexei’s reply to John Doty’s reply "Alexei, I don’t understand the need for a memory barrier in the read path". Or anyone else’s. Cheers!

  20. The following two papers seem to suggest (in my mind) that Alexei is right and the only way to do this is a lock around the whole thing, or read and write memory barriers, which the lock gives you (I think). Are these papers wrong, or could some guru clear this up once and for all in terms of the .NET memory model? Cheers!

    Andrew Birrell

    "An Introduction to Programming with C# Threads"


    The "Double-Checked Locking is Broken" Declaration (IBM)


  21. Jon Skeet says:

    The latter paper relates to the Java memory model, not the .NET one. The former paper seems to rely on the latter paper.

    It would make sense to me that a read memory barrier is *not* required here, as read B depends on A as far as I can see. (That is, until you’ve completed A, you don’t know which piece of memory B needs to read.)

  22. Niclas Lindgren says:

    In answer to Jon Skeet:

The barrier (the second barrier) is not there to prevent B from coming before A; it is there to prevent 1, 2 and 3 from happening after the value read in A has been published. However, if you were merely questioning the first barrier before A, then I concur with your statement.

  23. Jon Skeet says:

    If 1, 2 and 3 happen after the value read in A has been published, then A will return null, and you’ll enter the lock, which is fine, surely?

  24. Niclas Lindgren says:

    Jon Skeet:

    I misplaced number 3 in my statement above; I meant only to keep 1 and 2 from happening after 3 (as 3 is the publish of the same variable read in A). So the reading of A might see that the object exists even though it is not fully initialised, and thus we have a problem.

    I do not, however, understand why there is a need for a second barrier before the reading of A, since the caches should be synchronized properly and automatically by the release action of leaving the lock (and the reading done in A cannot be reordered past a conditional write of the same location). As long as the value read in A is not updated before writes 1 and 2, we should be fine.

  25. Jon Skeet says:

    Ah, I think I see what you mean.

Although there’s a memory barrier after 1 and 2, that only stops those writes from being delayed – it doesn’t prevent the write in 3 from being brought forward. Is that what you meant?

    Oh, it’s all too much… I’ll stick with locking 🙂

  26. Jon Skeet says:

    In fact, thinking about it, given that write 3 can be brought forward, I don’t see how this can work at all with as many explicit memory barriers as you like – unless there is a lock or the value is volatile, you can’t prevent 3 from being visible before 1 and 2, because memory barriers only talk about writes being delayed, not brought forward. A memory barrier due to a volatile write prevents *that write* from being brought forward before earlier writes (effectively) but an arbitrary explicit memory barrier only determines that things which occur before that memory barrier in the IL must occur before that memory barrier in execution.

    What am I missing?

  27. Niclas Lindgren says:

    Yes exactly Jon, that’s what I meant.

    I believe a memory barrier’s semantics are that no write can cross it, up or down, but I am not sure about that. If it doesn’t have these semantics, then what is the point of the barrier in the first place?

    Assuming it has this meaning, putting a barrier there will make sure that 1 and 2 are definitely seen before 3, which is what we want.

    But I do agree with you: normal locking is good enough. If you have a design that falls over on this, meaning one that doesn’t scale or has a performance bottleneck here (in the singleton), then redesign the code instead. Rule number one in a threaded application is to not have more threads than you have parallel work, with most threads working on their own instances of data (given, of course, that they must share some).

    And finally, don’t use singletons. If your design relies on one, then the design is most likely not suffering from performance issues, so go for the full locking scheme. If the program is performance critical, then you don’t want dynamic behaviour anywhere and you want to know the characteristics of your program at all times. I put it very black and white, of course.

  28. Pavel Lebedinsky says:

    > It would make sense to me that a read memory

    > barrier is *not* required here, as read B depends

    > on A as far as I can see. (That is, until you’ve

    > completed A, you don’t know which piece of

    > memory B needs to read.)

It’s true in this particular case; however, in general there may be no dependency between the reads.

    For example in the MSDN article discussed above (http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dllproc/base/synchronization_and_multiprocessor_issues.asp) both reads are from global variables so as far as I can tell there’s nothing preventing the CPU from reordering them. In this case you have to use another memory barrier on the read path.
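
    (To make that concrete, here is a sketch of the pattern from that MSDN page, with a read barrier added on the fetch side; the names follow the article’s CacheComputedValue/FetchComputedValue example:)

    using System.Threading;

    class ComputedValueCache {
        static int iValue;
        static bool fValueHasBeenComputed;

        static int ComputeValue() { return 42; }    // stand-in for real work

        static void CacheComputedValue() {
            iValue = ComputeValue();
            Thread.MemoryBarrier();    // write barrier: iValue visible before the flag
            fValueHasBeenComputed = true;
        }

        static bool FetchComputedValue(out int result) {
            if (fValueHasBeenComputed) {
                // Read barrier: these two reads are independent, so without it
                // the CPU may satisfy the read of iValue before the read of
                // the flag and hand back a stale value.
                Thread.MemoryBarrier();
                result = iValue;
                return true;
            }
            result = 0;
            return false;
        }
    }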

  29. Jon Skeet says:

    It seems to me that the MemoryBarrier documentation is sorely lacking. If it *just* has the semantics of a volatile read and a volatile write, then it has very little use at all – reads can still come later than the barrier, and writes can still come earlier than the barrier. As Niclas says, only if it’s a bidirectional barrier is it useful. Assuming I’m not missing something, that is…

    Pavel: certainly in the general case it won’t be true. I think we’re really after working out just how optimised the singleton implementation here can be though. *If* the memory barrier before the assignment of value is bidirectional, I think we can get away with just that. If it’s not, just using MemoryBarrier calls will never do enough.

  30. David Zarlengo says:

    I’d like to point out that Alexei’s "correct implementation" is different from Brad’s implementation, even considering the extra MemoryBarrier call. Specifically, the assignment "Singleton temp = Singleton.value", prior to the first MemoryBarrier call seems to open a hole in his code by eliminating the purpose of the temporary variable as an *independent* place to cache the pointer to the memory space of the instance.

    A simple proof of its brokenness is to replace temp with Singleton.value everywhere it is used, which is correct to do since you assign it to value as the first step in the property. With this replacement in mind, the hole is obvious:

    if ( Singleton.value == null ) {
        Singleton.value = new Singleton(); // Oops!
        System.Threading.Thread.MemoryBarrier(); // Pointless now.
        Singleton.value = Singleton.value; // Oops!
    }


    Consider Brad’s code again. The *read path* should be as simple as this, and is correct:

    if (Singleton.value == null) {
        // write path
    }

    return Singleton.value;

    This will work as intended so long as Singleton.value is not null. To determine when Singleton.value is not null, it’s necessary to look at the write path.

    if (Singleton.value == null) {
        lock (syncRoot) {
            // Abort to read path unless...
            if (Singleton.value == null) {
                // Initialize a temporary variable, completely independent of
                // Singleton.value, to an instance of the singleton.
                Singleton newVal = new Singleton();
                // Ensure all writes used to construct the temporary variable
                // have been flushed, ensuring that the temporary is complete.
                System.Threading.Thread.MemoryBarrier();
                // What is the value of Singleton.value here? Obviously, it is still null.
                Singleton.value = newVal; // publish the new value
                // What is the value of Singleton.value here? Now, correctly,
                // it is the new value. Leave the 2nd check.
            }
            // Leave the lock; subsequent threads entering the critical
            // section will abort to the read path.
        }
        // Leave the 1st check, finish the read path.
    }

    return Singleton.value;

    I’m inclined to believe that Singleton.value will always be null until precisely when it is assigned, never earlier – due to the DCL pattern, the temporary variable, and the MemoryBarrier.

    Something to think about, anyway…

  31. Does this spin version work? Why or why not? Cheers!

    public sealed class Singleton
    {
        private static int spinLock = 0; // lock not owned.
        private static Singleton value = null;

        private Singleton() {}

        public static Singleton Value()
        {
            // Get spin lock.
            while ( Interlocked.Exchange(ref spinLock, 1) != 0 )
                ;

            // Do we have any mbarrier issues?
            if ( value == null )
                value = new Singleton();

            Interlocked.Exchange(ref spinLock, 0);
            return value;
        }
    }



    This would help answer a few related questions for me on how Interlocked works with mem barriers and cache, etc. TIA — William

  32. Tatsuhiko Machida says:

    With a thread local variable, we can ignore other threads.

    We do not need synchronization.

    With a thread local variable, we get simple, safe, quick code.

    public sealed class Singleton {

        private Singleton() {}

        private static object syncRoot = new Object();

        // thread shared variable
        private static Singleton sharedvalue;

        // thread local variable, caching the singleton per thread
        [ThreadStatic]
        private static Singleton value;

        public static Singleton Value {
            get {
                if (Singleton.value == null) {
                    lock (syncRoot) {
                        if (Singleton.sharedvalue == null) {
                            Singleton.sharedvalue = new Singleton();
                        }
                    }
                    Singleton.value = Singleton.sharedvalue;
                }
                return Singleton.value;
            }
        }
    }

Give me some comments, please.

  33. Keith Hill says:

    Tatsuhiko, in this case you don’t really have a singleton. Each thread will get a single copy of the Singleton class. Hence, it really doesn’t follow the semantics of the singleton pattern.

  34. Tatsuhiko Machida says:

    But I think that in this case Singleton is a class (a reference type), not a struct, so all threads will use the very same instance of the singleton class. That is what we want to do, I think.

    Using a thread local variable, we can get back the DCL performance.

    Perhaps I have less understanding than you about the semantics of the singleton pattern. But I understand that we want to share the same instance among many threads, and we hate the overhead of synchronization on every access to the singleton. With a thread local variable, we can share the same instance among many threads without the overhead of synchronization, except during initialization. It is pragmatic enough to use a thread local variable, I think.

    Anyway, I like responses to my article, so I thank you for your comment.

  35. Joe Cheng says:

    Tatsuhiko, I think the reason this technique has not received more attention (I first heard of it a couple of years ago) is because thread local variables have had their own performance problems, at least in common implementations of Java at the time. If I remember correctly, under low contention, simply using a lock was faster.

    Again, that was a couple of years ago, and on the JVM, not the CLR. I’d be interested if you or anyone else has info to share about the performance of thread local storage on .NET.
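
    (A rough way to check on the CLR; a single-threaded, uncontended sketch, so it only bounds the best case of each approach:)

    using System;
    using System.Diagnostics;
    using System.Threading;

    static class TlsVsLock {
        [ThreadStatic] static object tlsValue;
        static object sharedValue;
        static readonly object sync = new object();

        static void Main() {
            const int iterations = 10000000;    // arbitrary
            tlsValue = sharedValue = new object();
            object o = null;

            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                o = tlsValue;                    // [ThreadStatic] read
            Console.WriteLine("ThreadStatic: {0} ms", sw.ElapsedMilliseconds);

            sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
                lock (sync) { o = sharedValue; } // uncontended lock + read
            Console.WriteLine("lock:         {0} ms", sw.ElapsedMilliseconds);

            GC.KeepAlive(o);
        }
    }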

  36. Keith Hill says:

    Help me out here because I’m not getting the ThreadStatic approach. The docs clearly say this about ThreadStaticAttribute:

    A static (Shared in Visual Basic) field marked with ThreadStaticAttribute is not shared between threads. Each executing thread has a separate instance of the field, and independently sets and gets values for that field. If the field is accessed on a different thread, it will contain a different value.

    So again, how can this be a "Singleton" for all threads in an AppDomain when each thread winds up getting its own "Singleton"?

  37. Tatsuhiko Machida says:

    Joe, thank you for your advice. I did not think of the performance of using the thread local variable itself. I hope that the cost of thread local variable access on the CLR is lower than on the JVM.

    Keith, in my example, the real singleton instance is set in ‘sharedvalue’, which is a normal static field, not ThreadStatic. The ThreadStatic field named ‘value’ is a cache of ‘sharedvalue’. That’s all.

    The flow is:

    (1) The beginning states are:

    Singleton.sharedvalue == null
    ThreadA’s Singleton.value == null
    ThreadB’s Singleton.value == null

    (2) ThreadA accesses Singleton.value.

    (2.1) Singleton.sharedvalue is initialized.
    (2.2) ThreadA’s Singleton.value is initialized.

    So the results are:

    Singleton.sharedvalue == somevalue;
    ThreadA’s Singleton.value == somevalue;
    ThreadB’s Singleton.value == null;

    In this state, different threads get different values from Singleton.value. The description of ThreadStatic in the docs you quoted refers to this situation.

    (3) ThreadB accesses Singleton.value.

    The results are:

    Singleton.sharedvalue == somevalue;
    ThreadA’s Singleton.value == somevalue;
    ThreadB’s Singleton.value == somevalue;

    In this state, different threads get the same value from Singleton.value: the threads share the same instance of Singleton.

    I hope this explanation makes clear what I meant. Anyway, thank you for your comments.

  38. Keith Hill says:

    Oh I see. I should have looked a little more closely at your sample. You wind up with a little extra checking per thread (value == null), but that happens only once per thread so it doesn’t hurt perf much.

    However, don’t you need to either use the MemoryBarrier trick or use volatile on the sharedValue field? Since sharedValue is not ThreadStatic, it is shared amongst all threads. So wouldn’t it be possible for the assignment of sharedValue to happen after the read (sharedValue == null) on an MP system?

  39. Tatsuhiko Machida says:

    I’m very glad that my explanation, written in my poor English, could make you understand.

    No synchronization tricks are needed: it’s guarded by the most basic synchronization mechanism, the lock statement. All access to ‘sharedValue’ is inside the lock block.

    I fear MemoryBarrier and other tricks. What happens on a multiprocessor is too complex for me to conquer. Only a simple synchronization mechanism, like a CriticalSection, makes me feel I can understand what will happen. (For me, even ReaderWriterLock is very difficult to conquer on a multiprocessor; multi-threading always makes me fear that I will make my software unreliable.)

    That is why I posted my articles. Using a thread local variable, we use only the basic synchronization mechanism, shutting out tricky synchronization techniques.

    I am enjoying this discussion, because I have learned lots of things from it about synchronization, and it is good for my English study. Keith, thank you for your comment.

  40. Niclas Lindgren says:

    Tatsuhiko Machida:

It is indeed a valid technique, but I too have been living under the impression that thread local storage is about as bad as taking a lock (a lock is what used to be needed to update the thread-global area when creating the thread-local areas).


I am glad to see that the barrier after read A wasn’t needed; however, I now see, with new eyes, why it was there in the first place: he was using a temp variable in the read path too! Bad, bad; we are trying to synchronize the pointer, we don’t want another silly value to sync as well =)

    Interesting thread.

  41. Imagine my surprise then after finally tracking down a threading issue only to discover that the bug was in fact caused by the .NET synchronized Hashtable and it turns out that the synchronized Hashtable is in fact not thread safe, but thread safe

  42. Anonymous says:

Brad Abrams’ blog entry on volatility and memory barriers

  43. Dirks WebLog says:

Over the last few days, in preparation for today’s patterns WebCast, I have been looking into this a little more extensively…

  45. This post comes from Dirks Web-Log

    The singleton – the unknown creature

    I have spent the last…


  48. Ken Brubaker says:

    Notes on the February Atlanta C# Users Group.

  49. If you have developed traditional Windows Client/Server applications on single-CPU machines for all your…

  50. Tiho's blog says:

Brad Abrams on volatile and MemoryBarrier(). Someone sent this to the team and I couldn’t…

  51. luonet says:

The previous two chapters mainly covered preliminary knowledge; starting with this chapter we truly begin our study of the singleton pattern. First, look at the most common implementation of the singleton pattern, one that many people use. The following implementation of the Singleton design pattern uses…

  52. ekampf 2.0 says:

    The Singleton implementation in the snippet I gave works fine as a lazy, thread-safe Singleton as it


  55. You’ve been kicked (a good thing) – Trackback from DotNetKicks.com