In an effort to release simple, streamlined APIs, we spend a lot of time poring over every aspect of our types.
One of the types that we know is getting used a lot both internally and externally is LazyInit<T>.
One of LazyInit<T>’s constructors takes in a LazyInitMode enum which allows you to initialize a value in one of three modes:
- EnsureSingleExecution – which ensures that if multiple threads attempt to initialize a LazyInit<T> concurrently, only one of the initializer delegates will execute
- AllowMultipleExecution – which allows multiple threads to execute the initializer delegate and race to set the value of the LazyInit<T>
- ThreadLocal – which allows multiple threads to execute the initializer delegate and stores a local copy of the value for each thread
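To make the three modes concrete, here is a minimal Python sketch of the semantics described above. This is not the .NET type – the names simply mirror the LazyInit<T> API for illustration, and the real implementation almost certainly uses interlocked operations rather than a lock for the lock-free mode:

```python
import threading

class LazyInit:
    """Python sketch of the three LazyInitMode behaviors (illustrative only)."""
    ENSURE_SINGLE_EXECUTION = "EnsureSingleExecution"
    ALLOW_MULTIPLE_EXECUTION = "AllowMultipleExecution"
    THREAD_LOCAL = "ThreadLocal"

    def __init__(self, initializer, mode):
        self._initializer = initializer
        self._mode = mode
        self._lock = threading.Lock()
        self._value = None
        self._initialized = False
        self._tls = threading.local()

    @property
    def value(self):
        if self._mode == LazyInit.THREAD_LOCAL:
            # Each thread runs the initializer once and keeps its own copy.
            if not hasattr(self._tls, "value"):
                self._tls.value = self._initializer()
            return self._tls.value

        if self._initialized:  # fast path: a value was already published
            return self._value

        if self._mode == LazyInit.ENSURE_SINGLE_EXECUTION:
            # Hold the lock across the initializer so exactly one runs it;
            # other threads block until the value is available.
            with self._lock:
                if not self._initialized:
                    self._value = self._initializer()
                    self._initialized = True
            return self._value

        # AllowMultipleExecution: run the initializer OUTSIDE any lock,
        # then race to publish. The first publisher wins; losers' work
        # is discarded. (The real type would use a compare-exchange here;
        # a lock guarding only the tiny publish step keeps the sketch simple.)
        candidate = self._initializer()
        with self._lock:
            if not self._initialized:
                self._value = candidate
                self._initialized = True
        return self._value
```

Note how AllowMultipleExecution never holds a lock while the initializer runs – that is both its performance appeal and the reason multiple delegates may execute.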
AllowMultipleExecution is motivated primarily by performance. If we allow the threads to race, we don’t need to take a lock and will never need to block any of the threads attempting to initialize the LazyInit<T>. Additionally, it’s theoretically useful if you have an operation in your initializer delegate that you don’t want to occur while under a lock.
The former motivation is validated by a quick-and-dirty perf test: AllowMultipleExecution typically runs 1-2x faster than EnsureSingleExecution for sufficiently small initializer delegates (longer-running delegates typically see no improvement and also waste more CPU time, since the work produced by the losing threads is discarded). While 2x is great, to see significant perf gains you’d essentially need a lot (thousands?) of LazyInit<T> instances that could all potentially be initialized by multiple threads. Remember that this only affects the first time a LazyInit<T> is initialized, so calling Value many times afterward would not affect performance.
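The discarded-work effect is easy to observe directly. In this Python sketch (again, not the .NET type – the barrier exists only to force every thread into the initializer at once), all threads execute the initializer but only the first to publish wins, so the other results are thrown away:

```python
import threading

def race_to_initialize(num_threads=4):
    """All threads race to initialize; losers' work is discarded."""
    barrier = threading.Barrier(num_threads)
    runs = []       # one entry per initializer execution (append is atomic in CPython)
    published = []  # first element is the winning value
    publish_lock = threading.Lock()

    def worker(tid):
        barrier.wait()    # force maximal contention: everyone enters together
        runs.append(tid)  # the "initializer delegate" executed on this thread
        value = tid * 10  # each thread computes its own candidate value
        with publish_lock:
            if not published:      # race to publish: first writer wins,
                published.append(value)  # later candidates are discarded

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(runs), published[0]
```

With four contending threads, the initializer runs four times but three of those results are pure wasted CPU – which is why longer-running delegates see no net benefit from this mode.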
While the latter motivation is important, we have few concrete scenarios.
On top of all this, the scenarios that might benefit from this mode are heavily limited. Only under a specific set of circumstances would you choose to use AllowMultipleExecution:
- You’re sharing a LazyInit<T> instance between threads.
- You are sure you won’t throw an exception in your initializer delegate (the exception semantics for this are very strange, e.g. if one thread fails and one succeeds which wins the race?).
- Your initializer delegate doesn’t rely on some thread-local state that can result in different generated values.
- Your initializer delegate is just slow enough that taking a lock might block another thread for too long.
- Your initializer delegate is just fast enough that racing won’t result in multiple CPUs wasting tons of cycles.
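The exception oddity in the second bullet can also be sketched. In this hypothetical Python race (illustrative only – it does not claim to show what the .NET type actually does), one thread’s initializer throws while the other publishes a value, so the very same initialization both raises and succeeds depending on which thread you ask:

```python
import threading

def race_with_failure():
    """One racer throws, the other publishes: which outcome 'wins'?"""
    barrier = threading.Barrier(2)
    published = []
    lock = threading.Lock()
    outcomes = {}

    def worker(name, should_fail):
        barrier.wait()  # both initializers run concurrently
        try:
            if should_fail:
                raise RuntimeError("initializer failed on this thread")
            value = 42
            with lock:
                if not published:
                    published.append(value)
            outcomes[name] = published[0]
        except RuntimeError:
            # This caller observes a failure even though another thread
            # successfully produced a value for the same LazyInit.
            outcomes[name] = "error"

    a = threading.Thread(target=worker, args=("a", True))
    b = threading.Thread(target=worker, args=("b", False))
    a.start(); b.start()
    a.join(); b.join()
    return outcomes, published
```

Thread “a” sees an exception while thread “b” gets 42 and the value is published anyway – exactly the kind of ambiguous outcome that makes throwing from a racing initializer so hard to reason about.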
Given all the usage restrictions and the limited scenarios that would see a performance improvement, we’re unsure whether this mode is useful. Before we make any decisions on whether to keep it or remove it, we thought it best to reach out to you first and ask: are you using AllowMultipleExecution? If not, would you?