Memoization and Anonymous Recursion

Keith Farmer brought it to my attention that there is at least a little confusion about how closures work.  Hopefully, I can help shed a little light on the subject.  The question is why doesn’t the following code actually memoize fib in the call to Test?

Func<int, int> fib = null;
fib = n => n > 1 ? fib(n - 1) + fib(n - 2) : n;
Test(fib.Memoize());

Before I explain why this code doesn’t work, I want to return to the example that I used in my last post.

Func<int, int> fib = null;
fib = n => n > 1 ? fib(n - 1) + fib(n - 2) : n;
Func<int, int> fibCopy = fib;
Console.WriteLine(fib(6));                      // displays 8
Console.WriteLine(fibCopy(6));                  // displays 8
fib = n => n * 2;
Console.WriteLine(fib(6));                      // displays 12
Console.WriteLine(fibCopy(6));                  // displays 18

Probably the easiest way to see why this code behaves so strangely is to show the code that the C# compiler generates.  This can easily be done with ildasm.exe or Reflector.  There are two lambdas defined in the code.  The first one captures the local variable named fib and the second does not capture any locals.  Because fib is captured by a lambda, it must be hoisted onto the heap.  So a class is declared that contains the captured local.

class DisplayClass
{
  public Func<int, int> fib;
}

Now the compiler must emit a method that corresponds to the first lambda.  This method will need access to the members of DisplayClass and must also be convertible to a Func<int,int> delegate.  Today the compiler achieves this by emitting the method on the display class as an instance method (we could also have used currying but that is a story for another day).

class DisplayClass
{
  public Func<int, int> fib;

  public int M1(int n)
  {
    return n > 1 ? fib(n - 1) + fib(n - 2) : n; // <-- 1st lambda body
  }
}

The second lambda does not capture any local variables.  So this method can be emitted as a static method.

static int M2(int n)
{
  return n * 2;
}

Finally, we are ready to show the emitted code for the original code fragment.

DisplayClass locals = new DisplayClass();
locals.fib = null;
locals.fib = locals.M1;
Func<int, int> fibCopy = locals.fib;
Console.WriteLine(locals.fib(6));       // displays 8
Console.WriteLine(fibCopy(6));          // displays 8
locals.fib = M2;
Console.WriteLine(locals.fib(6));       // displays 12
Console.WriteLine(fibCopy(6));          // displays 18

Notice how the first call to fib is really just a call to M1, and when M1 “recurses” on fib it just ends up calling M1 again, because that is what is assigned to fib.  The call to fibCopy is a little trickier: the original call is really a call to M1 as well, but when it “recurses” it invokes fib instead of fibCopy, which also happens to be M1 at the time.  So the first two calls behave as expected.

Now it starts to get a little strange.  First we assign M2 to fib.  When we invoke fib on the next line, it no longer invokes our original “fib” function M1 but instead invokes M2.  This of course displays the result of multiplying 6 by 2.

Now for the strangest part, we invoke fibCopy.  But fibCopy actually still references M1 and not M2, and since 6 > 1 it “recurses” by invoking fib twice, with 5 and 4 respectively, and then sums the results.  But the calls to fib now invoke M2.  So the results of the calls to fib are 10 and 8, which are summed to produce 18.
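Putting the pieces together, here is a compilable sketch of the translation (DisplayClass, M1, and M2 are the illustrative names used above, not the mangled names the compiler actually emits):

```csharp
using System;

class DisplayClass
{
    public Func<int, int> fib;

    // 1st lambda: captures fib, so it lives on the display class.
    public int M1(int n)
    {
        return n > 1 ? fib(n - 1) + fib(n - 2) : n;
    }
}

class Program
{
    // 2nd lambda: captures nothing, so it can be static.
    internal static int M2(int n)
    {
        return n * 2;
    }

    static void Main()
    {
        DisplayClass locals = new DisplayClass();
        locals.fib = locals.M1;
        Func<int, int> fibCopy = locals.fib;
        Console.WriteLine(locals.fib(6));   // 8
        Console.WriteLine(fibCopy(6));      // 8
        locals.fib = M2;
        Console.WriteLine(locals.fib(6));   // 12
        Console.WriteLine(fibCopy(6));      // 18: M1's "recursive" calls now hit M2
    }
}
```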

Now let’s return to our original problem.

Func<int, int> fib = null;
fib = n => n > 1 ? fib(n - 1) + fib(n - 2) : n;
Test(fib.Memoize());

Notice that the function passed to Test is memoized, but the body of the function still calls the unmemoized fib.  The function itself is memoized, but all of the “recursive” calls are not.  So it probably will not behave as intended.  This can be corrected by doing the following.

Func<int, int> fib = null;
fib = Memoize(n => n > 1 ? fib(n - 1) + fib(n - 2) : n);

Now the calls to fib when n > 1 are made to the memoized fib delegate.
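For reference, Memoize here is the function from the earlier post on memoization; a minimal dictionary-based sketch (written as an extension method so it can also be called as fib.Memoize()) looks like this:

```csharp
using System;
using System.Collections.Generic;

static class Memoizer
{
    // Cache each argument/result pair; recompute only on a cache miss.
    public static Func<A, R> Memoize<A, R>(this Func<A, R> f)
    {
        var cache = new Dictionary<A, R>();
        return a =>
        {
            R result;
            if (!cache.TryGetValue(a, out result))
            {
                result = f(a);
                cache[a] = result;
            }
            return result;
        };
    }
}
```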

If you read my last post on anonymous recursion in C# which introduces the Y combinator then you may have tried the following.

Func<int, int> fib = Y<int, int>(f => n => n > 1 ? f(n - 1) + f(n - 2) : n).Memoize();

This behaves very similarly to the incorrectly written fib above.  Calls to the function itself are memoized but all of the recursive calls are not.  This is because fib is memoized but f is not.  Erik Meijer pointed me to a fantastic paper that discusses how to memoize functions that are the fixed point of functionals.  In this paper, Cook and Launchbury present a function written in Haskell that enables memoized anonymous recursive functions.

memoFix f = let g = f h
                h = memo g
            in h

Using the MemoizeFix function in C# looks very similar to using the Y combinator, but instead of just returning a recursive function, it would also memoize the function and all the recursive calls.

Func<int, int> memoFib = MemoizeFix<int, int>(f => n => n > 1 ? f(n - 1) + f(n - 2) : n);

But there are two problems with implementing the function in C#.  First, Haskell lets programmers write mutually recursive definitions (g is defined in terms of h, and h is defined in terms of g).  Second, Haskell is a lazy language.  A straightforward implementation in C# will produce either a null reference exception or a stack overflow because of the eager evaluation and the mutually recursive definitions.  Fortunately, lambdas can be used to solve both problems.

static Func<A, R> MemoizeFix<A, R>(this Func<Func<A, R>, Func<A, R>> f)
{
  Func<A, R> g = null;
  Func<A, R> h = null;
  g = a => f(h)(a);
  h = g.Memoize();
  return h;
}

Here we define both g and h before they are assigned their final values.  Then instead of assigning f(h) directly to g, which would invoke f while h is still null and lead to a NullReferenceException, we assign a lambda to g which will be evaluated sometime in the future when h has a non-null value.  The rest of the function is very similar to the Haskell version.

Notice what happens when memoFib is invoked.  This actually invokes h, which was defined in MemoizeFix, but h is really just a memoized version of g.  Since this is the first invocation of h, there is no precomputed value, so g will be invoked.  But when g is invoked, it actually invokes f, passing h (memoized g) as the function to use for recursion.  This is exactly what we want.  Beautiful, isn’t it?
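To see that both the outer calls and the recursive calls are memoized, we can count how many times the function body executes (Memoize is the dictionary-based memoizer; the Calls counter is just instrumentation added for this sketch):

```csharp
using System;
using System.Collections.Generic;

static class MemoFixDemo
{
    public static Func<A, R> Memoize<A, R>(this Func<A, R> f)
    {
        var cache = new Dictionary<A, R>();
        return a =>
        {
            R r;
            if (!cache.TryGetValue(a, out r))
                cache[a] = r = f(a);
            return r;
        };
    }

    public static Func<A, R> MemoizeFix<A, R>(Func<Func<A, R>, Func<A, R>> f)
    {
        Func<A, R> g = null;
        Func<A, R> h = null;
        g = a => f(h)(a);   // defer evaluation of f until h is non-null
        h = g.Memoize();
        return h;
    }

    public static int Calls;

    static void Main()
    {
        var memoFib = MemoizeFix<int, int>(f => n =>
        {
            Calls++;
            return n > 1 ? f(n - 1) + f(n - 2) : n;
        });
        Console.WriteLine(memoFib(30));   // 832040
        Console.WriteLine(Calls);         // 31: each n from 0..30 is computed exactly once
    }
}
```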

Comments (11)

  1. kfarmer says:

    Complete speculation…

    So you could define a Y<,> delegate, have a convert to Y<,> extension to Func<,> as well as memoization extensions to both Y and Func, and not have to mess with constructors, right?

Func<int, int> memoFib = MemoizeFix<int, int>(f => n => n > 1 ? f(n - 1) + f(n - 2) : n);

    .. could become:

var memoFib = (f => (int n) => n > 1 ? f(n - 1) + f(n - 2) : n).ConvertToYCombinator().Memoize();

    Presuming type inference works well enough to key off the (int n) in this case.

  2. wesdyer says:

Unfortunately, given the current language spec that is not possible, for at least two reasons.

    1.  Extension methods cannot use a lambda as the receiver.

    2.  Type inference cannot be "keyed off" the int n.

    I would really like #1 to be addressed sometime but #2 doesn’t seem to be as big of an issue.

    Also, the ConvertToYCombinator().Memoize() would not really memoize the recursive calls.

  3. kfarmer says:

How is (1) the case?  Your MemoizeFix uses the lambda g as the receiver of the Memoize extension, does it not?  Or is it really a parsing thing — the extension is valid against the variable g (which holds a lambda) rather than the bare lambda?

    If so, and in any case, yes (1) wants to be fixed (as it were).

  4. bobcalco says:

    Hi Wes!

    I’m relatively new to your blog but it’s already become my primary Microsoft blog, and I check it out every day.

    I’m transitioning (thank God) from a Java-based company to a rather large .NET shop where I’ll play the role of Domain Architect in their IS department responsible for Application Design & Frameworks (focus on .NET). I start the new job at the end of this month. They’re in .NET 1.1 land and thinking of moving to 2.0 in the "next 12 to 18 mos"… anyway. The point is I’m very happy to be in the .NET world (again), in a role where I can influence the direction of things.

    It seems all the really interesting language developments are taking place in the CLR, and Java is just playing catch up with features like generics (type erasure? Good Lord, who thought that was a good idea?) and the proposal for closures in JDK 7. But they simply didn’t have the right vision. One language to rule them all? Who wants that? (Well, besides mindless systematizers and goose-steppers…)

    As another language geek myself, though, I can barely whip up any enthusiasm for Java lately–the innovations at Redmond not just in C# but with experimental compilers like SML.NET and C-omega are really outstanding. I’ve believed for some time that the future of "managed" computing is concurrent + multi-paradigm, not just multi-lingual per se. Good to see Microsoft heading in that direction as soon as the next release of C#.

    For some more nifty language ideas, check out this diamond in the rough: The syntax is gawky (pretty counterintuitive to most programmers I gather), but the semantics are truly fascinating–a real multi-paradigm bytecode compiler with network transparency and lightweight threading. Let me know what you think about it at

    I’ve been dabbling with my own .NET language called Muse that incorporates some functional and relational features (relational in the sense of logic programming), the goal being one day to add constraint-based programming features a la Oz some day.

    Muse was intended to be a language especially suitable for business logic programming on the .NET platform. But I have not had the free time to go where I’d like to go with it (still at the specification/initial prototype level); meanwhile, bright folks like yourself are innovating at the speed of light in C#. Frankly, you have my dream job. 🙂

    All of this is by way of saying, keep up the great work! I’m very excited to see the future of .NET programming through your blog. I plan to get my own up and running soon at

  5. Jafar says:

    Hi Wes,

    I’m glad to see that someone is talking about how to apply advanced functional programming techniques in C# 3.0.  However the more generic functions I write (Memoization, FixedPoint, Curry) the more I notice what appears to be a major weakness in C# generics, namely the fact that I must rewrite the function for every argument count.  Surely this can be avoided.  What about something like this:


    public static Func<*,T> Memoize<*,T>(this Func<*,T> func)
    {
        var dict = new Dictionary<Struct<*>,T>();
        return (* args) =>
        {
            var key = new Struct<*>(args);
            if (!dict.HasKey(key))
                dict[key] = func(args);
            return dict[key];
        };
    }

    Func<int,int> fib = null;
    fib = n => (n > 1) ? fib(n-1) + fib(n-2) : 1;
    var memoizedFib = fib.Memoize(); // yes, I know the memoization won't work correctly in this example, just showing my intent


    This would involve creating a generic Struct class that behaves similarly to Func, allowing you to specify an unlimited number of types (I don’t remember if this was done in C-Omega or not).  It would also be ideal if both Struct and Func were handled at the IL level instead of being converted by the compiler to Func0 and Struct0 so that all functional languages could use the construct to interoperate.  I assume this was considered but decided against for some reason.  Can you enlighten me, because I’m concerned that doing any advanced functional programming with this limitation will be very painful.

  6. wesdyer says:


    Yes, there is a difference between a lambda and a local of a delegate type.  A lambda does not really have a type.  It is implicitly convertible to "matching" delegate types and also to Expression<…> (expression trees).  Whereas the local has a type.  Currently when extension methods are resolved they do not perform delegate conversions (conversions from method groups or lambdas to delegates).  It is kind of sad, but we want to get this thing out the door and this is something that can be added later.


    Congrats on the new job!  Sounds like you will be having lots of fun.  I agree that there are a lot of exciting things happening at Microsoft in programming languages (C# 3.0, F#, Iron Python, …).  I also agree that concurrency and multiparadigm languages are the most fruitful looking avenues for the future.

    I’ll definitely check out the link and anytime you want to discuss (via email, posts, whatever) languages I’d love to do so.


    I agree that it is a little annoying.  I have felt this pain as well, especially when I have just recently written some Haskell code or something of that sort.  Fortunately, in C# that pain is mostly felt while writing library-like features.  Consuming these functions is much easier.

    That being said, early on it was decided that delegates were not structural, and so we are in the state that we are in today.  I’ll have to ask one of the old timers why that is the case.

    I can say that the language design team is aware of the pain.  I’m not sure if the particular solution that you propose was ever discussed.

    Actually, I love your example because I was going to blog very soon about how to do cool composite keys for multiarg memoization.

    Also, you may have noticed that I haven’t used var very much.  Don’t get me wrong, I love var (I actually implemented it).  I usually don’t use var in blog posts so that I am very explicit about what I am doing.  Personally, since I think it is a sin (in almost any case) to declare variables without an initialization, I almost always use var.
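    One straightforward way to build a composite key for a multiarg memoizer (a sketch, not necessarily the approach hinted at above) is to pack the arguments into a pair type such as Tuple, which supplies value-based Equals and GetHashCode:

```csharp
using System;
using System.Collections.Generic;

static class MultiMemo
{
    // Memoize a two-argument function using a Tuple as a composite dictionary key.
    public static Func<A, B, R> Memoize<A, B, R>(this Func<A, B, R> f)
    {
        var cache = new Dictionary<Tuple<A, B>, R>();
        return (a, b) =>
        {
            var key = Tuple.Create(a, b);
            R r;
            if (!cache.TryGetValue(key, out r))
                cache[key] = r = f(a, b);
            return r;
        };
    }
}
```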

  7. Jafar says:

    I’m glad you feel my pain. 🙂 It’s good to know that Haskell has the same issue and there is not some obvious solution that’s been overlooked.  I’m not a compiler designer and I can just imagine the kinds of conflicts and problems my approach could lead to.

    Given that this pain exists I feel like it would make a lot of sense for Microsoft to release a functional programming API with LINQ so that each programmer doesn’t end up reinventing the same basic functions (once for each argument count ;-).

  8. Jafar says:

    One of my favorite features of functional languages is tail recursion.  You’ve detailed how we can use the fixed point function to define a recursive lambda function.  Wouldn’t it make sense for C# to optimize tail calls?

    In the absence of this, would it be possible to write an Expression::Compile extension method that would recognize tail recursion and emit the .tail IL instruction?  Is it even possible to create a recursive Expression?  I’d appreciate it if you could demonstrate.

    I notice that in the CTP the constructor for expression objects appears to be internal.  Is this by design?  If so I’m disappointed.  It seems like the two methods of code generation in .NET are each unsatisfactory.  The CodeDOM is high-level but it is inefficient to compile at run-time.  The IL generator is fast, but it is too low-level, requiring me to do IL programming.  I thought that generating Expressions at run-time would be the perfect compromise because it is high-level and fast.  The new object initializers would even make it relatively readable.  Wouldn’t it make sense to allow the developer to create Expressions?

  9. wesdyer says:


    I kind of doubt that we will be able to release a functional API in the Orcas timeframe.  In my estimation, it is unfortunate but it is vital that we ship Orcas as soon as possible.  There is a much higher chance that we may do something post Orcas.  I personally plan to post a functional programming library either here or on codeplex.  In fact, I already have said library but plan to post it a little later on.  Possibly closer to the beta.

    I do think it makes sense to optimize tail recursive calls.  C# 3.0 runs on the Whidbey CLR and the .tail instruction is known to be a little inefficient.  In self-recursive calls we could emit a jump instruction to go back to the beginning of the method.  Recently, I’ve been looking into a number of optimizations that can be done in our compiler.  It is very interesting because in the past (pre C# 3.0) most of the code we generated was a pretty straightforward 1:1 mapping from C# to IL.  But with C# 3.0 we are introducing a lot of higher level abstractions.  Furthermore, the C# 2.0 features that are not as straightforward (iterators and anonymous methods) are being used *a lot* more.  So in short, I will definitely be looking into improving matters.  During Orcas our priority is to get correct, reasonable code generation and good tool support, and if we have time then we may improve code generation.  I will talk about some cool tricks we do in some of the features, but there is room for improvement.  For a good resource on the tail instruction, see:

    Alas, don’t be dismayed.  Even though the constructor is internal, there is a factory that will produce them for you.  This factory is called Expression.
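    For example, here is a minimal sketch that builds and compiles n => n * 2 by hand with the Expression factory methods:

```csharp
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Build the expression tree for n => n * 2, then compile it to a delegate.
        ParameterExpression n = Expression.Parameter(typeof(int), "n");
        Expression<Func<int, int>> doubler =
            Expression.Lambda<Func<int, int>>(
                Expression.Multiply(n, Expression.Constant(2)),
                n);
        Func<int, int> compiled = doubler.Compile();
        Console.WriteLine(compiled(6));   // 12
    }
}
```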

  10. Jafar says:

    Thanks so much,

    It’s fantastic to be able to get this kind of back and forth with the people who are working on my favorite language.

    I better stop pestering you so you can get that tail call optimization in by the deadline 🙂

    Seriously though, keep up the good work.

  11. Jomo Fisher says:

    Jomo Fisher—My current job is in the C# team working on LINQ to SQL. Because of nature of programming