The Virtues of Laziness

It seems that I riled some people up with my blog post yesterday.  After some thought, I think the primary reason for the backlash is that some people feel I violated one of the sacred principles of FP: lists are *the* data structure.  Well, let me set the matter straight.  I love lists, especially the singly linked immutable kind found in many functional languages.  Furthermore, Microsoft is not trying to launch some massive marketing campaign to sell IEnumerable<T> as the *new* list.  To be clear, I talked about IEnumerable<T> yesterday because people who have used functional languages will wonder where the lists are in C# 3.0.  IEnumerable<T> is not the same as a list, but there are many similarities between IEnumerable<T> and lazy lists, which I pointed out yesterday when I showed they are isomorphic by describing the bijective mapping between them.

Furthermore, it is the case that some data structures are generally better than others, but there are tradeoffs between them, and *not* all of the tradeoffs are captured by looking exclusively at the time complexity of some operation.  There is also the space complexity, and there is the complexity of the implementation itself.  And don't forget that data structures often have different characteristics for different operations.  The key point is that a given problem will have a number of constraints and each of the competing designs has a number of trade-offs.  As an interviewer, when I notice that a candidate is not aware of the trade-offs he is making, I start to worry that they were not considered at all.  And when I notice that a candidate seems to favor one data structure to the exclusion of others, I start to wonder how many tools are in the toolbox.  After observing this behavior many times, I am convinced that it is often not the coding problem that drives the choice of data structure but the mode of thinking that the candidate employs.

One more thing before I continue on.  At least one reader wondered why I said the following:

"Where most programmers who are accustomed to imperative style would naturally use an array, a variable, or a mutable object, a functional programmer will often use a list."

Yes, I meant to say exactly what I said.  In fact, when I wrote it, I paused before deciding to include it because I thought it might be misunderstood.  When I first read SICP, the most mind-bending and rewarding topic was at the end of chapter 3: the section on streams.  One of the motivations given for using a stream (an infinite list) was that variables that change their value over time cause problems.  One way to address this is to have a stream represent the state of a variable over time, where each element of the stream represents the state of the variable at some point in time.

So without further ado, let's take a look at streams...

What we want is an infinite list.  The problem is that an infinite list can never actually be fully realized because of its infinite nature.  So if we attempt to realize an infinite list, either the computer will enter an infinite loop or it will run out of resources (stack overflow, out of memory, etc.).

We can overcome this problem with lazy lists: lists where the next element is not realized until it is needed.  Yesterday, I presented one such lazy list, which uses an enumerator to realize the next element.  Today, I present another with a more intriguing definition.

class Stream<T> : IList<T>
{
  Func<IList<T>> next;
  T value;

  public Stream(T value, Func<IList<T>> next)
  {
    this.value = value;
    = next;
  }

  public IList<T> Next { get { return next(); } }
  public T Value { get { return value; } }
}

This lazy list is very similar to a normal list.  The only difference is that instead of taking the next node of the list directly, it takes a function which will evaluate to the next node in the list.  But this difference is critical.

The first difference can easily be seen by imagining a list type, ErrorList, that throws an exception when constructed.

class ErrorList<T> : IList<T>
{
  public ErrorList()   { throw new Exception(); }
  public IList<T> Next { get { throw new Exception(); } }
  public T Value       { get { throw new Exception(); } }
}

Now consider the following code:

var list = new List<int>(1, new ErrorList<int>());           // error, exception thrown
var stream = new Stream<int>(1, () => new ErrorList<int>()); // no error

An exception is thrown when the list is constructed but when the stream is constructed no exception is thrown.  Unless the Next property is evaluated on the stream, there will never be an exception.

The second difference can be seen in the following code:

IList<BigInteger> onesList = null;
onesList = new List<BigInteger>(1, onesList);
IList<BigInteger> onesStream = null;
onesStream = new Stream<BigInteger>(1, () => onesStream);

If you try to list all of the values in onesList, then you will notice that it only contains one value.  Whereas onesStream contains an infinite number of values.  The reason is that when we constructed onesList, we passed in onesList but onesList had the value null at the time so the next node was set to null.  But in the stream case we passed in a function that would be evaluated sometime in the future.  By the time that we evaluate it, it will return the proper value of onesStream.

A third difference is found in the performance of the two lists.  With lazy lists, parts of the list that are never realized are never paid for, so it is a pay-as-you-go model as opposed to paying for everything up front.  Furthermore, less space may be required, since the whole list is not necessarily held in memory at once but only a process description that can compute each element as it is required.

So now we have this infinite stream of ones, but can we do anything more interesting?  Sure.  First, let's define a zip function between lists.

  public static IList<V> Zip<T,U,V>(Func<T,U,V> f, IList<T> list1, IList<U> list2)
  {
    if (list1 == null || list2 == null)
      return null;
    return new Stream<V>(f(list1.Value, list2.Value), () => Zip<T,U,V>(f, list1.Next, list2.Next));
  }

Yes, that is somewhat of a mouthful.  Here is what it does.  It takes two lists, which may differ from each other in the kind of elements they contain.  It also takes a function from the element type of the first list and the element type of the second list to a possibly different result type.  If either of the lists is empty, we return an empty list (null).  But if both lists are non-empty, then we return a stream whose first element is the application of the given function to the first elements of both lists, and whose tail is the evaluation of Zip on the tails of the two lists.  It is important that Zip uses a stream, because these lists may be infinite, and if we tried to immediately evaluate the entire list we would run out of resources.

Now that we have Zip, let's put it to use to define the natural numbers.

IList<BigInteger> ones = null;
ones = new Stream<BigInteger>(1, () => ones);
IList<BigInteger> natural = null;
natural = new Stream<BigInteger>(0, () => Zip((x, y) => x + y, natural, ones));

So what we are saying is that the natural numbers begin with zero, and then the next natural number is the sum of the first natural number and the first element of ones (0 + 1).  The one after that is the sum of the second natural number and the second element of ones (1 + 1), and so on.  This works because each element of the natural numbers is defined only in terms of the elements that occur before it.

So now we can easily define the odd numbers (2k + 1) and even numbers (2k).  But first we need a map function for lists.

public static IList<U> Map<T, U>(Func<T, U> f, IList<T> list)
{
  if (list == null)
    return null;
  return new Stream<U>(f(list.Value), () => Map(f, list.Next));
}

Now here are the definitions of odd and even numbers.

IList<BigInteger> odds = Map(x => 2 * x + 1, natural);
IList<BigInteger> evens = Map(x => 2 * x, natural);

We can also define the Fibonacci sequence as a stream.

IList<BigInteger> fibs = null;
fibs = new List<BigInteger>(0, new Stream<BigInteger>(1, () => Zip((x, y) => x + y, fibs, fibs.Next)));

What we are saying here is that the first Fibonacci number is zero and the second is one, and each subsequent number is the sum of the two numbers before it.  This is similar to the definition of the natural numbers, which used itself to compute itself.

If you try it out, you will also notice that it isn't very efficient.  This is because we are back to exponential time complexity.  But this can easily be remedied by memoizing the next function in the constructor of the stream:

  public Stream(T value, Func<IList<T>> next)
  {
    this.value = value;
    = next.Memoize();
  }

Now it has linear time complexity by trading off some space (linear as well).

Let's finish by solving the famous problem of producing the Hamming numbers.  The problem is to list, in ascending order, all of the positive integers that have no prime factors other than 2, 3, or 5.  The first ten Hamming numbers are:

1, 2, 3, 4, 5, 6, 8, 9, 10, 12

This problem is notoriously difficult without lazy evaluation and is used to demonstrate the power of laziness.

To solve this problem, first note that the first Hamming number is 1.  Then if h is a Hamming number, so are 2h, 3h, and 5h.  So we can define three streams which map the Hamming numbers to 2h, 3h, and 5h respectively.  The only remaining requirement is that they must be in order.  We can maintain this invariant by defining a function named Merge:

public static IList<T> Merge<T>(IList<T> list1, IList<T> list2) where T : IComparable<T>
{
  if (list1 == null)
    return list2;
  else if (list2 == null)
    return list1;
  int c = list1.Value.CompareTo(list2.Value);
  if (c < 0)
    return new Stream<T>(list1.Value, () => Merge(list1.Next, list2));
  else if (c > 0)
    return new Stream<T>(list2.Value, () => Merge(list1, list2.Next));
  else
    return new Stream<T>(list1.Value, () => Merge(list1.Next, list2.Next));
}

Now we are ready to define the Hamming numbers.  Notice how close the definition in the code is to our description:

IList<BigInteger> hamming = null;
hamming = new Stream<BigInteger>(1, () => Merge(Map(x => x * 2, hamming), Merge(Map(x => x * 3, hamming), Map(x => x * 5, hamming))));

Now for fun, try to think of how to do it without lazy evaluation.

Comments (22)

  1. Tom Kirby-Green says:

    Excellent stuff, I’m reminded of the "streams are just delayed lists" section in SICP.

If you keep up posting like this Wes then I think the .NET community is going to have to award you a ‘Brummie’ (as in Chris Brumme). Brummies are awarded for regularly posting uber long, can’t-find-this-stuff-anywhere-else, technical blog entries. 🙂

  2. Martin Murphy says:

    Keep it up.  This is excellent stuff.

  3. wesdyer says:

    Thanks Tom.  Yes, much of the latter half of this post was directly inspired by that section of SICP.  I think that was my favorite section even more so than the metacircular evaluator.

  4. Nathan says:

    Excellent post.  That’s a great solution for hamming numbers.

  5. Cale Gibbard says:

    It’d be a shame not to show the analogous Haskell code for this:

    Haskell is a language which is lazily evaluated throughout (well, the standard just says non-strict semantics, but lazy evaluation is the most common implementation of that).

    The types of hamming and (#) (that code’s name for the ordered merge operation) are inferred by the compiler, and pattern matching makes the breaking down of the list in the merge algorithm very clean.

    For those unfamiliar with the language, comprehension might be helped by knowing that (x:xs) means a list which starts with x, and the rest is a list called xs, and xxs@(x:xs) is a pattern that matches a nonempty list, binding the first element to x, the rest to xs, and the whole list to xxs. The little bits after the | symbol are called guards, and are conditions which are tried in turn to see which right hand side of the equation applies.

  6. wesdyer says:


    Thank you for posting the solution to the hamming problem in Haskell.  Haskell is a very elegant language that I love and use often.

  7. Bradley Grainger says:

    The C/C++ Users Journal had an article giving an elegant solution to this problem in both Haskell and C++ (using the STL). The code, and a discussion, can be found at LtU:

  8. Jules says:

    Here is a Ruby solution:

    hammings = [1]
    ms       = {2 => 0, 3 => 0, 5 => 0}

    20.times do
      n, m ={ |m, i| [hammings[i] * m, m] }.min
      hammings << n if hammings.last != n
      ms[m] += 1
    end

    hammings # => [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24]

  9. wesdyer says:


    Very nice.  I especially liked the discussion on LtU.  The Haskell solution is basically what I implemented but I did not implement a scale function and instead just used map.


    I like your iterative solution.  Can you reformulate it to be an infinite sequence?

  10. Patrick Lioi says:

    Fascinating stuff.  But, oh no!  I thought I understood closures until I saw the following:

    IList<BigInteger> onesStream = null;

    onesStream = new Stream<BigInteger>(1, () => onesStream);

    My (wrong) understanding was that when you define a lambda that references variables in the surrounding scope, a special class is created behind the scenes whose member variables match the names and types of the variables being captured.  So, I would have expected the lambda to still return null the first time that it was actually executed.

    How does the lambda "know" that the null got overwritten, since that appears to happen right *after* the closure is set up?

  11. wesdyer says:


    Don’t worry, your understanding wasn’t too far amiss.  All I need do is quote you and make a slight modification:

    "When you define a lambda that references variables in the surrounding scope, a special class is created behind the scenes whose member variables *are the* variables being captured.  So, I would have expected the lambda to return *the current value of the variable* the first time that it was actually executed."

    For another example that will clear things up see this post:


  12. James says:

    I have been visiting this blog for a month or so after stumbling on it from Google.  I am behind the curve and I usually have to read each paragraph more than once to try and comprehend 50% of it.

    Any links to resources like "Yet another language Novice" or what SICP stands for and some links to working on my learning curve of C# and Functional Language would be greatly appreciated.  The paradigm shift for me is difficult to wrap my brain around.

  13. Tom Kirby-Green says:

    SICP stands for "Structure and Interpretation of Computer Programs" by Harold Abelson, Gerald Jay Sussman and Julie Sussman (MIT Press). I ignored this book 15 years ago when I was at Uni (I was young and foolish and besotted with C at the time, where as now I’m old[er] and foolish). I’m reading it now, and wow – I can literally feel my mind expanding. I can’t recommend it enough.

  14. wesdyer says:


    It is very commendable that you are working hard to educate yourself.  I am sure you will get there.  I agree wholeheartedly with Tom and discussed the process of learning to think functionally in the following post:

    And if you don’t understand something then just ask.  Don’t ever feel embarrassed or like you are taking our time or whatever.  Someone will either answer your question or I may make a post revolving around the answer to the question.  If you have a question then it is likely that many other people have the same question.  We are all fellow travelers on the road to understanding.

  15. Welcome to the twenty-first Community Convergence. I’m Charlie Calvert, the C# Community PM, and this


  17. Grant Woodside says:

    I understand IEnumerable<T> and lazy lists. Can other data structures, like a binary tree, be lazy?

  18. wesdyer says:


    Yes, other structures can be lazily built as well.  Here is an example with binary trees:

      static void Test()
      {
          var array = new[] { 1, 2, 3, 4, 5, 6, 7 };
          var tree = array.CreateTree();
      }

      static IBinaryTree<T> CreateTree<T>(this T[] array)
      {
          return CreateTree(array, 0, array.Length - 1);
      }

      static IBinaryTree<T> CreateTree<T>(T[] array, int start, int end)
      {
          if (end < start)
              return null;
          var mid = (end - start) / 2 + start;
          return new LazyBinaryTree<T>(array[mid], () => CreateTree(array, start, mid - 1), () => CreateTree(array, mid + 1, end));
      }


    Where LazyBinaryTree is very similar in definition to LazyList.

    A more practical example is perhaps a parse tree where sections of the parse tree are not realized (parsed) until they are needed.
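A minimal Python sketch of the same lazily built tree, where each subtree is a thunk forced only when that branch is visited (illustrative names, not the original code):

```python
class LazyTree:
    """A binary tree node whose children are thunks, built only when visited."""
    def __init__(self, value, left_thunk, right_thunk):
        self.value = value
        self._left_thunk = left_thunk
        self._right_thunk = right_thunk

    @property
    def left(self):
        return self._left_thunk()

    @property
    def right(self):
        return self._right_thunk()

def create_tree(arr, start=0, end=None):
    """Lazily build a balanced tree over arr[start..end] (inclusive)."""
    if end is None:
        end = len(arr) - 1
    if end < start:
        return None
    mid = (end - start) // 2 + start
    return LazyTree(arr[mid],
                    lambda: create_tree(arr, start, mid - 1),
                    lambda: create_tree(arr, mid + 1, end))

tree = create_tree([1, 2, 3, 4, 5, 6, 7])
print(tree.value)        # → 4
print(tree.left.value)   # → 2
print(tree.right.value)  # → 6
```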

  19. Design patterns have been all of the rage for a number of years now. We have design patterns for concurrency,

  20. Agro says:

    It’s black magic for me – but not all must new all. Best regards.

  21. David Wagner says:

    I realize I’m commenting very late on this post, but I only ran across it just now.  Is the Hamming problem so hard without lazy streams?  Here’s my solution to print all Hamming numbers up to 1000, in sorted order:

    import java.util.TreeSet;

    public class Ham {

       public static void main(String[] argv) {

           TreeSet<Integer> s = new TreeSet<Integer>();

           s.add(1);

           while (true) {

               int i = s.pollFirst();

               if (i > 1000)
                   break;

               System.out.println(i);

               s.add(2 * i);
               s.add(3 * i);
               s.add(5 * i);
           }
       }
    }
  22. wesdyer says:

    Yes, that works quite nicely.  But doesn’t the solution just act like a lazily built stream?  
