A Joel On Software reader asked the other day for examples of recursive functions other than old chestnuts like Fibonacci or factorial. Excellent question. I suggested topological sort, of course, but there are plenty of other examples that are way better than the Fibonacci numbers. Why do I think Fibonacci is bad? Read on.

In case it's slipped your mind, the Fibonacci numbers are defined as follows:

fib(1) = 1
fib(2) = 1
fib(n) = fib(n-1) + fib(n-2)

thus, they go 1, 1, 2, 3, 5, 8, 13, 21, …

The Fibonacci numbers have many interesting properties. For example, suppose you have a pair of rabbits that every year gives birth to a pair of rabbits, rabbits take one year to reach maturity, and rabbits never die. In that (admittedly unrealistic) situation, the number of pairs alive at any given time is a Fibonacci number. Clearly the number alive this year is the number alive the year before (who all live) plus the number alive two years before (who are now mature enough to breed). Fibonacci numbers also have a relationship to the Golden Mean, logarithmic spirals, and many other interesting nooks of mathematics. But that's not what I want to talk about today.

Because they have a straightforward recursive definition, it's natural to introduce recursive programming to students by simply translating the definition into code:

function fib_1(n)
{
    if (n == 1 || n == 2)
        return 1;
    else
        return fib_1(n-1) + fib_1(n-2);
}

I am **in principle** very much for this pedagogic approach. Recursive programming can be difficult to get your head around. Starting with recursive definitions, getting your head around those, and then using those definitions in code is a pretty good way to approach the material. If you think of lists as being "zero or more items in a row", it is conceptually hard to think of anything other than an iterative approach to processing the list. If you think of a list as "a list may be empty, or may be an item followed by a list", then the definition of the data lends itself to a recursive programming approach very naturally.
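A tiny JavaScript sketch of that recursive view of a list, with the list encoded as null (empty) or a {head, tail} pair — the encoding is my own, chosen for illustration:

```javascript
// A list is either empty (null) or a head item followed by a list.
function sum(list) {
  if (list === null) return 0;          // empty list: nothing to add
  return list.head + sum(list.tail);    // head plus the sum of the rest
}

// Usage: the list 1, 2, 3
var nums = { head: 1, tail: { head: 2, tail: { head: 3, tail: null } } };
// sum(nums) is 6
```

The function's shape mirrors the definition exactly: one branch per case.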

**Practically** however, this is probably the worst possible simple way to solve the Fibonacci problem. When you introduce recursion by using it to solve a problem which iteration solves much, much better, students come away thinking that recursion is dumb -- that it is both harder to understand and produces worse solutions. Better to pick an example where the iterative solution is not better!

In fact, this is a good whiteboarding question for interviews: implement me a recursive Fibonacci algorithm, tell me why it is terrible, and write me a better algorithm. A good candidate should be able to come up with the above and something like this:

function fib_2(n)
{
    if (n == 1 || n == 2)
        return 1;
    var fibprev = 1;
    var fib = 1;
    for (var cur = 2; cur < n; ++cur)
    {
        var temp = fib;
        fib += fibprev;
        fibprev = temp;
    }
    return fib;
}

Typical questions that I might ask at this point in an interview are

- What are the asymptotic running time and memory usage of the two algorithms you've written?
- Can you write me an algorithm with even better asymptotic running time and memory usage? Alternatively, can you convince me that this is as good as it gets?
- Suppose I asked you to make this algorithm absolutely as fast as possible. What would you do?
- Now suppose that I do not care much about raw speed but I do want the method to be robust in the face of bad input. Now what do you do?

and so on. (Anyone who cares to propose answers to these can leave comments; consider this a spoiler warning if you want to work them out for yourself. Working out the asymptotic running time of recursive fib is quite easy and produces a somewhat surprising result.)

Another old chestnut that's often used as an introduction to recursion is the famous **Tower Of Hanoi** problem. Briefly stated, you have a number of disks with a hole in the middle, and three spikes upon which you can place disks. The disks are all of different sizes and are arranged from biggest to smallest in a pyramid on one spike. You must move one disk at a time from spike to spike, you must never place a larger disk on top of a smaller disk, and you must end up with the entire pyramid on a different spike than you started with.

The recursive solution is straightforward. Moving a "sub-pyramid" of size one is obviously trivial. To move a sub-pyramid of size n, first move its top sub-pyramid of size n-1 to the spare spike. Then move the bottommost disk to the target spike. Finally, move the sub-pyramid of size n-1 from the spare spike onto the disk now on the target spike. A nice recursive solution!

Many recursive implementations for this old chestnut exist. But I'm not a big fan of this problem as a pedagogic approach to recursion for two reasons. First, it's **totally contrived** (as is "fib"). Second, there is a very simple iterative solution for this problem, which is almost never mentioned when it is used as an example of the power of recursive programming! Again, we should pick examples where recursion is **clearly** preferable.

In case you're curious, the iterative solution can be produced by writing a program that implements these rules in a loop:

- Number the disks 1 to n starting from the top.
- Never move an odd disk onto an odd disk.
- Never move an even disk onto an even disk.
- Never move a larger disk onto a smaller disk.
- Never move the same disk twice in a row.
- Never move a disk onto an empty spike unless you have to.
- Move disks until there are no legal moves.

If you don't believe me, try it yourself with ace through ten of a deck of cards sometime. Or, perhaps start with ace through four -- ace through ten will take you a while. Both the recursive and iterative solutions are optimal in the number of moves. The recursive solution is O(n) in terms of memory usage, the iterative solution is O(1).

Good points about Fibonacci and Towers of Hanoi. A better (IMO) problem to work with recursively is the "missionaries and cannibals" problem. As a side note, this problem would require the students to already be fairly proficient at programming (we did it 3/4 of the way through our first semester course IIRC).

For the interested, I will describe this problem. You have N missionaries and M cannibals on one side of a river (N > M). There is one rowboat which can hold two people. If there are ever more cannibals than missionaries on one side of the river, the cannibals eat the missionaries on that side (a failed solution).

Now show me the optimal set of steps to move all missionaries and all cannibals to the other side of the river.

Note: I'm fairly sure this could be done iteratively too, but it's a LOT easier to figure out what to do if you approach it recursively.

Good one Eric,

I've often thought about how hideous the Fibonacci recursion example is. Actually I believe that I've got a book (maybe WEP, but I'm not sure) that discusses this as well.

Personally I've always liked "search in a binary tree" for my example. Or, if people don't understand binary trees, search in a sorted array for my examples of recursion.

Indeed -- binary search of a sorted array, either recursive or iterative, is another good interview problem because it requires intense attention to detail. The number of ways to get an off-by-one error is huge!
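For the record, here is a recursive sketch in JavaScript; the inclusive lo/hi bounds are exactly where the off-by-one bugs like to hide:

```javascript
// Recursive binary search over a sorted array.
// Returns an index of target, or -1 if absent.
function bsearch(arr, target, lo, hi) {
  if (lo === undefined) { lo = 0; hi = arr.length - 1; }
  if (lo > hi) return -1;                  // empty range: not found
  var mid = lo + ((hi - lo) >> 1);         // midpoint without (lo+hi) overflow
  if (arr[mid] === target) return mid;
  return arr[mid] < target
    ? bsearch(arr, target, mid + 1, hi)    // search the upper half
    : bsearch(arr, target, lo, mid - 1);   // search the lower half
}
```

Try flipping any of the +1/-1 adjustments and see how many inputs still work; that is the interview trap in miniature.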

Hmm -- in your iterative sample, wouldn't a check to see if n < 1 be important? Otherwise when you get to your loop, cur < n won't be true until you hit an integer overflow ... at which point you have a very wrong answer.

I would use a recursive-descent parser to explain the occasional advantage of using recursion over an iterative approach.

The simplest recursion? Directory/file search. Much better than iterative logic.

> in your iterative sample, wouldn't a check to see if n < 1 be important?

That's why I ask a question about robustness.

> when you get to your loop, cur < n won't be true until you hit an integer overflow

The sample is in JScript, not C. There's no such thing as integer overflow in JScript. When a number gets too big, we have a special value for "infinity".
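You can see this directly in any modern JavaScript engine (the limits shown are IEEE-754 facts, though running this requires an engine newer than classic JScript):

```javascript
// JavaScript/JScript numbers are IEEE-754 doubles: no wraparound,
// just saturation to Infinity, and exact integers only up to 2^53 - 1.
var big = Math.pow(2, 1024);          // exceeds the largest finite double
console.log(big);                     // Infinity
console.log(Math.pow(2, 53) - 1);     // 9007199254740991: the last exact integer
```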

1) The recursive fib sequence will require 2 frames (function return pointers + some) on the stack for each recursive call, not to mention the overhead of setting up the next function calls. Ouch! Would this be O(2^n)?

2) The gist of fib_2() is likely as good as it can get based on my knowledge of generating Fib sequences (w/o cheating with a google search)

3) Not sure how JScript deals with a var declaration in the loop (If it reallocates space for temp instead of changing the contents of what temp points to). If it reallocates, I would declare temp previous to the loop.

4) As Shawn pointed out, bounds checking to prevent overflows and negative numbers in fib_2() would be sufficient to make it robust.

As for a recursion example, I always think of getting a complete (recursive) directory listing as a good one - easy for the student to comprehend the problem even if you are not explaining it to a math genius.

I have to agree with you on The Tower example. In the real world, I never decided to pick a recursive solution because the problem at hand reminded me of the Tower example.

Require all of your students to learn Scheme or Lisp as their first programming language and then forbid the use of the iterative functions in both languages so that you have to use recursion for everything...

That has to be the WORST way to teach recursion. It took me years to get over my hatred of recursion because of that!

Actually, there is an iterative algorithm for Hanoi that requires far fewer rules to remember:

while (problem not solved)
{
    move the smallest disk one position clockwise;
    perform the only available move not involving the smallest disk;
}

It yields the same sequence of moves as the recursive algorithm.
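A JavaScript sketch of both algorithms, for comparison. One detail the two-rule description leaves open is which way "clockwise" points; in this sketch the direction is chosen by the parity of n so that the tower lands on the third peg, matching the recursive solution. That choice is my assumption, not part of the comment.

```javascript
// Recursive Hanoi: returns the move list as [from, to] peg indices.
function hanoiRecursive(n, from, to, via, moves) {
  moves = moves || [];
  if (n === 0) return moves;
  hanoiRecursive(n - 1, from, via, to, moves);
  moves.push([from, to]);
  hanoiRecursive(n - 1, via, to, from, moves);
  return moves;
}

// Iterative Hanoi using the two rules above. "Clockwise" is
// 0 -> 1 -> 2 -> 0 for even n, and 0 -> 2 -> 1 -> 0 for odd n.
function hanoiIterative(n) {
  var pegs = [[], [], []];
  for (var d = n; d >= 1; d--) pegs[0].push(d);    // disk 1 (smallest) on top
  var step = (n % 2 === 0) ? 1 : 2;
  var moves = [];
  function move(from, to) {
    pegs[to].push(pegs[from].pop());
    moves.push([from, to]);
  }
  var smallest = 0;                                // peg holding disk 1
  while (pegs[2].length !== n) {
    // Rule 1: move the smallest disk one position "clockwise".
    var target = (smallest + step) % 3;
    move(smallest, target);
    smallest = target;
    if (pegs[2].length === n) break;               // solved on that move
    // Rule 2: the only legal move not involving the smallest disk.
    var others = [0, 1, 2].filter(function (p) { return p !== smallest; });
    var topA = pegs[others[0]].length ? pegs[others[0]][pegs[others[0]].length - 1] : Infinity;
    var topB = pegs[others[1]].length ? pegs[others[1]][pegs[others[1]].length - 1] : Infinity;
    if (topA < topB) move(others[0], others[1]);
    else move(others[1], others[0]);
  }
  return moves;
}
```

For every n I tried, the two produce identical move sequences of length 2^n - 1, consistent with the claim above.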

The recursive Fibonacci function has one more interesting property; if you call fib_1(n) once, fib_1(n-1) gets called once, fib_1(n-2) twice, fib_1(n-3) three times, and fib_1(n-k) gets called fib_1(k+1) times.

As for teaching recursion: quick sort and tasks involving tree traversal (naive tree-based set or map, dir /s).

> Ouch! Would this be O(2^n)?

Good guess. Can you prove it?

> likely as good as it can get

I'll give you a hint -- there is an O(1) fib algorithm.

> Not sure how JScript deals with a var declaration in the loop

All var declarations are logically hoisted to the top of the enclosing function, and braces do not define scopes. There's a good blog topic...

> bounds checking to prevent overflows and negative numbers in fib_2() would be sufficient to make it robust.

Oh really?

What if you pass in too many arguments? Too few? Strings? Dates? Arbitrary COM objects? null? undefined?

Isn't the running time for a recursive fibonacci algorithm proportional to the fibonacci number that it's trying to compute?

To write the fastest possible generator, it's possible to put all the answers in a table (written by an iterative algorithm). The table doesn't get too large because the fibonacci sequence grows quite fast so you're soon beyond 32 bit numbers. (Can't remember how soon now.)

Often, it's preferable to use recursion for

- formal specification of a problem, and

- derivation of a solution.

The special case here would be the so-called tail recursion, which can always be rewritten into an efficient iteration.

By itself (without derivation) fib_2 has limited value, both for teaching and for job interview purposes. After one writes down this algorithm, students or candidates are not likely (unless they're really brilliant) to find the even better solution with time complexity O(log N).

This solution is suggested by Kaldewaij in "Programming: The Derivation of Algorithms" (Prentice Hall International Series in Computer Science), p. 88.

> I'll give you a hint -- there is an O(1) fib algorithm.

There exists an analytical formula for fib(n) that involves the Golden Ratio. Can’t remember it off the top of my head, though.

Binet’s Formula: fib(n) = (Phi^n-(-Phi)^(-n))/sqrt(5), where Phi = (sqrt(5)+1)/2.

This is O(1) if you consider raising a float to the power an O(1) operation.

Another way to squeeze things down a bit more (this version validates only where necessary, but is still robust):

change

    if (n == 1 || n == 2)
        return 1;

to

    if (n < 3)
    {
        if (n > 0)
            return 1;
        else
            // throw your favorite exception here
    }

Of course, the above solution is optimal (Binet's formula)

>> bounds checking to prevent overflows and negative numbers in fib_2() would be sufficient to make it robust.

> Oh really?

A hint to what Eric may be referring to - JavaScript is not statically typed.

With regard to recursion in general: not all programming languages were created equal. Supporting tail recursion can make all the difference between a valid recursive implementation and an invalid one.

Another thing is that some languages make it very easy to define lists recursively without specifying the functions explicitly. For example Haskell:

fib = 1 : 1 : zipWith (+) fib (tail fib)

For more Haskell examples see http://cubbi.org/serious/fibonacci/haskell.html

BTW BeyondJS makes it possible to define the Fibonacci in JavaScript in the same way:

var fibs = (0).lazy(1);

fibs.extend(fibs.zipWith("+", fibs.tail()));

May not be as pretty as Haskell, but it works! Now to display the first 10 values in the sequence:

fibs.head(10).foreach(alert);

The interesting thing is that thanks to lazy evaluation the performance of Haskell and BeyondJS is actually quite good. This is because once a value in the sequence has been calculated it's saved and not calculated again.

You can find BeyondJS at http://w3future.com/html/beyondJS/
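The same save-once-computed trick can be sketched in plain JavaScript with an explicit cache. This memoized version keeps the recursive shape of fib_1 but runs in O(n):

```javascript
// Memoized recursive Fibonacci: each value is computed once and cached,
// collapsing the exponential call tree to a linear one.
var fibMemo = (function () {
  var cache = { 1: 1, 2: 1 };
  return function fib(n) {
    if (!(n in cache)) cache[n] = fib(n - 1) + fib(n - 2);
    return cache[n];
  };
})();
```

The closure keeps the cache private, so repeated calls also get faster over time.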

I don't think you can have an algorithm with O(1). Using the Binet Formula leads to an algorithm with O(log N) time complexity (even though apparently you can use a field so that you only use powers of integers, not real numbers; I wouldn't know how to do that, but it's mentioned here: http://www.cs.wisc.edu/~mhock/SSL/fibcalc.pdf)

Space complexity should always be at least proportional to the length of the resulting Fibonacci number, which (I believe) is proportional to phi^n.

I meant proportional to phi^n / 2^n. Of course if you implement it using say int32s this is not really relevant, you'd just overflow.

> Binet’s Formula: fib(n) = (Phi^n-(-Phi)^(-n))/sqrt(5), where Phi = (sqrt(5)+1)/2

You can simplify this by noticing that Phi is greater than 1 so Phi^-1 is less than 1 and Phi^-n is much less than 1 for large n. So you'll get the same answer by rounding Phi^n/sqrt(5) to the nearest integer. Note that this formula doesn't work for negative integers.

The easiest way to derive the formula (that I know of) is using a generating function.
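In JavaScript the rounded form is nearly a one-liner. A sketch — trustworthy only while the floating-point error stays under one half, which gives out somewhere around n = 70 in double precision:

```javascript
// Rounded Binet formula: fib(n) = round(Phi^n / sqrt(5)).
// Exact only while double-precision rounding error stays below 0.5.
function fibBinet(n) {
  var phi = (1 + Math.sqrt(5)) / 2;
  return Math.round(Math.pow(phi, n) / Math.sqrt(5));
}
```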

> I don't think you can have an algorithm with O(1)

It's O(1) as long as you have a fixed precision for numbers. In general, F_n has O(n) bits in the result so every algorithm must take at least O(n) time.

> apparently you can use a Field so that you only use powers of integers

Well, rationals, which can be implemented using integers and some fixup work. The trick requires creating a field extension, a field extension requires a field (naturally), and the integers are not a field. If you had a suitable implementation of integers mod p you could do it with that finite field. Your results may look a little strange at times though!

So if neither "Fibonacci numbers" nor the "Tower of Hanoi" problems are good pedagogic examples ... what would you use to teach recursion to students?

It depends on the purpose of the class of course. But in general, I'd try to find an example where a recursive approach is clearly better, and where the data being manipulated is defined recursively.

I agree with the commenters above who suggested that file systems are a good place to start. Many students understand them at a practical level already, and you can quite easily define them recursively: a file system is a DAG of folders; a folder may contain zero or more files and zero or more folders. Once you have the concept of a recursively defined DAG down, lots of topics become teachable.

Directed Acyclic Graph, is it? A traditional DOS or Windows 9x or NT <=4.0 file system is just a directed tree for one drive, and a directed forest for the whole system. Expanding the class to DAGs allows for directories that are accessible via several distinct paths. Symlinks (aka junction points)? But those same symlinks can introduce cycles into the graph. Although not recommended, they are possible.

That leads to a practical problem. Suppose you have a file system supporting symlinks, namely, NTFS 5. How do you traverse it, starting at a given root, visiting each directory only once (even if it is accessible from the root through more than one path), and not getting caught in loops?
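One standard answer is to carry a "visited" set through the recursion. A sketch with plain objects standing in for directories — the {id, children} shape is mine, and a real traversal would compare file IDs or canonical paths rather than in-memory objects:

```javascript
// Visit every directory exactly once, even when symlinks create
// multiple paths to the same directory or outright cycles.
function visitDirs(dir, visit, seen) {
  seen = seen || new Set();
  if (seen.has(dir.id)) return;     // already reached via another path
  seen.add(dir.id);
  visit(dir);
  for (var i = 0; i < dir.children.length; i++)
    visitDirs(dir.children[i], visit, seen);
}
```

With the set in place, cycles simply terminate the recursion instead of hanging it.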

Bah, no one liked my example. 🙁

But yes, most forms of graph searching are more easily understood when phrased in recursive terms (and frequently easier to write).

Correct recursive function for Fibonacci can be based on the Lucas numbers:

F(n+m) = 1/2 * (F(n)*L(m) + F(m)*L(n))

where L is the Lucas sequence (defined like Fibonacci, but starting from (1, 3) instead of (1, 1)).

Taking into account that L(x) = F(x-1) + F(x+1), and optimizing a little, gives:

F(n+m) = F(n-1)*F(m) + F(n)*F(m+1)

This way you can create pretty fast recursive algo to calc big Fibonacci numbers.

Fibonacci calculation was once at an All-Ukrainian Informatics Contest (4 hours total to solve 4 CS problems). Test cases were up to F(65000) - about 14Kb of output.
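Setting m = n in that last identity gives the classic "fast doubling" recursion, which computes F(n) in O(log n) steps. A JavaScript sketch, using the convention F(0) = 0, F(1) = 1, and exact only while the values fit in a double:

```javascript
// Fast doubling: from F(k) and F(k+1) compute F(2k) and F(2k+1).
//   F(2k)   = F(k) * (2*F(k+1) - F(k))
//   F(2k+1) = F(k)^2 + F(k+1)^2
// Returns the pair [F(n), F(n+1)].
function fibPairFast(n) {
  if (n === 0) return [0, 1];
  var half = fibPairFast(n >> 1);
  var a = half[0], b = half[1];          // a = F(k), b = F(k+1)
  var c = a * (2 * b - a);               // F(2k)
  var d = a * a + b * b;                 // F(2k+1)
  return (n & 1) ? [d, c + d] : [c, d];
}

function fibFast(n) { return fibPairFast(n)[0]; }
```

With an arbitrary-precision integer type underneath, this is how values like F(65000) are computed quickly.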

IMHO, teaching recursive functions starting with these very simple examples isn't such a bad practice. Recursive factorial, Fibonacci and Tower of Hanoi are easy to understand and useful for explaining the concept. The recursive solution for the Tower of Hanoi is far from optimal, but elegant, and a very simple solution to a problem that a novice might believe is very complex. After the concept is understood, starting from those solutions, other concepts can be introduced: lazy evaluation, backtracking cuts, algorithmic complexity, optimizations and so on. Graphs are more complex concepts, and should be introduced later.

If we talk about the basic concepts, maybe we should start with: a=a+1 🙂 Most people have a basic math background before they start to learn programming, so even that basic concept can raise some understanding problems. After that, many will use a++ but maybe not so many will know why ++a is recommended ;).

These days it seems that algorithmic optimization isn't the top priority for most real applications (unfortunately); instead the top priority is to reduce development time and, of course, cost. So RAD tools, scripting languages, GC and so on are used, and all of them have optimization tradeoffs. Also, how many of you have encountered the famous bubble sort algorithm in real application code? 🙂 But perhaps the application worked well and the client was happy. Until the profiler says otherwise, almost any solution is acceptable if the application is delivered on time, isn't it? ;)

So, if optimizations aren't such a problem in real applications, why should we bother about them when the purpose is understanding a concept? But I totally agree that teachers should also explain the alternatives.

Best regards,

Bogdan

One particularly good teaching application for recursion is the Merge Sort.

Among the benefits: Merge Sort is one of the simplest O(N log N) sorting algorithms to write.

We have internal email lists for questions about programming languages. Here's one that came across...

Hi Eric,

I implemented your iterative rules for solving Towers Of Hanoi in VBScript and had lots of fun, thanks!

The problem has always been presented to me in a slightly more stringent way, in that the tower starts on the left spike and must be moved to the right hand spike to win.

Solving this slightly more stringent scenario can be achieved by adding one more rule concerning the first move:

The first move should always be from the first column to:

a) the second column for an even number of disks.
b) the third column for an odd number of disks.

Also, by playing around with my code, I found by trial and error that:

The number of iterations needed to solve for n disks is equal to double the number of iterations required for n-1 disks plus one.

Thanks for such a great blog and good luck with the marriage!

Regards,

Tim

Well, it would be surprising if the number of moves was any different! After all, the algorithm for moving a tree of size n is essentially

* move a tree of size n-1 onto an empty spike, in k moves
* move the bottom disk: one move
* move the tree of size n-1 onto the bottom disk, again in k moves

So the total number of moves has to be 2k+1.

Um... Brett's suggestion of Directory/File search was exactly what I was thinking when I read this post. The student would be less distracted by the problem domain and more focused on the concept of recursion. Add to that, a Directory/File search is something that a person is more likely to actually use.

I still remember a homework assignment to solve the tower of Hanoi problem without recursion. I got a 60% on a well written, well commented, and perfectly working program - I didn't google the answer, but probably came up with something similar to what was listed here (it may not have been optimal number of moves, but that wasn't part of the assignment).

I lost 40% because my professor was convinced the best (or maybe the only?) way to solve this problem was using a stack to simulate recursion (which is probably the dumbest solution that actually works).

Somehow we need to actually teach students how to think - otherwise they grow up to be Professors that teach this way, and poison more minds. Teaching someone to actually think about and understand what they are doing is hard, if not impossible - but I do appreciate reading posts like this. I feel better knowing someone is trying.

My favorite example to teach recursion is that of printing a linked list in reverse. Heaven help you, it's a slow process to handle iteratively.

To print the contents of said list in forward order (assume C++ classes)

printr(list *node)
{
    if (NULL == node)
        return;
    cout << node->data;
    printr(node->next);
}

To print in reverse order merely requires moving the cout to after the call to printr(node->next).
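For comparison, a JavaScript version of both orders, collecting into an array instead of printing (the {data, next} node shape is assumed, null-terminated):

```javascript
// Forward: emit the head, then recurse.
function collectForward(node, out) {
  out = out || [];
  if (node === null) return out;
  out.push(node.data);
  return collectForward(node.next, out);
}

// Reverse: recurse first, then emit -- the same one-line move
// described above, just with push instead of cout.
function collectReverse(node, out) {
  out = out || [];
  if (node === null) return out;
  collectReverse(node.next, out);
  out.push(node.data);
  return out;
}
```

The call stack does the reversal for free; the iterative equivalent needs an explicit stack or a reversed copy of the list.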



Binet's formula with the rounding is sort of cheating. The best exact algorithm I know of for Fibonacci uses matrix exponentiation, and it runs in log(n) time.

[1 1] ^ n
[1 0]

puts fib(n) in the top right element (easy to prove). With binary exponentiation, it can be extremely fast.

My last comment isn't entirely clear - what i mean with that equation is to raise a 2x2 matrix to the nth power.
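A sketch of that in JavaScript, squaring the matrix at each step so only about log2(n) matrix multiplications are needed (exact while the results fit in a double):

```javascript
// 2x2 matrix product.
function matMul(a, b) {
  return [
    [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
    [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]
  ];
}

// fib(n) via binary exponentiation of [[1,1],[1,0]]:
// the top-right entry of the nth power is fib(n).
function fibMatrix(n) {
  var result = [[1, 0], [0, 1]];   // identity matrix
  var base = [[1, 1], [1, 0]];
  while (n > 0) {
    if (n & 1) result = matMul(result, base);
    base = matMul(base, base);     // square at each step
    n >>= 1;
  }
  return result[0][1];
}
```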

Eric,

I observe that it is usually easier to demonstrate that recursive algorithms do what they are designed for. This is particularly well illustrated in your Tower of Hanoi examples, where you give an iterative algorithm, which to me, at least, is somewhat cryptic: I can understand its description, but it is not straightforward to see why it works.

I think code's being clearly correct is an important benefit. Whether this benefit is more or less important than efficiency depends on the circumstances. As an (oversimplified) rule of thumb, I make my code as clearly correct as possible, and then if it is not fast enough, I sacrifice some of this clarity for speed.

To summarize: I think there are strong arguments for the recursive forms of your functions, and I don't agree with your unqualified characterisations of them as 'the worst' or 'terrible'.

If the naive Fibonacci example isn't fast enough, I'd try a recursive definition based on a helper function returning adjacent pairs before I resorted to your iterative definition. I think your iterative definition would be faster, but that you have paid a cost in code clarity to make it so.
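The adjacent-pairs idea that comment sketches might look like this in JavaScript (the helper name and the F(0) = 0 convention are mine). Each level makes a single recursive call, so it is O(n) like the iterative version while keeping a recursive shape:

```javascript
// fibPair(n) returns the adjacent pair [fib(n-1), fib(n)],
// so each value is threaded up the recursion instead of recomputed.
function fibPair(n) {
  if (n === 1) return [0, 1];            // [fib(0), fib(1)]
  var prev = fibPair(n - 1);
  return [prev[1], prev[0] + prev[1]];
}

function fib(n) { return fibPair(n)[1]; }
```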

Eric,

"What if you pass in too many arguments? Too few? Strings? Dates? Arbitrary COM objects? null? undefined? "

This makes a number of presumptions about the working environment that weren't specified. A strongly, statically typed language deals with these errors already (at compile time).

The same goes, for that matter, for Steven Bone's comments about the recursive method and stack frames. Most introduction to programming students are (hopefully) not being bombarded with such implementation details; the goal of such courses should be to teach the concepts (and the relevance of such comments varies a lot by language and platform).

Fundamentally, the students are being taught that a) there are recursive algorithms that are easy to write and easy to verify (as astrolabe noted), a valuable property to say the least and b) that recursion is even possible, something that most are not familiar with. If a student is objecting on grounds of the recursive method requiring too many function calls, I wouldn't think he's advanced--it shows, in fact, that he's entirely missing the point.

Perhaps there are better examples, but the student is (hopefully) being taught how to express concepts in code. That's what programming languages are all about, after all; expressing abstract concepts in an unambiguous, machine-comprehensible manner. If the student is worried, at such an early stage (freshman year!) about real-world applicability, perhaps he should consider enrollment in such an institution as DeVry University, where the cut-and-paste approach to software development is not frowned upon! That is, by far, the fastest way to real-world relevance.

Bogdan Sintoma wrote, "These days its seems that the algorithms optimizations isn't the top priority for most of the real applications (unfortunately), instead the top priority is to reduce the developing time and of course the cost," and I can't help but reply. In a nutshell, Bogdan, programmer time is, in nearly every domain, far more expensive than CPU time. Even more to the point, incorrect execution is nearly always more expensive than slow execution. This is why garbage collection (which can, in fact, be far faster than manual memory management, as an aside), strong typing, and a complete *non* focus on preemptive optimization is so prevalent.

That, or because most programmers are stupid. Correctness hasn't necessarily improved by leaps and bounds.

But consider for a minute a tail-recursion-optimized language such as Scheme... and/or an implementation of the fib function that is linearly recursive. This is O(n), and uses recursion.

(define (fib n)
  (letrec ((fh (lambda (n nxt rs)
                 (cond ((= n 0) rs)
                       (else (fh (- n 1) (+ nxt rs) nxt))))))
    (fh n 1 0)))

(fib 3)

Andrew,

tail recursion and iteration really are the same thing. Your fib implementation is a good example of this, since it uses the same algorithm fib_2 does.

This wasn't an issue for me. I learned recursion before iteration. My first programming language was scheme -- iteration was introduced after recursion as "tail-recursion".

Here is the sample provided by Andrew Gwozdziewycz reworked in OCaml (another programming system with tail call optimization):

<pre>
open Printf;;

let fib = function n ->
  let rec fh fh_num fh_km1 fh_km2 =
    if fh_num = 0 then fh_km2
    else fh (fh_num - 1) (fh_km1 + fh_km2) fh_km1
  in
  fh n 1 0;;

printf "fib: %d" (fib 9);;
</pre>

I first saw recursion in programming demonstrated as a way of computing factorials. Later (in college) I was taught Towers of Hanoi, Fibonacci, and the usual. I don't believe this hurt me, because the instructor was careful to warn us of the inefficiency. This example coincidentally stresses the principle that recursion often gives very easy-to-program and understand solutions to problems, but is rarely the most efficient way to solve something.

This also reminded me of the closed formula for the Fibonacci sequence, which you can derive from generating functions:

http://mathworld.wolfram.com/BinetsFibonacciNumberFormula.html

For the towers of Hanoi, a significant advantage of the recursive solution is that it goes hand-in-hand with an inductive proof of its correctness. That is, you can show the recursive code is correct using a pretty straightforward inductive argument.

But showing that the iterative code is correct is harder ... of course correctness proofs are rarely of value in practice, but they are certainly good things when you can get them so easily.

Indeed, how do you convince a reader of the non-iterative program who doesn't believe it works in every case?

And the concern for the O(1) versus O(n) memory usage is a red herring: it takes 2**n moves, and so it is dominated by time. If you are solving a 64-disk instance, then there's hardly any difference between using 1 unit of memory as opposed to using 64 units when you have to wait thousands of years for a solution.

Eric, your very premise is fundamentally flawed. Others such as Andrew Gwozdziewycz hinted at it, but even more directly: if one uses a real compiler your assumptions are rendered valueless.

Using the Franz Allegro Lisp compiler (first compiler I had at hand - I'm sure others can do it too), and this definition:

(defun fib (x)
  (cond ((or (= x 1) (= x 2)) 1)
        (t (+ (fib (- x 1)) (fib (- x 2))))))

compiled and tracing Fib I see exactly one call. If I force it to interpret, say for pedagogical reasons, I can see the recursive call structure when tracing.

Pedagogically if you want to teach students to be good programmers, you first teach them the tools of the trade. I would much rather see anyone working for me write that than your fib_2. Why? It's easier to maintain, it's easier to see what it does, and I guarantee it took less time to write and debug. It's good clean functional code with no side effects.

But more importantly it works and it works right. And with my modern compiler it works efficiently (because most modern compilers can get rid of tail recursion quite easily). Why do I want to teach my students to second guess and try to out-think the compiler? They should write what's simplest, then they should profile the thing, and if it turns out they were forced to use a dumb compiler for some reason, THEN and ONLY THEN should you and they worry about the next steps to optimize. Trying to optimize before understanding the problem needing optimizing is a huge part of the problem in the way most students are currently taught and try to program. Please don't perpetuate this.

I agree that assignments are more interesting if they have a purpose. And computing Fibonacci numbers for no apparently good reason is a little pointless. More grounded tasks always help. But for just looking under the hood, sometimes it helps when they can see the call graph, and not have to worry about all the underlying processing, because it's simple. As such I would be more inclined to use an example like that in class to show the idea without getting mired in the details, and then have them practice with something more meaningful, maybe like sort as others have mentioned.

Couldn't agree more!

The first recursion problem I was set at university (after the obligatory introduction via factorial) was, "Write a program that draws a tree."

Obvious example when you think about it, I guess ;-).

The second one was, "Write a program which draws a randomly winding avenue of trees receding into the distance."

They also gave us a bonus question; "Write a program that draws something cool."

To put this in context, this was an introductory paper in Pascal - they did a bit of spoon feeding, and pretty much everyone "got it" - and at the same time, several rather exceptional students wrote some damned fine code.

I think the same lecturer set the following question in a programming competition I was in (my team came third last as I recall) ... you are dropped in the middle of a forest. It's a commercial forest of infinite size, and the trees are planted on the intersections of a grid of 1 metre squares. So without loss of generality, your position is (x,y) where 0<x<=y<1. It's a mature forest, and at your eye level all you can see are trunks of uniform radius r, i.e. the branches are all well above your head. Assuming that you yourself have radius 0, and you are given (x,y,r) as inputs ... how many tree trunks are fully visible at eye level? Partially visible?

"To learn recursion, you must learn recursion.."

I think this is the best expression about recursion I have ever read so far...

Kevin: I think you should stop hard-coding your (fib 6) call (or whatever value you used for testing). Otherwise your compiler will perform the computation for you at compile time, and your fib function will just become a 'return the pre-computed result' function.

Teaching recursion with the Fibonacci sequence as an example is not a good idea. Mostly because it can be taught without recursion.

Here is my VBA function:

Function Fibonacci(N As Single)
    Fibonacci = Round(((Sqr(5) + 1) / 2) ^ (N + 1) / Sqr(5))
End Function

This is a great article just the same. Thanks for posting it.
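The same closed form (Binet's formula) can be sketched in Python, here indexed to match the article's fib(1) = fib(2) = 1 (the VBA version above uses N + 1 in the exponent, i.e. a zero-based index); either way, floating-point rounding only stays exact up to roughly n = 70 with doubles:

```python
import math

def fib_closed(n):
    # Binet's formula: fib(n) is the nearest integer to phi**n / sqrt(5).
    # Exact only while double precision holds (roughly n <= 70).
    phi = (1 + math.sqrt(5)) / 2
    return round(phi ** n / math.sqrt(5))
```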

I'm with Kevin on this one -- teaching introductory students to worry about writing obfuscated code like the fib_2 example just in case they have to program with a dumb compiler that can't even optimize out tail-recursion seems a bad idea.

PS: I do think that the Fibonacci problem is probably boring to most students, and I'm all for finding more interesting problems, so I do agree with that; I'm just not a fan of code that's written in a more difficult manner than it needs to be just because the writer didn't trust the compiler. It makes it harder for humans to read and hides your intent, which might actually make it harder for a good compiler to optimize. And yes, I know, this is a simple example and fib_2 isn't *that* difficult to read; my objection is more about the underlying premise of this article -- that it's better to write more confusing code even if your tools allow more readable code to run just as efficiently.

PPS to Andy: Huh? I doubt that Kevin was hard-coding a call to a particular value like (fib 6). Optimizing out tail-recursion has been common among compilers for decades, so I'm fairly confident that what he says about Franz's Lisp compiler is correct. It's just not that rare a thing to see in compilers.

<i>with a dumb compiler that can't even optimize out tail-recursion</i>

This isn't a counter-argument, but the Java compiler sadly doesn't optimize out tail recursion. It's a well known bug, but there doesn't seem to be much movement in the Java camp to fix this.

The .NET byte code does support this, from what I've read.

Not that Java and .NET are my favorite languages/frameworks, but I think this is yet another place where .NET is doing better than Java.

Fibonacci as a teaching point is nice in an algorithms class because you can then get into finding closed form solutions for recursive functions.

I suppose, like others here, I tend to disagree with the premise of this article. I think that an obviously correct implementation that closely resembles the problem is one of the greatest reasons for using and teaching recursion.

:) I realize that this post is now about 3 months old, but I noticed that nobody has mentioned memoization! Was this because it is too obvious?

Personally, my first instinct is to write the naive Fibonacci and then memoize it. I've seen this compute numbers well beyond the bounds of a 32-bit integer fast enough that I didn't feel the need to optimize it further. For whatever that's worth.
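A sketch of that approach in Python, where functools.lru_cache does the memoizing; each fib(k) is computed only once, so the naive exponential recursion becomes linear:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Same naive recursion as fib_1, but the cache means each
    # subproblem is solved only once.
    if n <= 2:
        return 1
    return fib(n - 1) + fib(n - 2)
```

With Python's arbitrary-precision integers this sails well past 32 bits, as the commenter says.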

How do you get "three months" out of "May 19th, 2004"? This post is 34 months old.

I am not sure why it is kicking up such a fuss now. People keep on posting it to reddit. It is not a particularly interesting post.

Another simple problem (courtesy of SICP) is to compute exponentiation, i.e. a ^ b where both 'a' and 'b' are integers. This is quite cool, because you get to not only _understand_ recursion, but also learn some complexity analysis: O(n) vs O(log n), etc.
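A Python sketch of the O(log n) version from SICP (exponentiation by squaring); the O(n) version simply multiplies by a once more per call:

```python
def power(a, b):
    # a ** b for integer b >= 0, by repeated squaring:
    # O(log b) multiplications instead of O(b).
    if b == 0:
        return 1
    half = power(a, b // 2)
    return half * half if b % 2 == 0 else half * half * a
```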

A sample homework we had in our computer science class was to print out a linked list backwards.
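That one is a nice fit for recursion because the call stack does the reversing for you; a Python sketch (the Node class is just scaffolding for the example):

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def print_backwards(node):
    # Recurse all the way to the end first, then print each value
    # on the way back out of the calls.
    if node is None:
        return
    print_backwards(node.next)
    print(node.value)
```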

The best way to solve the problem is by using a stack structure and the Composite design pattern, which will help you reduce the LOC compared to recursion.

About the Composite pattern: http://www.exciton.cs.rice.edu/JavaResources/DesignPatterns/composite.htm

Design patterns provide a *good and exact* solution to many of the problems where recursion crops up. Every repeated call also eats more memory! So the Composite pattern should be taught before the recursion class 🙂

---

ananthmv

One thing to consider is that proletariat languages generally rely on iteration. The iterative solutions are better, primarily because you've thought them through more. If you implement tail recursion and memoization, the run times of the recursive solutions are the same as of the iterative. The memory usage is still slightly greater (functional code tends to be slightly more memory-intense), but not by the large factors shown.

That's one typical trade-off with programming functionally: functional code parallelizes much better (since you don't change any data in variables, order of operations doesn't matter). If you run it on a multicore cell processor, it is much faster. On the other hand, since you create new data instead of overwriting old data, the memory usage tends to be greater.

Sounds like someone has been reading code complete...

Would this work?

http://en.wikipedia.org/wiki/Ackermann_function

Interesting blog entry and ensuing discussion. Netting it out: There are a lot of ways to solve a problem. Recursion is often inefficient in space and time. However, when you introduce a topic, the particular example you give is not nearly as important as the independent student exercise.

In high school, I had a brilliant AP Computer Science teacher. He presented fibonacci as a recursive solution, and then he had us step through the algorithm on paper. I guess you can say, "Real programmers don't use pencils!" but the fact of the matter is that this was a useful exercise for understanding the difference in control flow from a while or for loop. This is a form of microanalysis. (Another form of microanalysis is to have students visualize the stack frames stacking on top of each other with each function call, in order to appreciate that this function is LIFO-to-the-extreme under the hood.) After we traced through the program, our teacher said, "Now your [independent student] assignment is to write this using the iterative approach you are familiar with."

Note that this is a completely different perspective, and it demonstrates that your argument that fibonacci is only good in principle is a strawman. The real failure is always going to be in the assignments you give students, because, as John C. Reynolds pointed out in The Craft of Programming (1981), the most difficult task in Computer Science is teaching an experienced programmer how to program. I agree with Reynolds: giving assignments at the right level of difficulty is the only way to attract students who are often put off by "polemics and formalism". Therefore, I posit that my AP Computer Science teacher did a good job introducing fibonacci.

At the end of the lesson plan on recursion, my teacher summarized the following key points:

* Recursion is often more elegant than an iterative solution. (To emphasize this, he presented Towers of Hanoi. I cannot remember what he said about Towers of Hanoi as it relates to an iterative alternative.)

* Recursion requires the parameters to be put on the stack for each function call.

* Recursion does not stop until the base case is reached, and until then memory continues to be allocated, stack frame on top of stack frame. For inputs that require many recursive calls, the program may run out of memory...

* Memory is finite and if you use too much of it, then your program will not be robust. It is important that your program does not crash unexpectedly.

* Iterative solutions tend to be faster and require less space, because there is less memory allocation (the implication being that memory allocation is time-consuming and necessitates more memory being used).
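For reference, the Towers of Hanoi solution mentioned in the first bullet can be sketched in a few lines of Python; returning the move list instead of printing it is my own choice, purely for illustration:

```python
def hanoi(n, source, target, spare):
    # Move n disks from source to target: park n-1 disks on the
    # spare peg, move the largest disk, then bring the n-1 back.
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))
```

An iterative alternative exists, but it is far less obvious, which is presumably the elegance point the teacher was making.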

All in all, it was a pretty good introduction to recursion. However, I think it should have been followed up by useful examples of recursion. It definitely could have been better, and I think the first step to making it better would be to have the students understand the difference between microanalysis and macroanalysis. When writing recursive algorithms, you should decompose the problem using macroanalysis. J. Stanley Warford, in Computer Systems (1st edition), has some good examples of the microanalytic versus macroanalytic approach; however, he lacks enough focus on cases where you want to use recursion. He mentions recursive descent parsing and mutual recursion, but he does not present code for a recursive descent parser.

It is also worth noting that we use recursive thinking in mathematics, often without realizing it. Calculating the determinant is one common example. I recently recommended to someone that they implement Lewis Carroll's method for calculating the determinant of a square matrix.
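Dodgson's condensation is its own algorithm, but the textbook recursive definition of the determinant (cofactor expansion along the first row) makes the point about recursive thinking; a Python sketch:

```python
def det(m):
    # Determinant by cofactor expansion: an n x n determinant is
    # defined in terms of n determinants of (n-1) x (n-1) minors.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total
```

(This is O(n!), so it is a teaching example, not a production method.)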

That being said, recursion is clearly a "Threshold Concept," so to speak, because once students learn it, the knowledge they gain is transformative, irreversible, and integrative. For these reasons, it is something students should master.

If you would interview me, I would probably tell you to check your math, because the first Fibonacci number is 0, not 1. They don't go "1, 1, 2, 3, 5, 8, ..." They go "0, 1, 1, 2, 3, 5, 8, ..."

So I guess I would have found a bug in the very definition of your exercise. Does that mean I get the position now? 😉

You have given very good points, which are necessary for a good programmer.

I want to know: in what cases can we not find an iterative solution for a recursive one? That is, cases where we cannot change a recursive program into an iterative one.

Please mail me....

You are using a very inefficient recursive algorithm for Fibonacci. I was asked to solve the problem at school and came up with this one.

where the Fibonacci number at position 10 is FibonacciOf(10, 0, 1).

(C#)

public static long FibonacciOf(int Times, long TwoNumbersBack, long OneNumberBack)
{
    if (Times == 0)
        return TwoNumbersBack;
    else if (Times == 1)
        return OneNumberBack;
    else if (Times == 2)
        return OneNumberBack + TwoNumbersBack;
    else
        return FibonacciOf(Times - 1, OneNumberBack, OneNumberBack + TwoNumbersBack);
}

I actually programmed up a fast recursive fibonacci number generator that uses identities with 2n, 2n+1, 3n, 5n, and so on to produce a faster than O(n) solution. Granted, that still doesn't beat the analytic solution, but it's pretty cool. (I don't know the actual big O, but statistics tell me it's near O(n^.73).) Here's a snippet of the code in Java.

else if (n % 2 == 0) {
    int fm1 = fibonacci(n / 2 - 1);
    int fp1 = fibonacci(n / 2 + 1);
    // F(2k) = F(k+1)^2 - F(k-1)^2
    return fp1 * fp1 - fm1 * fm1;
}
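The full "fast doubling" scheme behind those identities is worth writing out; a Python sketch using the usual F(0) = 0, F(1) = 1 convention, with O(log n) multiplications via F(2k) = F(k) * (2F(k+1) - F(k)) and F(2k+1) = F(k)^2 + F(k+1)^2:

```python
def fib_fast(n):
    # Returns the pair (F(n), F(n+1)) using the doubling identities:
    #   F(2k)   = F(k) * (2*F(k+1) - F(k))
    #   F(2k+1) = F(k)^2 + F(k+1)^2
    if n == 0:
        return (0, 1)
    a, b = fib_fast(n // 2)      # a = F(k), b = F(k+1)
    c = a * (2 * b - a)          # F(2k)
    d = a * a + b * b            # F(2k+1)
    return (d, c + d) if n % 2 else (c, d)
```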

"[...]suppose you have a pair of rabbits that every year gives birth to a pair of rabbits[...]"

I guess you meant "you have a pair of rabbits that every year gives birth to ONE rabbit"

No, I meant what I said. Do the math, you'll see that it works out. -- Eric