A row of Russian matryoshka nesting dolls, increasing in size. Photo · Hafsa LR / Unsplash
A doll containing a doll containing a doll containing a smallest doll. Recursion is exactly that — except for ideas.

The chef can do something deeply weird: a function can call itself. The same function. With a simpler argument. And not paradoxically, but cleanly. Each call gets its own stack frame, its own local variables, its own copy of the parameters. The function eventually hits a small enough case that it stops calling itself and returns. The chain unwinds, results combine, and the answer falls out.

This trick — recursion — is the natural fit for a wide class of problems: anything that contains a smaller version of itself. Tree traversals, file-system walks, grammar parsing, factorial, Fibonacci, divide-and-conquer sorts, every game tree, every search. Whenever the problem has a self-similar structure, the cleanest code is recursive.

Anatomy of a recursive function

Every recursive function has the same shape:

  1. A base case — the smallest version of the problem you can answer directly. Without one, the function calls itself forever and the chef runs out of stack.
  2. A recursive step — break the problem into smaller pieces, call yourself on the smaller pieces, combine the results.
// factorial — n! = n × (n-1) × ... × 1
int factorial(int n) {
    if (n <= 1) return 1;          // base case
    return n * factorial(n - 1);      // recursive step
}

factorial(5);
// = 5 * factorial(4)
//       = 5 * 4 * factorial(3)
//             = 5 * 4 * 3 * factorial(2)
//                       = 5 * 4 * 3 * 2 * factorial(1)
//                                       = 5 * 4 * 3 * 2 * 1 = 120

Each call gets a fresh frame on the stack. The chef pushes all the way down to factorial(1), which returns 1 directly, and then the stack unwinds, multiplying as it goes.

factorial(5)
 └ factorial(4)
    └ factorial(3)
       └ factorial(2)
          └ factorial(1)   ← base case: returns 1 directly
Each indent is a new stack frame, pushed on the way down; each result flows back up as the frames pop.

Where recursion shines

Tree traversal

// visit every node in a binary tree (in-order)
void traverse(Node* n) {
    if (!n) return;      // base: empty tree
    traverse(n->left);      // recurse
    visit(n);
    traverse(n->right);     // recurse
}

Trees are recursive structures: a tree is either empty, or a node with a value and two trees. The traversal mirrors the structure exactly. To do this iteratively you'd need an explicit stack — much messier.
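For contrast, here's what that explicit stack looks like — a sketch assuming a minimal `Node` type, using `std::stack` to do by hand what the call stack did for free:

```cpp
#include <stack>
#include <vector>

struct Node {
    int value;
    Node* left;
    Node* right;
};

// in-order traversal without recursion: an explicit stack replaces
// the call stack, and the control flow gets noticeably messier
std::vector<int> traverse_iter(Node* n) {
    std::vector<int> out;
    std::stack<Node*> todo;
    while (n || !todo.empty()) {
        while (n) {                    // walk down the left spine,
            todo.push(n);              // remembering every node
            n = n->left;
        }
        n = todo.top(); todo.pop();    // leftmost unvisited node
        out.push_back(n->value);       // visit
        n = n->right;                  // then its right subtree
    }
    return out;
}
```

The two inner loops and the manual push/pop are exactly the bookkeeping the recursive version gets for free from the call stack.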

Divide and conquer

// merge sort — split, sort each half, merge
void mergesort(std::vector<int>& v) {
    if (v.size() <= 1) return;                           // base
    std::size_t mid = v.size() / 2;
    std::vector<int> left(v.begin(), v.begin() + mid);   // first half
    std::vector<int> right(v.begin() + mid, v.end());    // second half
    mergesort(left);                                     // recurse
    mergesort(right);                                    // recurse
    std::merge(left.begin(), left.end(),                 // combine the
               right.begin(), right.end(), v.begin());   // sorted halves
}

Mergesort, quicksort, the fast Fourier transform, binary search — the entire family of divide-and-conquer algorithms is naturally recursive. Mergesort and the FFT run in O(n log n) (quicksort does too, on average), and binary search in O(log n) — in each case the log n factor comes from the depth of the recursion tree.
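Binary search is the smallest member of the family, and it makes the depth claim concrete: each call discards half of the remaining range, so the recursion is at most about log₂ n deep. A minimal sketch:

```cpp
#include <vector>

// binary search on a sorted vector over [lo, hi) — returns the
// index of x, or -1. Each call halves the range: depth is O(log n).
int bsearch_rec(const std::vector<int>& v, int x, int lo, int hi) {
    if (lo >= hi) return -1;                   // base: empty range
    int mid = lo + (hi - lo) / 2;
    if (v[mid] == x) return mid;               // found it
    if (v[mid] < x)
        return bsearch_rec(v, x, mid + 1, hi); // recurse on right half
    return bsearch_rec(v, x, lo, mid);         // recurse on left half
}
```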

The dangers

Three things that bite:

  1. No base case — the recursion never bottoms out, and each call adds a stack frame until the program crashes with a stack overflow.
  2. Too deep, even with a base case — the stack is a fixed size (typically a few megabytes), so a recursion a million levels deep overflows it anyway. Linear recursions over large inputs are the usual culprit.
  3. Repeated work — a naive recursive Fibonacci recomputes the same subproblems over and over, turning a linear amount of work into an exponential one.
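The repeated-work bite is the easiest to demonstrate. Here's a sketch comparing a naive recursive Fibonacci with a memoised one — the `memo` vector is my addition, a cache of answers already computed:

```cpp
#include <cstdint>
#include <vector>

// naive: fib(n) calls fib(n-1) and fib(n-2), which both call
// fib(n-3)... the same subproblems are recomputed exponentially
std::uint64_t fib_naive(int n) {
    if (n <= 1) return n;
    return fib_naive(n - 1) + fib_naive(n - 2);
}

// memoised: store each answer the first time it's computed; every
// subproblem is then solved exactly once, so the work is linear
std::uint64_t fib_memo(int n, std::vector<std::uint64_t>& memo) {
    if (n <= 1) return n;
    if (memo[n]) return memo[n];                           // cached
    return memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
}
```

`fib_naive(50)` makes billions of calls; `fib_memo(50, memo)` makes about a hundred.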

Recursion vs iteration

Anything you can write recursively, you can write iteratively. Anything you can write iteratively, you can write recursively. They're equivalent in power. The difference is fit: recursion mirrors self-similar structures (trees, divide-and-conquer, game trees) and keeps its state on the call stack; iteration fits flat, linear passes, keeps its state in a few variables, and never worries about stack depth.

Modern compilers can often turn tail-recursive functions (where the recursive call is the last thing the function does) into iterative loops automatically — eliminating the stack frame growth. C++ does this opportunistically; some functional languages guarantee it.
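The factorial above is not tail recursive — after the inner call returns, there's still a multiply left to do. Rewritten with an accumulator parameter it is, and a compiler that applies tail-call optimisation can turn it into a loop:

```cpp
// tail-recursive factorial: the recursive call is the very last
// action, so the current stack frame can be reused — if the
// compiler applies the optimisation (C++ doesn't guarantee it)
long factorial_tail(int n, long acc = 1) {
    if (n <= 1) return acc;                  // base: acc is the answer
    return factorial_tail(n - 1, acc * n);   // nothing left to do after
}
```

The trick is that the "pending multiply" moves into the accumulator, so no work remains on the stack between calls.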

Why this matters for AI

Game-tree search — Minimax (Week 38), Alpha-Beta (Week 43) — is recursion at its purest: "to evaluate this position, evaluate each move's resulting position, then combine". The natural shape of the algorithm is recursive, and so is the natural shape of the implementation.
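To make that shape concrete, here's a toy minimax over an explicit tree of position scores — the `GameNode` type is my illustration, not the Week 38 implementation:

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// toy game tree: a leaf holds a final score; an inner node
// holds the positions reachable by one move
struct GameNode {
    int score = 0;                  // used only at leaves
    std::vector<GameNode> moves;    // empty => leaf position
};

// "evaluate each move's resulting position, then combine" —
// combine means max on our turn, min on the opponent's
int minimax(const GameNode& n, bool maximising) {
    if (n.moves.empty()) return n.score;            // base: leaf
    int best = maximising ? INT_MIN : INT_MAX;
    for (const GameNode& m : n.moves) {
        int v = minimax(m, !maximising);            // recurse, turn flips
        best = maximising ? std::max(best, v)
                          : std::min(best, v);
    }
    return best;
}
```

The recursion bottoms out at leaf positions, and the answers combine on the way back up — exactly the push-then-unwind pattern from factorial, with min/max in place of multiplication.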

Backpropagation through computation graphs is also recursive in spirit: walk back through the graph, each node propagating gradients to its children. Modern AI frameworks usually flatten this into iterative form for performance, but the conceptual model is recursive.

When the problem looks like itself, the solution probably should too.

Try it yourself
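One exercise in the spirit of the chapter: a number's digit sum is self-similar — the digit sum of 1234 is 4 plus the digit sum of 123. Try writing it before peeking at this sketch:

```cpp
// digit sum: peel off the last digit, recurse on the rest
int digit_sum(int n) {
    if (n < 10) return n;                  // base: single digit
    return n % 10 + digit_sum(n / 10);     // recursive step
}
```

Then try the same for reversing a string, or counting the nodes in a binary tree — both have the same base-case-plus-smaller-problem shape.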

What's next

Now that we have recursion, we have the toolkit for a classic problem: putting things in order. Next week we meet five sorting algorithms — three of them historical curiosities, two of them genuinely useful, and the rule for picking between them.

Week 33 is Sorting.

Photo credit

Photo free under the Unsplash license. Dolls · Hafsa LR.

Quick check

1. Every recursive function must have…
  A. A static variable
  B. A base case that terminates the recursion
  C. A return value of int

Answer: B. Without a base case, the recursion never bottoms out and the stack overflows.

2. Without a base case, what happens?
  A. Infinite loop in heap memory
  B. Stack overflow — the call stack grows until the program crashes
  C. A compile error

Answer: B. Each call adds a frame to the stack. The OS gives every thread a fixed-size stack — exhausting it kills the program.

3. What is 'tail recursion'?
  A. Recursion that happens at the end of the program
  B. When the recursive call is the last thing the function does — some compilers can optimise it into a loop
  C. Recursion via a tail-pointer

Answer: B. Tail-call optimisation reuses the current stack frame. Functional languages rely on it; C++ does it best-effort.