Davide Spataro

Climbing Stairs

easy · Dynamic Programming · 22 min read
dp fibonacci recursion memoization matrix-exponentiation

Introduction

A staircase of n steps. At each move you can hop one step or two. The question is plain: in how many distinct ways can you reach the top? Most candidates write a recursive solution in fifteen seconds and stop. That fifteen-second solution is exponential, and the gap between “obvious” and “good enough” is exactly why this problem keeps appearing.

This is the rite-of-passage problem for dynamic programming. Four approaches walk a careful arc — exponential recursion, memoized recursion, bottom-up sweep, and a matrix-exponentiation reframing — and each one teaches a distinct DP move that reappears across half a dozen later problems. The lesson isn’t the algorithm. It’s the discipline of asking, “my recursion is correct — what is it doing twice?”, and recognizing the answer.

The interview test isn’t writing the recurrence. It’s noticing what the recurrence is doing twice — and trading work for memory.

If you’re early in DP prep, walk all four approaches in order — the lessons compound. If you can write the bottom-up answer in your sleep, jump to the matrix-exponentiation section: it’s the unfamiliar one and the one most worth knowing.

What the interviewer is actually testing

  1. Recurrence recognition. Can you state, in one sentence, why the answer for n stairs equals the answer for n−1 plus the answer for n−2? Candidates who can’t articulate that miss the entire problem.
  2. Awareness of overlapping subproblems. The naive recursion is correct; recognizing that it’s exponential because subproblems repeat is the DP pivot.
  3. Space discipline. The bottom-up array is O(n); collapsing to two variables is O(1). Spotting that compression is a senior signal.
  4. Knowing the next level. Matrix exponentiation isn’t required for this problem in most interviews — but mentioning it as the O(log n) option signals you’ve been deeper into algorithmics than the median candidate.

Problem statement

Figure · the five ways to climb 4 stairs. Each colored row of arcs is one distinct climbing sequence. Short arcs are 1-step hops; tall arcs are 2-step hops. Five sequences end at stair 4.
Problem

You’re climbing a staircase with n steps. At each move you can hop either one step or two. Return the number of distinct ordered sequences of hops that bring you exactly to the top.

Example 1
Input n = 2
Output 2
Either two single-step hops, or one two-step hop. That's the simplest non-trivial case and the seed of the recurrence.
Example 2
Input n = 4
Output 5
The five distinct sequences: 1+1+1+1, 2+1+1, 1+2+1, 1+1+2, 2+2. The figure above visualizes them.
Example 3
Input n = 7
Output 21
By n = 7 the answer matches the 8th Fibonacci number — a clue, not a coincidence.

Clarifying questions — and why each one matters

Clarification questions

Q What's the upper bound on n?
A The most important clarification. For n ≤ 45, a 32-bit int holds the answer. By n = 100 the count exceeds 2⁶³ and you need big integers. By n = 10⁹ the linear approaches are too slow and matrix exponentiation becomes the only viable choice.
Q Are the hops just 1 and 2, or can the step set be arbitrary?
A Default is {1, 2}. If the interviewer generalizes to an arbitrary step set S, the recurrence becomes f(n) = sum over s in S of f(n − s) — same DP shape, just a wider summation. The matrix-exponentiation trick still applies but the matrix gets bigger.
Q Does the order of hops matter?
A Yes. The problem asks for distinct *ordered* sequences. 1 + 2 and 2 + 1 count as two different ways. If the interviewer says order doesn't matter, the problem becomes a partition-counting problem and the recurrence changes.
Q Does n = 0 count as one way (the empty climb) or zero ways?
A Convention varies. The cleanest mathematical convention is f(0) = 1 — the empty sequence is a valid 'way' to climb zero stairs. Without that, the recurrence has an awkward special case. Confirm with the interviewer; the LeetCode statement assumes n ≥ 1, so n = 0 doesn't usually arise.
Q Are negative n or non-integer n possible?
A No, by problem definition. If they were, the recurrence would need bounds checks; the standard formulation guarantees a positive integer.
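The arbitrary-step-set generalization mentioned above is a small change to the bottom-up sweep. A sketch, assuming the recurrence f(n) = sum over s in S of f(n − s) with f(0) = 1; `countWays` and the `steps` parameter are illustrative names, not part of the original problem:

```cpp
#include <vector>

// Hypothetical generalization: count ways to climb n stairs when each hop
// may be any size in `steps`. f(i) = sum over s in steps of f(i - s),
// with f(0) = 1 (the empty climb).
long long countWays(int n, const std::vector<int>& steps) {
    std::vector<long long> f(n + 1, 0);
    f[0] = 1;
    for (int i = 1; i <= n; ++i) {
        for (int s : steps) {
            if (s <= i) f[i] += f[i - s];
        }
    }
    return f[n];
}
```

With steps = {1, 2} this reproduces the original problem; wider step sets just widen the inner summation.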

A mental framework — the recurrence

Think of the very last hop. To stand on stair n, the previous position was either stair n−1 (and you took a 1-step) or stair n−2 (and you took a 2-step). Those are the only two cases — they don’t overlap, and together they cover everything.

So the number of ways to reach stair n is the number of ways to reach stair n−1 plus the number of ways to reach stair n−2:

f(n) = f(n − 1) + f(n − 2)

with base cases f(0) = f(1) = 1. That’s the entire algorithmic content. Everything in the rest of the chapter is how to evaluate this recurrence efficiently.

If the recurrence looks familiar, it should: f is the Fibonacci sequence, shifted by one. The chapter’s four approaches are the standard four ways to evaluate Fibonacci, and the lessons transfer to every recurrence with the same “constant-state” shape.

Discussion

Approach 01

Naive recursion — the recurrence, written verbatim

Translate the recurrence directly into recursive calls. Correct, exponential, your starting line.

Time O(2ⁿ) Space O(n)

The interview narration

Say this out loud:

“The recurrence is f(n) = f(n−1) + f(n−2) with f(0) = f(1) = 1. The most direct translation is two recursive calls. That gives the right answer but the call tree branches at every level — O(2ⁿ) time. I’ll write it as a baseline then look for what’s expensive.”

How it works, step by step

The function takes n, and:

  1. Base cases: if n ≤ 1, return 1 — there’s exactly one way to “climb” 0 or 1 stairs.
  2. Recursive case: otherwise, return climbStairs(n - 1) + climbStairs(n - 2).

That’s it. The code is a transcription of the recurrence.

The code

naive-recursion.cpp
int climbStairs(int n) {
    if (n <= 1) return 1;
    return climbStairs(n - 1) + climbStairs(n - 2);
}

Three lines. Correct on every input. Will hang on n = 50.

Why it’s exponential — look at the tree

Figure · recursion tree for f(5). Each call splits into two: f(k−1) and f(k−2). The same subproblems recur many times: f(2) is computed 3 times, f(3) 2 times, f(1) 5 times. That repetition is what makes the naive recursion exponential.

The recursion tree branches into two children at every node. The leaves are calls with n ≤ 1, each returning 1. The total number of leaves equals the answer f(n), which is the (n+1)-th Fibonacci number — a quantity that grows as φⁿ where φ ≈ 1.618.

That’s already enough to make n = 30 slow (~10⁶ calls) and n = 50 unbearable (~10¹⁰ calls). The problem isn’t the recursion — it’s that the same subproblems are computed repeatedly. In the figure above, f(2) is computed three separate times for f(5). For larger n, the redundancy explodes.
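The blow-up is easy to see directly by instrumenting the recursion with a call counter — `calls` and `climbNaive` are illustrative additions, not part of the chapter's solution:

```cpp
#include <cstdint>

// Naive recursion with a counter, to make the repeated work visible.
std::int64_t calls = 0;

int climbNaive(int n) {
    ++calls;  // count every invocation, including base cases
    if (n <= 1) return 1;
    return climbNaive(n - 1) + climbNaive(n - 2);
}
```

For n = 5 the counter reads 15 — one per node of the recursion tree. In general the call count is 2·f(n) − 1 (every internal node makes exactly two calls, and the f(n) leaves each return 1), which is why n = 50 is hopeless.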

What it gets right

  • Correctness. No edge cases to worry about; the base case handles n = 0 and n = 1 uniformly.
  • It exposes the optimization. The recursion tree is the visual proof that overlapping subproblems exist. Once you see the tree, memoization is obvious.

Why it’s rejected

For n = 50 this would take hours. Production code never ships exponential when polynomial exists. But the interview value of writing it is real: it’s how you demonstrate you’ve read the problem correctly before optimizing.

Approach 02

Memoized recursion — cache the repeats

Same recursion, with a memo table to ensure each subproblem is solved exactly once. Linear time, linear space.

Time O(n) Space O(n) memoization top-down

The insight

Look at the recursion tree above. The same f(k) value appears multiple times — sometimes dozens of times for moderately large n. Each repetition recomputes a result we already produced. Caching the answer the first time we compute it, and returning the cached value thereafter, drops the algorithm from exponential to linear.

The cache: an array memo[0..n] holding either the computed answer or a sentinel (−1) for “not yet computed.”

How it works, step by step

The recursion is unchanged except for two extra lines:

  1. Before computing, check the cache. If memo[n] != -1, return it immediately.
  2. After computing, store the result in memo[n] before returning.

The cache turns the recursion tree into a recursion DAG: each subproblem has a single computation, and subsequent calls just retrieve the answer. Total work: O(n) subproblems × O(1) work each = O(n).

The code

memoized.cpp
#include <vector>

namespace {

int solve(int n, std::vector<int>& memo) {
    if (n <= 1) return 1;
    if (memo[n] != -1) return memo[n];
    memo[n] = solve(n - 1, memo) + solve(n - 2, memo);
    return memo[n];
}

}  // namespace

int climbStairs(int n) {
    std::vector<int> memo(n + 1, -1);
    return solve(n, memo);
}

The recursive structure is preserved; the caching is the only addition. The anonymous-namespace `solve` helper keeps the public API clean while sharing the memo across recursive calls.

A few details worth naming:

  • Sentinel -1 works because every answer is positive. If 0 or negative were valid, switch to std::optional<int>.
  • Stack depth still goes n deep on the first call, so very large n can overflow the call stack. Bottom-up (Approach 03) avoids it.
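The std::optional swap mentioned above is a two-line change. A sketch, with `climbStairsOpt` and `solveOpt` as illustrative names:

```cpp
#include <optional>
#include <vector>

// Memoized recursion with std::optional instead of a -1 sentinel.
// Useful when 0 (or negative values) would be valid answers.
int solveOpt(int n, std::vector<std::optional<int>>& memo) {
    if (n <= 1) return 1;
    if (memo[n]) return *memo[n];  // cache hit: engaged optional
    memo[n] = solveOpt(n - 1, memo) + solveOpt(n - 2, memo);
    return *memo[n];
}

int climbStairsOpt(int n) {
    std::vector<std::optional<int>> memo(n + 1);  // all disengaged
    return solveOpt(n, memo);
}
```

The empty optional plays the sentinel role without reserving any integer value.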

Complexity

  • Time: O(n). Each subproblem f(k) is computed exactly once.
  • Space: O(n) for the memo table plus O(n) recursion depth.

When memoized recursion beats bottom-up

  • When the recurrence has irregular dependencies. Climbing Stairs has a clean linear dependency, so bottom-up wins. But for problems where the set of needed subproblems is sparse (only a fraction of all possible f(k) are ever requested), top-down with memoization avoids computing the unused ones.
  • When the recursion mirrors the problem statement intuitively. Some problems read more naturally as “given n, recurse on smaller inputs” than as “build up from base cases.” Reach for the form that’s easier to argue correct.

Approach 03

Bottom-up — two variables, one pass Recommended

Iterate from i = 2 up to n, computing each f(i) from the previous two values. The full memo array collapses to two integers.

Time O(n) Space O(1) bottom-up constant-space recommended

The insight, stated cleanly

Watch the memoized recursion compute its values. By the time we know f(n), we’ve stored every f(k) for k ≤ n. But we only ever use the two most recent values — once f(n) is computed, f(n−2) and earlier are never read again.

That observation has two consequences. First, we can fill the array in order rather than recursively (which avoids the recursion overhead and the stack depth). Second, the array itself is unnecessary: at any moment, only the last two values matter. The whole memo collapses to two integers that we slide forward each iteration.

Figure · bottom-up DP fill. The dp[] array fills left to right. Each new cell sums its two immediate left neighbors — the same recurrence the recursion tree was repeatedly evaluating, but each subproblem now solved exactly once.

How it works, step by step

Maintain two running values:

  • prev2 — the answer for two stairs ago. Initialized to f(0) = 1.
  • prev1 — the answer for one stair ago. Initialized to f(1) = 1.

For i from 2 up to n:

  1. Compute curr = prev1 + prev2. That’s f(i).
  2. Slide: prev2 = prev1, prev1 = curr. Now prev1 is the latest value, prev2 is the one before that.

After the loop, prev1 holds f(n).

The code

bottom-up.cpp
int climbStairs(int n) {
    if (n <= 1) return 1;
    int prev2 = 1;  // ways to climb 0 stairs
    int prev1 = 1;  // ways to climb 1 stair
    for (int i = 2; i <= n; ++i) {
        const int curr = prev1 + prev2;
        prev2 = prev1;
        prev1 = curr;
    }
    return prev1;
}

Six lines of body. Two integers of state. The whole DP table fits in a couple of registers.

Why this is the production answer

  • Constant space. A signed 32-bit int holds f(n) for n ≤ 45 (f(46) = 2,971,215,073 overflows INT_MAX). A signed 64-bit long long holds it for n ≤ 91. Beyond that you need big integers, but the algorithm is the same.
  • No recursion. No stack-overflow risk for large n.
  • One pass, simple loop. The branch predictor and the cache both love this code.
  • Easy to verify. Trace it on n = 5 in your head: 1, 1, 2, 3, 5, 8. Done.

Edge cases

  • n = 0: the loop never runs; prev1 = 1 is returned. Matches the convention f(0) = 1.
  • n = 1: the loop never runs; prev1 = 1 is returned. Correct.
  • Large n near the int limit: f(46) already overflows a signed 32-bit integer (it’s just above 2.97 × 10⁹). Switch to long long if the spec hints at n much past 45; this is a one-line code change and an easy interview win.
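The long long switch mentioned above is mechanical. A sketch of the widened sweep, valid up to n = 91 (`climbStairsWide` is an illustrative name):

```cpp
#include <cstdint>

// Same two-variable sweep with 64-bit state, safe for n <= 91.
std::int64_t climbStairsWide(int n) {
    if (n <= 1) return 1;
    std::int64_t prev2 = 1;  // f(0)
    std::int64_t prev1 = 1;  // f(1)
    for (int i = 2; i <= n; ++i) {
        const std::int64_t curr = prev1 + prev2;
        prev2 = prev1;
        prev1 = curr;
    }
    return prev1;
}
```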

Complexity

  • Time: O(n). Single pass with constant-time work per iteration.
  • Space: O(1). Two integers.

Approach 04

Matrix exponentiation — O(log n) for the senior signal

Encode the Fibonacci recurrence as a 2×2 matrix multiplication. Repeated squaring evaluates M^n in O(log n) multiplications, giving a logarithmic-time answer.

Time O(log n) Space O(1) math deep-dive

The insight

The recurrence f(n) = f(n−1) + f(n−2) is a linear transformation on the pair (f(n−1), f(n−2)). Linear transformations compose via matrix multiplication, and matrix multiplication has a fast-power algorithm (the same trick that lets you compute aⁿ in O(log n) via repeated squaring).

The transformation matrix is:

M = [[1, 1], [1, 0]]

It acts on the state vector (f(n−1), f(n−2)) to produce (f(n), f(n−1)) — one Fibonacci step per matrix multiplication. A direct algebraic identity falls out: every entry of Mⁿ is itself a value of f:

Mⁿ = [[f(n), f(n−1)], [f(n−1), f(n−2)]]

So f(n) — our answer — sits at position [0][0] of Mⁿ. Compute Mⁿ via repeated squaring and read off the top-left entry. That’s the whole algorithm.

How it works, step by step

The whole algorithm is repeated squaring of the matrix:

  1. Initialize the running result to the 2 × 2 identity matrix I, and let base = M.
  2. Walk the bits of n, low-to-high. While n > 0:
    • If the lowest bit of n is set, multiply the running result by base.
    • Square base in place: base = base · base.
    • Right-shift n by one.
  3. Return the top-left entry of the final result.

The total number of multiplications is at most 2⌈log₂ n⌉ (one squaring per bit, plus one accumulation per set bit). Each multiplication is O(1) on a fixed-size 2 × 2 matrix.

The code

matrix-exponentiation.cpp
#include <array>
#include <cstdint>

namespace {

using Matrix = std::array<std::array<std::int64_t, 2>, 2>;

Matrix multiply(const Matrix& a, const Matrix& b) {
    return {{
        {a[0][0] * b[0][0] + a[0][1] * b[1][0],
         a[0][0] * b[0][1] + a[0][1] * b[1][1]},
        {a[1][0] * b[0][0] + a[1][1] * b[1][0],
         a[1][0] * b[0][1] + a[1][1] * b[1][1]},
    }};
}

Matrix power(Matrix base, int exp) {
    Matrix result = {{{1, 0}, {0, 1}}};  // identity
    while (exp > 0) {
        if (exp & 1) result = multiply(result, base);
        base = multiply(base, base);
        exp >>= 1;
    }
    return result;
}

}  // namespace

int climbStairs(int n) {
    if (n <= 1) return 1;
    const Matrix M = {{{1, 1}, {1, 0}}};
    const Matrix result = power(M, n);
    return static_cast<int>(result[0][0]);
}

Two helper functions: 2×2 multiply and fast power. The main entry is three lines once the helpers are in place.

When this is the right answer

  • Astronomically large n. For n = 10¹⁸, the bottom-up loop is hopeless; matrix exponentiation evaluates the answer in roughly 60 squarings plus at most as many accumulations.
  • Variants where the recurrence is more complex. Generalized step sets (f(n) = f(n-1) + f(n-3) + f(n-7)) become k × k matrices for some constant k; the same fast-power trick applies, and you still get O(k³ log n) — fast enough for any reasonable input.
  • Reading the question between the lines. If the interviewer asks “what if n were a billion?” they’re handing you the matrix-exponentiation prompt.

When it’s overkill

  • Standard interview problem with n ≤ 10⁴. Bottom-up is faster in practice for small n because the constant factor is much smaller. Matrix exponentiation pays off only when n is large enough that O(log n) wins decisively over O(n).

Complexity

  • Time: O(log n) multiplications, each O(1) on a fixed-size matrix.
  • Space: O(1). The iterative repeated-squaring loop keeps only two fixed-size matrices; a recursive formulation would use O(log n) stack.

Approach comparison

Approach              | Time     | Space | When to reach for it
Naive recursion       | O(2ⁿ)    | O(n)  | Baseline only; never ship.
Memoized recursion    | O(n)     | O(n)  | When the recurrence is irregular or sparse.
Bottom-up two-var     | O(n)     | O(1)  | Default. Linear time, constant space.
Matrix exponentiation | O(log n) | O(1)  | Astronomically large n or generalized recurrences.

Default answer: the two-variable bottom-up sweep. State that out loud when you close.

Follow-ups to anticipate

The interviewer asks… | What to do
“What if you can take 1, 2, or 3 steps?” | Generalize the recurrence: f(n) = f(n−1) + f(n−2) + f(n−3). Same bottom-up sweep with three running variables.
“What if the step set is an arbitrary array S?” | f(n) = Σ over s in S of f(n − s). Bottom-up with an array; per-step work is O(|S|), so total is O(n · |S|).
“What if some stairs are broken (you can’t land on them)?” | Set f(broken) = 0 explicitly, then run the same recurrence. The “ways through” each broken stair drop to zero.
“What if you must take exactly k hops to reach the top?” | 2D DP: f(n, k) = ways to reach stair n in exactly k hops. Recurrence: f(n, k) = f(n−1, k−1) + f(n−2, k−1).
“What if the answer doesn’t fit in a 64-bit int?” | Big-integer library, or modular arithmetic if the spec asks for the answer mod 10⁹ + 7. The matrix-exp version handles modular arithmetic cleanly: every multiplication reduces mod p.
“Climbing stairs with a cost — minimize total cost.” | Different problem. The recurrence becomes f(n) = min(f(n−1) + c[n−1], f(n−2) + c[n−2]). See LC 746.
“What if you can also jump back (negative steps)?” | Now it’s a graph-reachability problem; counting paths can produce infinite loops unless you bound the path length.
“How do I prove the bottom-up answer is correct?” | Induction on n. Base cases are direct; inductive step uses the recurrence. Same proof as Approach 01, but the iteration order makes the dependencies obvious.
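The modular-arithmetic follow-up drops straight into the matrix version: reduce after every multiplication. A sketch under the assumption of a caller-supplied modulus m (`climbStairsMod` and `multiplyMod` are illustrative names):

```cpp
#include <array>
#include <cstdint>

using Matrix = std::array<std::array<std::int64_t, 2>, 2>;

// 2x2 multiply with every accumulated product reduced mod m.
Matrix multiplyMod(const Matrix& a, const Matrix& b, std::int64_t m) {
    Matrix c{};  // zero-initialized
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
            for (int k = 0; k < 2; ++k)
                c[i][j] = (c[i][j] + a[i][k] * b[k][j]) % m;
    return c;
}

// f(n) mod m via iterative fast matrix power. Entries stay below m,
// so products stay well under the int64 limit for m up to ~2^31.
std::int64_t climbStairsMod(std::int64_t n, std::int64_t m) {
    if (n <= 1) return 1 % m;
    Matrix result = {{{1, 0}, {0, 1}}};  // identity
    Matrix base = {{{1, 1}, {1, 0}}};
    while (n > 0) {
        if (n & 1) result = multiplyMod(result, base, m);
        base = multiplyMod(base, base, m);
        n >>= 1;
    }
    return result[0][0];
}
```

Because n is only ever halved, this handles exponents far past 10¹⁸ without the answer itself ever leaving the range [0, m).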

Pattern recognition — constant-state DP

The move this chapter teaches — evaluate a recurrence whose state is a fixed-size window over previous values — is a recurring shape in dynamic programming:

  • House Robber. f(i) = max(f(i−1), f(i−2) + nums[i]). Same two-variable collapse.
  • Best Time to Buy/Sell Stock. Two running values: best profit so far and minimum price so far. Constant state, linear sweep.
  • Maximum Subarray (Kadane’s). Two running values: current ending sum and best ever. Same shape.
  • Decode Ways (LC 91). f(i) depends on f(i−1) and f(i−2) subject to a string-validity check. Same recurrence shape, decorated.
  • Tribonacci. f(n) = f(n−1) + f(n−2) + f(n−3). Three-variable collapse instead of two; matrix exponentiation generalizes naturally.
  • Domino tilings. Surprisingly often reduce to Fibonacci-like recurrences; the matrix-exponentiation trick handles huge boards.

The reflex: when a recurrence depends on a constant number of previous values, the DP can always be done in that number of variables — and matrix exponentiation gives an O(log n) option if the input ranges demand it.
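As one instance of the reflex: the Tribonacci-style staircase (hops of 1, 2, or 3) collapses to three running variables, assuming the convention f(0) = f(1) = 1, f(2) = 2; `tribClimb` is an illustrative name:

```cpp
// Three-variable collapse for f(n) = f(n-1) + f(n-2) + f(n-3),
// the 1-2-3-step staircase, with f(0) = f(1) = 1 and f(2) = 2.
long long tribClimb(int n) {
    if (n <= 1) return 1;
    if (n == 2) return 2;
    long long p3 = 1, p2 = 1, p1 = 2;  // f(0), f(1), f(2)
    for (int i = 3; i <= n; ++i) {
        const long long curr = p1 + p2 + p3;
        p3 = p2;
        p2 = p1;
        p1 = curr;
    }
    return p1;
}
```

The slide step is the only change from the two-variable version: three variables shift instead of two.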

What to say out loud — the delivery script

Ten beats:

  1. “Let me restate: count the distinct ordered sequences of 1-step and 2-step hops that climb n stairs.”
  2. Ask: “upper bound on n? hops just 1 and 2? does order matter?”
  3. “The recurrence: to reach stair n, the last hop was either a 1 or a 2, so f(n) = f(n−1) + f(n−2) with f(0) = f(1) = 1. That’s Fibonacci.”
  4. “Naive recursion is O(2ⁿ) — same subproblems get computed many times. Let me name that and move on.”
  5. “Memoization caches each f(k) on first computation: O(n) time, O(n) space.”
  6. “Bottom-up makes the iteration order explicit: fill from i = 2 to n. The key observation is that we only ever need the last two values, so the array collapses to two variables — O(1) space.”
  7. Write the bottom-up code — six lines. Trace n = 5: 1, 1, 2, 3, 5, 8.
  8. “Edge cases: f(0) and f(1) both return 1 directly; the loop just doesn’t execute. For large n, switch to long long to avoid overflow.”
  9. “For the senior bonus: matrix exponentiation gives O(log n). The Fibonacci matrix raised to n holds f(n) in its top-left entry, with f(n−1) beside it. Useful when n is huge.”
  10. “For the standard interview, I’d ship the two-variable bottom-up. If the interviewer pushes on huge n, I’d offer the matrix version.”

Summary

Climbing Stairs is the rite of passage for dynamic programming. The interview signal isn’t producing the optimal answer — it’s articulating the recurrence in one sentence, then walking the four-step optimization staircase: naive → memoized → bottom-up → matrix exponentiation. Each step teaches a different DP move, and every later DP problem reuses one or more of them.

The wider lesson is the constant-state recognition. Many DP problems have recurrences that depend on a fixed-size window of previous values — Fibonacci, House Robber, Kadane’s, decode-ways, tilings — and every one of them collapses to a fixed-size set of running variables. Internalize that shape here; the rest of the easier DP family becomes pattern matching.