Davide Spataro

Coin Change

medium · Dynamic Programming · 40 min read
dp memoization bottom-up optimization knapsack interview-strategy

Introduction

Coin Change is the rite of passage for dynamic programming. Nearly every serious technical interview at a top company includes at least one DP question, and Coin Change is the problem that teaches every piece of the DP toolkit with a setting — coins, change — that everyone has an intuition for. Master this chapter and the rest of DP stops feeling magical.

The problem also does something rare: it punishes greedy thinking politely. Greedy looks right. It works on the first example you try. Then it fails, silently, on an input you did not think of. Learning when greedy works, when it does not, and why is itself a compact lesson in algorithm design — the kind of reasoning an interviewer is directly trying to elicit.

Every DP solution starts the same way: name the subproblem, write the recurrence, then either memoize the recursion or tabulate it bottom-up. Everything after that is mechanical.

What the interviewer is actually testing

A Coin Change interview scores candidates on five axes:

  1. Do you recognize the optimization structure? Candidates who dive into code without stating the subproblem almost always get the recurrence wrong.
  2. Do you consider greedy and cleanly reject it with a counterexample? Or do you try greedy confidently and get caught?
  3. Can you derive the recurrence from first principles? The interviewer is watching for whether you see that $C(t) = 1 + \min_c C(t - c)$, not whether you can memorize it.
  4. Can you translate recursion to either memoization or tabulation, fluently? Both are expected. Strong candidates mention both.
  5. Can you handle the follow-ups? Reconstruction, count-of-ways, unbounded vs. bounded coins — one of these is almost certainly coming.

Problem statement

Problem

Given an array of coin denominations $I$ and an integer $t$ representing an amount, return the minimum number of coins required to sum exactly to $t$. You have unlimited coins of each denomination. If the amount cannot be made, return $-1$.

Example 1
Input I = [1, 2, 5], t = 11
Output 3
5 + 5 + 1 = 11. No combination uses fewer than three coins.
Example 2
Input I = [1, 5, 8], t = 12
Output 4
1 + 1 + 5 + 5 = 12. Greedy would pick 8 first and use 6 coins in total.
Example 3
Input I = [2], t = 3
Output -1
No combination of 2s sums to 3 exactly.
Example 4
Input I = [1, 2, 5], t = 0
Output 0
Zero coins make zero.

Clarifying questions — and why each one matters

Clarification questions

Q Is t always non-negative? Can it be zero?
A Common edge case: t = 0 must return 0 (the empty combination). Negative t is typically out of scope but ask; some follow-ups allow it.
Q Are coin denominations always positive? Distinct? Sorted?
A Positive: usually yes. Distinct: treat as a set, not a multiset — duplicates in the input are harmless but add no information. Sorted: no, do not assume.
Q Do we have unlimited coins of each denomination, or a bounded count?
A The most important variant question. Unlimited is the standard framing; bounded (each coin has a finite count) is the 0/1 bounded-knapsack variant, with a different recurrence.
Q Return the count of coins, or the combination itself?
A Interviewers often first ask for the count, then follow up with 'now also return which coins'. Plan for reconstruction — see below.
Q What should we return when t is unreachable?
A Conventionally -1. Always confirm; some variants want +∞ or throw.
Q Any constraints on t or |I|?
A Large t (e.g. 10⁶) means the $O(t \cdot |I|)$ DP is fine. Extremely large t with small |I| may benefit from number-theoretic approaches; ask.
Q Will the input have repeated queries?
A If many queries against the same coin set, cache the entire DP table once and answer each query in O(1).

Discussion

Approach 00

Greedy — and why it fails here

Always take the largest coin that fits. Fast, intuitive, wrong in general. But it's instructive to know when it does work.

Time O(|I| log |I|) Space O(1) counterexample

The algorithm

Sort denominations descending. Repeatedly take the largest coin that does not exceed the remaining amount.

greedy-fails.cpp
#include <algorithm>
#include <functional>
#include <vector>
// ❌ Incorrect in general. Shown only for pedagogy.
// Works for canonical coin systems (e.g. US coins 1/5/10/25); fails otherwise.
int coinChangeGreedy(std::vector<int> I, int t) {
    std::sort(I.begin(), I.end(), std::greater<int>());
    int used = 0;
    for (int coin : I) {
        while (t >= coin) {
            t -= coin;
            ++used;
        }
        if (t == 0) return used;
    }
    return t == 0 ? used : -1; // handles t == 0 up front; otherwise greedy got stuck with a non-zero remainder
}

Largest-coin-first: fast to write, easy to break — keep it as a strawman, not a final answer.

Why it’s wrong (counterexamples)

  • $I = [1, 5, 8]$, $t = 12$. Greedy: $8 + 1 + 1 + 1 + 1 = 5$ coins. Optimum: $5 + 5 + 1 + 1 = 4$ coins.
  • $I = [2, 5, 8]$, $t = 12$. Greedy: takes $8$, then gets stuck on remainder $4$ (no $1$ coin), returns $-1$. Optimum: $5 + 5 + 2 = 3$ coins.

When greedy DOES work — and why US coins are a red herring

This is the kind of “extra mile” knowledge that distinguishes candidates. Greedy is optimal for a coin system iff the system is canonical — that is, the greedy choice is optimal for every amount.

The US coin system ($\{1, 5, 10, 25\}$) happens to be canonical — which is why most people who reason about coin change from lived experience believe greedy works. It doesn’t, in general. The Euro system is also canonical, as are most currencies engineered by humans. But arbitrary coin systems — which is what the interview problem gives you — are usually not.

There is a classical algorithm due to Pearson (2005) that tests whether a coin system is canonical in polynomial time. You don’t need to know it. You do need to know the concept exists, and that coin-change is the setting where greedy’s behavior is treacherous.
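You do not need Pearson’s algorithm to check a small system, though. A brute-force sketch (the name isCanonical is ours, not the chapter’s) simply compares greedy against the DP for every amount up to the sum of the two largest coins; classical results bound the smallest counterexample by roughly this value, and checking a larger range only costs time:

```cpp
#include <algorithm>
#include <vector>

// Sketch, not Pearson's algorithm: a coin system is treated as canonical here
// iff it contains coin 1 (so every amount is reachable) and greedy matches the
// DP optimum for every amount up to the sum of the two largest coins.
bool isCanonical(std::vector<int> I) {
    std::sort(I.begin(), I.end());
    if (I.empty() || I.front() != 1) return false; // need coin 1 for totality
    int limit = I.back() + (I.size() > 1 ? I[I.size() - 2] : I.back());
    std::vector<int> dp(limit + 1, 0);
    for (int a = 1; a <= limit; ++a) {
        int best = a; // upper bound: all ones
        for (int c : I)
            if (c <= a) best = std::min(best, dp[a - c] + 1);
        dp[a] = best;
        int rem = a, used = 0; // greedy count for the same amount
        for (int i = (int)I.size() - 1; i >= 0; --i) {
            used += rem / I[i];
            rem %= I[i];
        }
        if (used != dp[a]) return false;
    }
    return true;
}
```

On the examples above, isCanonical accepts {1, 5, 10, 25} and rejects {1, 5, 8}.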

Approach 01

Brute force recursion

Try every coin at every step; recurse on the remainder; keep the minimum. Correct, exponential.

Time O(|I|^t) Space O(t)

The recurrence

Let $C(t)$ be the minimum coins to reach exactly amount $t$. Then:

$$C(t) = \begin{cases} 0 & \text{if } t = 0 \\ +\infty & \text{if } t < 0 \\ 1 + \min_{c \in I} C(t - c) & \text{otherwise} \end{cases}$$

Read the recurrence out loud in the interview. It is not decorative — it is the specification for the code.

The code

recursion.cpp
#include <algorithm>
#include <climits>
#include <vector>
// Exponential-time brute force. Kept as a stepping stone to memoization.
static int solve(const std::vector<int>& I, int t) {
    if (t == 0) return 0;
    if (t < 0) return INT_MAX; // infeasible branch
    int best = INT_MAX;
    for (int coin : I) {
        int sub = solve(I, t - coin);
        if (sub != INT_MAX) best = std::min(best, sub + 1);
    }
    return best;
}

int coinChange(const std::vector<int>& I, int t) {
    int ans = solve(I, t);
    return ans == INT_MAX ? -1 : ans;
}

Direct translation of the recurrence: correct, but recomputes the same sub-amounts endlessly.

Why this is correct but exponential

The recursion tree branches by $|I|$ at every level, with depth up to $t / \min(I)$. That’s exponential. Worse, the same amount is computed over and over — both $C(t-1)$ (via coin $2$) and $C(t-2)$ (via coin $1$) recurse into $C(t - 3)$, which is therefore computed at least twice. This overlap is not a flaw; it is the signal that DP applies.
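To feel this overlap rather than take it on faith, one can instrument the brute force with a per-amount visit counter (a diagnostic sketch only; solveCounted, totalCalls, and the visits array are our names, not part of the chapter’s solution):

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Diagnostic sketch: count how many times the brute force visits each amount.
static int solveCounted(const std::vector<int>& I, int t,
                        std::vector<long long>& visits) {
    if (t < 0) return INT_MAX; // infeasible branch, not counted
    ++visits[t];
    if (t == 0) return 0;
    int best = INT_MAX;
    for (int coin : I) {
        int sub = solveCounted(I, t - coin, visits);
        if (sub != INT_MAX) best = std::min(best, sub + 1);
    }
    return best;
}

// Total number of recursive calls on non-negative amounts.
long long totalCalls(const std::vector<int>& I, int t) {
    std::vector<long long> visits(t + 1, 0);
    solveCounted(I, t, visits);
    long long total = 0;
    for (long long v : visits) total += v;
    return total;
}
```

On $I = [1, 2, 5]$, $t = 10$ the brute force already makes 309 calls to reach only 11 distinct amounts; memoization would visit each amount once.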

The two DP properties you must name

Before moving to memoization, pause and say this out loud:

  • Optimal substructure. An optimal solution for amount $t$ is built from an optimal solution for a smaller amount $t - c$.
  • Overlapping subproblems. The same amounts recur across many branches of the recursion tree.

Saying these two properties explicitly, in this order, in the interview is a major signal. It shows you know why DP is applicable, not just that you’re reaching for it out of habit.

Approach 02

Top-down DP (memoization)

Same recursion as brute force, cache results by t. Exponential collapses to (pseudo-)polynomial.

Time O(t · |I|) Space O(t) cache · recursive

The insight, made concrete

Watch what the brute force does on $I = [1, 2, 5]$, $t = 11$:

  • $C(11) = 1 + \min(C(10), C(9), C(6))$
  • $C(10) = 1 + \min(C(9), C(8), C(5))$
  • $C(9) = 1 + \min(C(8), C(7), C(4))$

$C(9)$ is computed by both $C(11)$ and $C(10)$. $C(8)$ by both $C(9)$ and $C(10)$ — and in the full tree each of those callers is itself recomputed, multiplying the waste. This overlap is what memoization collapses: cache each distinct call’s answer.

[Figure: recursion tree for $C(t = 4)$, $I = [1, 2, 3]$. With memoization on, subtrees that would repeat a target amount are greyed out and short-circuited (cache hits); with it off, every path expands — the contrast is the point of this section. Legend: leaf (valid path), cache hit.]

In the memoized tree, the greyed-out nodes are cache hits — no further work. Entire subtrees vanish. This is what converts an exponential walk into an $O(t \cdot |I|)$ one.

The code

memoization.cpp
#include <algorithm>
#include <climits>
#include <vector>
// O(t · |I|) time, O(t) space. Same recursion as brute force, cached on t.
static int solve(const std::vector<int>& I, int t, std::vector<int>& memo) {
    if (t == 0) return 0;
    if (t < 0) return INT_MAX;
    if (memo[t] != -2) return memo[t];
    int best = INT_MAX;
    for (int coin : I) {
        int sub = solve(I, t - coin, memo);
        if (sub != INT_MAX) best = std::min(best, sub + 1);
    }
    return memo[t] = best;
}

int coinChange(const std::vector<int>& I, int t) {
    std::vector<int> memo(t + 1, -2); // -2 = "not computed" sentinel
    int ans = solve(I, t, memo);
    return ans == INT_MAX ? -1 : ans;
}

Same recursion with a table keyed by amount — each $C(a)$ computed once.

Complexity, precisely

  • States: there are at most $t + 1$ distinct values for the amount argument (the only varying parameter). Each is computed once.
  • Work per state: $O(|I|)$ to iterate the coin set and combine.
  • Total time: $O(t \cdot |I|)$.
  • Space: $O(t)$ for the memo table plus $O(t / \min(I))$ for the recursion stack.

Approach 03

Bottom-up DP (tabulation) Recommended

Build the same table iteratively, smallest amount first. The version you should ship.

Time O(t · |I|) Space O(t)

The transition

Memoization is a cache on top of recursion. Bottom-up throws away the recursion and fills the table in order.

Base: $dp[0] = 0$; everything else is initialized to $\infty$ (unreachable).

Transition: for each amount $a$ from $1$ to $t$ and each coin $c \in I$ with $c \le a$: $dp[a] = \min(dp[a],\; dp[a - c] + 1)$

Final: $dp[t]$ is the answer, or $-1$ if still $\infty$.

The code

bottom-up.cpp
#include <algorithm>
#include <vector>
// Classical bottom-up DP. O(t · |I|) time, O(t) space.
int coinChange(const std::vector<int>& I, int t) {
    const int INF = t + 1; // any value strictly greater than the worst real answer
    std::vector<int> dp(t + 1, INF);
    dp[0] = 0;
    for (int a = 1; a <= t; ++a) {
        for (int coin : I) {
            if (coin <= a && dp[a - coin] + 1 < dp[a]) {
                dp[a] = dp[a - coin] + 1;
            }
        }
    }
    return dp[t] == INF ? -1 : dp[t];
}

Iterative 1D DP: fill amounts $1\ldots t$; the version that avoids recursion depth issues.

[Interactive in the web version: the table fills $dp[a]$ left to right, starting from $dp[0] = 0$ (no coins needed to make amount $0$). Hovering a filled cell highlights its parent states — the up to $|I|$ cells $dp[a - c]$ from which $dp[a]$ can be derived. The coin and amount controls change the instance; the caption under the table repeats the winning recurrence step.]
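If you prefer a reproducible table to the animation, a small variant of the bottom-up routine that returns the whole dp array (the name coinChangeTable is ours) lets you check any hand fill:

```cpp
#include <algorithm>
#include <vector>

// Same bottom-up DP, but returns the entire table rather than just dp[t] —
// handy for walking an example by hand or for debugging.
std::vector<int> coinChangeTable(const std::vector<int>& I, int t) {
    const int INF = t + 1;
    std::vector<int> dp(t + 1, INF);
    dp[0] = 0;
    for (int a = 1; a <= t; ++a)
        for (int coin : I)
            if (coin <= a) dp[a] = std::min(dp[a], dp[a - coin] + 1);
    return dp;
}
// For I = {1, 2, 5}, t = 5 the fill is dp = [0, 1, 1, 2, 2, 1]:
// e.g. dp[4] = min(dp[3]+1, dp[2]+1) = 2 and dp[5] = min(dp[4]+1, dp[3]+1, dp[0]+1) = 1.
```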

Correctness — the invariant

State this in the interview.

Invariant. When we finish computing $dp[a]$, $dp[a]$ equals the minimum coin count over every valid combination summing to $a$.

Proof (strong induction on aa).

  • Base ($a = 0$): $dp[0] = 0$ is trivially correct.
  • Step: assume $dp[a']$ is correct for every $a' < a$. At iteration $a$, we compute $dp[a] = \min_{c \in I,\, c \le a} (dp[a - c] + 1)$.
    • Each candidate $dp[a - c]$ is already correct by induction.
    • Every optimal combination for $a$ must end in some last coin $c^*$; removing it leaves an optimal combination for $a - c^*$ using one fewer coin — by optimal substructure. So the minimum above hits the true optimum.
    • If no candidate is finite, no combination exists; $dp[a]$ stays $\infty$.

Why is the outer loop over amounts correct?

A common interview trap: can you iterate over coins first instead? For minimum coins, yes — with amount outer, every $dp[a']$ for $a' < a$ is fully finalized before $dp[a]$ is read; with coin outer, each pass relaxes the table with one more denomination, and the ascending amount loop still permits unlimited reuse of that coin, so the final array is identical. But the two orderings are not interchangeable in general: in the count-of-ways variant below, swapping them changes what is being counted.

This is worth saying explicitly in an interview: “iterating amount outer, coin inner is canonical here, and iterating coin outer also works for minimum coins — but for the count-of-ways variant the order matters, and I’ll come back to that.” Immediate strong signal.

Complexity

  • Time: $O(t \cdot |I|)$ — the two nested loops.
  • Space: $O(t)$ for the 1D DP array. No recursion, so the stack is $O(1)$.

Practical advantages over top-down

  • No stack overflow for large $t$.
  • Cache-friendly linear memory — the 1D DP array is accessed sequentially.
  • Easier to reason about when debugging: print dp and inspect it.

Reconstruction — returning the actual coins

Interviewers love this follow-up. “Good, now also return which coins you used.” The change is small: alongside dp, keep from[a] = the coin chosen at amount $a$. To reconstruct, walk backward from $t$.

reconstruct.cpp
#include <algorithm>
#include <vector>
// Returns the actual coin combination (not just the count) that
// makes up t with the fewest coins, or {} if impossible.
std::vector<int> coinChangeReconstruct(const std::vector<int>& I, int t) {
    const int INF = t + 1;
    std::vector<int> dp(t + 1, INF);
    std::vector<int> from(t + 1, -1); // from[a] = coin that achieves dp[a]
    dp[0] = 0;
    for (int a = 1; a <= t; ++a) {
        for (int coin : I) {
            if (coin <= a && dp[a - coin] + 1 < dp[a]) {
                dp[a] = dp[a - coin] + 1;
                from[a] = coin;
            }
        }
    }
    if (dp[t] == INF) return {};
    std::vector<int> coins;
    coins.reserve(dp[t]);
    for (int a = t; a > 0; a -= from[a]) {
        coins.push_back(from[a]);
    }
    return coins;
}

Track which coin updated each $dp[a]$ so you can walk backward and print the multiset.

Same time complexity, adds $O(t)$ space. Mention this in your summary — interviewers often extend the problem here.

The dual problem — number of ways to make change

The most common follow-up of all: “Return the number of distinct combinations, not the minimum coin count.”

Problem

Given coin denominations $I$ (unlimited supply) and target $t$, return the number of distinct combinations summing to $t$. Two combinations are distinct if they differ as multisets (how many of each coin), not as permutations.

Example 1
Input I = [1, 2, 5], t = 5
Output 4
{5}, {2+2+1}, {2+1+1+1}, {1+1+1+1+1}.

The recurrence

Let $\text{ways}(t, C)$ be the number of combinations of coins drawn from the set $C$ that sum to $t$. Same shape as before, but we sum instead of taking the minimum — fix any coin $c \in C$ and split on whether it is used:

$\text{ways}(t, C) = \text{ways}(t, C \setminus \{c\}) + \text{ways}(t - c, C)$

Read: “either include zero coins of value $c$, or include at least one and recurse on the smaller amount with $c$ still available”. The loop structure matters. Coin outer, amount inner.

count-ways.cpp
#include <vector>
// DUAL problem: number of distinct combinations that sum to t.
// Loop order matters: coin outer, amount inner — otherwise we count
// permutations (ordered sequences) instead of combinations.
long long countWays(const std::vector<int>& I, int t) {
    std::vector<long long> dp(t + 1, 0);
    dp[0] = 1;
    for (int coin : I) { // outer: decide how many of this coin to include
        for (int a = coin; a <= t; ++a) {
            dp[a] += dp[a - coin];
        }
    }
    return dp[t];
}

Count combinations: coin-major loop so each multiset is counted once (not permutations).

The loop-order trap — the bug interviewers love

Consider this (wrong) code:

for (int a = 1; a <= t; ++a)   // WRONG order for counting combinations
    for (int coin : I)
        if (coin <= a) dp[a] += dp[a - coin];

On $I = [1, 2]$, $t = 3$ this returns 3. It should return 2: $\{1+1+1\}$ and $\{1+2\}$. Why? Because for $a = 3$, the loop sums both $dp[2]$ ($= 2$) and $dp[1]$ ($= 1$), giving $2 + 1 = 3$ — but $\{1+2\}$ has been counted twice (once as a $2$-coin extending $\{1\}$, once as a $1$-coin extending $\{2\}$). This computes the count of ordered sequences (permutations), not combinations.

The fix: coin outer, amount inner. Each coin gets a full sweep before the next coin is introduced, so we never count the same multiset under two different orderings.

This is the kind of micro-insight that reliably distinguishes strong candidates. Knowing why the order matters is the payoff for understanding what each loop means.
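A quick way to internalize the distinction is to run the two orderings side by side (countCombinations and countSequences are our names; the second is the version you want when an interviewer genuinely asks for ordered-sequence counts):

```cpp
#include <vector>

// Coin-major loop: counts multisets (combinations).
long long countCombinations(const std::vector<int>& I, int t) {
    std::vector<long long> dp(t + 1, 0);
    dp[0] = 1;
    for (int coin : I)
        for (int a = coin; a <= t; ++a) dp[a] += dp[a - coin];
    return dp[t];
}

// Amount-major loop: counts ordered sequences (permutations).
long long countSequences(const std::vector<int>& I, int t) {
    std::vector<long long> dp(t + 1, 0);
    dp[0] = 1;
    for (int a = 1; a <= t; ++a)
        for (int coin : I)
            if (coin <= a) dp[a] += dp[a - coin];
    return dp[t];
}
```

On $I = [1, 2]$, $t = 3$ the first returns 2 and the second returns 3, exactly the discrepancy discussed above.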

The knapsack connection

Coin Change is a special case of unbounded knapsack. The general unbounded-knapsack problem has items with both weights and values; you want maximum value in a capacity-$W$ pack. Coin Change is the degenerate version where each item’s weight is its denomination, every item has value $1$, and you minimize (rather than maximize) total value under an exact-capacity constraint.
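For reference, the general unbounded-knapsack template looks like this (a sketch; the name unboundedKnapsack is ours). Note one real difference from Coin Change: standard knapsack allows total weight $\le W$ rather than an exact sum, so the table is initialized to $0$ rather than $\infty$:

```cpp
#include <algorithm>
#include <vector>

// Unbounded knapsack sketch: maximize value within capacity W, with
// unlimited copies of each item. The ascending capacity loop allows reuse.
int unboundedKnapsack(const std::vector<int>& wt, const std::vector<int>& val, int W) {
    std::vector<int> dp(W + 1, 0); // dp[w] = best value using capacity <= w
    for (int w = 1; w <= W; ++w)
        for (size_t i = 0; i < wt.size(); ++i)
            if (wt[i] <= w) dp[w] = std::max(dp[w], dp[w - wt[i]] + val[i]);
    return dp[W];
}
```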

This connection is worth knowing because:

  • If a follow-up extends to weights-and-values, you already have the template.
  • The bounded (0/1) knapsack variant has a slightly different loop structure ($a$ inner, iterated descending to prevent reusing an item) — recognize this when asked.
Problem · Loop order · Update
  • Min coins, unbounded · amount outer, coin inner · dp[a] = min(dp[a], dp[a-c] + 1)
  • Count combinations, unbounded · coin outer, amount inner · dp[a] += dp[a - c]
  • 0/1 knapsack (bounded, pick once) · item outer, capacity inner descending · dp[w] = max(dp[w], dp[w - wt_i] + v_i)
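For completeness, here is the 0/1 row of the table as code (a sketch; knapsack01 is our name); the descending capacity loop is the entire trick:

```cpp
#include <algorithm>
#include <vector>

// 0/1 knapsack sketch: each item may be used at most once, so the capacity
// loop runs DESCENDING — dp[w - wt[i]] then still refers to the state before
// item i was considered, preventing reuse of the same item.
int knapsack01(const std::vector<int>& wt, const std::vector<int>& val, int W) {
    std::vector<int> dp(W + 1, 0); // dp[w] = best value with capacity w
    for (size_t i = 0; i < wt.size(); ++i)
        for (int w = W; w >= wt[i]; --w)
            dp[w] = std::max(dp[w], dp[w - wt[i]] + val[i]);
    return dp[W];
}
```

Flip the inner loop to ascending and you are back to the unbounded variant — the direction of one loop is the only difference.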

Edge cases, explicitly

Input · Expected · Why
  • $t = 0$ · $0$ coins · Empty sum.
  • $I = \varnothing$, $t > 0$ · $-1$ · No denomination can contribute.
  • $I = \{c\}$, $c \mid t$ · $t / c$ · All the same coin.
  • $I = \{c\}$, $c \nmid t$ · $-1$ · No combination hits $t$ exactly.
  • $1 \in I$ · Always reachable · Build $t$ with $t$ ones.
  • Large $t$ with $\min I = 1$ · $O(t)$ DP size · Call out memory if the interviewer cares.
  • Duplicate values in $I$ · Treat as one · Multisets do not care about duplicate labels.

Approach comparison

Approach · Time · Space · Use when
  • Greedy · $O(|I| \log |I|)$ · $O(1)$ · Never ship; discuss as a rejected strawman with a counterexample.
  • Brute-force recursion · $O(|I|^t)$ · $O(t)$ · Stepping stone; derive it, then improve it.
  • Top-down DP (memo) · $O(t \cdot |I|)$ · $O(t)$ · When the recursion is easier to state than the iteration.
  • Bottom-up DP · $O(t \cdot |I|)$ · $O(t)$ · The version to ship.

Follow-ups to anticipate

The interviewer asks… · What to do
  • “Return the coin combination, not the count.” · Track from[a]; reconstruct by walking back. Shown above.
  • “Count the number of ways instead.” · Swap min for +; loop coin outer, amount inner. Shown above.
  • “Each coin can be used at most once.” · 0/1 knapsack: coin outer, amount inner descending.
  • “Each coin has a finite count $k_i$.” · Bounded knapsack: add an inner loop over counts, or decompose counts into binary multiples.
  • “Coin values and target can be negative or zero.” · Zero-valued coins create infinite combinations (degenerate). Negatives require bounded search.
  • “Space optimize further.” · The 1D array is already optimal for standard coin change. For “ways”, coin-outer iteration naturally uses 1D.
  • “What if I want to minimize coins but also break ties by fewest distinct denominations?” · Multi-objective DP: store a tuple (coins, distinct) per cell and lex-compare.
  • “Very large $t$ (e.g., $10^{12}$), small $|I|$.” · The $O(t \cdot |I|)$ table no longer fits; number-theoretic approaches may help — see the clarifying questions.
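The bounded row deserves a sketch, since it is the follow-up most people fumble (coinChangeBounded and cnt are our names; this is the simple inner-count version, and the binary-decomposition optimization would replace the k loop with $O(\log k_i)$ pre-split items):

```cpp
#include <algorithm>
#include <vector>

// Bounded variant sketch: coin i may be used at most cnt[i] times.
// The amount loop runs DESCENDING so that, within one item's pass, we never
// read a state already updated by that same item; the k loop enumerates how
// many copies of coin i to spend. O(t * sum(cnt)) time, O(t) space.
int coinChangeBounded(const std::vector<int>& I, const std::vector<int>& cnt, int t) {
    const int INF = t + 1;
    std::vector<int> dp(t + 1, INF);
    dp[0] = 0;
    for (size_t i = 0; i < I.size(); ++i)
        for (int a = t; a >= 0; --a) {
            if (dp[a] == INF) continue;
            for (int k = 1; k <= cnt[i] && a + k * I[i] <= t; ++k)
                dp[a + k * I[i]] = std::min(dp[a + k * I[i]], dp[a] + k);
        }
    return dp[t] == INF ? -1 : dp[t];
}
```

With $I = [2, 5]$ and counts $[1, 2]$, amount 12 needs $2 + 5 + 5$, i.e. three coins; with only two 2s, amount 6 becomes unreachable.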

Pattern recognition — what Coin Change really teaches

The real payoff of this chapter is the three-step DP recipe, which applies to every DP problem in the book:

  1. Name the subproblem. What does $f(\text{state})$ mean? “Minimum coins to make amount $a$.” Without this, you’ll write a broken recurrence.
  2. Write the recurrence. How does $f(\text{state})$ relate to $f$ of smaller states? $C(t) = 1 + \min_c C(t - c)$.
  3. Decide memoize vs. tabulate. Same complexity; tabulation is usually what you ship.

Other chapters where this recipe applies directly:

  • Climbing Stairs — $f(n) = f(n-1) + f(n-2)$.
  • House Robber — $f(n) = \max(f(n-1), f(n-2) + \text{nums}[n])$.
  • Longest Increasing Subsequence — $f(i) = 1 + \max_{j < i,\, a_j < a_i} f(j)$.
  • Edit Distance — $f(i, j) =$ min of three updates.
  • 0/1 Knapsack, Unbounded Knapsack — as above.
  • Word Break, Partition Equal Subset Sum, Target Sum, Palindrome Partitioning II, Longest Palindromic Subsequence, Longest Common Subsequence.

Once the recipe becomes automatic — “OK, this is DP; my subproblem is …; my recurrence is …; I’ll tabulate” — the entire DP portion of the interview stops feeling like a separate skill and becomes the same skill applied over and over.

What to say out loud — the delivery script

  1. Restate the problem. “I have coin denominations and a target amount; I need the minimum number of coins summing to exactly the target.”
  2. Ask clarifying questions. Unlimited coins? Count vs. combination? Return value when unreachable?
  3. Mention greedy and reject it. “Greedy is tempting — always take the largest coin — but it fails on $I = [1, 5, 8]$, $t = 12$. So I need something smarter.”
  4. Announce optimal substructure. “The optimal solution for $t$ uses some last coin $c$, after which the rest is an optimal solution for $t - c$. That’s optimal substructure.”
  5. Announce overlapping subproblems. “And the same subproblem ($t - 3$, say) gets reached by many paths, so this is DP.”
  6. Write the recurrence. $C(t) = 1 + \min_c C(t - c)$, with base $C(0) = 0$ and $C(t) = +\infty$ for $t < 0$.
  7. Write the bottom-up code. Ten lines.
  8. Walk an example. Fill dp for $I = [1, 2, 5]$, $t = 5$ by hand.
  9. State the invariant. “When we finish $dp[a]$, it is the true minimum for amount $a$, by induction on $a$.”
  10. Call out edge cases. $t = 0$, empty $I$, unreachable $t$, very large $t$.
  11. Gesture at follow-ups. “If you want the actual coin combination, I track a from[] array and walk back. If you want the count of ways, the recurrence becomes sum instead of min, and the loop order flips.”
  12. Summarize. “I’d ship the bottom-up DP. $O(t \cdot |I|)$ time, $O(t)$ space.”

Summary

Coin Change is the stepping stone. Learn the three-step recipe here — name the subproblem, write the recurrence, tabulate — and you will use it on every DP problem after. Know why greedy fails. State optimal substructure and overlapping subproblems by name. Be ready for reconstruction and count-of-ways follow-ups. Understand the loop-order distinction between combinations and permutations — it is the micro-insight that separates good candidates from excellent ones.

Coin Change is in the interview rotation because it discriminates: between candidates who pattern-match and candidates who reason. The chapter’s goal is to put you firmly in the second camp.