Davide Spataro

LRU Cache

medium · Design · 32 min read
hash-map linked-list design cache iterator splice stdlib

Introduction

Caches live at every level of modern computing — L1/L2/L3 on the CPU, page cache in the kernel, buffer pools in databases, HTTP caches in the browser, memoization tables in your code. They all share the same problem: memory is finite, so when the cache fills up, something has to go. Least Recently Used is the most common policy: evict whatever hasn’t been touched for the longest time.

Implementing LRU is not an algorithm problem. It is a data-structure composition problem. Your job is to pick two primitives whose operations are individually O(1) and cross-link them so that every cache operation stays O(1). That composition move — hash map for keyed lookup, doubly-linked list for ordered mutation, iterators to bridge them — shows up everywhere in systems code. Interviewers reach for LRU because recognizing it separates candidates who think in terms of algorithms from candidates who think in terms of data structures.

When no single data structure supports your operations at the target complexity, cross-link two that do.

If you’ve never composed stdlib containers this way, read the constant-time approach slowly. If you have, skip ahead to the splice and iterator-stability subsection — that’s where the interview signal hides.

What the interviewer is actually testing

Four signals, in priority order:

  1. API instinct. Do you pin down the semantics of get on a miss, get on a hit (does it update order?), put on an existing key, and put at capacity — before writing a line?
  2. Composition. Do you spot that no single standard container supports both O(1) lookup-by-key and O(1) ordered mutation? The leap from “I need one DS” to “I need two, cross-linked” is the core move.
  3. Iterator / pointer invariants. When you store a std::list iterator inside a hash map, do you know why it stays valid across splice, push_front, and neighbor erasures? If you don’t, you’ll write a subtle dangling-iterator bug.
  4. Idiomatic C++. splice, emplace_front, std::optional returns, move-only values, list::iterator as a value type — fluency here is the senior signal.

Problem statement

Problem

Design a class LRUCache parameterized by a capacity n ≥ 1. It must expose two operations, both running in O(1) time on average:

  • get(key) — returns the value for key if present, otherwise a sentinel (std::nullopt in our C++ port, -1 in the LeetCode variant). A hit also marks key as most-recently-used.
  • put(key, value) — inserts or updates the pair. If inserting would exceed capacity, evict the least-recently-used entry first.
Example 1
Input: cap = 1; put(1, A); get(1); put(2, B); get(1); get(2)
Output: get(1) → A; get(1) → nullopt; get(2) → B
Capacity 1 evicts on every put of a new key. After put(2, B), key 1 is gone.

Example 2
Input: cap = 2; put(a,1); put(b,2); get(a); get(b); put(c,3); get(a); get(b); get(c)
Output: get(a) → 1; get(b) → 2; get(a) → nullopt; get(b) → 2; get(c) → 3
After put(c, 3) the LRU is 'a' (not 'b' — get(b) was the most recent op before put(c)), so 'a' is evicted.

Example 3
Input: cap = 2; put(1, A); put(2, B); put(1, A2); put(3, C)
Output: Cache state after each put: {1:A}, {2:B, 1:A}, {1:A2, 2:B}, {3:C, 1:A2}
Updating key 1 promotes it to MRU. When put(3, C) happens, 2 is the LRU and gets evicted — not 1.

Clarifying questions — and why each one matters

LRU has more moving pieces than most one-function problems. Pinning semantics up front saves you from rewriting code mid-interview.

Clarification questions

Q Does get on a hit update the LRU order?
A The single most load-bearing semantic question. Standard answer: yes — a hit promotes the entry to most-recently-used. If the interviewer says no, the problem is almost trivial (you only need the insertion order, which a plain queue gives you).
Q What does get return on a miss?
A Ask. LeetCode's canonical variant returns -1. An idiomatic C++ API returns std::optional<V>. Either is fine; pick one and be consistent.
Q Can the capacity be 0 or negative?
A Usually n ≥ 1 is guaranteed. If not, decide: is cap = 0 a no-op cache (every put stores nothing and every get misses), or an error? The constant-time code below assumes cap ≥ 1.
Q Are keys unique?
A Yes — a cache is a map, not a multimap. A second put with the same key updates; it does not insert a duplicate.
Q What types are keys and values? Are keys hashable?
A For the canonical solution you need std::hash<K> and operator== on keys. Any LeetCode-style integer keys qualify. If keys are a custom type, ask whether the interviewer wants to see a custom hasher.
Q Is this thread-safe?
A Interview default: no. Mention that concurrent access needs external synchronization or a more involved lock-free design. Don't write that code unless asked.

A mental framework — why one data structure is not enough

List every O(1)-capable container in the standard library. Hash maps give you O(1) keyed lookup. Arrays give you O(1) random access. Doubly-linked lists give you O(1) insertion, deletion, and move-to-front when you already hold a pointer into them.

Now list the operations we need:

Operation | What it needs
get hit | Find the node by key; move it to the MRU end.
get miss | Confirm the key is not present.
put new | Insert at MRU; possibly evict the LRU.
put existing | Find the node; update the value; promote.
Evict | Find and remove the tail (LRU).

Every row needs two things: a keyed lookup AND an ordered mutation. No single container covers both in O(1):

  • A hash map alone has no notion of order.
  • A linked list alone has no notion of keyed lookup.
  • An ordered map (like std::map) gets you O(log n) keyed access to sorted-by-timestamp data — better than linear, but still not constant.

The move: use two containers and cross-link them. The hash map resolves the key to a node in the list in O(1). The list maintains the MRU → LRU order and allows O(1) splicing to its front. Neither half works alone; together they do.
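Before any method bodies, the composition has a concrete shape worth sketching. The names here (order_, index_) are illustrative; the full class appears in the constant-time approach below:

#include <list>
#include <unordered_map>
#include <utility>

template <typename K, typename V>
class LRUSkeleton {  // sketch only; the full implementation is Approach 03
  using Entry  = std::pair<K, V>;
  using ListIt = typename std::list<Entry>::iterator;

  std::list<Entry> order_;               // MRU at front, LRU at back
  std::unordered_map<K, ListIt> index_;  // key → list node: the cross-link
};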

Discussion

Approach 01

Brute force — array plus linear scan

A vector holding entries in MRU → LRU order. Every operation scans linearly. Correct, slow, your baseline.

Time O(n) per op · Space O(n)

The interview narration

Say this out loud:

“The straightforward way is a vector of (key, value) pairs kept in MRU-to-LRU order. Both get and put scan for the key, then shuffle the entry to the front — so each is O(n). I’ll code this as a baseline, then find something faster.”

How it works, step by step

The data structure is a single vector, ordered so that the most-recently-used entry sits at index 0 and the least-recently-used entry sits at the back. That ordering is the entire state — we can reconstruct the LRU policy from it alone.

get(key): walk the vector with std::find_if looking for the matching key. If we miss, return nullopt. If we hit, we’ve found the entry; remove it from its current position and re-insert at the front. Return the value. The “promotion to MRU” is physically moving the entry in the vector.

put(key, value): first, scan for the key. Three cases fall out:

  1. Key found (update): erase the old entry, then insert the new (key, value) at the front. Size is unchanged. The old value is discarded.
  2. Key absent, cache has room: insert (key, value) at the front. Size grows by one.
  3. Key absent, cache at capacity: drop the tail with pop_back, then insert the new entry at the front. This is the eviction case — the LRU is exactly the last element of the vector.

Notice how the position in the vector is the LRU metadata. We never need a timestamp, a counter, or an external ordering. That’s the one nice property to carry into the better solutions: we only need an ordering, not a clock.

The code

brute-force.cpp
#include <algorithm>
#include <cstddef>
#include <optional>
#include <utility>
#include <vector>

template <typename K, typename V>
class LRUCacheBrute {
 public:
  explicit LRUCacheBrute(std::size_t capacity) : capacity_(capacity) {}

  std::optional<V> get(const K& key) {
    auto it = find(key);
    if (it == items_.end()) return std::nullopt;
    auto entry = std::move(*it);  // hoist the entry out...
    items_.erase(it);
    items_.insert(items_.begin(), std::move(entry));  // ...re-insert at MRU
    return items_.front().second;
  }

  void put(const K& key, V value) {
    if (auto it = find(key); it != items_.end()) {
      items_.erase(it);  // existing key: drop the old entry
    } else if (items_.size() >= capacity_) {
      items_.pop_back();  // full: evict the LRU at the back
    }
    items_.insert(items_.begin(), {key, std::move(value)});
  }

 private:
  using Entry = std::pair<K, V>;

  std::vector<Entry> items_;  // MRU front → LRU back
  std::size_t capacity_;

  auto find(const K& key) {
    return std::find_if(items_.begin(), items_.end(),
                        [&](const Entry& e) { return e.first == key; });
  }
};

A vector in MRU-first order; find-and-hoist on every access. Linear time per op but shows the semantics crisply.

Why both operations are O(n)

Two distinct linear costs contribute:

  • std::find_if on the vector scans up to n entries. If we miss on get, we touched every element. Cost: O(n).
  • std::vector::insert(begin(), …) shifts every existing element right by one slot to make room. Cost: O(n).

Both operations do both of those, so the worst case is 2n work — which is still O(n). Even the “lucky hit at position 0” case does a linear insert because we erase then re-insert at the front. If you replace the vector with a std::deque, the insert-at-front is O(1) but the scan is still linear — you’ve only cut the constant, not the asymptote.

Edge cases the mechanics handle for free

  • Empty cache: std::find_if returns end(), we return nullopt. No special case needed.
  • put on existing key: the erase-then-insert dance updates the value and promotes in one path. We never special-case “exists” vs “new” beyond the size check.
  • Capacity 1: the vector holds at most one entry. Every new-key put pops the old and pushes the new. No off-by-one traps because pop_back on a non-empty vector is always valid.

Why it’s rejected

Production caches serve millions of get per second. O(n) per access doesn’t scale. And as a signal: we haven’t exercised any actual algorithmic insight yet — this is the version the interviewer wants us to improve, not to ship.

Approach 02

Log-time — ordered map on timestamps

Hash map for keyed lookup + std::map keyed on a monotonic timestamp for LRU retrieval. O(log n) per op. A useful intermediate stop.

Time O(log n) per op · Space O(n) · pedagogical · ordered-map

The insight

We can trivially get O(1) keyed lookup with std::unordered_map. The hard part is ordering — finding the least-recently-used entry cheaply, and updating the “last touched” order on every access.

The move here: attach a monotonic timestamp to every entry. A global counter clock_ is incremented on every touch. Keeping the keys indexed both by key (for lookup) and by timestamp (for LRU retrieval) lets us answer both questions logarithmically.

Why two maps

Each of the two containers answers a different question:

  • unordered_map<K, {V, Stamp}> answers “given this key, what is its value and when was it last touched?” in O(1).
  • map<Stamp, K> answers “what key has the smallest timestamp?” — the answer is just times_.begin(); the O(log n) cost sits in the inserts and erases that keep the tree ordered.

The two maps are mirrors of each other: every live key has exactly one entry in each. Every time we touch a key, we must update both. If they fall out of sync — a timestamp lingering in times_ after the entry is gone — the next eviction picks the wrong key. That invariant (“the two maps name the same set of live keys, and the stamps match”) is what you’d assert in a debug build.
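In a debug build, that assertion might look like the following sketch. check_invariants is a hypothetical helper, not part of the class below; it is written against the values_ and times_ members from the code that follows and needs <cassert>:

  // Hypothetical debug helper: the two maps must mirror each other.
  void check_invariants() const {
    assert(values_.size() == times_.size());  // same number of live keys
    for (const auto& [stamp, key] : times_) {
      auto it = values_.find(key);
      assert(it != values_.end());        // every timestamped key is live
      assert(it->second.second == stamp); // and the stamps agree
    }
  }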

The “touch” dance

Every get hit and every put on an existing key must refresh the timestamp. That refresh is four steps, and the order matters:

  1. Read the old timestamp from the hash-map entry.
  2. Erase the old timestamp from times_ (the ordered map). Skipping this step leaves a stale entry that will later be “evicted” for a key that still exists.
  3. Bump the global clock; this is the new timestamp.
  4. Write the new timestamp back into both the hash-map entry and the ordered map.

In the touch(it) helper, those four steps are exactly the four lines. The critical one is step 2: the old ordered-map entry must be removed before the new one is inserted, or you’ll have two entries claiming the same key.

Walking through put at capacity

put on a new key when the cache is full is the only operation that evicts, and it’s worth tracing:

  1. Look up the key in values_. Not found → go to the eviction branch.
  2. values_.size() == capacity_ → we must evict.
  3. Read *times_.begin() — that’s the (smallest stamp, LRU key) pair.
  4. Erase the LRU entry from values_ (by key).
  5. Erase the LRU entry from times_ (by iterator).
  6. Bump the clock; insert the new (key, value, timestamp) into values_ and (timestamp, key) into times_.

Every step is constant-time hash-map or pointer work except the ordered-map operations: the erase in step 5 and the insert in step 6 touch the red-black tree and cost up to O(log n). That tree work is the whole gap between this approach and the constant-time one.

The code

log-time.cpp
#include <cstddef>
#include <map>
#include <optional>
#include <unordered_map>
#include <utility>

template <typename K, typename V>
class LRUCacheLog {
 public:
  explicit LRUCacheLog(std::size_t capacity) : capacity_(capacity) {}

  std::optional<V> get(const K& key) {
    auto it = values_.find(key);
    if (it == values_.end()) return std::nullopt;
    touch(it);  // hit: refresh the timestamp
    return it->second.first;
  }

  void put(const K& key, V value) {
    if (auto it = values_.find(key); it != values_.end()) {
      it->second.first = std::move(value);  // existing key: update in place
      touch(it);
      return;
    }
    if (values_.size() == capacity_) {
      auto oldest = times_.begin();  // smallest stamp = LRU
      values_.erase(oldest->second);
      times_.erase(oldest);
    }
    const Stamp t = ++clock_;
    values_.emplace(key, std::pair{std::move(value), t});
    times_.emplace(t, key);
  }

 private:
  using Stamp = std::size_t;
  using ValueIter =
      typename std::unordered_map<K, std::pair<V, Stamp>>::iterator;

  std::unordered_map<K, std::pair<V, Stamp>> values_;
  std::map<Stamp, K> times_;
  std::size_t capacity_;
  Stamp clock_ = 0;

  void touch(ValueIter it) {
    times_.erase(it->second.second);  // steps 1-2: read and drop the old stamp
    const Stamp t = ++clock_;         // step 3: bump the clock
    it->second.second = t;            // step 4: write the new stamp back...
    times_.emplace(t, it->first);     //         ...into both containers
  }
};

One hash map holds values and their last-touched timestamps; one ordered map holds the reverse, keyed on the timestamp for O(log n) LRU retrieval.

Why it is still beatable

log n in a cache hot path is measurable. Every get on a hit calls std::map::erase and std::map::emplace, which are two red-black-tree rebalances — allocations, rotations, pointer chases. For a cache serving millions of ops per second, that shows up in flamegraphs. The next approach swaps the ordered map for a doubly-linked list, which gives us the MRU → LRU ordering for free without any tree rebalancing.

Complexity

  • Time: O(log n) per op. Every public op does O(1) hash-map work plus one or two O(log n) ordered-map operations.
  • Space: O(n) — two containers, each with one entry per live key.

Approach 03

Constant time — std::list + std::unordered_map · Recommended

A doubly-linked list of (key, value) pairs in MRU → LRU order. A hash map keyed on K, valued by an iterator into the list. Both get and put are O(1).

Time O(1) per op · Space O(n) · idiomatic C++ · splice · iterator stability

The insight, stated cleanly

Take the brute-force idea — a list in MRU → LRU order — and ask: what was expensive? The linear scan to find a key. Replace that scan with a hash-map lookup whose value is an iterator into the list. Now every operation is a constant number of pointer hops:

  1. Hash-map the key → list iterator.
  2. splice the node to the front of the list in O(1).
  3. On eviction, read the back of the list, erase from the hash map, pop_back the list.

The piece of magic that makes the whole thing work is that std::list iterators are stable across insertions, erasures of other nodes, and splices. The iterator you stored in the hash map ten operations ago still points at the right node today — even though the chain of prev/next pointers around it has been shuffled many times. Storing iterators in the hash map is safe, and that safety is what enables O(1).
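If that guarantee is new to you, here is a minimal standalone demonstration, separate from the cache itself: a stored std::list iterator survives a splice, an unrelated erase, and an insert:

#include <cassert>
#include <iterator>
#include <list>

int main() {
  std::list<int> l{1, 2, 3};
  auto it = std::next(l.begin());       // points at the node holding 2

  l.splice(l.begin(), l, it);           // move that node to the front: {2, 1, 3}
  assert(*it == 2 && it == l.begin());  // same node, new position

  l.erase(std::next(it));               // erase a neighbor (the 1): {2, 3}
  l.push_front(0);                      // insert elsewhere: {0, 2, 3}
  assert(*it == 2);                     // still valid, still the same node
}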

[Interactive figure: LRU composition. Top row: the hash map, one entry per live key, holding the iterator into the list. Bottom row: the doubly-linked list in MRU → LRU order. Arrows show the map-to-node binding. Stepping through an op, the list reorders while the map entries keep pointing at the right nodes — iterator stability in action.]

Walking through each operation

Let the list be items_ (holding pair<K, V>) and the map be index_ (holding K → list::iterator). Capacity is c. Trace each case:

get(key) — hit. (1) index_.find(key) returns an iterator-into-map. (2) The stored value is a list::iterator pointing at the node for key. (3) items_.splice(items_.begin(), items_, node_it) moves that node to the head — a single pointer relink, no allocation. (4) Return the value. The map entry is never touched; the stored iterator still points at the same node, which is now at the head instead of wherever it was.

get(key) — miss. (1) index_.find(key) returns index_.end(). (2) Return std::nullopt. The list is untouched. This is the cheapest op — one hash lookup.

put(key, value) — existing key. (1) index_.find(key) hits; we have the node iterator. (2) Overwrite the value in place: node_it->second = std::move(value). (3) Splice to head. No eviction, no size change.

put(key, value) — new key, room to spare. (1) index_.find(key) misses. (2) items_.size() < capacity_, so no eviction. (3) items_.emplace_front(key, std::move(value)) constructs a new node at the head. (4) index_.emplace(key, items_.begin()) records the new node’s iterator.

put(key, value) — new key, at capacity. The only case that evicts. Execute in this exact order:

  1. Read the LRU key: items_.back().first. This accesses the existing tail node.
  2. Erase the map entry for that key: index_.erase(/*lru key*/).
  3. items_.pop_back() — this destroys the tail node. The iterator that was in the map is now invalidated, but we already removed it in step 2.
  4. items_.emplace_front(key, std::move(value)) inserts the new node at the head.
  5. index_.emplace(key, items_.begin()) records its iterator.

The ordering of steps 1–3 is the correctness-critical point of the whole design. If you pop_back first, step 1’s access touches freed memory (undefined behavior). If you erase the map entry only after the pop_back, you’ve temporarily broken invariant #2 (a stored iterator points at a destroyed node) — still technically safe in this flow, but a code-review red flag.

The code

constant-time.cpp
#include <cstddef>
#include <list>
#include <optional>
#include <unordered_map>
#include <utility>

template <typename K, typename V>
class LRUCache {
 public:
  explicit LRUCache(std::size_t capacity) : capacity_(capacity) {}

  std::optional<V> get(const K& key) {
    auto it = index_.find(key);
    if (it == index_.end()) return std::nullopt;
    items_.splice(items_.begin(), items_, it->second);  // promote to MRU
    return it->second->second;
  }

  void put(const K& key, V value) {
    if (auto it = index_.find(key); it != index_.end()) {
      it->second->second = std::move(value);  // update in place
      items_.splice(items_.begin(), items_, it->second);
      return;
    }
    if (items_.size() == capacity_) {
      index_.erase(items_.back().first);  // read LRU key, erase from map...
      items_.pop_back();                  // ...then destroy the node
    }
    items_.emplace_front(key, std::move(value));
    index_.emplace(key, items_.begin());
  }

 private:
  using Entry = std::pair<K, V>;
  using ListIt = typename std::list<Entry>::iterator;

  std::list<Entry> items_;               // MRU at front, LRU at back
  std::unordered_map<K, ListIt> index_;  // key → node in the list
  std::size_t capacity_;
};

The production-quality LRU cache: one list, one hash map, splice to promote, pop_back to evict. Every public op runs a constant number of hash and pointer operations.
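A quick smoke test that replays Example 2 from the problem statement against the class above. The main here is just a harness, not part of the cache:

#include <cassert>

int main() {
  LRUCache<char, int> cache(2);         // cap = 2
  cache.put('a', 1);
  cache.put('b', 2);
  assert(cache.get('a') == 1);          // hit; 'a' promoted to MRU
  assert(cache.get('b') == 2);          // hit; 'b' is MRU, 'a' is now LRU
  cache.put('c', 3);                    // at capacity: evicts 'a'
  assert(!cache.get('a').has_value());  // miss, evicted
  assert(cache.get('b') == 2);
  assert(cache.get('c') == 3);
}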

C++ features worth naming

Interviewers reward explicit mention of the language tools you’re leaning on.

  • std::list::splice(pos, self, it) — the O(1) move. Detaches the node at it from the list and re-links it before pos. No allocation, no copy, no iterator invalidation. This single call replaces “erase from old position, insert at front.”
  • Iterator stability. std::list<T>::iterator remains valid as long as the node it points at is alive. Splicing a different node, inserting at the front, or erasing the tail all leave your stored iterator pointing at the same node. std::vector has no such guarantee — any reallocation invalidates every iterator.
  • emplace_front(k, v) — constructs the pair in place, avoiding one copy.
  • std::optional<V> as return type — signals “may be absent” without needing a sentinel value or exceptions. std::nullopt is the idiomatic miss.
  • One map, not two. A common first cut stores values in one hash map and iterators in a second. Collapsing to list<pair<K, V>> plus unordered_map<K, iterator> halves the memory overhead and the lookup count per op.

Correctness, precisely

Invariant. After every operation:

  1. items_ holds exactly the live entries in strict MRU → LRU order.
  2. index_ maps each live key to the std::list iterator of its node in items_.
  3. items_.size() ≤ capacity_.

All three hold after construction (empty). Every public operation preserves them:

  • get hit: splice moves the node but doesn’t change which nodes are live; index_ entries still point at the right nodes; size unchanged.
  • put existing: we update the value in place (iterator stable) then splice. Same argument.
  • put new with room: we emplace_front a new node and insert its iterator into index_. Size grows by one — still ≤ capacity.
  • put new at capacity: we read the tail key, erase from index_, pop_back the list (invalidating only the evicted iterator), then do the “with room” case. The erased index_ entry matched the pop_backed node.

The subtlety is the order of the eviction steps. If you pop_back before reading the key, you read a dangling reference. If you erase the hash-map entry by the wrong key, the invariant breaks silently. Write the three lines in this exact order:

index_.erase(items_.back().first); // read key, then erase from map
items_.pop_back(); // erase node (this iterator only)
// ... now insert the new entry

Edge cases

Scenario | Behavior
get on empty cache | Returns nullopt. index_.find misses; no other work.
get on miss with non-empty cache | Returns nullopt. The list is untouched.
put on existing key | Value is updated in place; node splices to front. No size change, no eviction.
put at capacity 1 | Every new key evicts the previous one. Splice is a no-op (already at front).
Repeated get of the same key | Each call splices; after the first, the node is already at front and splice on begin() is a no-op.

Complexity

  • Time: O(1) expected per op, amortized over the hash table’s load factor. Each operation does a constant number of hash lookups, pointer writes, and a single splice.
  • Space: O(n) for n ≤ capacity live entries. Each entry costs one list node and one hash-map bucket slot with an iterator.

Approach comparison

Approach | get | put | Space | When to reach for it
Brute force (vector) | O(n) | O(n) | O(n) | Baseline in the narration; never in production.
Log-time (hash + ordered) | O(log n) | O(log n) | O(n) | Pedagogical middle step; mention it, don’t code it.
Constant (list + hash) | O(1) | O(1) | O(n) | Default. The production answer.

The constant-time solution is what you ship. State it out loud when you close.

Follow-ups to anticipate

The interviewer asks… | What to do
“What if std::list isn’t available?” | Hand-roll a doubly-linked list of nodes with prev/next pointers. Same algorithm, more code. Use sentinel head/tail nodes to avoid null checks.
“Make it thread-safe.” | Coarsest: wrap every public method in a std::mutex (see the sketch after this table). Finer: shard the cache by key hash to reduce contention. Lock-free LRU exists but is well beyond an interview.
“Implement LFU instead.” | Different problem. LFU evicts by access count, not recency. The canonical O(1) LFU uses a list of lists keyed by frequency — roughly twice the code. Name it, sketch the idea.
“Add TTL (time-to-live) eviction.” | Two eviction triggers now: LRU-on-capacity and TTL-on-time. Add an ordered structure on expiration timestamps (or run a sweep on access). Mention O(log n) for the TTL side alongside O(1) for LRU.
“What if values are large / non-movable?” | Store std::unique_ptr<V> in the list so the list owns one allocation per value. std::move into put.
“Bounded by bytes, not entries.” | Track a running size; on put, evict from the tail until the new entry fits. Still O(1) amortized per op (each entry is evicted at most once).
“Concurrent readers, rare writers.” | std::shared_mutex: a read lock on get works only if you can split get into a pure-read fast path (a miss, or a hit that doesn’t need promotion) and a slow path. Tricky; mention as a design, don’t code.
“LRU-K or ARC?” | Adaptive caches that track longer histories. Out of scope for most interviews but worth naming — signals you’ve read a database textbook.
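For the thread-safety row, the coarse-lock version is short enough to sketch. ThreadSafeLRU is a hypothetical wrapper around the LRUCache class above: correct, but it serializes every call on one mutex and gives up all read concurrency:

#include <cstddef>
#include <mutex>
#include <optional>
#include <utility>

template <typename K, typename V>
class ThreadSafeLRU {
 public:
  explicit ThreadSafeLRU(std::size_t capacity) : cache_(capacity) {}

  std::optional<V> get(const K& key) {
    std::lock_guard<std::mutex> lock(mu_);  // even get mutates the LRU order
    return cache_.get(key);
  }

  void put(const K& key, V value) {
    std::lock_guard<std::mutex> lock(mu_);
    cache_.put(key, std::move(value));
  }

 private:
  std::mutex mu_;
  LRUCache<K, V> cache_;  // the constant-time cache from above
};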

Pattern recognition — compose two primitives via an iterator

The move LRU teaches generalizes to any problem where a single container can’t give you every operation in the target complexity:

  • Arbitrary-element removal from a heap. A binary heap plus a hash map from key to heap position: the map finds any element in O(1), the heap removes it in O(log n). Same move: two DSes, one index bridge.
  • Ordered set with O(1) random sample. std::unordered_set for membership, a std::vector for sampling, cross-linked via index. See LeetCode “Insert Delete GetRandom O(1)” (sketched below).
  • LFU cache. Two layers of lists (keyed on frequency) bridged by a hash map.
  • Median from a stream. Two heaps (max-heap lower half, min-heap upper half). No hash map needed, but the composition reflex is the same: no single heap gives you the median.
  • Disjoint-set / union-find. A parent-pointer array plus a per-root rank array — two arrays composed; one operation uses both.

The reflex you want: when the operations list has two or more entries whose O(1) implementations live in different data structures, compose them. Pick the cheapest bridge — usually an iterator, an index, or a pointer — and store it on the side.
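To see the reflex outside LRU, here is a minimal sketch of the “Insert Delete GetRandom O(1)” composition named above: a vector for O(1) uniform sampling, a hash map from value to vector index as the bridge, and swap-with-last for O(1) removal. (std::rand keeps the sketch short; production code would use <random>.)

#include <cstddef>
#include <cstdlib>
#include <unordered_map>
#include <vector>

class RandomizedSet {
 public:
  bool insert(int val) {
    if (pos_.count(val)) return false;
    pos_[val] = vals_.size();  // bridge: value → its index in the vector
    vals_.push_back(val);
    return true;
  }

  bool remove(int val) {
    auto it = pos_.find(val);
    if (it == pos_.end()) return false;
    std::size_t i = it->second;
    vals_[i] = vals_.back();   // swap-with-last keeps the vector dense
    pos_[vals_[i]] = i;        // re-point the moved element's bridge entry
    vals_.pop_back();
    pos_.erase(val);
    return true;
  }

  int getRandom() const {      // uniform over live elements, O(1)
    return vals_[std::rand() % vals_.size()];
  }

 private:
  std::vector<int> vals_;
  std::unordered_map<int, std::size_t> pos_;
};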

What to say out loud — the delivery script

Ten beats. Rehearse until fluid.

  1. “Let me restate: a cache of capacity n with get(key) and put(key, value), both O(1), with LRU eviction.”
  2. Ask the three clarifying questions: “Does get update order? What’s the return on a miss? Thread-safe?”
  3. “The brute force is a vector in MRU-to-LRU order — O(n) per op. Let me name that and move on.”
  4. “I need O(1) keyed lookup AND O(1) ordered mutation. No single container does both.”
  5. “The composition: a doubly-linked list holds the entries in MRU-to-LRU order. A hash map maps each key to its node — specifically, to a std::list iterator into the list.”
  6. “On get hit, I splice the node to the front in O(1) and return its value. On get miss, I return nullopt.”
  7. “On put with an existing key, I update the value and splice to front. On put with a new key and no room, I evict the tail — read the key first, erase from the map, then pop_back the list — and then insert.”
  8. Write the get — six lines. Trace through a hit and a miss.
  9. Write the put — ten lines. Trace the eviction ordering: “critical: the read of items_.back().first must happen before pop_back.”
  10. “std::list iterators are stable across splice and unrelated erasures. That’s what makes the stored iterator in the hash map safe. I’d ship this.”

Summary

LRU Cache looks like a systems-design question and turns out to be a data-structure composition drill. The move — combine a hash map and a doubly-linked list, bridge them with an iterator, promote on every touch with splice, evict from the tail — produces an O(1) cache in about thirty lines of idiomatic C++. The pattern generalizes: whenever no single container gives you every operation at the target complexity, compose two and cross-link.

The interview signal is not the code length. It is recognizing the composition, stating the iterator-stability invariant aloud, and writing the eviction steps in the one correct order. Candidates who nail all three walk out distinguished.