
Leetcode 146. LRU Cache


Asked at:

Netflix

Amazon

Skild.ai

Apple

DESCRIPTION

Implement an LRUCache that supports two O(1) average-time operations: get(key), which returns the key's value or -1 if it is absent, and put(key, value), which inserts or updates a key and, if the capacity is exceeded, evicts the least recently used key. The core challenge is maintaining key-value storage together with recency ordering so that access, update, and eviction all run in constant time.

Input:

capacity = 2, operations = [
  ["put", 1, 1],
  ["put", 2, 2],
  ["get", 1],
  ["put", 3, 3],
  ["get", 2],
  ["put", 4, 4],
  ["get", 1],
  ["get", 3],
  ["get", 4]
]

Output:

[null, null, 1, null, -1, null, -1, 3, 4]


Explanation: put(1,1) and put(2,2) fill cache. get(1) returns 1 and marks it recent. put(3,3) evicts key 2 (LRU). get(2) returns -1 (evicted). put(4,4) evicts key 1 (LRU). get(1) returns -1. get(3) and get(4) return their values.

Constraints:

  • 1 <= capacity <= 3000
  • 0 <= key <= 10^4
  • 0 <= value <= 10^5
  • At most 2 * 10^5 calls to get and put
  • Both get and put must run in O(1) average time complexity

Understanding the Problem

Let's understand what we're being asked to build. We need to implement an LRU (Least Recently Used) Cache - a data structure that stores key-value pairs with a fixed capacity.

Tracing through an example with capacity=2:

  • put(1, 'a') → cache: {1:'a'}
  • put(2, 'b') → cache: {1:'a', 2:'b'} (at capacity!)
  • get(1) → returns 'a' and marks key 1 as recently used
  • put(3, 'c') → cache is full! Must evict least recently used key. Key 2 was accessed longest ago, so evict it → cache: {1:'a', 3:'c'}

Critical constraints: Both get() and put() must run in O(1) average time. This means we can't scan through all items to find the least recently used!

Edge cases: What if we get() a non-existent key? (return -1). What if we put() an existing key? (update its value AND mark it as recently used). What if capacity is 1?
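These edge cases can be pinned down with a deliberately naive reference model. It is O(n) per operation (the list scan), so it would not meet the constraints, but it makes the expected behavior concrete; the class name is illustrative:

```python
class NaiveLRU:
    """Naive behavioral spec for an LRU cache: O(n) per op, for illustration only."""

    def __init__(self, capacity):
        self.cap = capacity
        self.items = {}   # key -> value
        self.order = []   # keys, least recently used first

    def _touch(self, key):
        # O(n) scan: this is exactly the cost the real solution must avoid
        self.order.remove(key)
        self.order.append(key)

    def get(self, key):
        if key not in self.items:
            return -1           # missing key
        self._touch(key)        # reading refreshes recency
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items[key] = value
            self._touch(key)    # updating also refreshes recency
            return
        if len(self.items) == self.cap:
            lru = self.order.pop(0)   # evict least recently used
            del self.items[lru]
        self.items[key] = value
        self.order.append(key)

c = NaiveLRU(1)        # capacity-1 edge case
c.put(1, 10)
c.put(2, 20)           # immediately evicts key 1
print(c.get(1))        # -> -1
print(c.get(2))        # -> 20
```

Even with capacity 1 the rules hold: a put on a full cache evicts first, and an update of an existing key must never trigger an eviction.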

Building Intuition

We need to track TWO things simultaneously: (1) key-value mappings for O(1) lookup, and (2) access order to know which item is least recently used.

The challenge is: when we access an item (via get or put), we need to instantly move it to the 'most recent' position. When evicting, we need to instantly remove the 'least recent' item.

A hash map alone gives us O(1) key-value lookup, but doesn't track order. An array tracks order, but inserting/removing from the middle is O(n).

We need a data structure that supports O(1) insertion, deletion, AND reordering. This points us toward a structure where we can move items to the front/back instantly.

Think of a playlist where songs move to the top when played:

  • When you play a song (access), it jumps to position #1
  • When you add a new song and the playlist is full, the song at the bottom (least recently played) gets removed
  • You need to find any song by name instantly (hash map), but also know which song is at the bottom (ordering)

This dual requirement - fast lookup by key AND fast reordering - suggests combining two data structures: one for O(1) key access, another for O(1) position changes.
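In Python, the standard library already fuses these two structures: collections.OrderedDict is backed by a hash table plus a doubly linked list of keys, with move_to_end and popitem(last=False) as O(1) primitives. A sketch of the whole cache on top of it (the class name is illustrative):

```python
from collections import OrderedDict

class LRUCacheOD:
    """LRU cache sketch built on OrderedDict (hash map + doubly linked list in one)."""

    def __init__(self, capacity):
        self.cap = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)   # O(1): mark key most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)   # updating also refreshes recency
        self.data[key] = value
        if len(self.data) > self.cap:
            self.data.popitem(last=False)   # O(1): evict least recently used
```

This is handy for quick validation, but interviewers usually want the underlying mechanism built by hand, which is what the optimal solution below does.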

Optimal Solution

The optimal approach combines a hash map with a doubly linked list. The hash map provides O(1) key lookup, mapping each key to its corresponding node in the linked list. The doubly linked list maintains access order, with the most recently used item at the head and least recently used at the tail. When accessing an item (get or put), we move its node to the head in O(1) time using the doubly linked structure. When evicting, we remove the tail node in O(1) time. This dual-structure design achieves O(1) for all operations.

class Node:
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


def lru_cache_demo(capacity, operations):
    """
    Simulates LRU Cache operations using hash map + doubly linked list.
    Operations format: [['put', key, value], ['get', key], ...]
    Returns list of results (get returns value or -1, put returns None).
    """
    # Initialize cache structures
    cache = {}      # key -> Node mapping
    head = Node()   # Dummy head
    tail = Node()   # Dummy tail
    head.next = tail
    tail.prev = head
    cap = capacity
    results = []

    def remove_node(node):
        # Remove node from doubly linked list
        prev_node = node.prev
        next_node = node.next
        prev_node.next = next_node
        next_node.prev = prev_node

    def add_to_head(node):
        # Add node right after head (most recently used position)
        node.prev = head
        node.next = head.next
        head.next.prev = node
        head.next = node

    def move_to_head(node):
        # Move existing node to head (mark as recently used)
        remove_node(node)
        add_to_head(node)

    def remove_tail():
        # Remove least recently used node (before tail)
        lru_node = tail.prev
        remove_node(lru_node)
        return lru_node

    # Process each operation
    for op in operations:
        if op[0] == 'get':
            key = op[1]
            # Check if key exists in cache
            if key in cache:
                node = cache[key]
                # Move to head (mark as recently used)
                move_to_head(node)
                results.append(node.value)
            else:
                # Key not found
                results.append(-1)
        elif op[0] == 'put':
            key = op[1]
            value = op[2]
            # Check if key already exists
            if key in cache:
                # Update existing node
                node = cache[key]
                node.value = value
                # Move to head (mark as recently used)
                move_to_head(node)
            else:
                # Create new node
                new_node = Node(key, value)
                cache[key] = new_node
                # Add to head
                add_to_head(new_node)
                # Check capacity
                if len(cache) > cap:
                    # Evict least recently used
                    lru = remove_tail()
                    del cache[lru.key]
            results.append(None)
    return results


Complexity Analysis

Time Complexity: O(1). Both get and put perform a hash map lookup (O(1)) plus a linked-list node insertion, deletion, or move (O(1) with a doubly linked list). No iteration through the cache is ever needed.

Space Complexity: O(capacity). We store at most capacity key-value pairs in the hash map and the same number of nodes in the doubly linked list, so space grows linearly with capacity.

What We've Learned

  • HashMap + Doubly Linked List hybrid: When you need O(1) access AND O(1) ordering operations (move to front/back, remove), combine a hash map for lookups with a doubly linked list for order tracking. The hash map stores pointers to list nodes, enabling instant node location and manipulation.
  • Sentinel nodes eliminate edge cases: Use dummy head and tail nodes in your doubly linked list to avoid null checks when adding/removing nodes at boundaries. This means every real node always has non-null prev/next pointers, simplifying insertion and deletion logic dramatically.
  • Access equals update pattern: In LRU cache, every get() operation must update recency by moving the accessed node to the most-recent position (typically the head or tail). Forgetting this is the most common bug - reads aren't passive in cache implementations.
  • Capacity check timing matters: For updates (key already exists), no eviction is ever needed. For new insertions, pick one order and apply it consistently: the implementation above adds the node first and evicts once the size exceeds capacity; evicting the LRU item first and then adding also works. Handled consistently, either order gets the capacity=1 edge case right.
  • Bidirectional pointer maintenance: When manipulating doubly linked list nodes, always update four pointers in the correct order: the node's prev/next AND its neighbors' pointers. A common pattern is to extract a node (reconnect its neighbors), then insert it elsewhere (update new neighbors and the node itself).
  • Cache eviction policy abstraction: This LRU pattern extends to LFU (Least Frequently Used), MRU (Most Recently Used), and TTL caches. The core insight - combining hash map for O(1) lookup with an auxiliary structure (list, heap, or multiple lists) for O(1) policy enforcement - applies broadly to cache replacement algorithms.
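The sentinel-node point is easy to see in isolation: with dummy head and tail nodes, splicing a node in or out never branches on None, even at the list boundaries. A minimal sketch (names are illustrative):

```python
class Node:
    def __init__(self, key=0, value=0):
        self.key, self.value = key, value
        self.prev = self.next = None

# Sentinel (dummy) head and tail: every real node always has live
# neighbors, so insertion and removal need no None checks.
head, tail = Node(), Node()
head.next, tail.prev = tail, head

def add_to_head(node):
    # Splice node in right after head (most recently used position)
    node.prev, node.next = head, head.next
    head.next.prev = node
    head.next = node

def remove(node):
    # Splice node out by reconnecting its two neighbors
    node.prev.next = node.next
    node.next.prev = node.prev

a, b = Node(1, 'a'), Node(2, 'b')
add_to_head(a)
add_to_head(b)            # list: head <-> b <-> a <-> tail
remove(a)                 # no special-casing, even at the boundary
print(tail.prev.key)      # -> 2
```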

Related Concepts and Problems to Practice

Trie

The Trie data structure lesson is highly relevant because implementing a Trie requires similar design patterns to LRU Cache: maintaining a custom data structure with specific insertion, lookup, and traversal operations while optimizing for time complexity. Both require careful consideration of pointer management and state tracking.

This problem involves implementing a data structure from scratch with multiple methods (insert, search, startsWith), similar to implementing LRU Cache with get and put operations. Both require managing internal state efficiently and ensuring all operations meet specific time complexity requirements.

Linked List

Understanding linked list fundamentals is essential for LRU Cache implementation, as the optimal solution uses a doubly linked list combined with a hash map to achieve O(1) operations. This lesson covers the pointer manipulation and node management concepts critical to building the LRU eviction mechanism.


Question Timeline

See when this question was last asked and where, including any notes left by other candidates.

Late February, 2026

Netflix

Senior

Mid February, 2026

Amazon

Mid-level

I wasn’t explicitly asked to design an LRU cache. Instead, the problem was framed as identifying the top N source IP addresses from a stream of network packets, with the option to trade accuracy for improvements in time and space complexity. My initial approach was to use a heap to track the top N IP addresses as the stream was processed. However, the interviewer asked whether it was possible to achieve better performance than a heap by allowing some loss of accuracy. I initially viewed it purely as a streaming and priority-queue problem and did not recognize it as an LRU-related question until the interviewer suggested thinking about it in terms of an LRU cache.

Mid February, 2026

Skild.ai

Senior
