LeetCode 146. LRU Cache
Asked at: Oracle, Meta, Amazon
DESCRIPTION
Implement an LRUCache that supports O(1) average-time get(key) returning the value or -1 and put(key, value) which inserts or updates a key and, if capacity is exceeded, evicts the least-recently-used key. The core challenge is maintaining key-value storage together with recency ordering to enable constant-time access, updates, and eviction.
Input:
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output:
[null, null, null, 1, null, -1, null, -1, 3, 4]
Explanation: put(1,1) and put(2,2) fill the cache. get(1) returns 1 and marks key 1 as recently used. put(3,3) evicts key 2 (the LRU entry). get(2) returns -1 (evicted). put(4,4) evicts key 1 (now the LRU entry). get(1) returns -1. get(3) and get(4) return 3 and 4.
Constraints:
- 1 <= capacity <= 3000
- 0 <= key <= 10^4
- 0 <= value <= 10^5
- At most 2 * 10^5 calls to get and put
- Both get and put must run in O(1) average time complexity
Understanding the Problem
Let's understand what we're being asked to build. We need to implement an LRU (Least Recently Used) Cache - a data structure that stores key-value pairs with a fixed capacity.
Tracing through an example with capacity=2:
- put(1, 'a') → cache: {1:'a'}
- put(2, 'b') → cache: {1:'a', 2:'b'} (at capacity!)
- get(1) → returns 'a' and marks key 1 as recently used
- put(3, 'c') → cache is full! Must evict least recently used key. Key 2 was accessed longest ago, so evict it → cache: {1:'a', 3:'c'}
Critical constraint: both get() and put() must run in O(1) average time. This means we can't scan through all items to find the least recently used!
Edge cases: What if we get() a non-existent key? (return -1). What if we put() an existing key? (update its value AND mark it as recently used). What if capacity is 1?
Building Intuition
We need to track TWO things simultaneously: (1) key-value mappings for O(1) lookup, and (2) access order to know which item is least recently used.
The challenge is: when we access an item (via get or put), we need to instantly move it to the 'most recent' position. When evicting, we need to instantly remove the 'least recent' item.
A hash map alone gives us O(1) key-value lookup, but doesn't track order. An array tracks order, but inserting/removing from the middle is O(n).
We need a data structure that supports O(1) insertion, deletion, AND reordering. This points us toward a structure where we can move items to the front/back instantly.
Think of a playlist where songs move to the top when played:
- When you play a song (access), it jumps to position #1
- When you add a new song and the playlist is full, the song at the bottom (least recently played) gets removed
- You need to find any song by name instantly (hash map), but also know which song is at the bottom (ordering)
This dual requirement - fast lookup by key AND fast reordering - suggests combining two data structures: one for O(1) key access, another for O(1) position changes.
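Before building the structure by hand, it's worth noting that Python's collections.OrderedDict already couples a hash table with a doubly linked list, so the dual-structure idea fits in a few lines. A minimal sketch (the class name LRUCacheSketch is illustrative; interviews usually expect the hand-rolled version shown in the Optimal Solution section):

```python
from collections import OrderedDict

class LRUCacheSketch:
    """Illustrative only: OrderedDict pairs a hash map with a
    doubly linked list, which is exactly the combination we need."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()      # oldest entry sits at the left end

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)     # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key) # updating a key also refreshes recency
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
```

Both move_to_end and popitem(last=False) are O(1), so this sketch already meets the complexity requirement; the hand-rolled version below makes the mechanics explicit.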
Common Mistakes
- Treating get() as a passive read: every successful get() must also move the key to the most-recent position, or the eviction order will be wrong.
- Forgetting that put() on an existing key both updates the value and refreshes its recency.
- Mishandling the capacity check: evicting on updates, or applying the check inconsistently, breaks the capacity = 1 case.
- Sloppy pointer updates when splicing doubly linked list nodes; sentinel head/tail nodes remove the null checks that cause these bugs.
Optimal Solution
The optimal approach combines a hash map with a doubly linked list. The hash map provides O(1) key lookup, mapping each key to its corresponding node in the linked list. The doubly linked list maintains access order, with the most recently used item at the head and least recently used at the tail. When accessing an item (get or put), we move its node to the head in O(1) time using the doubly linked structure. When evicting, we remove the tail node in O(1) time. This dual-structure design achieves O(1) for all operations.
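Here is one way this design can be realized in Python. The sentinel head/tail nodes and the helper names _remove and _add_to_front are implementation choices of this sketch, not mandated by the problem:

```python
class Node:
    """Doubly linked list node holding one cache entry."""
    def __init__(self, key=0, value=0):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                  # key -> Node, for O(1) lookup
        self.head = Node()             # sentinel: most recent side
        self.tail = Node()             # sentinel: least recent side
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        """Unlink a node by reconnecting its neighbors."""
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_to_front(self, node):
        """Insert a node right after the head sentinel (most recent)."""
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._remove(node)             # reads are not passive:
        self._add_to_front(node)       # refresh recency on every get
        return node.value

    def put(self, key, value):
        if key in self.map:            # update: refresh value and recency
            node = self.map[key]
            node.value = value
            self._remove(node)
            self._add_to_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev       # least recently used sits before tail
            self._remove(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._add_to_front(node)
```

Because the most recently used node always sits just after head, eviction is always tail.prev: no scanning, and the sentinels guarantee every real node has non-null prev/next pointers.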
Complexity Analysis
Time Complexity: O(1). Both get and put involve a hash map lookup (O(1)) and linked list node insertion/deletion/movement (O(1) with a doubly linked list). No iteration through the cache is needed.
Space Complexity: O(capacity). We store at most 'capacity' key-value pairs in the hash map and the same number of nodes in the doubly linked list; space grows linearly with capacity.
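As a quick sanity check, the example sequence from the problem statement can be replayed against the LRUCache sketch above (assuming it is defined in the same file):

```python
cache = LRUCache(2)
cache.put(1, 1)
cache.put(2, 2)
assert cache.get(1) == 1      # key 1 becomes most recently used
cache.put(3, 3)               # evicts key 2, the LRU entry
assert cache.get(2) == -1
cache.put(4, 4)               # evicts key 1, now the LRU entry
assert cache.get(1) == -1
assert cache.get(3) == 3
assert cache.get(4) == 4
print("all checks passed")
```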
What We've Learned
- HashMap + Doubly Linked List hybrid: When you need O(1) access AND O(1) ordering operations (move to front/back, remove), combine a hash map for lookups with a doubly linked list for order tracking. The hash map stores pointers to list nodes, enabling instant node location and manipulation.
- Sentinel nodes eliminate edge cases: Use dummy head and tail nodes in your doubly linked list to avoid null checks when adding/removing nodes at boundaries. This means every real node always has non-null prev/next pointers, simplifying insertion and deletion logic dramatically.
- Access equals update pattern: In LRU cache, every get() operation must update recency by moving the accessed node to the most-recent position (typically the head or tail). Forgetting this is the most common bug - reads aren't passive in cache implementations.
- Capacity check timing matters: for an update (key already exists), refresh the value and recency; no eviction is needed. For a new key at capacity, either evict the LRU item and then insert, or insert first and then trim once the size exceeds capacity - both work as long as the check is applied consistently and only to new keys, and both handle the capacity = 1 edge case correctly.
- Bidirectional pointer maintenance: When manipulating doubly linked list nodes, always update four pointers in the correct order: the node's prev/next AND its neighbors' pointers. A common pattern is to extract a node (reconnect its neighbors), then insert it elsewhere (update new neighbors and the node itself).
- Cache eviction policy abstraction: This LRU pattern extends to LFU (Least Frequently Used), MRU (Most Recently Used), and TTL caches. The core insight - combining a hash map for O(1) lookup with an auxiliary structure (list, heap, or multiple lists) for O(1) policy enforcement - applies broadly to cache replacement algorithms; the sketch after this list shows how small the change can be.
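To make the policy-abstraction point concrete, here is a minimal sketch (the class name MRUCache is my own, reusing the OrderedDict idea from the intuition section): switching from LRU to MRU eviction only changes which end of the ordering is popped.

```python
from collections import OrderedDict

class MRUCache:
    """Same skeleton as the LRU sketch, but evicts the MOST recently used."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()          # most recent entry at the right end

    def get(self, key):
        if key not in self.data:
            return -1
        self.data.move_to_end(key)         # accesses still refresh recency
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=True)   # evict MRU instead of LRU
        self.data[key] = value
```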
Related Concepts and Problems to Practice
The Trie data structure lesson is highly relevant because implementing a Trie requires similar design patterns to LRU Cache: maintaining a custom data structure with specific insertion, lookup, and traversal operations while optimizing for time complexity. Both require careful consideration of pointer management and state tracking.
A related medium-difficulty problem involves implementing a data structure from scratch with multiple methods (insert, search, startsWith), similar to implementing LRU Cache with get and put operations. Both require managing internal state efficiently and ensuring all operations meet specific time complexity requirements.
Understanding linked list fundamentals is essential for LRU Cache implementation, as the optimal solution uses a doubly linked list combined with a hash map to achieve O(1) operations. This lesson covers the pointer manipulation and node management concepts critical to building the LRU eviction mechanism.
Test Your Understanding
Why is a hash map combined with a doubly linked list the right data structure for this problem?
It provides the optimal access pattern
It's the only data structure that works
It's the easiest to implement
It uses the least memory
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
- Early November, 2025: Meta (Senior)
- Late October, 2025: Microsoft (Mid-level)
- Mid October, 2025: Oracle (Staff)