Leetcode 460. LFU Cache
Implement an LFU cache with O(1) average-time get and put operations that evicts the least-frequently-used key when capacity is exceeded, breaking ties by least-recently-used. The cache must update a key's frequency (and recency) on both get and put, and support inserting and updating values.
Asked at:
Amazon
Oracle
DESCRIPTION
Input: ["LFUCache","put","put","get","put","get","get"], [[2],[1,1],[2,2],[1],[3,3],[2],[3]]
Output: [null,null,null,1,null,-1,3]
Explanation: put(1,1) and put(2,2) fill the cache. get(1) returns 1 and increases its frequency to 2. put(3,3) evicts key 2 (freq:1, least frequent). get(2) returns -1 (was evicted). get(3) returns 3 and increases its frequency to 2.
Constraints:
- 1 <= capacity <= 10^4
- 0 <= key, value <= 10^5
- At most 2 * 10^5 calls to get and put combined
- All operations must run in O(1) average time complexity
- get(key) returns -1 if key doesn't exist
- put(key, value) updates value and increments frequency if key exists
Understanding the Problem
Let's trace through an example with capacity=2. Start with empty cache [].
Operation 1: put(1, 'a') → cache is now [(key:1, val:'a', freq:1)]
Operation 2: put(2, 'b') → cache is now [(1,'a',freq:1), (2,'b',freq:1)]. Cache is at capacity!
Operation 3: get(1) → returns 'a' and increases frequency → [(1,'a',freq:2), (2,'b',freq:1)]
Operation 4: put(3, 'c') → Cache is full! Must evict the least frequently used. Key 2 has freq:1 (lowest), so evict it → [(1,'a',freq:2), (3,'c',freq:1)]
Operation 5: put(1, 'd') → Key 1 already exists, update its value and increase frequency → [(1,'d',freq:3), (3,'c',freq:1)]
Key constraint: Both get and put operations must run in O(1) average time.
Edge cases: What if multiple items have the same lowest frequency? (tie-break by least recently used - evict the one that was accessed longest ago). What if we get() a non-existent key? (return -1). What if we put() an existing key? (update value AND increment frequency).
Building Intuition
We need to track THREE pieces of information for each cache entry: the value itself, how many times it's been accessed (frequency), AND when it was last used (for tie-breaking).
The challenge is: when evicting, we need to instantly find the item with minimum frequency, and if there's a tie, the least recently used among them.
A naive approach of scanning through all items to find the minimum frequency would make every operation O(n), violating the O(1) requirement.
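To make the semantics concrete before optimizing, here is a minimal sketch of that naive approach (class and field names are illustrative). It stores (value, frequency, last-used tick) per key and scans every entry on eviction, which is exactly the O(n) cost we want to eliminate:

```python
from typing import Any

class NaiveLFUCache:
    """Naive LFU: correct semantics, but eviction scans all entries -- O(n) per put."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.tick = 0                # global clock for recency tie-breaks
        self.store = {}              # key -> [value, freq, last_used]

    def get(self, key: int) -> Any:
        if key not in self.store:
            return -1
        self.tick += 1
        entry = self.store[key]
        entry[1] += 1                # bump frequency
        entry[2] = self.tick         # refresh recency
        return entry[0]

    def put(self, key: int, value: Any) -> None:
        self.tick += 1
        if key in self.store:        # existing key: update value, bump frequency
            entry = self.store[key]
            entry[0], entry[1], entry[2] = value, entry[1] + 1, self.tick
            return
        if len(self.store) >= self.capacity:
            # O(n) scan: lowest frequency wins; ties broken by oldest access
            victim = min(self.store, key=lambda k: (self.store[k][1], self.store[k][2]))
            del self.store[victim]
        self.store[key] = [value, 1, self.tick]
```

Running the trace above against this sketch: after put(1,'a'), put(2,'b'), get(1), put(3,'c'), key 2 is evicted because it has the lowest frequency, and a later put(1,'d') updates the value in place without evicting.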
But if we organize items by their frequency level, we can instantly access the least-frequently-used group. Within each frequency level, we need to track access order for tie-breaking.
Think of organizing books in a library by popularity:
- Books accessed 1 time go on shelf 1 (ordered by when they were last touched)
- Books accessed 2 times go on shelf 2 (ordered by when they were last touched)
- Books accessed 3 times go on shelf 3, etc.
When removing books, always look at the lowest-numbered shelf first. Within that shelf, books are ordered by time - remove the oldest one (front of the line).
When a book is accessed, it moves up one shelf and goes to the back of the line on that new shelf. This 'grouping by frequency + ordering by recency within each group' structure makes finding the eviction candidate instant.
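The shelf analogy maps directly onto a dict of ordered buckets. A minimal sketch, assuming Python's OrderedDict as each shelf's "line" (insertion order = recency order; the book names are illustrative):

```python
from collections import OrderedDict, defaultdict

# shelves[f] holds every key accessed exactly f times,
# ordered oldest-first (front of the line = eviction candidate)
shelves = defaultdict(OrderedDict)

shelves[1]["book_a"] = True   # accessed once
shelves[1]["book_b"] = True   # accessed once, more recently
shelves[2]["book_c"] = True   # accessed twice

# Accessing book_a: move it up one shelf, to the back of the new line
shelves[1].pop("book_a")
shelves[2]["book_a"] = True

# Evicting: lowest-numbered non-empty shelf, oldest book first
lowest = min(f for f in shelves if shelves[f])
victim, _ = shelves[lowest].popitem(last=False)
print(victim)   # book_b -- the only book left on shelf 1
```

Every step here is a constant-time dict or OrderedDict operation; the only remaining O(n) cost is the `min(...)` scan for the lowest shelf, which the optimal solution removes by tracking the minimum frequency explicitly.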
Optimal Solution
The optimal approach uses a combination of hash maps and doubly-linked lists to achieve O(1) operations. Maintain a hash map from keys to cache nodes (containing value and frequency), another hash map from frequency levels to doubly-linked lists of nodes at that frequency, and track the minimum frequency level. When accessing a key, move it from its current frequency list to the next frequency list. When evicting, remove the least-recently-used node from the minimum frequency list. This structure allows instant lookups, updates, and evictions.
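A sketch of this structure in Python, using OrderedDict buckets in place of hand-rolled doubly-linked lists (CPython's OrderedDict is itself backed by a doubly-linked list, so removal and popitem stay O(1)); helper names like `_touch` are illustrative:

```python
from collections import defaultdict, OrderedDict

class LFUCache:
    """O(1) LFU cache: key maps plus frequency -> recency-ordered buckets."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.key_to_val = {}
        self.key_to_freq = {}
        self.buckets = defaultdict(OrderedDict)  # freq -> keys, oldest first
        self.min_freq = 0

    def _touch(self, key: int) -> None:
        """Promote key from its current frequency bucket to the next one."""
        freq = self.key_to_freq[key]
        del self.buckets[freq][key]
        # If the min-freq bucket just emptied, the minimum moves up by one
        if not self.buckets[freq] and self.min_freq == freq:
            self.min_freq += 1
        self.key_to_freq[key] = freq + 1
        self.buckets[freq + 1][key] = None       # back of the line at new freq

    def get(self, key: int) -> int:
        if key not in self.key_to_val:
            return -1
        self._touch(key)
        return self.key_to_val[key]

    def put(self, key: int, value: int) -> None:
        if key in self.key_to_val:               # update value, bump frequency
            self.key_to_val[key] = value
            self._touch(key)
            return
        if len(self.key_to_val) >= self.capacity:
            # Evict the LRU key from the min-frequency bucket (front of line)
            victim, _ = self.buckets[self.min_freq].popitem(last=False)
            del self.key_to_val[victim]
            del self.key_to_freq[victim]
        self.key_to_val[key] = value
        self.key_to_freq[key] = 1
        self.buckets[1][key] = None
        self.min_freq = 1                        # new keys always start at freq 1
```

Note the two minFreq rules from the analysis baked in: it only increments inside `_touch` when a promotion empties the current minimum bucket, and it resets to 1 only on fresh insertions, never on promotions.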
Complexity Analysis
Time Complexity: O(1). All operations (get, put, eviction) use hash map lookups and doubly-linked list operations, which are O(1). Moving a node between frequency lists involves constant-time pointer updates.
Space Complexity: O(capacity). We store at most 'capacity' cache entries, each with associated metadata (frequency, pointers). The frequency-to-list mapping and key-to-node mapping both scale with the number of cached items.
What We've Learned
- Multi-level hash map architecture: LFU cache requires nested hash maps - one mapping keys to values/frequencies, and another mapping each frequency level to a doubly-linked list of keys. This layered structure enables O(1) access while maintaining frequency ordering, a pattern applicable whenever you need fast lookups across multiple dimensions.
- Doubly-linked list for LRU within frequency: Within each frequency bucket, use a doubly-linked list to track recency order, allowing O(1) removal of the least-recently-used item when evicting. This combines LFU and LRU policies elegantly - the frequency determines which bucket, the list position determines which item within that bucket.
- Minimum frequency tracking optimization: Maintain a minFreq variable that only increments when the last item at that frequency is removed or promoted. This avoids scanning all frequency levels during eviction - you always know exactly which frequency bucket contains eviction candidates, ensuring true O(1) eviction time.
- Frequency promotion pitfall: When incrementing a key's frequency on access, you must remove it from the old frequency list AND add to the new frequency list in the correct order. Forgetting to update both structures or updating minFreq incorrectly (it should only reset to 1 on new insertions, not on promotions) breaks the eviction logic.
- Capacity-based eviction timing: Eviction happens before insertion when at capacity, not after. Check capacity first, evict the LRU item from the minFreq bucket, then insert the new item at frequency 1. This ensures you never exceed capacity and correctly reset minFreq to 1 for new insertions.
- Real-world caching systems: This pattern directly applies to CDN edge caching, database query caches, and browser resource management where you want to keep frequently-accessed items while still evicting stale popular items. The dual LFU+LRU policy balances long-term popularity with recent relevance better than either policy alone.
Related Concepts and Problems to Practice
LFU cache requires efficient tracking of frequencies and eviction of minimum frequency items. Heaps are a fundamental data structure for maintaining ordered elements with O(log n) operations, which is closely related to the priority-based eviction strategy needed in cache implementations.
This problem teaches how to efficiently maintain and retrieve elements based on priority ordering, which is conceptually similar to tracking and evicting the least frequently used items in an LFU cache. Both require maintaining ordered access to elements with specific ranking criteria.
Like LFU cache, this is a data structure design problem requiring implementation of multiple methods with specific time complexity requirements. It teaches how to design efficient data structures with multiple operations, handle state management, and optimize for O(1) or near-constant time operations.
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
- Late October, 2025 (Senior): Implement both LRU and LFU caches with thread safety consideration.
- Late August, 2025 (Oracle, Senior)
- Late April, 2025 (Oracle, Principal)