Design an Online Game Leaderboard
Design a real-time gaming leaderboard system that displays global top rankings, user's current rank with surrounding players, and rankings within their friends list, handling millions of score updates daily.
Asked at:
Meta

A real-time game leaderboard is a ranking service built into an online game that continuously ingests players' scores and lets them see how they stack up globally and among their friends. Think of the leaderboards you see in Fortnite, Call of Duty, or a coding competition platform. Players expect their rank to update within seconds after each match. Interviewers ask this question to see if you can handle ultra-high write throughput, avoid hot shards, and deliver low-latency reads such as the K-neighbors window around a user. They’re testing your ability to combine streaming ingestion, in-memory indexing, sharding, and cache recovery while making the right consistency and correctness trade-offs. Expect to justify how you’ll keep updates near real-time without sacrificing availability or blowing up costs.
Common Functional Requirements
Most candidates end up covering this set of core functionalities
Users should be able to view their current score and global rank in near real-time.
Users should be able to see the K scores immediately above and below them on the global leaderboard.
Users should be able to see the K scores around them among their friends only.
Users should be able to view top-N leaderboards for time windows (e.g., daily, weekly, seasonal) that refresh near real-time.
Common Deep Dives
Common follow-up questions interviewers like to ask for this question
At this scale, a single sorted index becomes a contention and availability bottleneck. You need to shard the ranking structure and still compute accurate ranks quickly enough for user requests.
- Partition by score range or hashed userId and keep per-shard sorted sets; maintain small shard metadata tracking counts and score boundaries so you can compute a global rank as local rank + prefix sums over higher-scoring shards.
- Coalesce writes through a log (e.g., Kafka) compacted by userId to avoid thrashing; debounce so each user updates at most once every X ms while preserving final correctness.
- Prefer monotonic score semantics (max score per epoch) to avoid reorders; if scores can decrease, support atomic remove+add and background reconciliation to prevent ghost entries.
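The local-rank-plus-prefix-sums idea can be sketched in plain Python. This is illustrative only: the `Shard` class with an in-memory sorted list stands in for a real per-shard sorted set (e.g., a Redis ZSET), and score-range partitioning is assumed so that whole shards can be counted via their metadata.

```python
import bisect

class Shard:
    """One score-range partition. In production this would be a Redis
    sorted set or skip list; here it is just a sorted list plus metadata."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi   # score boundaries (shard metadata)
        self.scores = []            # members' scores, sorted ascending

    def add(self, score):
        bisect.insort(self.scores, score)

    def count(self):
        return len(self.scores)

def global_rank(shards, score):
    """1-based global rank (higher score = better) for a score already
    stored in one of the shards."""
    rank = 1
    for shard in shards:
        if shard.lo > score:
            # Prefix sum: every member of a strictly higher shard outranks us.
            rank += shard.count()
        elif shard.lo <= score <= shard.hi:
            # Local rank: members of our own shard with a strictly higher score.
            idx = bisect.bisect_right(shard.scores, score)
            rank += shard.count() - idx
    return rank
```

Because only the user's own shard is scanned and every other shard contributes a single count, rank queries stay cheap even as shard membership grows.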
Neighbor windows are latency-critical and accessed frequently. Global windows are index-friendly, but friend windows require filtering by a user-specific graph without exploding read costs.
- For global windows, use a rank lookup (e.g., ZRANK/ZREVRANK) plus a range query around that index; cache the user’s latest rank briefly to avoid repeated rank scans under churn.
- For friends, fetch the (bounded) friend list, MGET their scores from a low-latency KV store or cache, sort in memory, and slice the K-around window; cache the result for a few seconds to absorb bursts.
- If friend graphs are very large, cap the number of friends considered, sample, or precompute top-N friends periodically; add hedged reads to reduce tail latency.
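The friends path (bounded fetch, batch score lookup, in-memory sort, K-around slice) is small enough to sketch. In this hypothetical version a plain dict stands in for the low-latency KV store behind the MGET step, and the friend list is assumed to be already bounded:

```python
def friends_window(user_id, friends, scores, k):
    """Return up to k entries above and k below user_id among friends only.

    `scores` stands in for a KV cache (the MGET-style batch fetch);
    `friends` is the bounded friend list from the social graph.
    """
    # Batch fetch, skipping friends who have no score yet.
    board = [(uid, scores[uid]) for uid in friends + [user_id] if uid in scores]
    # Sort descending by score; ties broken by user id for determinism.
    board.sort(key=lambda e: (-e[1], e[0]))
    idx = next(i for i, (uid, _) in enumerate(board) if uid == user_id)
    return board[max(0, idx - k): idx + k + 1]
```

Since a typical friend list is a few hundred entries, the sort is negligible next to the batched fetch, which is why this avoids maintaining a per-user sorted index of friends.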
The top ranks and famous players cause concentrated read and write pressure, which can degrade latency for everyone. Isolating hot sets and smoothing write storms are key.
- Keep a separate, small top-M structure updated via a k-way merge of shard heads; replicate it widely for fast reads without hitting shard primaries.
- Debounce and collapse per-user updates; use idempotent upserts keyed by (userId, epoch) and enforce backpressure via the ingestion log to survive spikes.
- Use request coalescing and read-through caches for celebrity profiles; serve slightly stale (seconds-old) top lists to minimize lock contention.
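The k-way merge of shard heads can be sketched with the standard library. The `shard_heads` input is an assumption: each shard primary is presumed to publish its own top slice, sorted descending by score, and the merged result is what gets replicated for reads.

```python
import heapq
import itertools

def top_m(shard_heads, m):
    """Merge per-shard head lists (each sorted descending by score)
    into a single global top-m list of (user_id, score) pairs."""
    # heapq.merge consumes ascending sequences, so merge on negated score;
    # it only pulls as many elements as islice requests.
    merged = heapq.merge(*shard_heads, key=lambda e: -e[1])
    return list(itertools.islice(merged, m))
```

Because `heapq.merge` is lazy, building the top-M touches only about M entries across all shard heads rather than fully sorting their union.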
Clear semantics avoid surprising rank flips and simplify recovery. You also need a plan to rebuild in-memory indexes quickly after incidents or seasonal resets.
- Define deterministic tie-breakers (score desc, then lastUpdate asc, then userId) and time-bucketed leaderboards (daily/weekly/seasonal) with explicit epoch keys.
- Make updates idempotent, keyed by (userId, epoch); reject or reconcile late and out-of-order events; consider watermarking to finalize a season.
- Rebuild caches from the durable store by replaying a compacted stream or loading snapshots; warm critical partitions first (e.g., top shards) and accept eventual consistency for long-tail ranks during the rebuild.
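The tie-breaker and the idempotent, (userId, epoch)-keyed upsert can be sketched together. The dict-backed `store` and the monotonic max-score-per-epoch rule are assumptions for the sketch; a real system would apply the same logic against a durable table.

```python
def sort_key(entry):
    """Deterministic ordering: score desc, then lastUpdate asc, then userId."""
    return (-entry["score"], entry["lastUpdate"], entry["userId"])

def upsert(store, user_id, epoch, score, last_update):
    """Idempotent, monotonic upsert keyed by (userId, epoch).

    Replaying the same event, or a late event with a lower score,
    leaves the stored row unchanged, so the stream can be safely replayed.
    """
    key = (user_id, epoch)
    current = store.get(key)
    if current is None or score > current["score"]:
        store[key] = {"userId": user_id, "score": score, "lastUpdate": last_update}
    return store[key]
```

The same `sort_key` is reused everywhere a leaderboard slice is sorted, so two replicas sorting the same data can never disagree on ordering.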
Relevant Patterns
Relevant patterns that you should know for this question
Players expect near real-time rank changes after each game. You need streaming ingestion, push/pull refresh, and incremental merging to surface updates within seconds without overloading storage or recomputing the full leaderboard.
N-billion daily games create massive write throughput. Write coalescing, idempotent upserts, sharded in-memory indices, and backpressure via a durable log are essential to keep latency low and costs manageable.
Leaderboards concentrate load at the top ranks and around celebrity users. Isolating hot sets, request collapsing, debouncing, and separate top-M replicas prevent lock contention and thundering herds.
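The write-coalescing and debouncing patterns above can be sketched in-process. This is a deliberately simplified stand-in: a real pipeline would get the "latest value per key" behavior from a compacted log (e.g., Kafka) rather than a dict, and the class and method names here are illustrative.

```python
import time

class Coalescer:
    """Collapse bursts of per-user score updates: keep only the latest
    pending value per user (compaction) and flush each user at most
    once per interval (debounce)."""

    def __init__(self, interval_s, clock=time.monotonic):
        self.interval_s = interval_s
        self.clock = clock
        self.pending = {}      # user_id -> latest submitted score
        self.last_flush = {}   # user_id -> time of last flush

    def submit(self, user_id, score):
        self.pending[user_id] = score   # later write wins (compaction)

    def flush(self, apply):
        """Apply pending updates whose debounce interval has elapsed."""
        now = self.clock()
        for user_id in list(self.pending):
            last = self.last_flush.get(user_id, now - self.interval_s)
            if now - last >= self.interval_s:
                apply(user_id, self.pending.pop(user_id))
                self.last_flush[user_id] = now
```

Final correctness is preserved because nothing is dropped, only deferred: the newest pending score for a user always flushes once its interval elapses.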
Relevant Technologies
Relevant technologies that could be used to solve this question
Similar Problems to Practice
Related problems to practice for this question
Both require maintaining frequently changing top lists and fast rank queries under heavy writes. Techniques like sharded sorted sets, incremental merges of shard heads, and approximate-to-exact reconciliation apply directly.
High fan-in, real-time updates with hotspots and the need to show near real-time views to many readers overlap strongly. Streaming ingestion, cache-first reads, and contention control are shared strategies.
Both ingest massive event volumes and need compaction/coalescing of per-entity updates before serving queries. Durable logs, idempotent processing, and eventual consistency trade-offs are architecturally similar.
Red Flags to Avoid
Common mistakes that can sink candidates in an interview
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
Late September, 2025
Meta
Manager
Late September, 2025
Meta
Senior
My solution was based on Redis sorted sets, inspired by Alex Xu's book. The interviewer followed up with: calculating the top 10 among friends, and the 5 above and 5 below within friends, might be expensive; can we do something different here?
Late September, 2025
Meta
Manager