Numbers to Know
Understand what modern hardware can actually handle in 2025.
Intro
Our industry moves fast. The hardware we build systems on evolves constantly, which means even recent textbooks can become outdated quickly. A book published just a few years ago might be teaching patterns that still make sense, but quoting numbers that are off by orders of magnitude.
In a system design interview, one of the biggest giveaways that a candidate has book knowledge but no hands-on experience is reliance on outdated hardware constraints. They do scale calculations using numbers from 2015 (or even 2020!) that dramatically underestimate what modern systems can handle. You'll hear concerns about database sizes, memory limits, and storage costs that made sense then, but would lead to significantly over-engineered systems today.
This isn't the candidate's fault – they're doing the right thing by studying. But understanding modern hardware capabilities is crucial for making good system design decisions. When to shard a database, whether to cache aggressively, how to handle large objects – these choices all depend on having an accurate sense of what today's hardware can handle.
Let's look at the numbers that actually matter in 2025.
Modern Hardware Limits
Modern servers pack serious computing power. An AWS M6i.32xlarge comes with 512 GiB of memory and 128 vCPUs for general workloads. Memory-optimized instances go further: the X1e.32xlarge provides 4 TB of RAM, while the U-24tb1.metal reaches 24 TB of RAM. This shift matters because many applications that once required distributed systems can now run on a single machine.
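To make that concrete, here's a rough back-of-envelope sketch in Python. The instance RAM figures come from the paragraph above; the record count and record size are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope: does a working set fit in RAM on a single machine?
GIB = 1024 ** 3
TIB = 1024 ** 4

instances = {
    "m6i.32xlarge": 512 * GIB,   # general purpose, 128 vCPUs
    "x1e.32xlarge": 4 * TIB,     # memory optimized
    "u-24tb1.metal": 24 * TIB,   # high-memory bare metal
}

num_records = 1_000_000_000   # e.g. 1B user profiles (assumption)
bytes_per_record = 1_000      # ~1 KB per record (assumption)
working_set = num_records * bytes_per_record  # ~1 TB

for name, ram in instances.items():
    fits = "fits" if working_set < ram else "does NOT fit"
    print(f"{name}: {working_set / ram:.1%} of RAM -> {fits}")
```

Under these assumptions, roughly a terabyte of data overflows the general-purpose box but sits comfortably in memory on the memory-optimized instances, so the dataset alone doesn't force a distributed design.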
Storage capacity has seen similar growth. Modern instances like AWS's i3en.24xlarge provide 60 TB of local SSD storage. If you need more, the D3en.12xlarge offers 336 TB of HDD storage for data-heavy workloads. Object storage like S3 is effectively unlimited, handling petabyte-scale deployments as a standard practice. The days of storage being a primary constraint are largely behind us.
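A similar sketch works for storage. The per-instance capacities are from the paragraph above; the event rate and event size below are illustrative assumptions.

```python
# Back-of-envelope: how long until local storage fills up?
TB = 10 ** 12

local_ssd = 60 * TB    # i3en.24xlarge NVMe SSD
local_hdd = 336 * TB   # d3en.12xlarge HDD

events_per_sec = 10_000    # assumption
bytes_per_event = 1_000    # ~1 KB per event (assumption)
bytes_per_day = events_per_sec * bytes_per_event * 86_400  # ~0.86 TB/day

print(f"Daily ingest: {bytes_per_day / TB:.2f} TB")
print(f"60 TB of SSD lasts ~{local_ssd / bytes_per_day:.0f} days")
print(f"336 TB of HDD lasts ~{local_hdd / bytes_per_day:.0f} days")
```

Even at 10k writes per second, a single node holds months of data before you'd need to tier older records out to object storage like S3.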
Network capabilities haven't stagnated either. Within a datacenter, 10 Gbps is standard, with high-performance instances supporting up to 20 Gbps. Cross-region bandwidth typically ranges from 100 Mbps to 1 Gbps. Latency remains predictable: 1-2ms within a region, and 50-150ms cross-region. This consistent performance allows for reliable distributed system design.
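These throughput and latency figures translate directly into request budgets. The sketch below uses the 10 Gbps and 2ms/100ms numbers from this paragraph; the payload size and hop counts are illustrative assumptions.

```python
# Back-of-envelope: bulk transfer time and request latency budget.
def transfer_seconds(bytes_to_move: float, gbps: float) -> float:
    """Time to push a payload over a link of the given throughput."""
    return (bytes_to_move * 8) / (gbps * 10 ** 9)

GB = 10 ** 9

# Moving 100 GB between services inside a datacenter at 10 Gbps:
print(f"{transfer_seconds(100 * GB, 10):.0f} s")  # ~80 s

# A request making 3 sequential intra-region hops (~2 ms each)
# plus one cross-region call (~100 ms):
print(f"{3 * 2 + 100} ms")  # ~106 ms, dominated by the cross-region hop
```

The takeaway: intra-region chatter is cheap and predictable; it's the cross-region hops that deserve attention in your latency budget.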
These aren't just incremental improvements – they represent a step change in what's possible. When textbooks talk about splitting databases at 100GB or avoiding large objects in memory, they're working from outdated constraints. The hardware running our systems today would have been unimaginable a decade ago, and these capabilities fundamentally change how we approach system design.
Applying These Numbers in System Design Interviews
Let's look at how these numbers impact specific components and the decisions we make when designing systems in an interview.
Caching
Databases
Application Servers
Message Queues
Cheat Sheet
Common Mistakes In Interviews
Premature sharding
Overestimating latency
Over-engineering given a high write throughput
Conclusion