Rate Limiter
Understanding the Problem
Requirements
"You're building an in-memory rate limiter for an API gateway. The system receives configuration from an external service that provides rate limiting rules per endpoint. Each endpoint can have its own limit with a specific algorithm. Here's an example configuration for one endpoint:

```json
{
  "endpoint": "/search",
  "algorithm": "TokenBucket",
  "algoConfig": {
    "capacity": 1000,
    "refillRatePerSecond": 10
  }
}
```

This config allows bursts up to 1000 requests, refilling at 10 requests per second. Your job is to build the in-memory rate limiter that enforces these rules."
Clarifying Questions
You: "I see the configuration includes algorithm-specific parameters. Are there different parameter sets for different algorithms?"

Interviewer: "Yeah, different algorithms need different parameters. So the algoConfig object always exists, but the parameters inside it vary."

You: "When a request comes in, what information do we receive? Client ID and endpoint, or something else?"

Interviewer: "Yes, exactly. Each request provides a client ID and an endpoint. The client ID is just a string that uniquely identifies who's making the request."

You: "What should we return when checking a request? Just allowed/denied, or more detail?"

Interviewer: "Return three things: whether it's allowed, how many requests remain in their quota, and if denied, when they can retry."

You: "What happens if a request comes in for an endpoint we don't have configuration for?"

Interviewer: "Good question. Fall back to a default configuration. Don't reject requests just because we're missing config."

You: "Should the system handle concurrent requests from multiple threads?"

Interviewer: "Don't worry about it to start. We'll get to it if we have time."

You: "Just to clarify scope, are we building distributed rate limiting across multiple servers, or single-process in-memory?"

Interviewer: "Single process, in-memory. Keep it simple."

You: "And the configuration, is it dynamic, or loaded once at startup?"

Interviewer: "Loaded at startup. Don't worry about hot-reloading config while the system is running."
Final Requirements
Requirements:
1. Configuration is provided at startup (loaded once)
2. System receives requests with (clientId: string, endpoint: string)
3. Each endpoint has a configuration specifying:
- Algorithm to use (e.g., "TokenBucket", "SlidingWindowLog", etc.)
- Algorithm-specific parameters (e.g., capacity, refillRatePerSecond for Token Bucket)
4. System enforces rate limits by checking clientId against the endpoint's configuration
5. Return structured result: (allowed: boolean, remaining: int, retryAfterMs: long | null)
6. If endpoint has no configuration, use a default limit
Out of scope:
- Distributed rate limiting (Redis, coordination)
- Dynamic configuration updates
- Metrics and monitoring
- Config validation beyond basic checks

Core Entities and Relationships
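The requirements above pin down two core entities: a structured result carrying (allowed, remaining, retryAfterMs) and a per-client limiter behind a common interface. A minimal Python sketch of those entities follows; the `check(now_ms)` signature and field names are assumptions of this sketch, not the article's own code.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional


@dataclass
class RateLimitResult:
    allowed: bool
    remaining: int
    retry_after_ms: Optional[int]  # None when the request was allowed


class Limiter(ABC):
    """One instance tracks the state for a single (clientId, endpoint) pair."""

    @abstractmethod
    def check(self, now_ms: int) -> RateLimitResult:
        """Record one request attempt at time now_ms and decide it."""
```

Passing the current time into `check` (rather than calling the clock internally) is a common testability choice: it makes every algorithm deterministic under test.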
Class Design
RateLimiter
LimiterFactory
Limiter
RateLimitResult
Final Class Design
Implementation
LimiterFactory
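Since the design routes construction through a LimiterFactory, one plausible sketch maps the config's algorithm string to a builder for that algorithm. The registry contents and parameter names follow the example config; everything else here is an assumption, not the article's paywalled implementation.

```python
class TokenBucketLimiter:
    """Placeholder; a real token bucket would track tokens and refill over time."""

    def __init__(self, capacity: int, refill_rate_per_second: float):
        self.capacity = capacity
        self.refill_rate_per_second = refill_rate_per_second


class LimiterFactory:
    # Algorithm name (from config) -> builder taking the algoConfig dict.
    _registry = {
        "TokenBucket": lambda cfg: TokenBucketLimiter(
            cfg["capacity"], cfg["refillRatePerSecond"]
        ),
    }

    @classmethod
    def create(cls, algorithm: str, algo_config: dict):
        builder = cls._registry.get(algorithm)
        if builder is None:
            raise ValueError(f"Unknown algorithm: {algorithm}")
        return builder(algo_config)
```

Keeping the name-to-builder mapping in one table means new algorithms only touch the registry, which is what makes the later extensibility question easy to answer.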
RateLimiter
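The top-level RateLimiter has two jobs the requirements spell out: lazily create one limiter per (clientId, endpoint) pair, and fall back to a default configuration for unknown endpoints. A hedged sketch, with the factory and default config injected (the toy `FixedQuotaLimiter` and the tuple return shape are illustrative only):

```python
class FixedQuotaLimiter:
    """Toy limiter for demonstration: allows the first `n` requests, period."""

    def __init__(self, n: int):
        self.remaining = n

    def check(self):
        if self.remaining > 0:
            self.remaining -= 1
            return (True, self.remaining, None)
        return (False, 0, 1_000)


class RateLimiter:
    def __init__(self, configs: dict, factory, default_config: dict):
        self._configs = configs      # endpoint -> {"algorithm", "algoConfig"}
        self._factory = factory      # (algorithm, algoConfig) -> limiter
        self._default = default_config
        self._limiters = {}          # (client_id, endpoint) -> limiter

    def check(self, client_id: str, endpoint: str):
        key = (client_id, endpoint)
        if key not in self._limiters:
            # Requirement 6: missing endpoint config falls back to the default.
            cfg = self._configs.get(endpoint, self._default)
            self._limiters[key] = self._factory(cfg["algorithm"], cfg["algoConfig"])
        return self._limiters[key].check()
```

Note that each client gets an independent limiter per endpoint, so one client exhausting its quota never affects another.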
Rate Limiting Algorithms
TokenBucketLimiter
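The example config describes a token bucket: the bucket holds up to `capacity` tokens, each request spends one, and tokens refill continuously at `refillRatePerSecond`. A hedged sketch of that algorithm follows; injecting the clock as `now_ms` and returning a plain tuple are choices of this sketch, not necessarily the article's.

```python
class TokenBucketLimiter:
    def __init__(self, capacity: int, refill_rate_per_second: float):
        self.capacity = capacity
        self.refill_rate = refill_rate_per_second
        self.tokens = float(capacity)   # start full so clients can burst
        self.last_refill_ms = None

    def check(self, now_ms: int):
        # Refill based on time elapsed since the last request, capped at capacity.
        if self.last_refill_ms is not None:
            elapsed_s = (now_ms - self.last_refill_ms) / 1000.0
            self.tokens = min(self.capacity,
                              self.tokens + elapsed_s * self.refill_rate)
        self.last_refill_ms = now_ms

        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return (True, int(self.tokens), None)

        # Denied: report when one whole token will be available again.
        retry_after_ms = int((1.0 - self.tokens) / self.refill_rate * 1000)
        return (False, 0, retry_after_ms)
```

Refilling lazily on each `check` call (instead of running a background timer) keeps the limiter allocation-free and is the standard trick for in-memory token buckets.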
Complete Code Implementation
Verification
Extensibility
1. "How would you add a new rate limiting algorithm?"
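The config format already names a second algorithm, "SlidingWindowLog", so one way to answer is to implement it against the same limiter interface and register it with the factory. A hedged sketch (all names and the tuple return shape are illustrative, not the article's solution):

```python
from collections import deque


class SlidingWindowLogLimiter:
    """Allows at most `limit` requests within the trailing `window_ms`."""

    def __init__(self, limit: int, window_ms: int):
        self.limit = limit
        self.window_ms = window_ms
        self.log = deque()  # timestamps (ms) of accepted requests

    def check(self, now_ms: int):
        # Drop entries that have aged out of the window.
        while self.log and now_ms - self.log[0] >= self.window_ms:
            self.log.popleft()

        if len(self.log) < self.limit:
            self.log.append(now_ms)
            return (True, self.limit - len(self.log), None)

        # Denied: retry once the oldest logged request leaves the window.
        retry_after_ms = self.window_ms - (now_ms - self.log[0])
        return (False, 0, retry_after_ms)
```

Because the factory dispatches on the algorithm string, wiring this in should only require adding one registry entry; no existing limiter code changes.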
2. "How would you handle dynamic configuration updates?"
3. "How would you handle thread safety for concurrent requests?"
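A hedged sketch of the simplest answer: guard each limiter's mutable state with a lock so concurrent `check` calls cannot race on shared counters. The wrapper and toy limiter below are illustrative; finer-grained options include one lock per (client, endpoint) key, or lock striping to reduce contention.

```python
import threading


class SimpleCounterLimiter:
    """Toy limiter: allows the first `n` calls. Not thread-safe on its own."""

    def __init__(self, n: int):
        self.remaining = n

    def check(self) -> bool:
        if self.remaining > 0:
            self.remaining -= 1
            return True
        return False


class ThreadSafeLimiter:
    """Wraps any limiter so its check() runs under a mutex."""

    def __init__(self, inner):
        self._inner = inner
        self._lock = threading.Lock()

    def check(self, *args, **kwargs):
        with self._lock:
            return self._inner.check(*args, **kwargs)
```

A decorator-style wrapper like this keeps the algorithms themselves free of locking code, so single-threaded tests stay simple.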
4. "How would you handle memory growth from tracking many clients?"
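One hedged approach to unbounded growth in the per-client map: cap the number of tracked limiters and evict the least recently used, so idle clients stop consuming memory. The class below is a sketch, not the article's solution; a TTL-based background sweep is a common alternative.

```python
from collections import OrderedDict


class LRULimiterStore:
    """Bounded store of limiters keyed by (client_id, endpoint), LRU-evicted."""

    def __init__(self, max_entries: int, factory):
        self.max_entries = max_entries
        self.factory = factory          # () -> a fresh limiter
        self._store = OrderedDict()     # key -> limiter, oldest first

    def get(self, key):
        if key in self._store:
            self._store.move_to_end(key)        # mark as most recently used
        else:
            if len(self._store) >= self.max_entries:
                self._store.popitem(last=False)  # evict least recently used
            self._store[key] = self.factory()
        return self._store[key]
```

The trade-off to mention: an evicted client gets a fresh limiter on its next request, effectively resetting its quota, which is usually acceptable for idle clients but worth calling out in the interview.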
What is Expected at Each Level?
Junior
Mid-level
Senior