Coordination
Learn how to coordinate work between threads.
Coordination is about threads communicating and handing off work. One thread produces tasks, another consumes them. A service sends a request, another service processes it. How do independent execution paths signal each other without burning CPU or corrupting state?
The Problem
Imagine you're building a task scheduler for a web app. Some work simply doesn't belong on the request path. It takes too long, and if you run it inline, every API call ends up blocked.
So you push that work into the background. Users sign up and need welcome emails. They upload profile photos that need resizing. Admins request monthly reports that take minutes to generate. API handlers enqueue tasks, and a pool of worker threads processes them asynchronously.
Conceptually, the architecture is simple: API handlers produce tasks, workers consume them, and something sits in between to coordinate the handoff.
When load is steady, this works well, but cracks start to show at the edges.
First, consider what happens when workers are ready to run, but there's no work to do. The most naïve approach is to just keep checking for work until something shows up.
Busy-waiting (anti-pattern)

```python
# Anti-pattern: spin in a tight loop checking for work
while True:
    if queue:
        task = queue.pop(0)  # O(n) pop from the front of a plain list
        execute(task)
    # queue empty? Loop again immediately, burning a full core
```
This is busy-waiting, and it can be disastrous. Each worker spins in a tight loop, burning CPU while doing no useful work. With eight workers on an eight-core machine, you can consume 100% of your compute capacity just checking an empty queue. When tasks finally arrive, there's no CPU left to run them.
You might try to fix this by sleeping when there's no work.
Sleep-polling (anti-pattern)

```python
import time

# Anti-pattern: poll, then nap when the queue is empty
while True:
    if queue:
        task = queue.pop(0)
        execute(task)
    else:
        time.sleep(0.1)  # 100 ms nap trades CPU waste for latency
```
That reduces CPU usage, but now you've traded waste for latency. A task that arrives 1 ms after a worker goes to sleep waits nearly 100 ms before being processed. Sleep longer and the system feels sluggish. Sleep shorter and you're back to burning CPU.
Now flip the problem around. What happens when producers are faster than consumers?
Say a marketing email goes out and 50,000 users click a link at once. Each request enqueues background work.
09:00:00.000 - Queue size: 0
09:00:00.100 - Queue size: 5,000
09:00:00.200 - Queue size: 12,000
09:00:00.300 - Queue size: 23,000
09:00:00.400 - Queue size: 38,000
09:00:00.500 - Queue size: 50,000

Your eight workers can process maybe 100 tasks per second, which means draining a queue of 50,000 takes minutes. The delay itself isn't what kills you, though. Memory is the real problem. Every task sitting in that queue is an object on the heap. If the queue is unbounded and keeps accepting new tasks, it grows until the process runs out of memory. When that happens, the entire service crashes: not just your background processing, but the whole thing. Your API goes down and everything stops working.
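The backlog math is worth making concrete. A quick back-of-envelope calculation using the article's rough rates (50,000 queued tasks, ~100 tasks per second across the pool):

```python
# Back-of-envelope: how long to drain the burst at the rates above
backlog = 50_000      # tasks enqueued by the click burst
drain_rate = 100      # tasks per second across the whole worker pool
drain_seconds = backlog / drain_rate
print(f"{drain_seconds:.0f} s to drain (~{drain_seconds / 60:.1f} minutes)")
# prints "500 s to drain (~8.3 minutes)"
```

Eight-plus minutes of lag, and that's assuming the process survives the memory pressure long enough to drain at all.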
This is a coordination problem. How do threads communicate and sequence their work? They need to signal each other ("work is ready"), wait efficiently without burning CPU, and handle the case where one side is faster than the other. Three things need solving:
- Efficient waiting — consumers should sleep when there's no work, waking immediately when work arrives
- Backpressure — producers should slow down when consumers can't keep up, preventing memory exhaustion
- Thread safety — the coordination mechanism itself must handle concurrent access without corruption
The Solutions
There are two fundamentally different approaches to solving these problems. Shared state coordination uses data structures that multiple threads access directly, like a queue that producers push to and consumers pull from. Message passing coordination avoids shared state entirely. Each component has its own inbox and communicates by sending messages.
Let's look at both.
Shared State Coordination
Wait/Notify (Condition Variables)
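As a minimal sketch of the wait/notify pattern in Python, using `threading.Condition` (the `execute` stub here is a placeholder for real task processing):

```python
import threading
from collections import deque

queue = deque()
cond = threading.Condition()

def execute(task):                 # placeholder for real task processing
    print("processed", task)

def produce(task):
    with cond:
        queue.append(task)
        cond.notify()              # wake exactly one sleeping worker

def worker():
    while True:
        with cond:
            while not queue:       # re-check: guards against spurious wakeups
                cond.wait()        # releases the lock and sleeps; no CPU burned
            task = queue.popleft()
        execute(task)              # run the task outside the lock
```

Workers sleep inside `wait()` instead of polling, and `notify()` wakes one the moment work arrives, which fixes both the CPU waste of busy-waiting and the latency of sleep-polling.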
Blocking Queues
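A sketch of the same producer/worker handoff with Python's standard `queue.Queue`, which covers all three requirements at once: `get()` blocks until work arrives (efficient waiting), `maxsize` makes `put()` block when the queue is full (backpressure), and the queue's internal lock handles concurrent access (thread safety). The `execute` stub is a placeholder:

```python
import queue
import threading

tasks = queue.Queue(maxsize=1000)  # bounded: a full queue pushes back on producers

def execute(task):                 # placeholder for real task processing
    print("processed", task)

def handle_request(task):
    tasks.put(task)                # blocks when the queue is full (backpressure)

def worker():
    while True:
        task = tasks.get()         # sleeps until a task arrives; no polling
        try:
            execute(task)
        finally:
            tasks.task_done()      # lets producers join() on completion

for _ in range(8):                 # a small pool of worker threads
    threading.Thread(target=worker, daemon=True).start()
```

Under the hood this is the wait/notify pattern again, packaged so application code never touches a lock directly.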
Message Passing Coordination
The Actor Model
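A minimal actor sketch in Python, with threads and `queue.Queue` standing in for a real actor runtime: each actor owns a private mailbox, and only the actor's own thread ever touches its state, so no locks are needed.

```python
import queue
import threading

class Actor:
    """Processes messages from a private mailbox, one at a time."""
    def __init__(self):
        self.inbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.inbox.put(msg)        # the only way to interact with the actor

    def _run(self):
        while True:
            self.receive(self.inbox.get())  # state touched by this thread only

    def receive(self, msg):
        raise NotImplementedError

class EmailCounter(Actor):
    def __init__(self):
        self.sent = 0              # private state: no lock required
        super().__init__()

    def receive(self, msg):
        self.sent += 1
```

Other threads call `counter.send(...)`; the mailbox serializes all access, so `self.sent` never needs synchronization.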
Common Problems
Process Requests Asynchronously
Examples
Handle Bursty Traffic
Examples
Conclusion