Understanding the Problem
📹 What are Facebook Live Comments?
Facebook Live Comments is a feature that enables viewers to post comments on a live video feed. Viewers can see a continuous stream of comments in near-real-time.
Here's how your requirements section might look on your whiteboard:
FB Live Comments Requirements
The Set Up
Planning the Approach
Before you move on to designing the system, it's important to start by taking a moment to plan your strategy. Fortunately, for these common product style system design questions, the plan should be straightforward: build your design up sequentially, going one by one through your functional requirements. This will help you stay focused and ensure you don't get lost in the weeds as you go. Once you've satisfied the functional requirements, you'll rely on your non-functional requirements to guide you through layering on depth and complexity to your design.
I like to begin with a broad overview of the primary entities. Initially, establishing these key entities will guide our thought process and lay a solid foundation as we progress towards defining the API. Think of these as the "nouns" of the system.
Why just the entities and not the whole data model at this point? The reality is we're too early in the design and likely can't accurately enumerate all the columns/fields yet. Instead, we start by grasping the core entities and then build up the data model as we progress with the design.
For this particular problem, we only have three core entities:
User: A user can be a viewer or a broadcaster.
Live Video: The video that is being broadcasted by a user (this is owned and managed by a different team, but is relevant as we will need to integrate with it).
Comment: The message posted by a user on a live video.
In your interview, this can be as simple as a bulleted list like:
FB Live Comments Core Entities
Now, let's carry on to outline the API, tackling each functional requirement in sequence. This step-by-step approach will help us maintain focus and manage scope effectively.
Note that the userId is not passed in the request body. Instead, it comes from the request header, either by way of a session token or a JWT. This is a common pattern in modern web applications: the client stores the token and sends it with each request, and the server validates it and extracts the userId. This is more secure than passing the userId in the request body because it prevents users from tampering with the request and impersonating other users.
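To make this concrete, here's a minimal sketch of pulling the userId claim out of a JWT's payload. The claim name and token contents are illustrative assumptions, and a real server would verify the token's signature (with a proper JWT library) before trusting anything inside it:

```python
import base64
import json

def user_id_from_jwt(token: str) -> str:
    """Extract the userId claim from a JWT's payload segment.

    Illustration only: a real server verifies the token's signature
    before trusting any claim in it.
    """
    payload_b64 = token.split(".")[1]
    # JWT segments are URL-safe base64 without padding; restore it.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["userId"]

# Build a toy header.payload.signature token to demonstrate decoding.
payload = base64.urlsafe_b64encode(b'{"userId":"user123"}').decode().rstrip("=")
token = f"eyJhbGciOiJIUzI1NiJ9.{payload}.fake-signature"
```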
We also need to be able to fetch past comments for a given live video.
GET /comments/:liveVideoId?cursor={last_comment_id}&pageSize=10&sort=desc
Pagination will be important for this endpoint. More on that later when we get deeper into the design.
To get started with our high-level design, let's begin by addressing the first functional requirement.
1) Viewers can post comments on a Live video feed
First things first, we need to make sure that users are able to post a comment.
This should be rather simple. Users will initiate a POST request to the POST /comments/:liveVideoId endpoint with the comment message. The server will then validate the request and store the comment in the database.
FB Live Comments Create Comment
Commenter Client: The commenter client is a web or mobile application that allows users to post comments on a live video feed. It is responsible for authenticating the user and sending the comment to the Comment Management Service.
Comment Management Service: The comment management service is responsible for creating and querying comments. It receives comments from the commenter client and stores them in the comments database. It will also be responsible for retrieving comments from the comments database and sending them to the viewer client -- more on that later.
Comments Database: For the comments database, we'll choose DynamoDB because it is a fast, scalable, and highly available database. It's a good fit for our use case because we are storing simple comments that don't require complex relationships or transactions, though other databases like Postgres or MySQL would work here as well.
Let's walk through exactly what happens when a user posts a new comment.
The user drafts a comment from their device (commenter client)
The commenter client sends the comment to the comment management service via the POST /comments/:liveVideoId API endpoint.
The comment management service receives the request and stores the comment in the comments database.
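Put together, the flow above can be sketched as a minimal handler. A plain dict stands in for DynamoDB, and the validation rule and field names are illustrative assumptions, not the real service's schema:

```python
import time
import uuid

# In-memory stand-in for the comments database (DynamoDB in our design).
comments_db = {}

def post_comment(live_video_id, user_id, message):
    """Handle POST /comments/:liveVideoId -- validate, then persist."""
    if not message or len(message) > 500:  # assumed max comment length
        raise ValueError("invalid comment")
    comment = {
        "id": str(uuid.uuid4()),
        "liveVideoId": live_video_id,
        "userId": user_id,  # extracted from the session token / JWT, not the body
        "message": message,
        "createdAt": time.time(),
    }
    comments_db.setdefault(live_video_id, []).append(comment)
    return comment
```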
Great, that was easy, but things get a little more complicated when we start to consider how users will view comments.
2) Viewers can see new comments being posted while they are watching the live video.
Now that we've handled comment creation, we need to tackle the challenge of comment distribution - ensuring that when one user posts a comment, all other viewers of the live video can see it.
We can start with the simplest approach: polling.
A working, though naive, approach is to have the clients poll for new comments every few seconds. We would use the GET /comments/:liveVideoId?since={last_comment_id} endpoint, adding a since parameter to the request that points to the last comment id that the client has seen. The server would then return all comments that were posted after the since comment and the client would append them to the list of comments displayed on the screen.
Polling
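The polling loop described above looks roughly like this on the client side. The `fetch` callable stands in for the GET /comments/:liveVideoId?since=... request, and the interval and round count are illustrative:

```python
import time

def poll_for_comments(fetch, last_comment_id, interval_s, rounds):
    """Naive client-side polling: every interval_s seconds, ask the server
    for comments posted after last_comment_id and append the results."""
    feed = []
    for _ in range(rounds):
        new = fetch(last_comment_id)  # GET /comments/:liveVideoId?since=...
        if new:
            feed.extend(new)
            last_comment_id = new[-1]["id"]  # advance the watermark
        time.sleep(interval_s)
    return feed
```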
This is a start, but it doesn't scale. As the number of comments and viewers grows, the polling frequency will need to increase to keep up with the demand. This puts a lot of strain on the database and results in many unnecessary requests (since most of the time there will be no new comments to fetch). To meet our requirement of "near real-time" comments, we would need to poll the database every few milliseconds, which isn't feasible.
In your interview, if you already know the more accurate, yet complex, solution, you can jump right to it. Just make sure you justify your decision and explain the tradeoffs.
In the case that you are seeing a problem for the first time, starting simple like this is great and sets a foundation for you to build upon in the deep dives.
3) Viewers can see comments made before they joined the live feed
When a user joins a live video, they need two things:
They should immediately start seeing new comments as they are posted in real-time
They should see a history of comments that were posted before they joined
For the history of comments, users should be able to scroll up to view progressively older comments - this UI pattern is called "infinite scrolling" and is commonly used in chat applications.
We can fetch the initial set of recent comments using our GET /comments/:liveVideoId endpoint. While we could use the since parameter we added earlier, that returns comments posted after a given comment, the opposite of what we want for loading history. What we really want is something that says "give me the N most recent comments that were posted before a certain point."
To do that, we can introduce pagination. Pagination is a common technique used to break up a large set of results into smaller chunks. It is typically used in conjunction with infinite scrolling to allow users to load more results as they scroll down the page.
Whenever you have a requirement that involves loading a large set of results, you should consider pagination.
When it comes to implementing pagination, there are two main approaches: offset pagination and cursor pagination.
Cursor based pagination is a better fit for our use case. Unlike offset pagination, it's more efficient as we don't need to scan through all preceding rows. It's stable - new comments won't disrupt the cursor's position during scrolling. It works well with DynamoDB's LastEvaluatedKey feature, and it scales better since performance remains consistent as comment volume grows.
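A minimal sketch of cursor-based pagination over an in-memory list (the function name and field names are illustrative; in the real design the database index does the sorting and the cursor maps onto DynamoDB's LastEvaluatedKey):

```python
def get_comments_page(comments, cursor=None, page_size=10):
    """Cursor-based pagination over comments, newest first.

    `cursor` is the id of the last comment the client has seen; we return
    the next `page_size` older comments plus the cursor for the next page.
    """
    ordered = sorted(comments, key=lambda c: c["createdAt"], reverse=True)
    start = 0
    if cursor is not None:
        # Resume just past the cursor; new comments prepend and don't shift it.
        start = [c["id"] for c in ordered].index(cursor) + 1
    page = ordered[start:start + page_size]
    next_cursor = page[-1]["id"] if page else None
    return page, next_cursor
```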
1) How can we ensure comments are broadcasted to viewers in real-time?
Our simple polling solution was a good start, but it's not going to pass the interview. Instead of having the client "guess" when new comments are ready and request them, we can use a push-based model: the server pushes new comments to the client as soon as they are created.
There are two main ways we can implement this. Websockets and Server Sent Events (SSE). Let's weigh the pros and cons of each.
Pattern: Real-time Updates
Facebook Live Comments showcases the real-time updates pattern at massive scale. Whether it's broadcasting comments via Server-Sent Events, distributing updates through pub/sub systems, or coordinating across multiple servers, the same principles apply to any system requiring instant data delivery, from collaborative editing to live dashboards to gaming platforms.
User posts a comment and it is persisted to the database (as explained above)
In order for all viewers to see the comment, the Comment Management Service will send the comment over SSE to all connected clients that are subscribed to that live video.
The Commenter Client will receive the comment and add it to the comment feed for the viewer to see.
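On a single server, the SSE fan-out described in the steps above can be sketched like this. The stream endpoint name is a hypothetical, and queues stand in for open HTTP connections:

```python
from collections import defaultdict
from queue import Queue

# liveVideoId -> list of subscriber queues; each queue backs one open
# SSE connection held by this server.
subscribers = defaultdict(list)

def subscribe(live_video_id):
    """A viewer opens an SSE stream for a live video (hypothetical
    GET /comments/:liveVideoId/stream endpoint)."""
    q = Queue()
    subscribers[live_video_id].append(q)
    return q

def broadcast_comment(live_video_id, comment):
    """After the Comment Management Service persists a comment, push it
    to every viewer of that video connected to this server."""
    for q in subscribers[live_video_id]:
        q.put(f"data: {comment['message']}\n\n")  # SSE wire format
```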
Astute readers have probably recognized that this solution does not scale. You're right. We'll get to that in the next deep dive.
2) How will the system scale to support millions of concurrent viewers?
We landed on Server Sent Events (SSE) being the appropriate technology. Now we need to figure out how to scale it. With SSE, we need to maintain an open connection for each viewer. Modern servers and operating systems can handle large numbers of concurrent connections—commonly in the range of 100k. Realistically, system resources like CPU, memory, and file descriptors become the bottleneck before you hit any theoretical limit. If we want to support many millions of concurrent viewers, we simply won't be able to do it on a single machine. We must scale horizontally by adding more servers.
The question then becomes how do we distribute the load across multiple servers and ensure each server knows which comments to send to which viewers?
Contrary to a common misconception, the capacity isn't limited to 65,535 connections. That number refers to the range of port numbers, not the number of connections a single server port can handle. Each TCP connection is identified by a unique combination of source IP, source port, destination IP, and destination port. With proper OS tuning and resource allocation, a single listening port can handle hundreds of thousands or even millions of concurrent SSE connections.
In practice, however, the server's hardware and OS limits—rather than the theoretical port limit—determine the maximum number of simultaneous connections.
Before we dive into solutions, let's understand the core challenge with horizontal scaling:
When we add more servers to handle the load, viewers watching the same live video may end up connected to different servers. For example:
UserA is watching Live Video 1 and connected to Server 1
UserB is watching Live Video 1 but connected to Server 2
Now imagine a new comment is posted on Live Video 1. If this comment request hits Server 1:
Server 1 can easily send it to UserA since they're directly connected
But Server 1 has no way to send it to UserB, who is connected to Server 2
This is our key challenge: How do we ensure all viewers see new comments, regardless of which server they're connected to?
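One way to picture the pub/sub answer: every realtime server subscribes to a channel per live video, and posting a comment publishes to that channel, so the comment reaches viewers regardless of which server holds their connection. This is a toy in-memory broker standing in for Redis pub/sub or similar; all class and method names are illustrative:

```python
from collections import defaultdict

class InMemoryBroker:
    """Stand-in for a real pub/sub system (e.g. Redis pub/sub)."""
    def __init__(self):
        self.channels = defaultdict(list)

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def publish(self, channel, message):
        for cb in self.channels[channel]:
            cb(message)

class RealtimeServer:
    """One of many horizontally scaled SSE servers."""
    def __init__(self, broker):
        self.broker = broker
        self.viewers = defaultdict(dict)  # liveVideoId -> {userId: feed}

    def viewer_joins(self, live_video_id, user_id):
        if live_video_id not in self.viewers:
            # First local viewer of this video: subscribe to its channel.
            self.broker.subscribe(
                live_video_id, lambda msg: self._push(live_video_id, msg))
        self.viewers[live_video_id][user_id] = []

    def _push(self, live_video_id, msg):
        for feed in self.viewers[live_video_id].values():
            feed.append(msg)  # in reality: write to the viewer's SSE stream

def post_comment(broker, live_video_id, message):
    # ...persist the comment to the database first, then fan out...
    broker.publish(live_video_id, message)
```

With this in place, a comment that hits Server 1 still reaches UserB on Server 2, because both servers are subscribed to the video's channel.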
Advanced candidates may point out the tradeoffs between different pub/sub systems. Redis pub/sub provides low latency but no message persistence (it is fire-and-forget), which could lead to missed messages during disconnections. Kafka is highly scalable and fault-tolerant, and its message persistence and stronger delivery guarantees make it more suitable for scenarios where message delivery is critical, but it has higher latency and adapts poorly to workloads that require dynamic subscription and unsubscription driven by user behavior, such as scrolling through a live feed or switching between live videos.
However, the main concern with Redis is its potential for data loss due to periodic disk writes and the challenges of memory limitation, which could be a bottleneck for scalability. Additionally, while Redis offers high availability configurations like Redis Sentinel or Redis Active-Active, these add to the operational complexity of managing a Redis-based system. The pub/sub solution is the "correct academic answer" and should clearly pass the interview, but the reality is choosing the right pub/sub system is a complex decision that requires a deep understanding of the system's requirements and tradeoffs.
Both the pub/sub approach with viewer co-location and the dispatcher service approach are great solutions. The pub/sub is typically easier with fewer corner cases, so it is the one I would use in an interview, but both are "correct" answers.
Ok, that was a lot. You may be thinking, "how much of that is actually required from me in an interview?" Let's break it down.
Mid-level
Breadth vs. Depth: A mid-level candidate will be mostly focused on breadth (80% vs 20%). You should be able to craft a high-level design that meets the functional requirements you've defined, but many of the components will be abstractions with which you only have surface-level familiarity.
Probing the Basics: Your interviewer will spend some time probing the basics to confirm that you know what each component in your system does. For example, if you add an API Gateway, expect that they may ask you what it does and how it works (at a high level). In short, the interviewer is not taking anything for granted with respect to your knowledge.
Mixture of Driving and Taking the Backseat: You should drive the early stages of the interview in particular, but the interviewer doesn't expect that you are able to proactively recognize problems in your design with high precision. Because of this, it's reasonable that they will take over and drive the later stages of the interview while probing your design.
The Bar for FB Live Comments: For this question, I expect that candidates proactively realize the limitations of a polling approach and start to reason around a push-based model. With only minor hints, they should be able to come up with the pub/sub solution and scale it with some help from the interviewer.
Senior
Depth of Expertise: As a senior candidate, expectations shift towards more in-depth knowledge — about 60% breadth and 40% depth. This means you should be able to go into technical details in areas where you have hands-on experience. It's crucial that you demonstrate a deep understanding of key concepts and technologies relevant to the task at hand.
Advanced System Design: You should be familiar with advanced system design principles. For example, knowing how to use pub/sub for broadcasting messages. You're also expected to understand some of the challenges that come with it and discuss detailed scaling strategies (it's ok if this took some probing/hints from the interviewer). Your ability to navigate these advanced topics with confidence and clarity is key.
Articulating Architectural Decisions: You should be able to clearly articulate the pros and cons of different architectural choices, especially how they impact scalability, performance, and maintainability. You justify your decisions and explain the trade-offs involved in your design choices.
Problem-Solving and Proactivity: You should demonstrate strong problem-solving skills and a proactive approach. This includes anticipating potential challenges in your designs and suggesting improvements. You need to be adept at identifying and addressing bottlenecks, optimizing performance, and ensuring system reliability.
The Bar for FB Live Comments: For this question, E5 candidates are expected to speed through the initial high-level design so they can spend time discussing, in detail, how to scale the system. You should be able to reason through the limitations of the initial design and come up with a pub/sub solution with minimal hints. You should proactively lead the scaling discussion and be able to reason through the trade-offs of different solutions.
Staff+
Emphasis on Depth: As a staff+ candidate, the expectation is a deep dive into the nuances of system design — I'm looking for about 40% breadth and 60% depth in your understanding. This level is all about demonstrating that, while you may not have solved this particular problem before, you have solved enough problems in the real world to be able to confidently design a solution backed by your experience.
You should know which technologies to use, not just in theory but in practice, and be able to draw from your past experiences to explain how they'd be applied to solve specific problems effectively. The interviewer knows you know the small stuff (REST API, data normalization, etc) so you can breeze through that at a high level so you have time to get into what is interesting.
High Degree of Proactivity: At this level, an exceptional degree of proactivity is expected. You should be able to identify and solve issues independently, demonstrating a strong ability to recognize and address the core challenges in system design. This involves not just responding to problems as they arise but anticipating them and implementing preemptive solutions. Your interviewer should intervene only to focus, not to steer.
Practical Application of Technology: You should be well-versed in the practical application of various technologies. Your experience should guide the conversation, showing a clear understanding of how different tools and systems can be configured in real-world scenarios to meet specific requirements.
Complex Problem-Solving and Decision-Making: Your problem-solving skills should be top-notch. This means not only being able to tackle complex technical challenges but also making informed decisions that consider various factors such as scalability, performance, reliability, and maintenance.
Advanced System Design and Scalability: Your approach to system design should be advanced, focusing on scalability and reliability, especially under high load conditions. This includes a thorough understanding of distributed systems, load balancing, caching strategies, and other advanced concepts necessary for building robust, scalable systems.
The Bar for FB Live Comments: For a staff+ candidate, expectations are high regarding depth and quality of solutions, particularly when it comes to scaling the broadcasting of comments. I expect staff+ candidates to not only identify the pub/sub solution but proactively call out the limitations around reliability or scalability and suggest solutions. They likely have a good understanding of the exact technology they would use and can discuss the trade-offs of different solutions in detail.