Design a Price Drop Tracker like CamelCamelCamel
Design a system similar to CamelCamelCamel, focusing on tracking price history and alerts for products on e-commerce platforms.
Asked at: Meta
A price drop tracker like CamelCamelCamel is a consumer service that monitors product prices on Amazon over time, shows historical charts, and alerts users when a product falls to a target price. Users typically paste a product URL, install a browser extension to add items in one click, and receive email or push notifications when deals appear. Interviewers ask this because it exercises event-driven thinking, disciplined external data ingestion under rate limits, time-series storage, and large-scale notification fan-out. You need to balance freshness with cost, protect user trust with validation and idempotency, and design APIs that cleanly model products, prices, and subscriptions. Expect to reason about quotas, scheduling, caching, and recovery paths rather than focus on scraping tricks.
Hello Interview Answer Key
Design a Price Tracking Service
System design answer key for designing a price tracking service like CamelCamelCamel, built by FAANG managers and staff engineers.
Common Functional Requirements
Most candidates end up covering this set of core functionalities
Users should be able to view historical price charts for a product across common ranges (7d, 30d, 1y, all).
Users should be able to subscribe to price-drop alerts by setting a target price or percentage and choosing channels (email, push).
Users should be able to add or find products to track by pasting a URL/ID or via a browser extension, and see the current price with basic stats (min, max, average).
Users should be able to manage subscriptions (edit thresholds, pause/resume, unsubscribe) and choose regional/marketplace variants.
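One minimal way to model these requirements in code; every type, field, and endpoint name below is an illustrative assumption, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Channel(Enum):
    EMAIL = "email"
    PUSH = "push"

@dataclass
class Product:
    product_id: str
    marketplace: str          # regional/marketplace variant, e.g. "amazon.de"
    title: str
    current_price_cents: int

@dataclass
class Subscription:
    subscription_id: str
    user_id: str
    product_id: str
    channels: list[Channel]
    target_price_cents: int | None = None   # absolute target price...
    target_drop_pct: float | None = None    # ...or percentage drop
    paused: bool = False

# Illustrative endpoints:
#   POST   /products                {url_or_id}              -> resolve and start tracking
#   GET    /products/{id}/chart?range=30d                    -> price history for a range
#   POST   /subscriptions           {product_id, target, channels}
#   PATCH  /subscriptions/{id}                               -> edit thresholds, pause/resume
#   DELETE /subscriptions/{id}                               -> unsubscribe
```

Keeping both an absolute target and a percentage threshold on the subscription lets a single price-change event be evaluated against either rule without a separate rules engine.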
Common Deep Dives
Common follow-up questions interviewers like to ask for this question
Schedulers are where many designs fall apart: you must respect per-site rate limits and still keep hot products fresh. Interviewers look for explicit prioritization, backoff, and idempotent scheduling to avoid thundering herds. Treat it as a durable job system rather than cron (see the sketch after this list).
- You could model refresh as a priority queue where priority combines active subscribers, recent product page views/extension hits, price volatility, and last-refresh time; keep a tiny fast lane for manual or suspicious-change verifications.
- You should partition scheduling by marketplace/region and enforce per-endpoint token or rate-limit buckets; apply exponential backoff and circuit breaking on errors and 429s.
- You can persist schedule state and use idempotent enqueue keys (productId + time bucket) to avoid duplicates; store nextRunAt to support rescheduling, retries, and catch-up.
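A minimal sketch of the priority score and idempotent enqueue key, assuming a 15-minute scheduling bucket; the weights, field names, and bucket size are illustrative assumptions, not tuned values:

```python
import hashlib
import time

def refresh_priority(active_subscribers: int, recent_views: int,
                     volatility: float, seconds_since_refresh: float) -> float:
    # Weights are illustrative; in practice they would be tuned per marketplace.
    staleness_hours = seconds_since_refresh / 3600.0
    return (2.0 * active_subscribers
            + 1.0 * recent_views
            + 5.0 * volatility
            + 0.5 * staleness_hours)

def enqueue_key(product_id: str, bucket_seconds: int = 900) -> str:
    # Idempotent enqueue key: the same product scheduled twice within one
    # time bucket produces the same key, so duplicate jobs collapse.
    bucket = int(time.time()) // bucket_seconds
    return hashlib.sha1(f"{product_id}:{bucket}".encode()).hexdigest()
```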
False alerts destroy user trust. Mixing data from APIs, a browser extension, and occasional crawls makes normalization and deduplication critical. Interviewers want to see event-style ingestion and guardrails for outliers (see the sketch after this list).
- Treat each observation as an event with an idempotency key (productId, source, observedAt); normalize currency, locale, and taxes; reject impossible values and sudden zeroes.
- You could auto-accept small deltas but require a high-priority verification for large drops or low-trust sources; compare against a recent median or a moving average before triggering alerts.
- Consider hysteresis bands (for example 1-2%) and a short sliding-window dedupe in Redis to prevent flapping prices from spamming notifications.
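A minimal validation sketch under these assumptions: new observations are compared against a recent median, a 2% hysteresis band suppresses flapping, and large drops or low-trust sources are routed to verification. The thresholds and source labels are illustrative:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Observation:
    product_id: str
    source: str        # "api", "extension", or "crawl"
    price: float
    observed_at: int   # epoch seconds; part of the idempotency key

LOW_TRUST_SOURCES = {"extension", "crawl"}   # illustrative trust labels

def classify(obs: Observation, recent_prices: list[float],
             hysteresis: float = 0.02, big_drop: float = 0.30) -> str:
    """Return 'reject', 'ignore', 'verify', or 'accept' for one observation."""
    if obs.price <= 0:
        return "reject"                       # impossible values and sudden zeroes
    if not recent_prices:
        return "accept"                       # nothing to compare against yet
    baseline = median(recent_prices)
    drop = (baseline - obs.price) / baseline  # positive means the price fell
    if abs(drop) < hysteresis:
        return "ignore"                       # inside the hysteresis band: no alert
    if drop > big_drop or obs.source in LOW_TRUST_SOURCES:
        return "verify"                       # re-fetch before alerting
    return "accept"
```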
Price drops can trigger massive notification spikes during sales. The fan-out path must be asynchronous, scalable, and idempotent. Interviewers look for event-driven designs with backpressure and failure isolation (see the sketch after this list).
- Publish a price-change event to Kafka; consumers look up subscribers via an inverted index keyed by productId (for example Redis sets or a partitioned subscription table) and batch user IDs.
- You should rate-limit per user and per provider, and send in chunks; use an outbox table and a delivery state machine so retries are safe; de-duplicate on (userId, productId, newPrice).
- Consider separate consumer groups per channel (email, push) with DLQs and jittered retries to smooth spikes and protect provider quotas.
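A minimal fan-out sketch, assuming the subscriber lookup and outbox write are injected as callables (standing in for a Redis set lookup and an outbox-table insert); in practice the dedupe set would live in Redis or be enforced by a unique key on the outbox:

```python
from typing import Callable, Iterable

def fan_out_price_drop(event: dict,
                       subscribers_for: Callable[[str], Iterable[str]],
                       already_sent: set[tuple],
                       enqueue_delivery: Callable[[dict], None],
                       batch_size: int = 500) -> None:
    # `subscribers_for` stands in for the inverted index (e.g. a Redis set per
    # productId); `enqueue_delivery` stands in for an outbox insert that the
    # per-channel senders (email, push) drain later.
    product_id, new_price = event["product_id"], event["new_price"]
    batch: list[str] = []
    for user_id in subscribers_for(product_id):
        dedupe_key = (user_id, product_id, new_price)
        if dedupe_key in already_sent:
            continue                 # idempotent: a replayed event sends nothing twice
        already_sent.add(dedupe_key)
        batch.append(user_id)
        if len(batch) >= batch_size:
            enqueue_delivery({"product_id": product_id, "new_price": new_price,
                              "user_ids": batch})
            batch = []
    if batch:
        enqueue_delivery({"product_id": product_id, "new_price": new_price,
                          "user_ids": batch})
```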
Chart endpoints are read-heavy and latency-sensitive, especially via a browser extension overlaying product pages. Returning full raw time series for years is expensive and slow. Interviewers expect pre-aggregation and caching strategies (see the sketch after this list).
- Precompute daily, weekly, and monthly rollups and store them alongside raw points; choose the smallest series that satisfies the requested range to reduce payloads.
- Cache hot chart responses and current summaries in Redis with short TTLs and versioned keys; downsample large ranges to keep responses under 500 ms.
- Partition time-series storage by product and time; apply compression and retention on old raw points while keeping aggregates for all-time charts.
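A minimal sketch of rollup selection and a versioned cache key; the range cutoffs and key format are illustrative assumptions:

```python
def choose_rollup(range_days: int) -> str:
    # Pick the coarsest series that still yields a reasonable point count;
    # the cutoffs here are illustrative.
    if range_days <= 30:
        return "raw"        # 7d/30d: raw or hourly points
    if range_days <= 365:
        return "daily"
    if range_days <= 3 * 365:
        return "weekly"
    return "monthly"        # all-time charts

def chart_cache_key(product_id: str, range_days: int, rollup_version: int) -> str:
    # Versioned key: bumping rollup_version after recomputing aggregates
    # invalidates stale cached charts without explicit deletes.
    return f"chart:{product_id}:{range_days}d:v{rollup_version}"
```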
Relevant Patterns
Relevant patterns that you should know for this question
Price changes should immediately trigger notifications. An event-driven path decouples ingestion from fan-out, adds buffering during spikes, and avoids scanning the entire subscriptions table on a timer.
The workflow—ingest, validate, store, compute deltas, alert, deliver—is multi-stage. Durable steps, retries, idempotency keys, and compensations (e.g., verification failures) keep the pipeline correct and auditable.
Price history pages and extension overlays are read-heavy. Caching, pre-aggregations, and partitioned time-series storage are essential to serve sub-second charts at scale.
Relevant Technologies
Relevant technologies that could be used to solve this question
PostgreSQL fits product catalogs, users, and subscriptions with strong relational integrity. With partitioned tables and proper indexes, it can also store time-series price points and rollups reliably.
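A sketch of what the partitioned time-series tables could look like, kept as DDL strings a migration tool would apply; the table and column names are assumptions, and partition creation would be handled by a maintenance job:

```python
# Illustrative PostgreSQL DDL for raw price points partitioned by month,
# plus a daily rollup table for all-time charts. Names are assumptions.

PRICE_POINTS_DDL = """
CREATE TABLE price_points (
    product_id  BIGINT      NOT NULL,
    observed_at TIMESTAMPTZ NOT NULL,
    price_cents BIGINT      NOT NULL,
    source      TEXT        NOT NULL,
    PRIMARY KEY (product_id, observed_at)
) PARTITION BY RANGE (observed_at);

-- One partition per month; a maintenance job creates these ahead of time
-- and drops or compresses old ones per the retention policy.
CREATE TABLE price_points_2025_08 PARTITION OF price_points
    FOR VALUES FROM ('2025-08-01') TO ('2025-09-01');
"""

DAILY_ROLLUPS_DDL = """
CREATE TABLE price_daily_rollups (
    product_id BIGINT NOT NULL,
    day        DATE   NOT NULL,
    min_cents  BIGINT NOT NULL,
    max_cents  BIGINT NOT NULL,
    avg_cents  BIGINT NOT NULL,
    PRIMARY KEY (product_id, day)
);
"""
```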
Similar Problems to Practice
Related problems to practice for this question
Both must collect external data under quotas and rate limits, schedule fetches intelligently, normalize heterogeneous responses, and handle politeness and backoff to avoid bans.
Refreshing prices and running verifications are priority jobs with retries and backoff. Designing a durable, partitioned scheduler maps directly to this system’s core freshness problem.
Incoming price observations resemble append-only events that require deduplication, validation, near-real-time processing, and triggering downstream actions (alerts), just like clickstream pipelines.
Red Flags to Avoid
Common mistakes that can sink candidates in an interview
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
Late August, 2025 · Meta · Senior
Late August, 2025 · Meta · Manager
Early August, 2025 · Meta · Senior