Design a Logging Library with Severity Filtering, Rate Limiting, and Aggregation

Implement a logging library across 4 progressive phases: severity-based filtering, unit testing, throttling to suppress duplicate logs within a time window, and aggregation to report suppressed log counts when the window expires.

Asked at:

LinkedIn

Question Timeline

See when this question was last asked and where, including any notes left by other candidates.

Early March, 2026

LinkedIn


Senior

Phase 1 (Severity Filtering): Implement a logger that initializes with a minimum severity level (DEBUG, INFO, WARN, ERROR). It should accept incoming logs and print them only if their severity is equal to or greater than the initialized level.

Phase 2 (Testability): Add unit tests for the logger.

Phase 3 (Rate Limiting): Developers are complaining about console spam (e.g., from logging inside while loops). Implement a throttling mechanism: if an identical log (same severity, function name, line number, and message) occurs multiple times within an X-second window, print only the first occurrence and suppress the rest.

Phase 4 (Aggregation): Silent suppression is dangerous. Modify Phase 3 so that instead of simply dropping duplicate logs, we count them. When the X-second window expires and the next identical log comes in, print a summary of the suppressed logs (e.g., similar_lines: 5) using the timestamp of the first occurrence in that window.

I did not finish all 4 phases in time. Why? Because I fell into the classic senior-engineer trap. When using the AI agent, my instinct was to read and review every piece of code it generated line by line before moving on. I treated the AI's code like a critical pull request. This was a mistake. According to the interviewer, a successful candidate completes all phases with a working solution. The AI coding round is really an engineering-manager test, not a syntax test: they are evaluating your system decomposition and delegation.

My advice to anyone taking this round:

Do not over-review. You will waste the clock.

Write strict constraints. Spend your time writing highly specific, constraint-driven prompts (e.g., "Implement Phase 4 using a hash map. Do not use background threads. Ensure O(1) time complexity.").

Trust the tests. Have the AI write the tests first. If the tests pass and cover the edge cases, move immediately to the next phase.

Treat the AI like a fast, reckless junior dev. You define the boundaries, let them write the boilerplate, rely on the tests to catch bugs, and keep your velocity high.