
Tell me about a problem you had to solve that required in-depth thought and analysis.

Asked at:

Amazon




What is this question about?

Interviewers use this question to assess how you approach complex problem solving when the answer is not obvious. They want to hear how you framed the problem, separated signal from noise, made decisions under uncertainty, and drove the work to a useful outcome. For more senior candidates, they are also checking whether the complexity and scope of the problem match the level they are hiring for.

  • Describe a time you had to untangle a complex problem with no obvious answer.

  • What's a difficult problem you've solved where the key was careful analysis rather than just quick execution?

  • Tell me about a situation where you had to dig deep to find the real root cause of something.

  • Can you walk me through a technically or operationally complex issue that took serious thought to resolve?

Ambiguity · Ownership · Scope · Communication

Key Insights

  • You do not get much credit for saying a problem was hard; you get credit for showing why it was non-obvious and how you reduced uncertainty step by step.
  • A strong answer usually includes discarded hypotheses or tradeoffs. If your story sounds like you immediately knew the answer, it often reads as shallow rather than impressive.
  • Choose a problem with complexity appropriate to your level. Senior and above candidates should usually avoid stories that are just about debugging an isolated bug unless the analysis had broader technical or organizational consequences.

What interviewers probe at junior level
Top Priority

Show that you formed hypotheses, gathered evidence, and narrowed the problem instead of randomly trying things.

Good examples

🟢 I listed the likely causes, reproduced the issue reliably, and checked each assumption one by one until I could isolate where the behavior diverged.

🟢 I compared expected versus actual results across a few controlled cases, which helped me see that the issue only appeared when old and new data were mixed.

Bad examples

🔴 I kept changing parts of the code until the output looked right, and eventually one version worked.

🔴 I asked a few people for ideas, tried their suggestions, and used the one that solved it.

Weak answers rely on trial and error; strong answers show a disciplined method for reducing uncertainty.

Even at junior level, show that you considered more than one option and chose based on reasons, not convenience.

Good examples

🟢 I had two possible fixes, one faster to ship and one cleaner long term, and I chose the safer short-term option because the deadline was close and I documented a follow-up for the broader cleanup.

🟢 I decided not to patch around the symptom because the root cause would have kept affecting new cases, even though the deeper fix took a bit longer.

Bad examples

🔴 Once I found a fix that passed the tests, I used it because I did not want to risk breaking anything else.

🔴 My teammate suggested a simpler workaround, so I went with that without really comparing alternatives.

Weak answers treat the first working answer as good enough; strong answers show judgment about consequences and constraints.

Pick a problem that was meaningfully harder than a routine task, even if the scope was small.

Good examples

🟢 I owned a small feature where the initial implementation worked in happy-path testing but failed under several real user states, so I had to understand edge cases and redesign part of the flow.

🟢 I investigated why a report was inconsistently wrong across environments and had to trace interactions between data assumptions, time zone handling, and caching before finding the root cause.

Bad examples

🔴 I had a failing unit test, so I stepped through the code and fixed a typo in a condition.

🔴 The requirement was unclear, so I asked my teammate what they wanted and then built exactly that.

Weak answers describe routine execution; strong answers show a real puzzle with multiple plausible causes or constraints.

Valuable

Close the loop by explaining what changed after your analysis and what you would now do differently next time.

Good examples

🟢 After the fix, I added a small test for the missed case and checked with my teammate a week later that the issue had not returned.

🟢 The experience taught me to reproduce the problem before changing code, and I used that same approach on later tickets with less back-and-forth.

Bad examples

🔴 I fixed it and moved on, and I assume it has been fine since then.

🔴 The main result was that I learned debugging takes patience.

Weak answers stop at activity; strong answers show verified impact and a learning loop.

Example answers at junior level

Great answers

In my first year, I worked on a small reporting feature that looked straightforward, but some users were getting different totals for the same date range. At first I thought it was just a query bug, but when I compared a few examples, I noticed it only happened for accounts that had activity near midnight. I reproduced the issue with a small set of test cases and found that one part of the pipeline was treating dates in local time while another used UTC. I talked through two fixes with my teammate and chose the smaller change in the aggregation layer because it solved the issue without risking other report logic right before release. After shipping it, I added tests around the edge cases and checked support tickets the next week to make sure the issue stopped showing up. That experience taught me to narrow a problem with controlled examples before changing code.

At my first job I was assigned to investigate reports that our mobile app was draining users' batteries after a recent update. I gathered device logs and added lightweight telemetry so I could compare sessions, then reproduced the problem on a test phone while watching the system profiler to see what was keeping the device awake. I discovered a background sync task was retrying aggressively on poor connections and waking the CPU frequently, so I changed the retry logic to use a capped backoff and to pause aggressive syncs when the app was in the background or on low battery. I rolled the change out to a small percentage of users and monitored battery and sync success metrics; battery-related reports dropped by about a third while the app still stayed up to date for active users. The experience taught me to instrument before guessing and to weigh user-facing trade-offs like freshness versus battery life.

Poor answers

I had a bug in a feature I was building where the numbers on the page were sometimes wrong. I looked through the code for a while and tried a few changes until the output matched what I expected. Once the tests passed, I sent it for review and it was merged. It was a good example of solving a hard problem because I stayed on it until I found the fix.

Question Timeline

See when this question was last asked and where, including any notes left by other candidates.

Late November, 2024

Amazon


Junior

Tell me about a time when you had to deep dive into a project, explain how you did it

Late July, 2024

Amazon


Mid-level
