Tell me about an unexpected problem that you solved in an unconventional way, ahead of the customer asking

Asked at:

DoorDash

Amazon

What is this question about?

This question is testing whether you notice important risks or opportunities before they become visible incidents, escalations, or customer complaints. Interviewers want to see proactive ownership under uncertainty: how you detected a non-obvious problem, decided it was worth acting on, and solved it in a way that wasn't just the default playbook. At higher levels, they are also looking for whether your anticipation and solution style matched the scope of your role.

Common variants include:

  • Tell me about a time you identified a customer problem before customers reported it and fixed it in a non-obvious way.

  • Describe a situation where you saw an issue coming before it became visible and solved it differently than the standard approach.

  • Have you ever prevented a customer-facing problem before anyone formally escalated it? What did you do?

  • What's an example of spotting a hidden risk early and handling it creatively before users felt the impact?

Ownership
Ambiguity
Scope
Leadership

Key Insights

  • Don't tell a story where you simply reacted faster than the customer. The strongest answers show that you identified a meaningful latent problem before it was formally reported and took responsibility for addressing it.
  • The 'unconventional way' is less about sounding clever and more about showing good judgment. You should explain why the normal path was insufficient and why your approach was a thoughtful fit for the constraints.
  • Many candidates forget to explain how they knew the issue was real before it became obvious. Show your detection process, the signals you noticed, and how you validated your hypothesis early.

What interviewers probe at junior level

Top Priority

Even at junior level, don't stop at 'I fixed it'—show that the issue was actually prevented or made less likely for users.

Good examples

🟢 After shipping the safeguard, I reran the failing scenario and watched early usage to confirm new users no longer got stuck.

🟢 I checked with the people testing the flow and monitored the relevant errors for a while to make sure the issue really stopped happening.

Bad examples

🔴 I merged the change, and since nobody complained right away, I assumed the problem was solved.

🔴 I sent out the fix and moved on without checking whether the edge case still existed.

Weak answers equate implementation with resolution; strong answers verify that the customer-facing risk actually went down.

At junior level, interviewers mainly want to see that you noticed something meaningful, took initiative within your lane, and followed through instead of just handing it off.

Good examples

🟢 While testing my task, I noticed new users could get stuck if they refreshed midway, so I reproduced it, confirmed it affected real signup paths, and proposed a small safeguard before release.

🟢 I saw repeated internal failures that hadn't reached customers yet, and instead of ignoring them, I traced the pattern, checked with my teammate, and took on the fix as part of my work.

Bad examples

🔴 I saw some confusing behavior in the feature, but I assumed product would hear about it if it mattered, so I waited until support opened a ticket and then I fixed it.

🔴 I noticed the setup flow was brittle, so I mentioned it in standup and moved on because it wasn't assigned to me yet.

Weak answers treat early warning signs as someone else's problem; strong answers show the candidate felt accountable enough to investigate and act within appropriate scope.

You do not need a perfect analytical framework at junior level, but you should show that you checked whether the issue was real instead of chasing a random hunch.

Good examples

🟢 I noticed a strange edge case during testing, then reproduced it a few times and checked expected behavior with a teammate before implementing a fix.

🟢 A small inconsistency caught my attention, so I gathered a couple of examples, confirmed it wasn't just local data, and then addressed it.

Bad examples

🔴 The flow felt weird to me, so I rewrote part of it even though I hadn't confirmed whether users would actually hit that path.

🔴 I saw one odd log entry and assumed it meant a serious issue, so I changed the code right away.

Weak answers act on instinct alone; strong answers show basic validation that turns a suspicion into a credible problem statement.

Valuable

Your solution does not need to be flashy; it just needs to show that you adapted sensibly when the obvious path was too slow, risky, or incomplete.

Good examples

🟢 The normal fix would have required a bigger change than we could safely make, so I added a guard and a simple fallback that addressed the customer risk immediately.

🟢 I couldn't reproduce the issue in the usual environment, so I created a small local simulation to test the edge case and confirm a safe fix.

Bad examples

🔴 Instead of using the existing approach, I built my own custom version because it seemed more interesting.

🔴 I took a shortcut that bypassed our normal checks so I could solve it quickly.

Weak answers use unconventionality as a substitute for judgment; strong answers show adaptation that is grounded in the actual constraints.

Example answers at junior level

Great answers

On a recent onboarding feature, I noticed during testing that if a new user refreshed at a certain step, we created part of their account but showed them an error page. Nobody had reported it yet because the feature was still rolling out, but I realized that if we shipped it broadly, support would probably start seeing confused users who thought signup had failed. The normal fix would have been to refactor the whole flow, which was too big for our release window, so I added a small recovery check that detected the partial state and resumed the user in the right place. I verified it by reproducing the issue several times and testing with a teammate on different accounts. After release, I watched the error pattern for the next week and it stayed flat while signup completion improved.

At a previous startup I was helping with the weekly billing run and noticed in a test dataset that a rare rounding edge case could make some customers’ invoice totals look off. I knew if those invoices went out we'd get upset emails and a lot of manual work for support and finance, and a full redesign of the billing calculation would take weeks. So I wrote a short script that scanned the upcoming billing batch for that exact anomaly and corrected the totals before invoices were generated, then walked the finance lead through the logic and ran it on a staging copy to prove it was safe. When we ran production that week the script caught and fixed 12 invoices that would have been wrong, avoiding any customer complaints; we logged the issue for a proper code fix in a later sprint.

Poor answers

I found an unexpected problem in one of our forms where a validation message looked inconsistent. I went ahead and rewrote the validation logic in a more custom way because I thought the existing approach was too rigid. It took a little extra time, but I like solving problems thoroughly. We shipped it and I didn't hear any complaints after that, so I think it worked out well.

Question Timeline

See when this question was last asked and where, including any notes left by other candidates.

Late August 2025

DoorDash

Mid-level

Late January 2025

Amazon

Mid-level
