Tell me about a time when you worked to improve the quality of a product / service / solution that was already getting good customer feedback.
Asked at:
Amazon
What is this question about?
Interviewers are testing whether you can improve something that is already working, not just whether you can rescue something broken. They want to see standards, curiosity, and judgment: how you detected the remaining quality gaps, decided they were worth addressing, and translated a good baseline into a measurably better customer experience. For senior candidates especially, this also reveals whether you can raise the bar proactively rather than waiting for obvious failures.
“Describe a time you took a product that customers already liked and made it meaningfully better.”
“Have you ever improved something that was already considered successful? What did you change and why?”
“Tell me about a situation where 'good enough' wasn't good enough for you, even though users were mostly satisfied.”
“What's an example of raising the quality bar on an existing service rather than fixing an obvious failure?”
“Can you share a time when positive feedback masked a deeper quality issue you decided to address?”
Key Insights
- You should explain why further improvement mattered even though customers were already happy. The strongest answers show discernment about hidden risk, uneven experience, scale effects, or missed upside rather than improvement for improvement's sake.
- Don't frame the story as 'I polished it because I like high standards.' Show how you found a real quality signal beneath positive feedback and how you validated that the improvement actually helped.
- If the product was already seen as successful, the bar for changing it is higher. You need to show thoughtful prioritization, not just perfectionism.
What interviewers probe at each level
Top Priority
At junior level, you do not need to own the entire strategy, but you should show that you helped drive a concrete improvement through completion.
Good examples
🟢After I noticed the issue, I put together examples, proposed a small fix, implemented it with my teammate's guidance, and helped verify the updated behavior before release.
🟢I didn't just point it out; I took the action item to investigate, made the change in the area I owned, and followed up with QA and support to make sure the fix addressed the problem.
Bad examples
🔴I mentioned to my lead that the flow could be improved, and after that the team eventually made some changes.
🔴I raised the issue in a meeting and shared a few suggestions, then I moved on to my next task since the product was already in good shape.
Weak answers stop at observation; strong answers show follow-through within an appropriate junior scope.
A strong junior answer shows judgment: you improved something real without treating every rough edge as equally urgent.
Good examples
🟢I focused on the step that was confusing users most and kept the change small enough to ship safely instead of rewriting the entire flow.
🟢I suggested a targeted fix that improved the common problem first, because a larger redesign would have delayed other committed work.
Bad examples
🔴Since customers already liked the feature, I used the chance to fix as many small issues as possible because it was a good cleanup opportunity.
🔴I kept expanding the work once I got into the code because I wanted the whole area to be perfect before we moved on.
Weak answers signal uncontrolled polish work; strong answers show the candidate chose an appropriately scoped improvement.
Valuable
You do not need complex metrics, but you should show some evidence that the change worked beyond 'we shipped it.'
Good examples
🟢After the change, I checked with support and saw the repeat questions on that step dropped, which gave me confidence the issue was real and the fix helped.
🟢I compared the before and after behavior with a few users in testing and confirmed they completed the task more smoothly after the update.
Bad examples
🔴We released the improvement and nobody complained afterward, so I considered it a success.
🔴I assumed the update made things better because the flow was cleaner and the team liked the result.
Weak answers infer success from activity or silence; strong answers use some direct signal to confirm the improvement.
Staff candidates are expected to leave behind mechanisms, standards, or alignment that improve quality beyond the single case.
Good examples
🟢I turned the lesson into shared guidance and review norms so teams could make more consistent quality decisions without needing me in every case.
🟢The effort resulted in a common way of measuring and discussing quality across teams, which helped us catch similar issues earlier in later launches.
Bad examples
🔴I got teams to make the needed improvements, but after that each team went back to its own process.
🔴I focused on resolving the current issue and did not invest much in broader changes because the immediate customer impact was the main thing.
Weak answers produce a one-off win; strong answers create leverage that raises the bar across multiple engineers or teams.
Example answers at this level
Great answers
On a recent internal tool project, users were actually pretty happy because it saved them time compared to the old process. But while helping test a follow-up change, I noticed people often got stuck on one form field and had to ask what format it expected. I pulled a few support messages, confirmed that confusion was recurring, and suggested a smaller improvement instead of redesigning the whole page. I updated the field guidance, added clearer validation text, and worked with my teammate to test it before release. After that, support questions on that step dropped a lot over the next couple of weeks. What I was proud of was noticing that positive overall feedback still left room to remove friction for users.
At my previous job on a consumer web app, our overall reviews were excellent, but I noticed a small, steady trickle of emails from users who relied on screen readers saying parts of the checkout felt unusable. I pulled the pages into a local build, walked through them with a screen reader to see where labels and focus order were failing, and sketched a minimal set of fixes that wouldn't change the design, just how elements were announced and navigated. I paired with a more senior engineer to implement clearer element descriptions and correct tab order, added a couple of simple tests to prevent regressions, and we shipped the change in a small patch. After that release, the accessibility complaints stopped, and we saw a small but measurable increase in checkout completion for keyboard and screen-reader sessions. I felt proud because improving quality here aligned with my belief that good products should work well for everyone, not only the majority of users.
Poor answers
I worked on a dashboard that users already liked, and I wanted to improve the quality by cleaning up the interface. I changed some spacing, renamed a few labels, and reorganized a couple of sections because I thought it looked more professional. The team liked the changes and merged them pretty quickly. It was a good example of me caring about quality even when customers were already happy.
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
Late October, 2024
Amazon
Mid-level