Give me an example of a time when you asked for customer feedback.
Asked at: Amazon, Meta
What is this question about?
Interviewers use this question to assess whether you actively close the loop with the people affected by your work rather than relying on assumptions. They want to see judgment about when feedback is worth seeking, how you gather it, and whether you turn what you hear into concrete changes. At higher levels, this also tests whether you build repeatable mechanisms for learning from customers, not just one-off conversations.
Common variants
- “Tell me about a time you went directly to users or customers to learn more about a problem.”
- “Describe a situation where you proactively sought customer input instead of relying on internal assumptions.”
- “Have you ever asked customers for feedback on something your team was building or changing? What happened?”
- “Walk me through a time when customer conversations changed how you approached a product or engineering decision.”
Key Insights
- You should explain why you sought feedback at that moment. The strongest answers show a clear decision point, risk, or uncertainty that made direct customer input necessary.
- Don't treat customer feedback as a ceremonial step. Strong answers show how you asked good questions, separated signal from noise, and changed something meaningful based on what you learned.
- If you're senior or above, one customer conversation is usually not enough. Show how you translated individual feedback into a broader product, team, or organizational improvement.
What interviewers probe at junior level
Top Priority
At junior level, the key is showing initiative within your scope: you noticed uncertainty and chose to learn from users rather than guessing.
Good examples
🟢 I noticed people were dropping off in a new setup flow, so I asked my lead if I could sit in on a few user calls and prepare questions about where they got confused.
🟢 After shipping a small internal tool for operations staff, I reached out to two frequent users because I realized we were making assumptions about their daily workflow.
Bad examples
🔴 We got a complaint through support, so I looked at it and counted that as customer feedback.
🔴 My manager told me to ask a user what they thought, so I joined the call and listened.
Weak answers are passive or manager-driven; strong answers show the candidate recognized a gap and took reasonable initiative to learn.
A good junior answer shows you did something specific with the feedback and checked whether it actually helped.
Good examples
🟢 Based on what I heard, I simplified the first step of the flow, then asked the same users to try it again, and completion improved.
🟢 I summarized the recurring confusion for my team, implemented the part I owned, and checked support tickets afterward to see whether the issue dropped.
Bad examples
🔴 After hearing the feedback, I passed it to my lead and moved on to my next task.
🔴 The customer had several suggestions, so we added most of them to show we were responsive.
Weak answers stop at collection or blindly implement requests; strong answers translate feedback into a focused change and close the loop.
Valuable
A strong junior story can be small, but it should still show clear ownership of a real customer-facing learning loop.
Good examples
🟢 I owned improving one step in a user flow and gathered feedback directly from the people using that step.
🟢 I worked within a defined project but could clearly explain the part I drove, from feedback request through implementation.
Bad examples
🔴 I changed the wording on a button after one coworker who uses our tool said it looked odd.
🔴 I talked about a company-wide customer research program even though my role was mostly attending the meetings.
Weak answers are either too trivial or inflate borrowed scope; strong answers are modest but authentically owned.
Example answers at junior level
Great answers
On my last team, I worked on a small change to our account setup flow for internal customers in operations. After we released it, I noticed a few people were still asking the same questions in chat, so I asked my lead if I could talk directly with two of the users. I had them walk through the setup while I watched, and I realized the problem was not the backend logic I had changed but that the first step used terms they didn't recognize. I updated the labels and added a short explanation, then asked the same users to try it again. They finished without help, and the number of questions in chat dropped the next week. That experience taught me not to assume a feature is clear just because it works technically.
In my first job after college, I owned a small feature to nudge new users to enable push notifications, and I noticed the opt-in rate stalled at about 8%. I recruited ten recent sign-ups and did five short phone interviews asking why they'd declined notifications and what would change their mind. Most said they didn't know what types of messages they'd get and were worried about spam or losing control. I rewrote the prompt to include two concrete examples of useful notifications and a direct link to manage settings, then rolled it out to a small test group. Opt-ins jumped to around 20% the next week, and I learned that transparency and giving users control mattered more than persuasive copy.
Poor answers
I asked for customer feedback after I shipped a settings page update. I sent a message asking whether people liked the new page, and a few said it looked good, so I took that as a positive result. There were some other comments too, but they were mostly preferences, so I didn't think they were important. Overall, I think it was useful because we confirmed the release was fine.
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
- Early November, 2025: Amazon (Senior)
- Late September, 2025: Meta (Senior)
- Mid December, 2024: Amazon (Mid-level)