Tell me about a time when you didn't have enough data to make the right decision.
Asked at: Meta, Amazon
What is this question about?
Interviewers use this question to see how you operate under uncertainty, when the facts are incomplete but a decision still has to be made. They want to understand whether you can reduce ambiguity intelligently, make proportionate bets, and own the outcome without projecting certainty you did not have. At higher levels, they also want to see whether you chose a decision process that matched the scale and risk of the situation. Common variations include:
“Describe a situation where you had to make a call before you had all the information you wanted.”
“Tell me about a decision you made under uncertainty. How did you handle it?”
“Have you ever had to move forward when the data was incomplete or conflicting? What did you do?”
“Walk me through a time when waiting for perfect information was not an option.”
“What's an example of a decision you had to make with partial evidence?”
Key Insights
- You do not need a story where you eventually got perfect data. Strong answers often show that you identified what was unknowable, narrowed what mattered most, and made a sensible decision anyway.
- You should explain how you calibrated risk, not just how you gathered more information. Interviewers are listening for whether you knew when to investigate further versus when to act with partial information.
- Be explicit about the decision quality, not only the outcome. A good answer can still involve a bad result if your process was disciplined, transparent, and adjusted as new evidence arrived.
What interviewers probe at junior level
Top Priority
At junior level, interviewers mainly want to see that you can recognize when you do not know enough and say so clearly rather than guessing blindly.
Good examples
🟢 I realized I was making assumptions about how another part of the system behaved, so I called that out to my teammate and said I was not confident enough to choose an approach yet.
🟢 I had two possible fixes, but I was missing usage information, so I was upfront that I did not know which one was safer and asked for help validating my assumptions.
Bad examples
🔴 I did not have the full context, but I just picked the approach that felt most standard and moved forward because overthinking slows things down.
🔴 There were a few unknowns, but I assumed they would work themselves out once implementation started, so I did not think it was necessary to raise concerns.
Weak answers hide uncertainty behind confidence or convenience; strong answers show self-awareness and accurate labeling of what was unknown.
You are not expected to solve ambiguity alone, but you are expected to do reasonable homework before asking others to decide for you.
Good examples
🟢 Before asking for help, I reproduced the problem, checked recent changes, and narrowed it down to two likely causes so we could decide faster.
🟢 I could not tell which option was safer, so I gathered a small amount of evidence from documentation and a quick test before bringing a recommendation.
Bad examples
🔴 I was unsure which fix to use, so I sent my lead a message asking what to do without first checking logs or reproducing the issue myself.
🔴 We did not have enough information, so I waited until the next planning meeting and hoped someone else would clarify it.
Weak answers outsource thinking immediately; strong answers show initiative scaled to junior scope.
Valuable
A strong junior answer does not end at the decision; it shows you stayed engaged, watched what happened, and learned from the result.
Good examples
🟢 After we chose the path, I checked whether the fix actually addressed the issue and shared what we learned so we could improve similar decisions later.
🟢 When early results suggested our assumption was wrong, I surfaced that quickly and helped switch to the backup option.
Bad examples
🔴 Once my lead approved the approach, I implemented it and moved on to the next task since the decision had already been made.
🔴 The result was mixed, but there were a lot of factors outside my control, so there was not much else for me to do.
Weak answers treat the decision as someone else's once approved; strong answers show continued ownership and learning.
Example answers at junior level
Great answers
On a bug fix project, I had to decide whether the issue was caused by a recent code change or by bad input coming from another service, and I did not have enough data to be sure. Before asking my lead to decide, I reproduced the bug, checked the recent change history, and added some logging in a test environment so I could narrow it down. I found that both were possible, but I was still not confident enough to make a high-risk change to the shared code path. I told my lead what I knew and what I was still assuming, and suggested we start with the smaller fix, which was easy to roll back. We shipped that first, watched the error rate, and the fix resolved most of the failures. The remaining cases confirmed the input problem too, so we handled those separately. What I learned was that when I do not have enough data, I should reduce the uncertainty as much as I can and then choose the safest reversible option.
At my last internship I had to decide whether to go ahead with a weekly mobile app release after the analytics dashboard showed a sudden drop in a core conversion metric, but we only had a handful of events so I couldn't tell if it was a real user problem or tracking noise. Rather than delay the release based on weak signals, I proposed a 5% staged rollout behind a feature flag, added targeted logging to capture the user flow (without storing PII), and worked with the product manager to agree on concrete rollback criteria. Over the next two days the small cohort's behavior stayed consistent while the broader drop pointed to a reporting pipeline issue, so we continued the rollout and fixed the analytics bug separately. Because we limited exposure and defined objective checks, we avoided blocking other teams and protected users from a risky mass release. I came away valuing incremental launches and measurable thresholds whenever the data is too thin to be certain.
Poor answers
I remember a time when I was debugging an issue and there was not enough data to know exactly what was wrong. I picked the fix that seemed most likely based on my experience with similar bugs and pushed it so we would not waste time investigating forever. It did not solve everything, but at least we made progress and showed we were moving quickly. In situations like that, I think it is better to be decisive and keep things moving.
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
- Mid March, 2026: Meta (Manager)
- Early February, 2025: Amazon (Mid-level)
- Early November, 2024: Amazon (Mid-level). Candidate note: “Tell me about a time when you didn't have enough data to make the right decision. What did you do? What path did you take? Did the decision turn out to be the correct one?”