Can you describe a time when you had to make a decision with incomplete information? What was the situation, and how did you handle it?
Asked at: Meta, Amazon, Bloomberg, Anthropic
What is this question about?
Interviewers use this question to assess how you operate under ambiguity when the answer is not obvious and the facts are still emerging. They want to see whether you can make a reasonable decision without freezing, while also managing risk, updating your view as new information arrives, and taking ownership for the outcome. At higher levels, they are also evaluating whether you chose an ambiguity level and decision scope appropriate to your seniority.
“Tell me about a time you had to make an important call before you had all the facts.”
“Describe a situation where the data was unclear but you still had to move forward. What did you do?”
“Have you ever been in a position where waiting for certainty would have slowed things down too much? How did you handle it?”
“What's an example of a decision you made under ambiguity, and how did you manage the risk?”
“Can you walk me through a time when you had to choose a direction even though some key assumptions were still unproven?”
Key Insights
- You do not get extra credit for pretending you had enough data. Name what was unknown, why it mattered, and how you decided anyway.
- A strong answer is rarely just "I made the best call I could." Show how you reduced uncertainty, bounded the downside, and created a path to revisit the decision.
- For senior candidates especially, the real signal is not certainty but judgment: what assumptions you made, which risks you accepted, and how you kept progress moving without being reckless.
What interviewers probe at this level
Top Priority
A strong answer shows you did not just guess—you chose a path that limited harm if you were wrong.
Good examples
🟢I put the change behind a flag and tested it with a small internal audience first so we could learn without affecting everyone.
🟢I chose the simpler path that preserved the old behavior as a fallback until we confirmed the integration details.
Bad examples
🔴I was not sure the change would work in production, but I merged it because we could always fix it later.
🔴I chose the more complete implementation even though I had not tested the edge cases, since I wanted to avoid rework.
Weak answers gamble on being right; strong answers preserve a safe fallback, smaller exposure, or easier rollback.
You are not expected to know everything yourself, but you are expected to show initiative in gathering enough information to make or support a decision.
Good examples
🟢I compared the docs with actual behavior in a small test, checked past examples in our codebase, and then asked my mentor a more specific question.
🟢I narrowed the uncertainty by reproducing the issue locally, writing down what I knew versus what I was assuming, and validating the highest-risk assumption first.
Bad examples
🔴I could not get a clear answer quickly, so I just followed my first instinct and started coding.
🔴I asked one teammate what they thought, and when they were unsure too, I picked the faster option.
Weak answers treat uncertainty reduction as optional; strong answers show a lightweight but deliberate effort to gather evidence before acting.
Valuable
Even at junior level, a good decision under ambiguity includes telling the right people what you assumed and what to watch for.
Good examples
🟢I told my mentor what I knew, what I was assuming, and which part I was least confident about before I implemented the change.
🟢After deciding, I documented the behavior we expected and asked the reviewer to pay special attention to the uncertain area.
Bad examples
🔴I made the change and mentioned it in the next standup after it was already done.
🔴I did not want to bother anyone until I was more sure, so I just proceeded and planned to explain later if needed.
Weak answers keep uncertainty private; strong answers make assumptions visible so others can support, correct, or monitor the decision.
You do not need a perfect result, but you should show that you checked what happened and improved your approach.
Good examples
🟢After release, I checked the logs and user behavior we were worried about, then updated the code comments and shared what I learned with my teammate.
🟢When one assumption turned out wrong, I corrected the implementation quickly and wrote down a small checklist I now use for similar integrations.
Bad examples
🔴The change did not cause immediate issues, so I considered it done and moved on.
🔴Once my lead said the decision was fine, I did not revisit the assumptions.
Weak answers stop at the decision; strong answers show validation, adaptation, and a learning loop.
Example answers at this level
Great answers
In my first year, I was adding an integration to a third-party billing service, and their documentation did not match the responses I saw in the test environment. I needed to decide whether to keep building against the docs or pause and risk slipping my part of the release. I reproduced the issue with a small test, checked how a similar integration worked elsewhere in our codebase, and then asked my mentor a very specific question instead of just saying it was confusing. Based on that, I implemented the safer path that preserved the old behavior if the new call failed, and we released it behind a flag for internal users first. That let us confirm the real behavior before turning it on broadly. The docs were partly wrong, but because we had a fallback and a limited rollout, we avoided customer impact and I wrote up what we learned for the next person working with that vendor.
At a small consumer startup I worked on a redesign of our profile settings page where the product manager wanted both a simplified layout and a lot of advanced options, but we didn't have user research to say which users would prefer which approach. With a two-week deadline I built two lightweight prototypes, asked five colleagues and two customer-support reps to try common tasks, and timed how long it took them to complete each task while noting where they hesitated. The informal tests showed the simplified layout was faster for the majority and produced fewer questions from support, so I implemented that as the default and hid advanced options behind a clearly labeled link. After release I monitored support mentions and a small in-app survey; the feedback matched our tests, and I wrote up the experiment and results so future design decisions would have clearer criteria.
Poor answers
I had a case where the requirements were not totally clear for a small dashboard change. Since nobody had final answers yet, I went with what seemed most intuitive to me and built the full version so we would not lose time. I showed it in review and the team made a few adjustments, but overall it was faster than waiting around for more detail. I think in situations with incomplete information, it is usually best to just decide quickly and keep moving.
Question Timeline
See when this question was last asked and where, including any notes left by other candidates.
Early April, 2026
Meta
Junior
Mid March, 2026
Meta
Manager
Mid February, 2026
Meta
Staff