Backtracking
Systematic search with pruning — how backtracking powers solutions to constraint satisfaction, configuration, and combinatorial problems.
Some problems just can't be solved with a clean formula. You need to try things, see if they work, and undo them when they don't. That's backtracking in a nutshell: a structured way to explore possibilities and bail early when you hit a dead end.
In AI coding interviews, backtracking shows up constantly. Configuration generators, scheduling problems, puzzle solvers, anything where you're searching through a combinatorial space for valid arrangements. These problems are perfect for this format because the solution structure is complex enough to be interesting but follows a recognizable template that you and the AI can collaborate on effectively.
You don't need to have backtracking memorized before walking into the interview. Interviewers aren't testing whether you've drilled hundreds of constraint satisfaction problems. They want to see that you can recognize when a problem calls for systematic search, articulate that to the AI, and verify that the generated solution actually prunes correctly. If you understand the core idea, you can work through the details in real time.
Common problem shapes
Before getting into the mechanics, it helps to know what backtracking problems actually look like in AI coding interviews. They tend to fall into a few categories:
- Placement problems -- arrange items in a grid or structure so they don't conflict. N-Queens, Sudoku, crossword puzzle generators.
- Scheduling and assignment -- assign tasks to resources, time slots to events, or workers to shifts, subject to constraints like availability and capacity.
- Configuration generation -- produce all valid configurations of a system. Think feature flag combinations that satisfy dependency rules, or network topologies that meet connectivity requirements.
- Path and partition problems -- find paths through a maze, partition sets into groups meeting certain criteria, or generate all valid orderings of tasks with dependencies.
The common thread: you're building a solution incrementally, and at each step you can check whether you've already violated a constraint. If you have, there's no point continuing down that path.
The core idea: search with early termination
Backtracking is depth-first search on a decision tree, with one critical addition: pruning. At each step, you make a choice. If that choice violates a constraint, you immediately abandon that entire branch and try the next option. No point exploring a path that's already broken.
Think of it like navigating a maze. You pick a direction, walk until you hit a wall, then back up to the last fork and try a different path. You don't restart from the beginning each time. You don't explore every inch of every dead end. You just back up to where you still had options.
Picture the decision tree for one of these problems. A single path is the one the algorithm actually follows to a solution; the rest are branches that got pruned -- the algorithm checked a constraint, found a violation, and stopped exploring. Without pruning, you'd visit every single node. With it, you skip entire subtrees.
This is what makes backtracking practical. The theoretical search space for many of these problems is enormous, but good constraint checking cuts it down to something manageable.
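To make that concrete, here's a small experiment (a sketch; `count_nodes` is a hypothetical helper, not part of any library) that counts how many decision-tree nodes get visited for the 6-queens problem with and without constraint checks:

```python
def count_nodes(n: int, prune: bool) -> int:
    """Count decision-tree nodes visited for n-queens, with or without pruning."""
    visited = 0

    def safe(placement: list[int], col: int) -> bool:
        # A new queen in this column conflicts if it shares a column or a
        # diagonal (equal row and column distance) with an earlier queen.
        row = len(placement)
        return all(c != col and abs(c - col) != row - r
                   for r, c in enumerate(placement))

    def explore(placement: list[int]):
        nonlocal visited
        visited += 1
        if len(placement) == n:
            return
        for col in range(n):
            if prune and not safe(placement, col):
                continue  # skip the entire subtree under this choice
            explore(placement + [col])

    explore([])
    return visited

print(count_nodes(6, prune=False))  # every node of the full 6-level tree
print(count_nodes(6, prune=True))   # a small fraction of that
```

The unpruned count is the full tree (every node at every depth); the pruned count is orders of magnitude smaller, which is the whole point of checking constraints as you go.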
The backtracking template
Almost every backtracking solution follows the same three-step rhythm: choose, explore, unchoose. You make a decision, recurse into it, and then undo that decision before trying the next option.
Here's the general shape in Python:
```python
def backtrack(state, choices):
    if is_solution(state):
        result.append(state.copy())  # record a snapshot; `result` lives in an enclosing scope
        return
    for choice in choices:
        if is_valid(state, choice):
            state.apply(choice)        # choose
            backtrack(state, choices)  # explore
            state.undo(choice)         # unchoose
```

That's it. Every backtracking problem is a variation on this skeleton. The differences are in what "state" looks like, how you define "choices" at each step, and what is_valid checks. The recursive structure stays the same.
The state.undo(choice) line is the part people forget, and it's the part that makes backtracking work. Without it, the decisions you made for one branch leak into the next branch, and everything breaks. When you're reviewing AI-generated backtracking code, this is the first thing to check: does the state get properly restored after each recursive call?
When prompting the AI to write a backtracking solution, explicitly mention the choose-explore-unchoose pattern. Something like "implement this using backtracking with proper state restoration after each recursive call" helps the AI produce cleaner code than a vague "find all valid configurations." The more structure you give it, the less cleanup you'll do.
Example: solving N-Queens
The N-Queens problem is the classic backtracking example for good reason. Place N queens on an N x N chessboard so that no two queens threaten each other. No shared rows, columns, or diagonals.
For a 4x4 board, here's what one valid solution looks like:
Queens are at positions (0,1), (1,3), (2,0), and (3,2). No two share a row, column, or diagonal.
Why is this a good backtracking problem? Because the constraints are tight. Each queen you place eliminates an entire row, column, and two diagonals. By the time you're placing the third or fourth queen, most of the board is already off-limits. This means the search tree gets pruned heavily, and the algorithm finishes quickly despite the theoretically huge number of possible arrangements (for an 8x8 board, there are over 4 billion ways to place 8 queens, but only 92 valid solutions).
The approach: place queens one row at a time. For each row, try each column. If placing a queen there doesn't conflict with queens already on the board, recurse to the next row. If no column works for the current row, backtrack to the previous row and try the next column there.
```python
def solve_n_queens(n: int) -> list[list[int]]:
    solutions = []
    columns = set()  # occupied columns
    diag1 = set()    # occupied "\" diagonals, keyed by row - col
    diag2 = set()    # occupied "/" diagonals, keyed by row + col

    def backtrack(row: int, placement: list[int]):
        if row == n:
            solutions.append(placement[:])
            return
        for col in range(n):
            if col in columns or (row - col) in diag1 or (row + col) in diag2:
                continue  # prune: this square is attacked
            columns.add(col)               # choose
            diag1.add(row - col)
            diag2.add(row + col)
            placement.append(col)
            backtrack(row + 1, placement)  # explore
            columns.remove(col)            # unchoose
            diag1.remove(row - col)
            diag2.remove(row + col)
            placement.pop()

    backtrack(0, [])
    return solutions
```

Notice the pattern. For each column we try, we add it to our tracking sets (choose), recurse (explore), then remove it (unchoose). The three sets -- columns, diag1, and diag2 -- let us check constraints in O(1) instead of scanning the whole board each time.
The diagonal trick is worth understanding: two cells share a diagonal if their row - col values match (top-left to bottom-right diagonals) or their row + col values match (top-right to bottom-left diagonals). This is the kind of thing that's fine to let the AI generate -- but you should be able to explain it if the interviewer asks.
You don't need to have the diagonal formula memorized going in. If you tell the AI "use sets to track occupied columns and diagonals for O(1) conflict checking," it'll produce something like this. The interviewer cares that you knew to ask for efficient constraint checking, not that you remembered the exact math.
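If you want to convince yourself the diagonal formulas work, a couple of spot checks are enough (the `same_diagonal` helper here is just for illustration):

```python
# Two cells (r1, c1) and (r2, c2) share a diagonal exactly when
# r1 - c1 == r2 - c2 (the "\" diagonals) or r1 + c1 == r2 + c2 (the "/" diagonals).
def same_diagonal(r1: int, c1: int, r2: int, c2: int) -> bool:
    return (r1 - c1 == r2 - c2) or (r1 + c1 == r2 + c2)

print(same_diagonal(0, 0, 3, 3))  # True: both on the main "\" diagonal
print(same_diagonal(0, 3, 3, 0))  # True: both on the "/" anti-diagonal
print(same_diagonal(0, 1, 2, 2))  # False: no shared diagonal
```

Walking along a "\" diagonal increments row and column together, so row - col stays constant; along a "/" diagonal one goes up as the other goes down, so row + col stays constant.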
Constraint propagation: making pruning aggressive
Basic backtracking checks constraints when you make a choice. Constraint propagation goes further: after making a choice, it immediately figures out what that choice implies for future choices and eliminates options that are no longer viable.
Think about Sudoku. When you place a 5 in a cell, basic backtracking just checks "is this valid right now?" Constraint propagation also removes 5 from the possible values for every cell in the same row, column, and 3x3 box. When a cell gets reduced to a single possible value, that value gets placed automatically, which triggers more propagation.
This is the difference between "check if my choice is valid" and "check what my choice forces." When constraints are tight -- meaning each choice eliminates a lot of future options -- propagation makes backtracking dramatically faster. Problems that seem exponential on paper become nearly linear in practice because the search tree gets pruned so aggressively.
When constraints are loose, though, propagation doesn't help much. If each choice barely affects future options, you're back to brute-force search through an enormous tree. This is when you need to think about whether backtracking is even the right approach, or if you'd be better off with dynamic programming or a greedy strategy.
```python
def solve_sudoku(board: list[list[int]]) -> bool:
    # possible[r][c] is the candidate set for each cell
    possible = [[set(range(1, 10)) for _ in range(9)] for _ in range(9)]
    for r in range(9):
        for c in range(9):
            if board[r][c] != 0:
                propagate(possible, r, c, board[r][c])
    return backtrack_sudoku(board, possible)

def propagate(possible, row, col, val):
    # Placing val eliminates it as a candidate from the row, column, and 3x3 box.
    for i in range(9):
        possible[row][i].discard(val)
        possible[i][col].discard(val)
    box_r, box_c = 3 * (row // 3), 3 * (col // 3)
    for r in range(box_r, box_r + 3):
        for c in range(box_c, box_c + 3):
            possible[r][c].discard(val)

def backtrack_sudoku(board, possible) -> bool:
    cell = find_empty_cell(board)
    if cell is None:
        return True  # no empty cells left: solved
    row, col = cell
    for val in list(possible[row][col]):
        if is_valid_placement(board, row, col, val):
            board[row][col] = val                                    # choose
            saved = [[cands.copy() for cands in r] for r in possible]
            propagate(possible, row, col, val)
            if backtrack_sudoku(board, possible):                    # explore
                return True
            board[row][col] = 0                                      # unchoose
            for i in range(9):
                for j in range(9):
                    possible[i][j] = saved[i][j]                     # restore propagation state
    return False

def find_empty_cell(board):
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                return (r, c)
    return None

def is_valid_placement(board, row, col, val):
    for i in range(9):
        if board[row][i] == val or board[i][col] == val:
            return False
    box_r, box_c = 3 * (row // 3), 3 * (col // 3)
    for r in range(box_r, box_r + 3):
        for c in range(box_c, box_c + 3):
            if board[r][c] == val:
                return False
    return True
```

The key detail here: when we backtrack, we restore the entire possible state. This is the same choose-explore-unchoose pattern, just with more state to manage. AI often gets the basic backtracking right but forgets to save and restore the propagation state. That's your number one thing to verify.
When reviewing AI-generated constraint propagation code, always check state restoration. The AI might correctly propagate constraints forward but fail to undo them on backtrack. This produces solutions that seem to work on small inputs but silently break on larger ones because stale constraints from abandoned branches contaminate the search.
What interviewers expect
Identifying a backtracking problem isn't usually the hard part. If the problem says "find all valid configurations" or "generate all arrangements satisfying these rules," it's almost certainly backtracking. Scheduling problems where resources have constraints, configuration generators where options depend on previous choices, puzzle solvers with rules that restrict placement -- these all follow the pattern.
The harder skill is communicating your approach before you start coding. Interviewers want to hear you describe the decision tree: what choices you're making at each step, what constraints you're checking, and when you're pruning. Something like "I'll try placing items one at a time, checking constraints after each placement, and backtracking when I hit a violation" tells the interviewer you have a mental model of the algorithm, not just a vague sense that recursion is involved.
When you prompt the AI, be specific about the pruning logic. "Write a backtracking solution" is too vague. "Write a backtracking solution that places queens row by row, tracking occupied columns and diagonals in sets, and skips any column that conflicts" gives the AI the constraint-checking strategy you want. The output will be better and you'll spend less time fixing it.
One thing that catches candidates off guard: interviewers in AI coding formats don't expect you to write the backtracking from scratch in your head. They expect you to know what backtracking is, when to use it, and how to direct the AI to produce a correct implementation. If you can articulate "this is a constraint satisfaction problem, I want to use backtracking with these specific constraints as my pruning criteria," that's already a strong signal. The implementation is the AI's job. Your job is the architecture and verification.
After the AI generates the code, verify three things:
- State restoration -- does every modification to shared state get undone after the recursive call? Look for the unchoose step. If the code modifies a set, list, or array before recursing, that same modification should be reversed after recursing.
- Correct pruning -- does the is_valid check actually catch all constraint violations? Missing a constraint means the algorithm explores branches it shouldn't, which usually produces wrong answers rather than just being slow.
- Termination -- does every recursive path eventually hit a base case? With backtracking, infinite loops typically come from the recursion not actually making progress. Each recursive call should be moving toward a smaller problem (fewer remaining choices, fewer empty cells, one more item placed).
A good way to test backtracking code is with small inputs where you can manually verify the answer. For N-Queens, n=4 has exactly 2 solutions. For Sudoku, use a board with only a few empty cells. If the algorithm produces the right count or the right answer on a small instance, it's probably correct. If it hangs or gives the wrong count, you know to look at pruning or state restoration.
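That verification step can be a few lines of script. Here's one way to check the N-Queens counts (the solver is repeated so the snippet runs on its own; the counts for small boards are well known):

```python
def solve_n_queens(n: int) -> list[list[int]]:
    solutions, columns, diag1, diag2 = [], set(), set(), set()

    def backtrack(row: int, placement: list[int]):
        if row == n:
            solutions.append(placement[:])
            return
        for col in range(n):
            if col in columns or (row - col) in diag1 or (row + col) in diag2:
                continue
            columns.add(col); diag1.add(row - col); diag2.add(row + col)
            placement.append(col)
            backtrack(row + 1, placement)
            columns.remove(col); diag1.remove(row - col); diag2.remove(row + col)
            placement.pop()

    backtrack(0, [])
    return solutions

# Known solution counts: n=4 -> 2, n=5 -> 10, n=6 -> 4, n=8 -> 92.
assert len(solve_n_queens(4)) == 2
assert len(solve_n_queens(5)) == 10
assert len(solve_n_queens(6)) == 4
assert len(solve_n_queens(8)) == 92
print("all small-board counts match")
```

If one of these assertions fails, the usual suspects are the constraint check (pruning too little or too much) or a missing unchoose step.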
When to use backtracking vs. other approaches
Backtracking is the right tool when you need to search a space of combinations and constraints let you prune aggressively. But it's not always the best choice.
Use backtracking when:
- You need all valid solutions, not just one
- Constraints are tight enough that most branches get pruned early
- The problem involves placing, assigning, or arranging items subject to rules
- There's no overlapping substructure (which would point toward DP)
Consider dynamic programming instead when:
- The problem has optimal substructure and overlapping subproblems
- You're optimizing a single value (minimum cost, maximum profit) rather than enumerating solutions
- You find yourself solving the same subproblem repeatedly during backtracking
Consider greedy instead when:
- A locally optimal choice at each step leads to a globally optimal solution
- You don't need to explore alternatives -- the first valid choice is always the best
- The problem has a matroid structure (if that rings a bell, great; if not, don't worry about it)
The tricky cases are problems that look like they need backtracking but actually have DP solutions. Subset sum, for instance, can be solved either way. Backtracking explores all subsets that might sum to the target. DP builds up from smaller sums. The DP approach is almost always faster, but it requires recognizing the overlapping substructure, which isn't always obvious.
There's also a middle ground worth knowing about: backtracking with memoization. If your backtracking solution visits the same state through different paths, you can cache results for states you've already fully explored. This is essentially how top-down DP works. If you notice your backtracking solution is slow and you suspect repeated work, mention this to the interviewer -- it shows you understand the relationship between these techniques.
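A minimal sketch of that middle ground, using subset sum (function name and inputs are illustrative, not from the source): because each state is just (index, remaining target) and is passed by value rather than mutated, there's no undo step, and `lru_cache` deduplicates states reached by different choice paths.

```python
from functools import lru_cache

def can_reach_target(nums: tuple[int, ...], target: int) -> bool:
    """Subset sum via backtracking with memoization (top-down DP)."""
    @lru_cache(maxsize=None)
    def explore(i: int, remaining: int) -> bool:
        if remaining == 0:
            return True
        if remaining < 0 or i == len(nums):
            return False
        # choose nums[i], or skip it; no explicit undo needed since the
        # state is immutable arguments, not shared mutable structures
        return explore(i + 1, remaining - nums[i]) or explore(i + 1, remaining)

    return explore(0, target)

print(can_reach_target((3, 34, 4, 12, 5, 2), 9))   # True: 4 + 5
print(can_reach_target((3, 34, 4, 12, 5, 2), 30))  # False
```

This is exactly the relationship described above: plain backtracking re-explores repeated (i, remaining) states; caching them turns it into top-down DP.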
If you're unsure whether a problem needs backtracking or DP, start by asking: "Am I looking for all valid solutions, or just the optimal one?" Enumeration problems (all permutations, all valid configurations) usually want backtracking. Optimization problems (shortest path, minimum cost) usually want DP. There are exceptions, but this heuristic gets you started in the right direction.
In an AI coding interview, it's completely fine to say "I think this is a backtracking problem because we need to enumerate valid configurations, but I want to double-check that there isn't a DP formulation that would be more efficient." That kind of reasoning out loud is exactly what interviewers want to hear. It shows you're thinking about algorithmic tradeoffs rather than pattern-matching to the first thing that comes to mind.
Common AI pitfalls with backtracking
Beyond the state restoration issue mentioned earlier, there are a few other patterns to watch for when the AI generates backtracking code.
Generating duplicates. If the problem asks for unique combinations and the input has duplicate values, the AI might produce solutions that include the same combination multiple times. The fix is usually sorting the input and skipping consecutive duplicates at each level of recursion. If you see this in the output, ask the AI to add deduplication.
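One common shape of that fix, sketched on the "unique subsets of a list with duplicates" problem (names are illustrative):

```python
def unique_subsets(nums: list[int]) -> list[list[int]]:
    """All distinct subsets of a list that may contain duplicate values."""
    nums = sorted(nums)  # sorting groups equal values together
    result: list[list[int]] = []

    def backtrack(start: int, current: list[int]):
        result.append(current[:])  # every partial selection is itself a subset
        for i in range(start, len(nums)):
            if i > start and nums[i] == nums[i - 1]:
                continue  # same value already tried at this depth -> duplicate
            current.append(nums[i])    # choose
            backtrack(i + 1, current)  # explore
            current.pop()              # unchoose

    backtrack(0, [])
    return result

print(unique_subsets([1, 2, 2]))
# [[], [1], [1, 2], [1, 2, 2], [2], [2, 2]]
```

The `i > start` condition is the subtle part: it skips a duplicate value only when an equal value was already tried at the same recursion depth, not when it's being used deeper in the same branch.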
Inefficient choice ordering. The order in which you try choices at each step can massively affect performance. Trying the most constrained choice first (the variable with fewest remaining options) is a well-known heuristic called Minimum Remaining Values, or MRV. AI doesn't always apply this by default. For Sudoku, it means picking the empty cell with the fewest valid candidates rather than just scanning left to right. If the solution seems correct but slow, this is often why.
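For the Sudoku solver above, MRV is roughly a drop-in replacement for the left-to-right scan in find_empty_cell (the function name here is an assumption; it reuses the `possible[r][c]` candidate-set grid from the solver):

```python
def find_most_constrained_cell(board, possible):
    """MRV heuristic: among empty cells, return the one with the fewest candidates."""
    best, best_count = None, 10  # a Sudoku cell never has more than 9 candidates
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0 and len(possible[r][c]) < best_count:
                best, best_count = (r, c), len(possible[r][c])
                if best_count <= 1:
                    return best  # can't do better than a forced (or dead) cell
    return best

# Demo: an all-empty board where one cell has been narrowed to two candidates.
board = [[0] * 9 for _ in range(9)]
possible = [[set(range(1, 10)) for _ in range(9)] for _ in range(9)]
possible[4][7] = {3, 8}
print(find_most_constrained_cell(board, possible))  # (4, 7)
```

Picking the most constrained cell first means wrong guesses fail fast, so whole subtrees die after one or two levels instead of twenty.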
Modifying the choices list during iteration. This is a subtle one. If the AI modifies the list of available choices while iterating over it, you get skipped or repeated elements. The fix is iterating over a copy or using index-based iteration. Python makes this especially easy to get wrong with for item in items when items is being mutated.
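A five-line demo of the failure mode and the standard fix:

```python
# Buggy: removing the current element shifts the list under the iterator,
# so the element that slides into its slot gets skipped.
choices = [1, 2, 3, 4]
skipped = []
for item in choices:
    skipped.append(item)
    choices.remove(item)
print(skipped)   # [1, 3] -- 2 and 4 were never visited
print(choices)   # [2, 4]

# Fix: iterate over a snapshot so mutating the original doesn't affect the loop.
choices = [1, 2, 3, 4]
visited = []
for item in list(choices):  # or choices[:]
    visited.append(item)
    choices.remove(item)
print(visited)   # [1, 2, 3, 4]
```

In backtracking code, the same bug shows up when the choices list is shared mutable state that recursion modifies mid-loop; iterating over a copy, or tracking choices in a set checked inside the loop, sidesteps it.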
Putting it all together
Backtracking problems in AI coding interviews follow a predictable arc. You recognize the combinatorial search structure, describe the decision tree and constraints to the interviewer, prompt the AI with enough specificity that it generates clean code, and then verify the critical details: state restoration, correct pruning, and termination.
The pattern itself is simple. Choose, explore, unchoose. The complexity comes from the specific constraints of each problem and how aggressively you can prune.
Remember that interviewers using the AI-enabled format aren't expecting you to have every backtracking variant committed to memory. They're testing whether you can identify the pattern, communicate a plan, direct the AI effectively, and catch the bugs that matter -- especially around state restoration and pruning correctness. If you understand the template and can reason about constraints, you'll handle whatever variation shows up, even if you've never seen that exact problem before.