
Greedy & Bin Packing

When local optimization leads to global solutions — greedy strategies, bin packing heuristics, and the tradeoffs interviewers expect you to reason about.


Greedy algorithms come up constantly in AI coding interviews, and for good reason. Problems involving resource allocation, scheduling, interval management, and bin packing are natural fits for multi-file coding challenges because they're easy to state but surprisingly tricky to get right. The core tension that makes them interesting interview material is the tradeoff between correctness and speed: a greedy approach runs fast, but does it actually give you the right answer?
Here's the thing interviewers care about. They're not testing whether you've memorized the activity selection problem from your algorithms class. They want to see you reason about why a greedy approach works for a given problem, or catch when it doesn't. That reasoning is exactly what separates you from the AI assistant sitting right there in the editor. The AI will happily generate a greedy solution for almost anything you throw at it. Your job is to know when that's correct and when it's dangerously wrong.
You don't need deep prior expertise with greedy algorithms going into these interviews. Interviewers expect you to assimilate the core ideas quickly and apply them to unfamiliar problems. What matters is your reasoning process, not whether you've seen the exact problem before.

The greedy paradigm

The idea behind greedy algorithms is deceptively simple: at each step, make the choice that looks best right now, and never look back. No backtracking, no reconsidering. You commit to each local decision and hope (or prove) that the sequence of locally optimal choices leads to a globally optimal solution.
This works when two properties hold:
  1. Greedy choice property -- making the locally optimal choice doesn't prevent you from finding the globally optimal solution. In other words, you never need to undo a decision.
  2. Optimal substructure -- after making a greedy choice, the remaining problem is a smaller instance of the same type of problem.
When both hold, greedy gives you the optimal answer and usually runs much faster than alternatives like dynamic programming. When either fails, greedy gives you something that looks reasonable but might be arbitrarily far from optimal.

The classic example: coin change

Consider making change for 36 cents with coins of denominations 25, 10, 5, and 1. The greedy approach picks the largest coin that fits at each step: 25, then 10, then 1. Three coins total. And that happens to be optimal.
[Figure: making change. With denominations 25, 10, 5, 1 and target 36, greedy (25 + 10 + 1) uses 3 coins -- optimal. With denominations 1, 3, 4 and target 6, greedy picks 4 + 1 + 1 (3 coins, wrong) while 3 + 3 (2 coins) is correct.]
But change the denominations to 1, 3, and 4 and try making 6 cents. Greedy picks 4, then 1, then 1 -- three coins. The optimal answer is 3 + 3 -- two coins. The greedy choice property doesn't hold here because picking the largest coin first backed us into a corner.
This is the kind of thing that trips up AI assistants. They'll default to greedy because it's simpler, and for standard US coin denominations it works. Hand them a weird denomination set and they'll produce the same greedy code without blinking. You need to catch that.
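To make the failure concrete, here's a minimal greedy change-maker (the function name greedy_change is my own, not a standard API) run on both denomination sets:

```python
def greedy_change(amount, denominations):
    """Repeatedly take the largest coin that still fits."""
    coins = []
    for coin in sorted(denominations, reverse=True):
        while amount >= coin:
            amount -= coin
            coins.append(coin)
    # If anything is left over, greedy couldn't make exact change at all.
    return coins if amount == 0 else None

print(greedy_change(36, [25, 10, 5, 1]))  # [25, 10, 1] -- three coins, optimal
print(greedy_change(6, [1, 3, 4]))        # [4, 1, 1] -- three coins; 3 + 3 uses two
```

Same code, same logic -- whether the answer is optimal depends entirely on the denomination set.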

Interval scheduling

Interval scheduling is probably the most common greedy pattern you'll see. The problem: given a set of intervals (meetings, tasks, jobs), select the maximum number of non-overlapping intervals, or find the minimum number of resources to handle all of them.
The greedy insight is to sort intervals by end time, then greedily pick the earliest-ending interval that doesn't overlap with your last selection. This works because choosing the interval that finishes earliest leaves the most room for future intervals.
[Figure: interval scheduling on a timeline from 0 to 10, with intervals A (0-3), B (1-4), C (2-4), D (3-7), E (4-7), F (7-10) and the selected versus skipped intervals highlighted.]
Sort by end time, then walk through: pick A (ends at 3), skip B and C (they overlap A), pick D (starts at 3, just as A ends -- touching endpoints don't conflict), skip E (overlaps D), pick F (starts at 7, just as D ends). Three intervals selected -- that's the maximum. (A, E, F is another valid maximum-size selection.)
def max_non_overlapping(intervals):
    # Sort by end time so the earliest-finishing interval comes first.
    intervals.sort(key=lambda x: x[1])
    selected = []
    last_end = float('-inf')

    for start, end in intervals:
        # Touching endpoints (start == last_end) count as non-overlapping.
        if start >= last_end:
            selected.append((start, end))
            last_end = end

    return selected
The time complexity is O(n log n) for the sort, then O(n) for the scan. This is one of those patterns where the greedy proof is clean and well-known, so if you or your AI produces this approach, you can be confident it's correct.
A closely related variant asks for the minimum number of rooms (or machines, or servers) to handle all intervals. That's a different problem with a different greedy strategy -- you track active intervals and count the maximum overlap at any point. Don't confuse the two. AI assistants sometimes do.
import heapq

def min_rooms(intervals):
    if not intervals:
        return 0

    # Process intervals in start order; the heap holds the end times
    # of the intervals currently occupying rooms.
    intervals.sort(key=lambda x: x[0])
    heap = []

    for start, end in intervals:
        if heap and heap[0] <= start:
            # The earliest-ending interval is done: reuse its room.
            heapq.heapreplace(heap, end)
        else:
            # All rooms are busy: open a new one.
            heapq.heappush(heap, end)

    # The heap size is the peak number of simultaneous intervals.
    return len(heap)
The trick here is maintaining a min-heap of end times for currently active intervals. When a new interval starts after the earliest ending one, you reuse that room. Otherwise you need a new room. This is O(n log n) and is a genuinely different greedy strategy from the max-non-overlapping version above -- same input shape, completely different objective, completely different algorithm.
AI assistants love to suggest greedy solutions because they're concise and elegant. But they're often wrong. This is actually a great place to demonstrate critical thinking in an interview. When the AI generates a greedy approach, ask yourself: does the greedy choice property actually hold here? Can I construct a counterexample? Showing that you question AI output rather than blindly accepting it is one of the strongest signals you can send.

Bin packing

Bin packing is where greedy gets really practical. The setup: you have items of various sizes and bins of fixed capacity. Pack all items into as few bins as possible. This shows up everywhere -- allocating containers to servers, fitting tasks into time slots, memory management, even distributing load across worker nodes.
The bad news is that optimal bin packing is NP-hard. The good news is that greedy heuristics get surprisingly close to optimal, and in an interview context, that's usually what's being asked for.

The main strategies

First-Fit: For each item, scan bins from left to right and place the item in the first bin that has room. Simple, fast, and decent.
Best-Fit: For each item, place it in the bin where it fits most tightly (least remaining space after placement). Slightly better packing, slightly more work to find the right bin.
First-Fit Decreasing (FFD): Sort items largest-first, then apply First-Fit. This is the one you actually want to use. Placing large items first avoids the situation where you've filled bins with small items and then have nowhere to put the big ones.
[Figure: first-fit decreasing with bin capacity 10 and sorted items 7, 5, 5, 4, 3, 2, 2. Bin 1 holds 7 + 3, Bin 2 holds 5 + 5, Bin 3 holds 4 + 2 + 2. Total size 28, lower bound 3 bins, FFD result 3 bins -- optimal achieved.]
FFD has a proven worst-case guarantee: it never uses more than (11/9) * OPT + 6/9 bins, where OPT is the optimal number. In practice, it usually does better than that.
def first_fit_decreasing(items, bin_capacity):
    # Place large items first so small ones can fill the gaps later.
    items_sorted = sorted(items, reverse=True)
    bins = []  # bins[i] is the current load of bin i

    for item in items_sorted:
        placed = False
        # First-fit: scan left to right, take the first bin with room.
        for i, current_load in enumerate(bins):
            if current_load + item <= bin_capacity:
                bins[i] += item
                placed = True
                break

        if not placed:
            # No existing bin fits: open a new one.
            bins.append(item)

    return len(bins), bins
This runs in O(n^2) in the worst case (n items, each scanning all existing bins). You can get O(n log n) with a tree structure over remaining bin capacities (a balanced BST for best-fit, or a tournament tree for first-fit), but the O(n^2) version is totally fine for an interview unless the problem explicitly requires better.
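One cheap way to judge a packing is to compare the bin count against the trivial lower bound, ceil(total size / capacity) -- the "lower bound: 3 bins" figure above is exactly this. A minimal helper (the name bin_lower_bound is my own):

```python
import math

def bin_lower_bound(items, capacity):
    # No packing can use fewer than ceil(total size / capacity) bins.
    return math.ceil(sum(items) / capacity)

print(bin_lower_bound([7, 5, 5, 4, 3, 2, 2], 10))  # 3
```

If FFD's answer equals this bound, you know the packing is optimal; if it's one or two above, the heuristic may or may not be leaving bins on the table.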
In an interview, if you're asked a bin packing variant, start by stating the approach clearly: "I'll sort items in decreasing order and use first-fit, which gives a guaranteed approximation ratio." This shows you understand it's a heuristic and aren't claiming it's optimal. Interviewers appreciate that nuance.

Real-world framing

Interview problems rarely say "solve bin packing." Instead they're dressed up as practical scenarios:
  • Server allocation: You have N tasks with memory requirements and servers with fixed RAM. Minimize the number of servers.
  • Container packing: Ship items in boxes of fixed volume. Minimize boxes used.
  • Job scheduling with deadlines: Assign jobs to machines where each machine has a capacity per time window.
  • Disk partition management: Files of various sizes need to go onto fixed-size partitions.
The translation step is part of what's being tested. Recognizing that "minimize the number of servers needed" is just bin packing in disguise is valuable, and it's something the AI might not explicitly tell you.

Multi-dimensional bin packing

Things get trickier when items have multiple dimensions. A server might have both CPU and memory limits. A shipping box has length, width, and height. In these cases, simple one-dimensional heuristics don't directly apply, and the problem becomes significantly harder.
A good approach for interviews is to reduce multi-dimensional packing to a series of one-dimensional decisions where possible. For instance, if servers have both CPU and memory, you could pack by the "tighter" resource first (the one that will run out sooner) and check the other constraint as a feasibility filter. This isn't optimal, but it's practical, and stating the tradeoff explicitly shows maturity.
def first_fit_decreasing_2d(items, cpu_cap, mem_cap):
    # Sort by each item's tightest dimension: the larger of its
    # normalized CPU and memory demand.
    items_sorted = sorted(
        items,
        key=lambda x: max(x[0] / cpu_cap, x[1] / mem_cap),
        reverse=True,
    )
    bins_cpu = []  # CPU load per bin
    bins_mem = []  # memory load per bin

    for cpu, mem in items_sorted:
        placed = False
        for i in range(len(bins_cpu)):
            # An item fits only if both constraints hold.
            if bins_cpu[i] + cpu <= cpu_cap and bins_mem[i] + mem <= mem_cap:
                bins_cpu[i] += cpu
                bins_mem[i] += mem
                placed = True
                break

        if not placed:
            bins_cpu.append(cpu)
            bins_mem.append(mem)

    return len(bins_cpu)
The sort key here uses the maximum normalized resource demand -- which dimension of this item is proportionally largest relative to capacity? That's a reasonable heuristic, though there are others. The point isn't to memorize this. It's to show you can extend a known pattern to a harder variant and be honest about the tradeoffs.
Interviewers don't expect you to have multi-dimensional bin packing algorithms memorized. What they want to see is that you can take a known technique, adapt it to new constraints, and clearly communicate what guarantees you're giving up. Saying "this is a heuristic and I'm not certain it's optimal" is exactly right.

When greedy fails

This is the section that matters most for interviews, because this is where you differentiate yourself from someone who just accepts whatever the AI spits out.
Greedy fails when the locally optimal choice at one step constrains future steps in ways that lead to a worse overall outcome. Here are the common traps:
The 0/1 knapsack problem. You have items with weights and values, a knapsack with a weight limit, and you want to maximize total value. Greedy by value-to-weight ratio works for the fractional knapsack (where you can take partial items), but fails for the 0/1 version where items are all-or-nothing. This is one of the most common problems where AI will confidently generate a greedy solution that's wrong.
Here's a concrete example. Capacity = 10. Items: (weight=6, value=6, ratio=1.0), (weight=5, value=5, ratio=1.0), (weight=5, value=5, ratio=1.0). Greedy by ratio picks the first item (weight 6, value 6), leaving capacity 4 -- and neither remaining item fits. Total value: 6. The optimal solution takes the two weight-5 items for a total value of 10. The greedy choice of picking the first item locked us out of the better combination.
This pattern appears in disguise all the time. "Select a subset of projects to maximize revenue within a budget" is 0/1 knapsack. "Choose which features to include in a sprint given time constraints" is 0/1 knapsack. If your AI generates a greedy sort-by-ratio approach for any of these, that's your cue to push back.
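When you do need the exact optimum, the standard fix is the classic 0/1 knapsack DP over capacities. Here's a minimal sketch (the function name is mine) run on the example above, where greedy got stuck at 6:

```python
def knapsack_01(items, capacity):
    # items: (weight, value) pairs. best[c] = max value achievable
    # with total weight at most c.
    best = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate capacities downward so each item is used at most once.
        for cap in range(capacity, weight - 1, -1):
            best[cap] = max(best[cap], best[cap - weight] + value)
    return best[capacity]

items = [(6, 6), (5, 5), (5, 5)]
print(knapsack_01(items, 10))  # 10 -- the two weight-5 items beat greedy's 6
```

The DP costs O(n * capacity) instead of greedy's O(n log n), which is exactly the correctness-versus-speed tradeoff to call out.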
Shortest path with negative edges. Dijkstra's algorithm is greedy (always expand the nearest unvisited node). It works great with non-negative edge weights. Throw in a negative edge and it falls apart. You need Bellman-Ford instead.
Making change with non-standard denominations. As we saw above, US denominations happen to have the greedy choice property, but arbitrary denomination sets don't.
Task scheduling with dependencies. If tasks depend on each other, greedily picking the shortest task next can violate dependency constraints and produce an invalid schedule. You need topological sorting or DP.
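As a sketch of that dependency-respecting alternative, here's Kahn's algorithm for topological ordering (the task names and the topo_order helper are illustrative, not from a specific library):

```python
from collections import deque

def topo_order(tasks, deps):
    # deps: (before, after) pairs. Returns a valid task order,
    # or None if the dependencies contain a cycle.
    indegree = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for before, after in deps:
        children[before].append(after)
        indegree[after] += 1

    # Start with tasks that have no unfinished prerequisites.
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for nxt in children[task]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)

    return order if len(order) == len(tasks) else None

print(topo_order(["build", "test", "deploy"],
                 [("build", "test"), ("test", "deploy")]))
# ['build', 'test', 'deploy']
```

Within each "ready" set you can still apply a greedy tiebreak (shortest task first, say), but the dependency structure has to be honored first.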
Here's a pattern to watch for: your AI assistant generates a greedy solution, it passes the basic test cases, and you move on. Then a hidden edge case in the larger test suite breaks it. This is extremely common because greedy solutions often work on small or "nice" inputs but fail on adversarial ones. Always construct your own edge cases before trusting a greedy approach.
The general rule of thumb: if the problem has a constraint that forces you to consider combinations of choices rather than individual choices in isolation, greedy probably won't cut it. You likely need DP, backtracking, or some other approach that considers the global picture.

Quick diagnostic

When you see a new problem, ask yourself these questions before going greedy:
  1. Can I make a choice and never need to undo it? If not, greedy won't work.
  2. After making one choice, does the remaining problem look like a smaller version of the same thing? If not, optimal substructure doesn't hold.
  3. Can I construct a small counterexample where greedy gives the wrong answer? If yes, you've just proved greedy is wrong. If you can't find one after a couple of minutes, greedy is probably fine, but that's not a proof.
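Point 3 can even be automated when the input space is small: brute-force search for an input where greedy and an exact method disagree. A sketch using the coin change problem from earlier (all helper names are mine):

```python
import itertools

def greedy_coins(amount, denoms):
    # Largest-coin-first; None if exact change can't be made this way.
    count = 0
    for coin in sorted(denoms, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else None

def optimal_coins(amount, denoms):
    # Simple DP for the true minimum number of coins.
    INF = float('inf')
    best = [0] + [INF] * amount
    for total in range(1, amount + 1):
        for coin in denoms:
            if coin <= total and best[total - coin] + 1 < best[total]:
                best[total] = best[total - coin] + 1
    return best[amount] if best[amount] < INF else None

def find_counterexample(max_coin=10, max_amount=30):
    # Try every small denomination set containing a 1-cent coin.
    for denoms in itertools.combinations(range(1, max_coin), 3):
        if 1 not in denoms:
            continue
        for amount in range(1, max_amount):
            g, o = greedy_coins(amount, denoms), optimal_coins(amount, denoms)
            if g != o:
                return denoms, amount, g, o
    return None

print(find_counterexample())  # ((1, 3, 4), 6, 3, 2)
```

The search rediscovers the {1, 3, 4} failure from the first section in milliseconds. The same pattern -- greedy versus brute force on small random inputs -- works for almost any greedy solution you want to stress-test.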

What interviewers expect

Interviewers asking greedy problems in AI coding interviews are testing a few specific things, and none of them are "can you recite the proof that activity selection is optimal."
Recognizing the pattern. Can you look at a problem and identify that it has greedy structure? This means noticing things like: the problem asks for a maximum or minimum, there's a natural ordering (by end time, by size, by ratio), and making a locally optimal choice doesn't seem to constrain future choices.
Reasoning about correctness. This is the big one. Can you articulate why greedy works for this specific problem, or identify when it doesn't? The gold standard is constructing a quick exchange argument: "If we had a non-greedy optimal solution, we could swap in the greedy choice without making it worse." You don't need to write a formal proof, but you should be able to sketch the reasoning.
Verifying AI output. When your AI generates a greedy solution, what do you do? Blindly accept it? That's a red flag. The interviewer wants to see you:
  • State what the greedy strategy is (e.g., "this sorts by end time and picks non-overlapping intervals")
  • Consider whether the greedy choice property holds
  • Test it against an edge case or two, especially adversarial ones
  • Flag if you think greedy might be wrong and suggest an alternative
Knowing the limits. If the problem requires an exact optimal solution and you can't prove greedy works, say so. "I think this needs DP because the greedy choice at step k affects what's available at step k+2" is a great answer. It shows you understand the algorithm design landscape well enough to pick the right tool.
A strong move in an interview is to say something like: "The AI suggested a greedy approach here. Let me verify that works by thinking about whether we can always commit to the locally optimal choice." Then walk through a small example. Even if the greedy approach turns out to be correct, the fact that you questioned it and verified it tells the interviewer you think critically about generated code.

The verification checklist

When you or your AI produces a greedy solution, run through this mentally:
  1. What's the greedy criterion? How are we deciding what to pick at each step?
  2. Is the sorting correct? Many greedy algorithms depend on processing items in a specific order. Wrong sort key = wrong answer.
  3. Does it handle ties? When two items are equally "greedy-optimal," does the tiebreaking matter?
  4. Edge cases: empty input, single item, all items identical, items that exactly fill capacity.
  5. Counterexample search: try to construct an input where greedy gives a suboptimal answer. Spend at least 30 seconds on this.
If the solution survives all of that, you're probably in good shape. And if it doesn't, you've just demonstrated exactly the kind of critical thinking the interview is designed to surface.

Putting it together

Greedy problems in AI coding interviews aren't about raw algorithmic knowledge. They're about judgment. The AI can write the code. It can sort intervals by end time and iterate through them faster than you can type the function signature. What it can't do reliably is decide whether greedy is the right approach in the first place.
That decision -- greedy vs. DP vs. something else entirely -- is yours to make. And how you make it, how you reason about it out loud, how you verify the AI's output against edge cases, how you push back when something feels wrong -- that's what the interview is really measuring.
The strongest candidates treat greedy as a tool they reach for deliberately, not a default they fall back on because it's simpler. They can articulate why it works, catch when it doesn't, and pivot to the right alternative without losing momentum. That's the skill worth practicing.
