Dynamic Programming in LeetCode: Solving 'Coin Change'

Hero image for Dynamic Programming in LeetCode: Solving 'Coin Change'. Image by Sarah Agnew.

Coin Change looks very much like the sort of problem a greedy strategy ought to solve. We have a set of denominations and we want the fewest coins, so it feels natural to keep taking the largest one that fits. That instinct works for some coin systems, but not for all of them, which is exactly why the problem is such a good dynamic programming exercise.

The real challenge isn't writing a loop. It's recognising that local choices don't always produce the global optimum. With that in mind, the dynamic programming formulation becomes much easier to trust.


Why Greedy Can Fail

Imagine coin values of 1, 3, and 4, and a target of 6. A greedy approach would take 4, then 1, then 1, for a total of three coins. The optimal answer is actually 3 + 3, which uses only two. That small example is enough to show why we need a broader search over subproblems rather than a purely local choice.
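That failure is easy to reproduce in code. Here is a minimal sketch of the greedy strategy (the function name `greedyCoinCount` is mine, not part of the problem) so we can watch it overshoot:

```typescript
// Greedy sketch: repeatedly take the largest coin that still fits.
// Shown only to demonstrate the failure mode; not the final solution.
const greedyCoinCount = (coins: number[], target: number): number => {
  const sorted = [...coins].sort((a, b) => b - a); // largest first
  let remaining = target;
  let count = 0;
  for (const coin of sorted) {
    while (remaining >= coin) {
      remaining -= coin;
      count += 1;
    }
  }
  return remaining === 0 ? count : -1; // -1 if greedy gets stuck
};

console.log(greedyCoinCount([1, 3, 4], 6)); // 3 coins: 4 + 1 + 1
// The optimal answer is 2 coins: 3 + 3, which greedy never considers.
```

Greedy commits to the 4 immediately and can never back out of that choice, which is precisely what the dynamic programming formulation avoids.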


The Dynamic Programming Idea

We define dp[amount] as the minimum number of coins needed to make that amount. Then for each amount, we consider every coin that could contribute to it. If we already know the best answer for amount - coin, we can extend that answer by one coin and compare it with the current best.

That's the state transition. It looks mechanical once written down, but it is really just a disciplined way to reuse solved subproblems rather than solving them repeatedly.


What the State Really Means

This is the part I think is most worth slowing down for. dp[amount] is not "a list of coins" or "the last coin used". It is a very specific statement: the fewest coins needed to make exactly this amount.

Once that sentence is clear, the transition stops feeling like a memorised template. For amount 6, for example, we try each coin:

  • if we use coin 1, we need the best answer for amount 5
  • if we use coin 3, we need the best answer for amount 3
  • if we use coin 4, we need the best answer for amount 2

Each option is simply "best smaller answer plus one more coin". Dynamic programming becomes much less mysterious once we keep that meaning visible.
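We can make that meaning concrete by filling the table for exactly this example. The sketch below builds dp for coins [1, 3, 4] up to amount 6 and prints the whole array:

```typescript
// Fill the dp table for coins [1, 3, 4] up to amount 6, so the
// "best smaller answer plus one coin" idea is visible entry by entry.
const coins = [1, 3, 4];
const target = 6;
const dp = new Array<number>(target + 1).fill(Number.POSITIVE_INFINITY);
dp[0] = 0; // base case: zero coins make amount 0

for (let amount = 1; amount <= target; amount += 1) {
  for (const coin of coins) {
    if (coin <= amount) {
      dp[amount] = Math.min(dp[amount], dp[amount - coin] + 1);
    }
  }
}

console.log(dp); // [0, 1, 2, 1, 1, 2, 2] — dp[6] is 2, matching 3 + 3
```

Reading the array left to right, every entry is built only from entries before it, which is the whole discipline in one picture.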

A Practical TypeScript Solution

export const coinChange = (coins: number[], target: number): number => {
  // dp[amount] = fewest coins needed to make exactly that amount.
  const dp = new Array<number>(target + 1).fill(Number.POSITIVE_INFINITY);
  dp[0] = 0;
  for (let amount = 1; amount <= target; amount += 1) {
    for (const coin of coins) {
      if (coin <= amount) {
        dp[amount] = Math.min(dp[amount], dp[amount - coin] + 1);
      }
    }
  }
  // Infinity survives only for amounts no coin combination can reach.
  return Number.isFinite(dp[target]) ? dp[target] : -1;
};

The array gives us a bottom-up record of the best solution for every smaller amount. By the time we reach the target, the result is already waiting for us in dp[target].

A lot of people assume that dynamic programming is mostly about memorising templates. Templates can help, but the important part is understanding what each state means and why the transition is valid. If we cannot explain what dp[amount] represents, the code tends to become rote rather than reliable.

Another mistake is overlooking impossible states. Using Infinity or another sentinel value keeps that case explicit, which makes the final -1 return much easier to justify.
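A quick check of that impossible case makes the sentinel concrete. The solver is repeated here unchanged so the snippet runs on its own:

```typescript
// Same solver as in the article, repeated so this snippet is self-contained.
const coinChange = (coins: number[], target: number): number => {
  const dp = new Array<number>(target + 1).fill(Number.POSITIVE_INFINITY);
  dp[0] = 0;
  for (let amount = 1; amount <= target; amount += 1) {
    for (const coin of coins) {
      if (coin <= amount) {
        dp[amount] = Math.min(dp[amount], dp[amount - coin] + 1);
      }
    }
  }
  // Infinity survives only for amounts no coin combination can reach.
  return Number.isFinite(dp[target]) ? dp[target] : -1;
};

console.log(coinChange([2], 3));       // -1: odd amounts are unreachable with only 2s
console.log(coinChange([1, 3, 4], 6)); // 2
```

With only a 2-coin, every odd amount keeps its Infinity, and the final check converts that into the -1 the problem asks for.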


Bottom‑Up versus Top‑Down

We could also solve Coin Change with top-down recursion plus memoisation. That is the same core idea expressed in a different direction. The bottom-up version is often easier to reason about in JavaScript because the iteration order is explicit and we avoid recursion overhead, but both approaches are really solving the same subproblem graph.
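For comparison, here is a sketch of that top-down direction (the name `coinChangeTopDown` is mine). The state and transition are identical; only the order of discovery changes:

```typescript
// Top-down variant: the same dp state, discovered on demand via
// recursion and cached in a memo table instead of filled iteratively.
const coinChangeTopDown = (coins: number[], target: number): number => {
  const memo = new Map<number, number>();

  const best = (amount: number): number => {
    if (amount === 0) return 0;
    if (amount < 0) return Number.POSITIVE_INFINITY; // unreachable branch
    const cached = memo.get(amount);
    if (cached !== undefined) return cached;

    let result = Number.POSITIVE_INFINITY;
    for (const coin of coins) {
      result = Math.min(result, best(amount - coin) + 1);
    }
    memo.set(amount, result);
    return result;
  };

  const answer = best(target);
  return Number.isFinite(answer) ? answer : -1;
};

console.log(coinChangeTopDown([1, 3, 4], 6)); // 2
console.log(coinChangeTopDown([2], 3));       // -1
```

Note that for very large targets the recursion depth can become a practical concern in JavaScript engines, which is one more reason the iterative version is the usual default.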

This is another useful dynamic programming lesson. Once we understand the state and transition, the remaining question is often just whether we want to discover answers on demand or build them iteratively from the base case.


Complexity and Why It Is Acceptable

The time complexity here is O(target * coins.length), because for each amount we inspect each coin once. The space complexity is O(target) for the dp array.

That is exactly the kind of tradeoff dynamic programming is often making for us. We spend memory to avoid recomputing the same smaller answers again and again.


Why the Explanation Matters as Much as the Code

The function is straightforward to test because the boundary is small and the state meaning is clear. Maintainability comes from naming the state well and documenting why greedy fails, because that is the conceptual leap the future reader is most likely to need. Scalability is strong enough for the intended constraint ranges because the work grows with the target amount times the number of coin types.



Wrapping up

Coin Change is a strong dynamic programming problem because it forces us to stop trusting the locally obvious move and instead reuse information from smaller solved amounts. That shift in thinking is the real value of the exercise.

Key Takeaways

  • Greedy selection fails because the locally largest coin isn't always part of the globally best answer.
  • Dynamic programming works by reusing the best answers for smaller amounts.
  • Clear state meaning matters more than memorising the shape of the loop.

Coin Change is a strong reminder that dynamic programming is really about disciplined reuse of information. When framed in terms of subproblems and state transitions, the solution becomes much less intimidating.


Categories:

  1. Algorithms
  2. Development
  3. Guides
  4. JavaScript
  5. LeetCode