Solving the 'Jump Game' Problem with Greedy Algorithms

Hero image for Solving the 'Jump Game' Problem with Greedy Algorithms. Image by Ravi Palwe.

Jump Game is one of those problems that gets easier the moment you stop trying to remember a pattern and start asking what information you actually need to keep. A lot of first attempts go straight to recursion or dynamic programming because the problem looks like a branching search. From each position, you can jump several different distances, so it feels as if you need to explore a tree of possibilities.

That instinct is understandable, but it is heavier than the problem requires. Jump Game is not asking for the path you should take, and it is not asking for the minimum number of jumps. It is only asking whether the last index is reachable at all. That difference is what opens the door to a greedy solution.


What the Problem is Really Asking

You are given an array of nonnegative integers. Each value tells you the maximum jump length from that position. If values[0] is 2, you can move one or two places forward from index 0. The goal is to decide whether some valid sequence of jumps can land on the final index.

That sounds small, but it changes the shape of the solution completely. If the question were "what is the best route?" or "what is the fewest number of jumps?", we would need more information. For this version, we only care about whether the reachable region keeps extending far enough.


Why the Brute‑Force Idea Gets Messy Quickly

The most direct way to think about the problem is to try every possible jump from every reachable position. That works for correctness, but it repeats the same work constantly. If several different routes can reach the same index, a naive recursive solution ends up solving the same suffix of the array several times.

We can improve that with memoisation or bottom-up dynamic programming. Both are valid approaches, and both are useful if you are trying to build intuition. The reason they are not the best final answer here is that they still track more state than we really need.
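For intuition, a memoised top-down version might look like the following sketch. The names canJumpMemo and canReachEnd are illustrative, not part of the final solution; the cache keeps each suffix from being solved more than once.

```typescript
const canJumpMemo = (values: number[]): boolean => {
  // memo.get(i) answers: is the last index reachable starting from index i?
  const memo = new Map<number, boolean>();

  const canReachEnd = (i: number): boolean => {
    if (i >= values.length - 1) {
      return true; // we are standing on (or past) the last index
    }
    const cached = memo.get(i);
    if (cached !== undefined) {
      return cached;
    }
    // Try every legal jump length from this position.
    let ok = false;
    for (let step = values[i]; step >= 1 && !ok; step -= 1) {
      ok = canReachEnd(i + step);
    }
    memo.set(i, ok);
    return ok;
  };

  return canReachEnd(0);
};
```

Even memoised, this stores an answer per index, which is exactly the extra state the greedy solution manages to avoid.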

There is a simpler question hiding underneath the bigger one:

  • what is the farthest index we can currently reach?

Once that becomes the centre of the solution, the branching complexity falls away.


The Greedy Idea

As we scan from left to right, we keep a single number: the farthest index reachable so far.

If we are standing at index i and i is already beyond that farthest reachable point, then we are stuck. There is no legal way to land on i, which means there is no legal way to reach anything beyond it either.

If i is reachable, then this position may extend our frontier. The jump length at i lets us reach up to i + values[i], so we update the farthest reachable index if that goes further than what we had before.

That is the whole strategy. We are not committing to a specific jump at each step. We are simply keeping the best frontier any earlier reachable position has given us.


Why This Greedy Approach is Safe

This is the part worth understanding properly, because otherwise the solution can feel like a lucky trick.

The useful invariant is:

  • after processing index i, farthest is the greatest index reachable using only positions from 0 to i

If i is within the current frontier, then some earlier sequence of jumps can already reach it. That means the only new information at i is whether jumping from here extends the frontier further.

If i is outside the frontier, then no path from the start can land there. Since jumps only move forward, every later index is also unreachable. There is nothing left to discover.

That is why the greedy step is safe. We are not throwing away a better future option. We are preserving exactly the summary of the prefix that matters: how far the prefix can get us.
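One way to build confidence in the invariant is to compare the frontier-based answer against an exhaustive reachability search on small random inputs. This is a sketch; greedyCanJump and exhaustiveCanJump are illustrative names, and the trial sizes are arbitrary.

```typescript
const greedyCanJump = (values: number[]): boolean => {
  let farthest = 0;
  for (let i = 0; i < values.length; i += 1) {
    if (i > farthest) return false;
    farthest = Math.max(farthest, i + values[i]);
  }
  return true;
};

// Explore every reachable index explicitly, with no greedy shortcut.
const exhaustiveCanJump = (values: number[]): boolean => {
  const seen = new Set<number>([0]);
  const stack = [0];
  while (stack.length > 0) {
    const i = stack.pop()!;
    for (let step = 1; step <= values[i]; step += 1) {
      const next = i + step;
      if (next < values.length && !seen.has(next)) {
        seen.add(next);
        stack.push(next);
      }
    }
  }
  return seen.has(values.length - 1);
};

// Random small arrays: the two answers should never disagree.
for (let trial = 0; trial < 1000; trial += 1) {
  const values = Array.from({ length: 8 }, () => Math.floor(Math.random() * 4));
  if (greedyCanJump(values) !== exhaustiveCanJump(values)) {
    throw new Error(`Disagreement on [${values}]`);
  }
}
```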

This is also why Jump Game is a good reminder that "greedy" is not one single kind of move. In Coin Change, a greedy algorithm makes a specific local choice and that choice can be wrong. In Jump Game, the greedy state is the farthest reachable boundary. We are not saying "always take the biggest jump now". We are saying "remember the best reach seen so far". That is a much safer claim.


A Practical TypeScript Solution

export const canJump = (values: number[]): boolean => {
  let farthest = 0;
  for (let index = 0; index < values.length; index += 1) {
    // If the loop has moved past the frontier, no jump sequence lands here.
    if (index > farthest) {
      return false;
    }
    // Otherwise this position may extend the frontier.
    farthest = Math.max(farthest, index + values[index]);
    // Optional early exit once the last index is within reach.
    if (farthest >= values.length - 1) {
      return true;
    }
  }
  return true;
};

There are only two real decisions in the loop:

  • fail immediately if the current index is unreachable
  • otherwise extend the frontier if this position gives us a longer reach

The early return when farthest reaches the last index is not strictly required, but it makes the intent clearer and avoids pointless work.


Walking through Two Useful Examples

Take [2, 3, 1, 1, 4].

At the start, farthest is 0.

  • at index 0, we can reach up to 2, so farthest becomes 2
  • at index 1, we are still reachable because 1 <= 2; from here we can reach 4, so farthest becomes 4
  • 4 is the last index, so we know the answer is true

Now take [3, 2, 1, 0, 4].

  • at index 0, farthest becomes 3
  • at index 1, it stays 3
  • at index 2, it stays 3
  • at index 3, it stays 3 because the value there is 0
  • index 4 is beyond farthest, so we return false

That second example is the one that usually makes the greedy logic click. The problem is not that we picked the wrong earlier jump. The problem is that every reachable route collapses at the same dead end.
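The walkthroughs above can be reproduced by instrumenting the loop. This traced variant is a sketch; canJumpTraced is an illustrative name, and the logging is the only difference from the real solution.

```typescript
const canJumpTraced = (values: number[]): boolean => {
  let farthest = 0;
  for (let index = 0; index < values.length; index += 1) {
    if (index > farthest) {
      console.log(`index ${index} is beyond farthest=${farthest}: stuck`);
      return false;
    }
    farthest = Math.max(farthest, index + values[index]);
    console.log(`index ${index}: farthest=${farthest}`);
    if (farthest >= values.length - 1) {
      return true;
    }
  }
  return true;
};

canJumpTraced([2, 3, 1, 1, 4]); // farthest grows 2, then 4: true
canJumpTraced([3, 2, 1, 0, 4]); // farthest stalls at 3; index 4 is unreachable: false
```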


Why Dynamic Programming is Still Worth Understanding

Even though greedy is the right final solution, it is still worth noticing what a dynamic programming version would look like. A DP approach might mark each index as reachable or unreachable based on earlier reachable positions.

That is a perfectly respectable way to solve the problem, and it can be a helpful bridge if the greedy jump feels too abrupt. The difference is that DP stores reachability for many positions individually, while greedy compresses that information into one frontier.

That compression works because the exact route no longer matters once all we need is the furthest point reached by the prefix. In other words, the greedy solution is not magic. It is a more compact summary of the same underlying reachability information.
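As a sketch of that DP shape, assuming the per-index reachability table described above (canJumpDP is an illustrative name):

```typescript
const canJumpDP = (values: number[]): boolean => {
  // reachable[i] is true when some jump sequence lands on index i.
  const reachable = new Array<boolean>(values.length).fill(false);
  reachable[0] = true;
  for (let i = 0; i < values.length; i += 1) {
    if (!reachable[i]) continue;
    // Every index within jumping distance of a reachable index is reachable.
    for (let step = 1; step <= values[i]; step += 1) {
      if (i + step < values.length) {
        reachable[i + step] = true;
      }
    }
  }
  return reachable[values.length - 1];
};
```

The table costs O(n) space and up to O(n²) time, where the greedy frontier needs one number and a single pass; comparing the two makes the compression concrete.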


Common Mistakes

One easy mistake is confusing Jump Game with Jump Game II. The second problem asks for the minimum number of jumps, which changes the reasoning. Reusing the same explanation for both usually produces confusion.

Another mistake is implementing a recursive search first and then trying to optimise it piecemeal. That often leaves you with correct code that still feels harder to explain than it should.

It is also common to see people describe the strategy as "always jump as far as possible". That is not actually what the algorithm does. Sometimes the route that eventually wins involves a shorter jump earlier on. The algorithm does not care, because it is not committing to a route. It only tracks whether some reachable route extends the frontier.
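To see the difference concretely, here is a hypothetical commit-to-the-biggest-jump strategy. Both the function name jumpMaxEveryTime and the array [3, 1, 2, 0, 4] are illustrative, not from the problem statement.

```typescript
// A route-committing strategy: always take the maximum jump available.
const jumpMaxEveryTime = (values: number[]): boolean => {
  let i = 0;
  while (i < values.length - 1) {
    if (values[i] === 0) {
      return false; // stuck: no forward move left from here
    }
    // Commit to the biggest jump, clamped to the last index.
    i = Math.min(i + values[i], values.length - 1);
  }
  return true;
};

// The last index is reachable via 0 -> 2 -> 4, but max-jumping from index 0
// lands on the 0 at index 3 and gets stuck:
jumpMaxEveryTime([3, 1, 2, 0, 4]); // false, even though canJump returns true
```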


Complexity and What an Interviewer Usually Cares About

The greedy solution runs in O(n) time and O(1) extra space. That is the practical win, but it is not usually the most interesting part of the discussion.

What interviewers often want to hear is whether you can name the key observation cleanly:

  • if an index is beyond the farthest reachable point, the game is over
  • if an index is reachable, the only thing worth updating is the farthest point the prefix can now reach

Once you can say that clearly, the code tends to follow without much strain.

For the original problem statement and the lower-level details behind the challenge, the LeetCode problem page is the place to look.


The Real Lesson in Jump Game

Jump Game is a good greedy problem because it teaches restraint. The naive reaction is to preserve every possible future choice, but the problem does not ask for that much detail. It only asks whether the reachable region ever reaches the end.

Once we notice that, the problem becomes less about jumping and more about tracking a frontier. That is a useful habit well beyond interview questions. A lot of algorithm problems become simpler when we identify the smallest piece of state that still preserves the truth we care about.

That is exactly what the farthest reachable index is doing here. It is not a shortcut around the logic. It is the logic, compressed to the part that matters.


Categories:

  1. Development
  2. Guides
  3. JavaScript
  4. LeetCode