# 33. Jump Game

## Introduction

In this problem we will investigate whether a solution exists for a game played in an array where you are the only player and you are initially located at the first cell of the array. Your goal is to get to the last cell by jumping from one cell to another a specified number of times. The array contains information about the length of the jump you can take from a cell.

There are a number of possible solutions to this problem and in this Chapter we will discuss two specific approaches. In particular:

In Section 33.2 we take a look at the most intuitive approach where we try all possible jumps in a backtracking-like manner.

In Section 33.3 we will refine the solution of Section 33.2 into one that uses a clever insight to visit the cells efficiently.

Finally, in Section 33.4 we will discuss an efficient and concise greedy solution.

## Problem statement

Write a function that takes as input an array \(I\) of non-negative integers. You are initially positioned at the beginning of the array (at index \(0\)) and your goal is to jump from cell to cell until you reach the end of the array (cell \(|I|-1\)). If you are at index \(j\) you are allowed to jump to any cell in the range \([j-I_j, j+I_j]\) (each cell of the array contains the length of the longest jump you can take from it). The function should return true if you are able to reach the last cell of the array, and false otherwise.

Given \(I=[2,3,1,1,4]\) the function returns **true**. You jump from cell \(0\) to cell \(1\) and then take a \(3\)-cell-wide jump to the end of the array. See Figure 33.1. [ex:can_jump_example1]

Given \(I=[3,2,1,0,4]\) the function returns **false** because it is impossible to reach any cell with index higher than \(3\). See Figure 33.2: there is no incoming edge for the node with label \(4\). [ex:can_jump_example2]

Figure 33.1: reaching the last cell of \(I=[2,3,1,1,4]\). [fig:can_jump:example1]

Figure 33.2: the last cell of \(I=[3,2,1,0,4]\) has no incoming edge and cannot be reached. [fig:can_jump:example2]

## Backtracking

The first solution we will investigate is based on an idea similar to DFS, where \(I\) is treated as an implicit graph in which each cell (a node) is connected to all the other cells that can be reached by jumping from it. The set of cells reachable from a given cell \(c\) is identified by the length of the longest jump you can perform from \(c\), which is stored within \(c\) itself. The idea is to use DFS to check whether the last node of the graph is connected to the first one: in other words, whether there is a path from the first to the last node. We proceed recursively, trying to visit all the nodes reachable from the node we currently occupy, and continue this process until either we have reached the last node or there are no more jumps to try; in the latter case there is no way to reach the last node (i.e. the last node is disconnected).

As the implicit graph is not guaranteed to be acyclic, to make this approach work we need to ensure that we do not jump back and forth between cells in a cycle. This can happen if, for instance, you jump from cell \(0\) to cell \(1\) and then back to cell \(0\). To overcome this issue we only perform forward jumps, so that it is impossible to get stuck in a cycle while still being able to find an answer. When you jump forward from a cell \(j\) to a cell \(i\) with \(j < i\), you know that you can also visit every cell \(k\) with \(j \leq k \leq i\) (all the cells in between \(j\) and \(i\)). Therefore, if you only jump forward, you never need a backward jump to visit any such cell \(k\): it is visited anyway when performing forward jumps from \(j\).

An implementation of this idea is shown in Listing [list:can_jump1_1]. This approach is correct and will eventually find a solution, but it is extremely inefficient.
Its complexity is exponential in time, as the same cells are potentially visited over and over[^28], and constant in space[^29].

Listing 1: Exponential time solution to the *jump game* problem where only forward jumps are performed.

```
bool can_jump_DFS_forward_only_helper(const vector<int>& nums, const int n)
{
  const int tgt = nums.size() - 1;
  if (n == tgt)
    return true;

  const int r = std::min(tgt, n + nums[n]);
  for (int i = n + 1; i <= r; i++)
  {
    if (can_jump_DFS_forward_only_helper(nums, i))
      return true;
  }
  return false;
}

bool can_jump_DFS_forward_only(const vector<int>& nums)
{
  return can_jump_DFS_forward_only_helper(nums, 0);
}
```
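
To see the inefficiency concretely, we can instrument the helper with a call counter and feed it a family of unreachable instances. The sketch below is ours (the names `can_jump_counted` and `worst_case` are not part of the chapter's listings): on the array \([k-1, k-2, \ldots, 1, 0, 0]\) the last cell has no incoming jump, so the answer is false and the search must exhaust every path, performing \(2^{k-1}\) recursive calls.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using std::vector;

// Instrumented copy of the forward-only DFS helper (hypothetical, for
// illustration): `calls` counts how many times the helper is entered.
bool can_jump_counted(const vector<int>& nums, const int n, long long& calls)
{
  ++calls;
  const int tgt = (int)nums.size() - 1;
  if (n == tgt)
    return true;
  const int r = std::min(tgt, n + nums[n]);
  for (int i = n + 1; i <= r; i++)
    if (can_jump_counted(nums, i, calls))
      return true;
  return false;
}

// A family of unreachable instances: [k-1, k-2, ..., 1, 0, 0].
// Every cell i < k reaches exactly cells i+1..k-1 and never cell k,
// so the call count doubles with each extra cell.
vector<int> worst_case(const int k)
{
  vector<int> v(k + 1);
  for (int i = 0; i < k; i++)
    v[i] = k - 1 - i;
  v[k] = 0;
  return v;
}
```

Doubling the input size by one cell doubles the number of recursive calls, which is exactly the exponential behaviour described above.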

## DFS

Another option for solving the cycle problem arising from the algorithm described in Section 33.2 (this solution can in fact be thought of as an optimized backtracking) is to keep track of the cells that we have already visited: every time we are about to jump to a cell we first check whether that cell has already been visited and, if it has, the jump is discarded. As such, no cell is actually visited twice and the complexity in this case is \(O(|I|^2)\): in the worst-case scenario, for each cell we must check whether all the other cells have already been visited. Listing [list:can_jump1] shows an implementation of this idea.

Listing 2: Quadratic time and linear space DFS solution to the *jump game* problem using a visited array.

```
bool can_jump_DFS_helper(const vector<int>& nums,
                         vector<bool>& visited,
                         const int n)
{
  const int tgt = nums.size() - 1;
  if (n == tgt)
    return true;

  visited[n] = true;
  const int l = std::max(0, n - nums[n]);
  const int r = std::min(tgt, n + nums[n]);
  bool sol_found = (r == tgt);
  for (int i = l; i <= r && !sol_found; i++)
  {
    if (visited[i])
      continue;
    sol_found = can_jump_DFS_helper(nums, visited, i);
  }
  return sol_found;
}

bool can_jump_DFS(const vector<int>& nums)
{
  std::vector<bool> visited(nums.size(), false);
  return can_jump_DFS_helper(nums, visited, 0);
}
```

Note that one optimization from which this solution (and perhaps also Listing [list:can_jump1_1]) can benefit is to always try the longest possible jump first. Although this does not change their asymptotic complexity, in practice it might be faster.
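
This optimization can be sketched by reversing the loop of Listing 1 so that the widest jump is attempted first (a variant of ours, not one of the chapter's listings):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using std::vector;

// Same forward-only search as Listing 1, but the loop runs from the
// farthest reachable cell down to n + 1, so the widest jump is tried
// first; a reachable target is then often found in fewer calls.
bool can_jump_longest_first(const vector<int>& nums, const int n = 0)
{
  const int tgt = (int)nums.size() - 1;
  if (n == tgt)
    return true;
  const int r = std::min(tgt, n + nums[n]);
  for (int i = r; i > n; i--)  // widest jump first
    if (can_jump_longest_first(nums, i))
      return true;
  return false;
}
```

On the first worked example, \(I=[2,3,1,1,4]\), the widest-first order reaches the last cell after visiting only a handful of nodes, while the worst-case behaviour remains exponential.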

## Greedy

There is, however, a much faster solution to this problem, based on the idea that we can return true if we can jump from the cell at index \(0\) to a cell from which the end of the array can be reached. If we apply the same reasoning to a generic index \(x\) we end up with what is essentially a dynamic programming approach: let \(G(x)\) be \(1\) if you can reach the end of the array from cell \(x\) and \(0\) otherwise; then \[\begin{cases} G(|I|-1) = 1 \\ G(x) = 1 \: \: \text{if} \: \: \exists \: y > x \:\: \text{s.t.} \:\: y \leq x+I_x \: \: \text{and} \: \: G(y) = 1\\ G(x) = 0 \: \: \text{otherwise} \end{cases} \label{eq:can_jump:dpformula}\]

Equation [eq:can_jump:dpformula] shows that a possible implementation would process cells from last to first, and that for each cell a linear-time search for a suitable cell \(y\) might be needed. Therefore the complexity of this solution is quadratic in time. However, we can drastically lower it by noting that when processing cell \(x\) all we care about is whether the closest cell to the right from which the end of the array is reachable can itself be reached from \(x\). We can carry this information in a variable \(m\) down from cell \(|I|-1\) to cell \(0\), updating it after each cell is processed; this effectively gives us a linear time solution.
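
Before applying the optimization, the recurrence can be implemented directly as a quadratic-time bottom-up DP (a sketch of ours; the chapter does not list this intermediate version):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using std::vector;

// Direct O(|I|^2) implementation of the recurrence: G[x] is true iff
// some cell y in (x, x + nums[x]] has G[y] true; G[|I|-1] is true by
// definition. The answer for the whole problem is G[0].
bool can_jump_dp(const vector<int>& nums)
{
  const int n = nums.size();
  vector<bool> G(n, false);
  G[n - 1] = true;
  for (int x = n - 2; x >= 0; x--)
  {
    const int r = std::min(n - 1, x + nums[x]);
    for (int y = x + 1; y <= r && !G[x]; y++)
      G[x] = G[y];  // linear-time search for a suitable y
  }
  return G[0];
}
```

The inner search over \(y\) is what the variable \(m\) eliminates in the linear time version.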

To summarize, the linear time solution for this problem works as follows. We iterate over the array \(I\) right-to-left and for each cell \(x\) we check whether we can reach \(m\) by jumping from \(x\). If we can, then \(x\) is the new leftmost cell from which we can reach the end of the array, thus \(m = x\). Otherwise we continue by processing cell \(x-1\) in the same manner. Eventually we will have processed all cells, and we return true if \(m = 0\), meaning that cell \(0\) is the leftmost cell from which we can reach location \(|I|-1\), and false otherwise.

Listing 3: Greedy solution using the fact that the DP solution described by Equation [eq:can_jump:dpformula] can be optimized if we only consider whether it is possible to reach the closest cell from which we can jump to the end of the array.

```
bool can_jump_linear(const vector<int>& nums)
{
  const int size = nums.size();
  int m = size - 1;
  for (auto i = size - 2; i >= 0; i--)
  {
    const int max_reach = i + nums[i];
    if (max_reach >= m)
      m = i;
  }
  return m == 0;
}
```
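
For comparison, the same greedy idea can also be run left-to-right, carrying the farthest reachable index instead of the leftmost cell from which the end is reachable (an alternative sketch of ours, not the chapter's listing):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using std::vector;

// Left-to-right counterpart of the greedy in Listing 3: `reach` is the
// farthest index reachable using the cells seen so far. If the scan
// ever passes `reach`, the remaining cells (including the last) are
// unreachable and we can stop early.
bool can_jump_forward_greedy(const vector<int>& nums)
{
  const int tgt = (int)nums.size() - 1;
  int reach = 0;
  for (int i = 0; i <= tgt; i++)
  {
    if (i > reach)
      return false;  // cell i itself cannot be reached
    reach = std::max(reach, i + nums[i]);
  }
  return true;
}
```

Both directions are \(O(|I|)\) time and \(O(1)\) space; the forward scan has the minor advantage of returning false as soon as a gap is detected, without processing the rest of the array.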