# Locating Roots

Recall that locating the roots of a curve $y = f(x)$ means finding the $x$-coordinates of the points at which the curve crosses the $x$-axis. To do this **analytically** means to do it by solving the equation $f(x) = 0$ using algebra. We know how to find roots analytically for some functions, quadratics for example, by solving $f(x) = 0$ for $x$ directly. We can also do it for some cubics, some trigonometric equations and so on. However, there are many equations whose roots we cannot find analytically, and we must instead find them **numerically**. This is where we make ‘guesses’ for the roots and improve them successively via some calculation. This is sometimes known as **iteration**.

### Change of Sign

We can make a first guess at a root of the equation $f(x) = 0$ (where $f(x) = 0$ cannot be solved analytically) if we see a change in sign of $f(x)$ between two given $x$-values. For example, consider the graph of a trigonometric curve whose roots cannot be found analytically (working in radians). Suppose we can read off two $x$-values $a < b$ with $f(a) > 0$ and $f(b) < 0$. It follows that, since $f$ changes sign, the curve has a root between $a$ and $b$. The same reasoning locates further roots wherever the sign of $f$ changes again between two $x$-values. See Example 1 for successive interval narrowing to get an approximation to the root.

This curve also illustrates some situations to be aware of. If the curve has the same sign at two points, there may be no roots in between, or there may be an even number of them: a curve that dips below the axis and comes back up between two points where it is positive encloses two roots that a same-sign check would miss. Likewise, if the curve has different signs at two points, there is at least one root in between, but there could be any odd number of roots: the curve may cross the axis once, or three times, and so on.

These examples illustrate the importance of choosing intervals sufficiently small to indicate the presence of a single root. The final situation to be aware of is the case where a curve has different signs at two points and yet has no roots in between. This can happen if the curve is not continuous, for instance either side of a vertical asymptote. In these cases, a change of sign may be due to an asymptote rather than a root.

To summarise: **if the function $f$ is continuous on a sufficiently small interval $[a, b]$, and $f(a)$ and $f(b)$ have opposite signs, then $f(x) = 0$ has a root in the interval $[a, b]$.**
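The change-of-sign test can be turned into a simple interval-halving procedure. Below is a minimal sketch of our own (the function name `bisect` and the example equation $x^3 - x - 2 = 0$ are illustrative assumptions, not taken from the text): we keep whichever half of the interval still shows a sign change, narrowing in on the root.

```python
def bisect(f, a, b, tol=1e-6):
    """Repeatedly halve [a, b], keeping the half where f changes sign,
    until the interval is narrower than tol."""
    if f(a) * f(b) >= 0:
        raise ValueError("f must have opposite signs at a and b")
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:   # sign change in the left half
            b = m
        else:                  # sign change in the right half
            a = m
    return (a + b) / 2

# Illustrative example: f(x) = x^3 - x - 2 changes sign between 1 and 2.
root = bisect(lambda x: x**3 - x - 2, 1, 2)
```

Each halving is one step of the "successive interval narrowing" described above, with the change-of-sign check deciding which half to keep.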

## Iteration – Recursive Formula

An alternative way to find roots is to rewrite the equation $f(x) = 0$ in the form $x = g(x)$. Finding the root that satisfies $f(x) = 0$ is then the same as finding the $x$ that satisfies the equation $x = g(x)$. On a graph, this is the point where the curve $y = g(x)$ crosses the straight line $y = x$. By setting up the recursive formula $x_{n+1} = g(x_n)$, the solution can be found iteratively. This means that we choose $x_0$ and, by setting $n = 0$, the formula says that $x_1 = g(x_0)$. So, our next guess for the root is $x_1$. Similarly, the next guess is $x_2 = g(x_1)$, found by setting $n = 1$ and evaluating $g(x_1)$. Recursively, $x_3$ and $x_4$ are our next guesses for the root and so on. By choosing a value for $x_0$ well, this formula may converge to a root in one of two ways.

This is an example of a **staircase diagram**. We put the first choice of $x_0$ into $g$ to find $x_1$. This is our next value to put into $g$, so we move horizontally to the line $y = x$ to pick up $x_1$ on the $x$-axis. We then move vertically up to the curve to find $x_2$, and so on, converging iteratively to the solution of $x = g(x)$, that is, the original root.
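The staircase behaviour can be reproduced numerically. In this sketch (our own illustration, not from the text), the equation $x^2 - x - 2 = 0$ is rearranged as $x = \sqrt{x + 2}$, and the iterates climb monotonically towards the root $x = 2$:

```python
import math

def fixed_point(g, x0, n_steps=50):
    """Iterate x_{n+1} = g(x_n) starting from x0."""
    x = x0
    for _ in range(n_steps):
        x = g(x)
    return x

# Illustrative rearrangement: x^2 - x - 2 = 0 rewritten as x = sqrt(x + 2).
# Each iterate is larger than the last, tracing a staircase up to x = 2.
root = fixed_point(lambda x: math.sqrt(x + 2), x0=0.5)
```

Here the iterates approach the root from one side only, which is the numerical signature of a staircase diagram.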

This is an example of a **cobweb diagram**. As before, we move vertically to find the $g(x_n)$ values and horizontally to find the new corresponding $x$-values. Due to the shape of the curve, the convergence occurs by winding around the solution, creating a cobweb-like effect as opposed to the staircase effect seen in the other diagram.
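A standard cobweb example (our own choice, not from the text) is the iteration $x_{n+1} = \cos(x_n)$: successive guesses alternate from one side of the solution of $x = \cos x$ to the other, winding inwards.

```python
import math

# Iterate x_{n+1} = cos(x_n); the iterates land alternately above and
# below the fixed point of x = cos(x), as in a cobweb diagram.
x = 1.0
history = []
for _ in range(60):
    x = math.cos(x)
    history.append(x)
```

Printing the first few entries of `history` shows the alternation: each guess overshoots the solution in the opposite direction to the previous one, which is exactly the cobweb pattern.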

In the above diagrams, the choice of $x_0$ does not affect the convergence. That is, all choices of $x_0$ result in staircase/cobweb convergence towards the root. This is dependent on the shape of the curve when compared with the straight line $y = x$, however. The diagrams below show how the curve and the choice of $x_0$ can result in staircase/cobweb divergence.

Note that there may be several choices for rewriting $f(x) = 0$ in the form $x = g(x)$ when finding the recursive formula $x_{n+1} = g(x_n)$. The choice of $g$, as well as the choice of $x_0$, may affect the convergence of the iteration. See Example 2. Even when the sequence converges, the location of the root needs to be justified – see Example 2 for this too.
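To illustrate how the choice of $g$ matters, here is a sketch (the equation and both rearrangements are our own illustrative assumptions): the same equation $x^2 - x - 2 = 0$ can be rewritten as $x = \sqrt{x + 2}$, which converges to the root $x = 2$, or as $x = x^2 - 2$, which diverges from the same starting value.

```python
import math

def iterate(g, x0, n_steps):
    """Apply x_{n+1} = g(x_n) for n_steps iterations."""
    x = x0
    for _ in range(n_steps):
        x = g(x)
    return x

# Same equation, two rearrangements, same starting value x0 = 2.5:
converged = iterate(lambda x: math.sqrt(x + 2), 2.5, 40)  # heads to x = 2
diverged = iterate(lambda x: x*x - 2, 2.5, 5)             # blows up rapidly
```

The difference comes from the gradient of $g$ near the root: $\sqrt{x+2}$ is shallower than the line $y = x$ there, while $x^2 - 2$ is steeper, so one iteration homes in and the other escapes.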

## Locating Roots using the Newton-Raphson Method

The Newton-Raphson method is also an iterative procedure for locating roots. To solve $f(x) = 0$, Newton-Raphson uses a specific recursive formula:

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

Notice the difference between this formula, which uses the derivative $f'(x_n)$, as opposed to any $g(x_n)$ in the iterations above. We revert to looking for the actual root as opposed to the intersection between two curves. To do this, the Newton-Raphson method works using tangents: root guesses are updated by finding where successive tangents cross the $x$-axis.

The Newton-Raphson formula is essentially a rearrangement of the formula for a derivative. Consider the tangent at $x_0$, for example, which crosses the $x$-axis at $x_1$, the improved guess. We find the gradient of the tangent (or derivative of the curve) using *rise over run*: $f'(x_0) = \frac{f(x_0) - 0}{x_0 - x_1}$. We can rearrange this to get $x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}$. This basically says that we improve the guess for the root by moving backwards by an amount that is the *run*. If the gradient is *rise over run*, then the *run* is *rise over gradient*, that is, $\frac{f(x_0)}{f'(x_0)}$. See Example 3 for the use of Newton-Raphson in a modelling example where we locate a stationary point rather than a root.
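The tangent-following update translates directly into code. This is a minimal sketch of our own (the example function $f(x) = x^3 - x - 2$ and its derivative are illustrative assumptions, not from the text):

```python
def newton_raphson(f, df, x0, n_steps=20):
    """Follow the tangent at each guess to the x-axis:
    x_{n+1} = x_n - f(x_n) / df(x_n)."""
    x = x0
    for _ in range(n_steps):
        x = x - f(x) / df(x)
    return x

# Illustrative example: f(x) = x^3 - x - 2, so f'(x) = 3x^2 - 1.
root = newton_raphson(lambda x: x**3 - x - 2,
                      lambda x: 3*x**2 - 1,
                      x0=1.5)
```

Each pass of the loop computes the *run* $\frac{f(x_n)}{f'(x_n)}$ and steps the guess back by that amount, exactly as in the rearrangement above.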

Since the formula involves division by the derivative, the Newton-Raphson method can fail when the derivative is 0, that is, when locating roots near a stationary point. This makes sense when considering that the tangent at a stationary point is flat and does not cross the $x$-axis, so it cannot be used to update the root guess. It follows that choosing $x_0$ at a stationary point will result in the method failing. In the case where $f'(x_n)$ is small, division by a small number creates a large update to the root guess and convergence can be slow. Hence, the choice of $x_0$ should be made away from any stationary points.
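Both failure modes are easy to see numerically. In this sketch (our own illustration, not from the text), $f(x) = x^2 - 1$ has a stationary point at $x = 0$: starting exactly there divides by zero, and starting just beside it produces an enormous first update.

```python
def f(x): return x*x - 1      # stationary point at x = 0
def df(x): return 2*x

# Starting exactly at the stationary point: the update divides by zero.
try:
    x = 0.0
    x = x - f(x) / df(x)
    failed = False
except ZeroDivisionError:
    failed = True

# Starting near it: from x = 0.001 the next guess jumps to roughly 500,
# far from both roots at x = 1 and x = -1.
x_small = 0.001
x_next = x_small - f(x_small) / df(x_small)
```

This is why the starting value should be chosen well away from any stationary point: a flat or nearly flat tangent either gives no update at all or throws the guess far from the root.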