Consider the tangent to the function at the current estimate $x_n$; it meets the $x$-axis at
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
If $\alpha$ is the root and $\epsilon_n = x_n - \alpha$ is the error of the $n$th estimate, expanding $f$ in a Taylor series about $\alpha$ gives
$$\epsilon_{n+1} \approx \frac{f''(\alpha)}{2f'(\alpha)}\,\epsilon_n^2,$$
where we've neglected cubic and higher powers of the error, since they will be much smaller than the squared term when the error itself is small. This means that the number of correct decimal places doubles with each step, much faster than linear convergence.
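As a concrete check, here is a minimal Python sketch of the iteration; the test function $f(x) = x^2 - 2$ and the starting point are assumptions chosen only so that the root, $\sqrt{2}$, is known and the error can be printed directly:

```python
import math

# Newton-Raphson on the assumed test function f(x) = x**2 - 2,
# whose root sqrt(2) is known, so the error is easy to track.
f  = lambda x: x * x - 2
fp = lambda x: 2 * x

x = 1.5
for n in range(1, 6):
    x = x - f(x) / fp(x)                # intersect the tangent with the x-axis
    print(n, x, abs(x - math.sqrt(2)))  # error roughly squares each step
```

The printed errors fall from about $10^{-3}$ to $10^{-6}$ to $10^{-12}$, showing the doubling of correct digits.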
This sequence will converge provided the initial guess is close enough to the root that $\left|\frac{f''(\alpha)}{2f'(\alpha)}\,\epsilon_0\right| < 1$. There are also higher-order schemes, such as Halley's method, which converge cubically, tripling the number of correct digits at each iteration, about 50% faster than Newton-Raphson.
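A sketch of one such cubic scheme follows; Halley's method is used here as an assumed example (the text above does not fix a particular method), again on $f(x) = x^2 - 2$:

```python
import math

# Halley's method (an assumed example of a cubically convergent scheme)
# on f(x) = x**2 - 2; correct digits roughly triple each step.
f   = lambda x: x * x - 2
fp  = lambda x: 2 * x
fpp = lambda x: 2.0

x = 1.5
for n in range(1, 5):
    x -= 2 * f(x) * fp(x) / (2 * fp(x) ** 2 - f(x) * fpp(x))
    print(n, x, abs(x - math.sqrt(2)))
```

Starting from the same guess as before, the error falls to about $10^{-5}$ after one step, to about $10^{-13}$ after two, and to machine precision after three.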
The roots are calculated using the equation of the chord through $(a, f(a))$ and $(b, f(b))$, i.e. its intersection with the $x$-axis,
$$c = \frac{a\,f(b) - b\,f(a)}{f(b) - f(a)}.$$
As a test problem, take $f(x) = x^2 - 1$. We already know the roots of this equation, $x = \pm 1$, so we can easily check how fast the regula falsi method converges.
For our initial bracketing interval, we'll use $[0, 2]$.
Since $f$ is concave upwards and increasing, a quick sketch of the geometry shows that the chord will always intersect the $x$-axis to the left of the solution, so the right endpoint stays fixed at $2$ while the left endpoint creeps toward the root. Writing $x_n$ for our $n$th approximation, as $x_n$ approaches $1$ each extra iteration reduces the error by two-thirds, rather than one-half as the bisection method would.
The convergence of this method is therefore linear: the error is multiplied by a roughly constant factor (here $1/3$) at each step.
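The worked example is easy to reproduce; this sketch assumes the reconstructed test function $f(x) = x^2 - 1$ on $[0, 2]$ and prints the ratio of successive errors so the factor of $1/3$ can be seen directly:

```python
# Regula falsi on f(x) = x**2 - 1 over [0, 2]; the root x = 1 is known,
# so we can print the ratio of successive errors (it tends to 1/3).
f = lambda x: x * x - 1

a, b = 0.0, 2.0
prev_err = 1.0
for n in range(1, 9):
    c = (a * f(b) - b * f(a)) / (f(b) - f(a))  # chord meets the x-axis here
    if f(a) * f(c) < 0:
        b = c                                   # root lies in [a, c]
    else:
        a = c                                   # root lies in [c, b]
    err = abs(c - 1.0)
    print(n, c, err, err / prev_err)
    prev_err = err
```

The successive ratios $0.5, 0.4, 0.357, \ldots$ settle toward $1/3$, confirming the two-thirds error reduction claimed above.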
While roots can be found directly for algebraic equations of degree four or lower, and for a few special transcendental equations, in practice we need to solve equations of higher degree, as well as arbitrary transcendental equations.
As analytic solutions are often too cumbersome, or simply do not exist, we need to find an approximate method of solution.
Numerical analysts and applied mathematicians have a variety of tools which they use in developing numerical methods for solving mathematical problems.
An important perspective, mentioned earlier, which cuts across all types of mathematical problems is that of replacing the given problem with a 'nearby' problem which can be solved more easily. Both methods above are instances of this idea: Newton-Raphson replaces $f$ by its tangent line, and regula falsi replaces it by a chord.