# Gauss-Newton Methods

The Gauss-Newton method uses the first and second derivatives of the WSS with respect to the parameter values to estimate the direction and distance the program should move to reach a better point on the WSS surface.

Equation 11.5.1 Equation for the next parameter value according to the Gauss-Newton method, one parameter:

$$P_{new} = P_{old} - \frac{\partial WSS / \partial P}{\partial^{2} WSS / \partial P^{2}}$$

Equation 11.5.2 Equation for the next parameter value according to the Gauss-Newton method, multiple parameters:

$$\mathbf{P}_{new} = \mathbf{P}_{old} - \mathbf{H}^{-1}\,\nabla WSS$$

where ∇WSS is the vector of first derivatives and **H** is the matrix of second derivatives of the WSS with respect to the parameters.

The nonlinear regression program usually performs these first and second differentiations numerically. (For special problems it can be useful to derive analytical equations for these derivatives, but this is not generally done.) Numerical differentiation requires calculating the WSS at two values of each parameter. The relative stepsize between these values is represented by dT in Boomer; the default value is usually satisfactory. Each pass through the equations above provides a single iteration, hopefully getting closer to the global minimum with each iteration. The program needs a criterion to decide when it has converged on the solution and it is time to stop. This convergence criterion, dC, can be expressed as either the smallest fractional change in WSS or the smallest fractional change in the parameter values.
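As a concrete illustration of these numerical steps, here is a minimal one-parameter sketch in Python. This is not Boomer's actual code: the model, data, and starting value are invented, but the central-difference derivatives, the relative stepsize dT, and the convergence criterion dC follow the description above.

```python
# Hypothetical one-compartment bolus model, C = C0 * exp(-kel * t).
# All data values below are invented for illustration.
import math

C0 = 10.0
t_obs = [0.5, 1.0, 2.0, 4.0, 8.0]
c_obs = [9.2, 8.4, 7.2, 5.1, 2.6]

def wss(kel):
    """Weighted sum of squares (unit weights for simplicity)."""
    return sum((c - C0 * math.exp(-kel * t)) ** 2
               for t, c in zip(t_obs, c_obs))

def gauss_newton(kel, dT=1e-4, dC=1e-6, max_iter=50):
    """One-parameter Gauss-Newton: iterate until the fractional
    change in WSS falls below the convergence criterion dC."""
    for _ in range(max_iter):
        h = dT * kel                         # relative stepsize dT
        w0, wp, wm = wss(kel), wss(kel + h), wss(kel - h)
        d1 = (wp - wm) / (2 * h)             # numerical first derivative
        d2 = (wp - 2 * w0 + wm) / h ** 2     # numerical second derivative
        kel_new = kel - d1 / d2              # Gauss-Newton update
        if abs(wss(kel_new) - w0) / w0 < dC:
            return kel_new                   # converged
        kel = kel_new
    return kel

kel_fit = gauss_newton(0.1)
```

Starting from kel = 0.1 hr⁻¹ the iterations settle near the least-squares estimate within a few passes; with a poor starting value the raw Newton step can overshoot, which is what motivates the damping and Marquardt variants.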

• Relatively efficient (direction and step size determined)
• Works especially well near the minimum
• May become lost with poor initial estimates

## Damping Gauss-Newton Method

A major disadvantage of the Gauss-Newton method is that it can get lost. Further away from the minimum the shape of the WSS surface may be irregular, and the differentiation equations may point in the wrong direction or suggest the wrong distance. A locally shallow slope might suggest a next point further away than would be best. A local minimum might even have a slope in the wrong direction, leading the method even further from the best answer. One way to compensate for this possible error is to damp the stepsize, reducing the distance by half with each damping. Damping may not solve all the problems, but it can be useful.

Figure 11.5.1 Contour map of kel versus V showing probable path of the Damping Gauss-Newton method
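The halving rule can be sketched as follows; the WSS surface here is a hypothetical one-parameter stand-in, and `damped_step` and `max_damps` are illustrative names, not Boomer's.

```python
# Sketch of step damping (illustrative only, not Boomer's actual code).
import math

def wss(p):
    # Hypothetical irregular WSS surface with its minimum near p = 2
    return (p - 2.0) ** 2 + 0.5 * math.sin(3 * p) + 1.0

def damped_step(p, full_step, max_damps=10):
    """Halve the proposed Gauss-Newton step until the WSS improves."""
    step = full_step
    for _ in range(max_damps):
        if wss(p + step) < wss(p):
            return p + step          # accept the (possibly damped) step
        step /= 2.0                  # damp: reduce the distance by half
    return p                         # no improvement found; stay put
```

For example, from p = 1.0 a full step of 5.0 overshoots the minimum and increases the WSS, so the step is halved twice before an improving point is accepted.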

## Marquardt Method

The Marquardt method is another variation of the Gauss-Newton method.

Equation 11.5.3 Equation for the next parameter value using Marquardt's method:

$$\mathbf{P}_{new} = \mathbf{P}_{old} - (\mathbf{H} + \mu\mathbf{I})^{-1}\,\nabla WSS$$

where ∇WSS is the vector of first derivatives, **H** the matrix of second derivatives, **I** the identity matrix, and µ a factor adjusted during the fit.

The Marquardt equation is similar to the Gauss-Newton equation above with the addition of the µI term. This term allows the method to act like the steepest descent method further from the global minimum, where steepest descent performs well, and like the Gauss-Newton method closer to the answer, where Gauss-Newton is better.

Figure 11.5.2 Contour map of kel versus V showing probable path of the Marquardt method
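A sketch of how the µ adjustment might work in practice, using the two parameters of the figure (kel and V) with a hypothetical bolus-dose data set. The multiply-by-ten / divide-by-ten schedule for µ is one common choice, not necessarily Boomer's, and the derivatives are numerical, as described earlier.

```python
# Hypothetical IV bolus model, C = Dose / V * exp(-kel * t); all
# data values are invented for illustration (not from the text).
import math

DOSE = 100.0
t_obs = [1.0, 2.0, 4.0, 8.0]
c_obs = [8.0, 6.5, 4.2, 1.8]

def wss(kel, v):
    return sum((c - DOSE / v * math.exp(-kel * t)) ** 2
               for t, c in zip(t_obs, c_obs))

def marquardt(kel, v, mu=1.0, dC=1e-8, max_iter=200, h=1e-4):
    for _ in range(max_iter):
        w0 = wss(kel, v)
        # numerical first derivatives (gradient of the WSS)
        g1 = (wss(kel + h, v) - wss(kel - h, v)) / (2 * h)
        g2 = (wss(kel, v + h) - wss(kel, v - h)) / (2 * h)
        # numerical second derivatives (Hessian of the WSS)
        h11 = (wss(kel + h, v) - 2 * w0 + wss(kel - h, v)) / h ** 2
        h22 = (wss(kel, v + h) - 2 * w0 + wss(kel, v - h)) / h ** 2
        h12 = (wss(kel + h, v + h) - wss(kel + h, v - h)
               - wss(kel - h, v + h) + wss(kel - h, v - h)) / (4 * h ** 2)
        # solve (H + mu*I) * step = gradient  (2x2 Cramer's rule)
        a, d = h11 + mu, h22 + mu
        det = a * d - h12 * h12
        s1 = (g1 * d - h12 * g2) / det
        s2 = (a * g2 - h12 * g1) / det
        if wss(kel - s1, v - s2) < w0:     # improvement: accept it and
            kel, v = kel - s1, v - s2      # act more like Gauss-Newton
            mu /= 10.0
            if (w0 - wss(kel, v)) / w0 < dC:
                return kel, v              # converged
        else:                              # no improvement: act more
            mu *= 10.0                     # like steepest descent
    return kel, v
```

Because a step is accepted only when it lowers the WSS, the fit improves monotonically: µ grows when the Gauss-Newton direction fails far from the minimum and shrinks as the quadratic approximation becomes reliable near the answer.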

All of these Gauss-Newton methods can be quite efficient, but the calculations are involved and the methods can get lost or become numerically unstable. The Damping Gauss-Newton and Marquardt methods can be useful and are included in a number of nonlinear regression programs.