The NonLinearConjugateGradientOptimizer does a line search for a zero of the gradient's dot product with the search direction (see the comment from the source, quoted below), rather than a search for a minimum of the function along that direction (the latter is what is used in Numerical Recipes and in the simple discussion on Wikipedia, http://en.wikipedia.org/wiki/Nonlinear_conjugate_gradient_method ). Is this wise? It seems a clever idea, but on a complicated surface with numerical errors the zero in the gradient may not be at a function minimum, and the algorithm could be a deoptimizer. I ask because (in a problem too complex to easily reproduce) I'm sometimes getting junk as output from this routine.
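
To make the concern concrete, here is a minimal, self-contained sketch (plain Java; the names and the bisection solver are illustrative stand-ins, not the actual Commons Math internals). Along the search direction, f(alpha) = -alpha^2 has a maximum at alpha = 0, yet a root finder applied to the directional derivative phi'(alpha) happily "converges" there:

    public class LineSearchZeroDemo {
        public static void main(String[] args) {
            // phi'(alpha) changes sign on [-1, 1], so bisection finds its root.
            double alpha = bisectRoot(-1.0, 1.0, 1e-12);
            System.out.println("zero of phi' at alpha = " + alpha);   // ~0
            System.out.println("f(alpha)       = " + f(alpha));       //  0.0
            System.out.println("f(alpha + 0.5) = " + f(alpha + 0.5)); // -0.25, smaller!
        }

        // Objective restricted to the line: f(alpha) = -alpha^2 (a maximum at 0).
        static double f(double alpha)    { return -alpha * alpha; }

        // Directional derivative phi'(alpha) = gradient . direction = -2 * alpha.
        static double dPhi(double alpha) { return -2.0 * alpha; }

        // Plain bisection on phi', standing in for the optimizer's internal solver.
        static double bisectRoot(double lo, double hi, double tol) {
            while (hi - lo > tol) {
                double mid = 0.5 * (lo + hi);
                if (dPhi(lo) * dPhi(mid) <= 0) hi = mid; else lo = mid;
            }
            return 0.5 * (lo + hi);
        }
    }

The root of phi' is found cleanly, but it sits at a maximum along the line: f decreases in both directions away from the "solution". A search for a minimum of f itself (Numerical Recipes style) would reject that point.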

Bruce

Comment for the LineSearchFunction:

 * The function represented by this class is the dot product of
 * the objective function gradient and the search direction. Its
 * value is zero when the gradient is orthogonal to the search
 * direction, i.e. when the objective function value is a local
 * extremum along the search direction.
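
For reference, a sketch of what a function like that computes (class, field, and helper names here are hypothetical, not the actual Commons Math implementation; the objective is hard-coded to f(x) = sum of x_i^2):

    public class LineSearchSketch {
        private final double[] startPoint;      // current iterate x
        private final double[] searchDirection; // direction d

        LineSearchSketch(double[] startPoint, double[] searchDirection) {
            this.startPoint = startPoint;
            this.searchDirection = searchDirection;
        }

        // value(alpha) = grad f(x + alpha * d) . d; zero when the gradient
        // is orthogonal to the search direction.
        double value(double alpha) {
            double dot = 0.0;
            for (int i = 0; i < startPoint.length; i++) {
                double xi = startPoint[i] + alpha * searchDirection[i];
                dot += 2.0 * xi * searchDirection[i]; // grad f(x)_i = 2 * x_i
            }
            return dot;
        }

        public static void main(String[] args) {
            LineSearchSketch ls = new LineSearchSketch(
                    new double[] {1.0, 1.0}, new double[] {-1.0, -1.0});
            System.out.println(ls.value(0.0)); // -4.0: still descending
            System.out.println(ls.value(1.0)); //  0.0: here, the minimum along d
        }
    }

For this convex example the zero of the dot product is indeed the minimum along d; the question above is about what happens when it is not.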
