

There are numerous examples where the optimization might not have
converged to the stopping condition, but the minimum discovered point
is better than the starting point that was initially provided. The
user should have the ability to at least decide if it is good enough,
or use it as a starting point into a different optimization run. This
is the behavior that most optimizers have, including MATLAB.
This functionality is important when you have a time constraint, where any improvement in the result is better than no information at all. There are also nonconvex regions of a function where the convergence rate is very slow, and you might want to stop.
A simple modification for now is to just let
org.apache.commons.math3.exception.TooManyEvaluationsException carry
the best solution found so far.
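A hypothetical sketch of what such an exception could look like (the subclass name, fields, and accessors are illustrative assumptions, not the actual Commons Math API):

```java
// Illustrative only: a TooManyEvaluationsException-style exception that also
// carries the best solution found before the evaluation budget ran out.
class TooManyEvaluationsWithResultException extends RuntimeException {
    private final double[] bestPoint;
    private final double bestValue;

    TooManyEvaluationsWithResultException(double[] bestPoint, double bestValue) {
        super("Maximum evaluation count exceeded; best value so far: " + bestValue);
        this.bestPoint = bestPoint.clone(); // defensive copy
        this.bestValue = bestValue;
    }

    double[] getBestPoint() { return bestPoint.clone(); }
    double getBestValue()   { return bestValue; }
}
```

A caller could then catch the exception and either accept getBestPoint() as good enough or restart the optimizer from it.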

To unsubscribe, email: [hidden email]
For additional commands, email: [hidden email]


On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
> There are numerous examples when the optimization might not have
> converged to the stopping condition, but the minimum discovered point
> is better than the starting point that was initially provided. The
> user should have the ability to at least decide if it is good enough,
> or use it as a starting point into a different optimization run. This
> is the behavior that most optimizers have, including MATLAB.
>
> This functionality important when you have a time constraint, where any improvement in result is better than no information at all. There are also nonconvex regions of a function where the convergence rate is very slow, and you might want to stop.
>
> A simple modification for now is to just let
> org.apache.commons.math3.exception.TooManyEvaluationsException contain
> within it the best found solution so far.
>
I agree that there are many situations where one is constrained by the amount of resources one can throw at a problem and the user would like the best solution given those resources (here, the maximum number of iterations) so I fully support a solution that lets us get the best solution that has been achieved. I wonder if it is possible, instead of using the exception, to specify a convergence checker that checks not only the tolerances, but could instead return success if the maximum number of iterations has been exceeded. I realize this will look inappropriate to some, but using the convergence checker in this way would be entirely up to the person calling it. If you can get back the number of iterations used, then presumably if that is equal to the max you'll know it hasn't really converged.
Bruce
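Bruce's idea could be sketched roughly as follows. The interface is re-declared here so the example is self-contained (in Commons Math it would be the ConvergenceChecker used by the optimizers), and the class name and value-based tolerance are assumptions for illustration:

```java
// Minimal stand-in for Commons Math's ConvergenceChecker callback.
interface ConvergenceChecker<P> {
    boolean converged(int iteration, P previous, P current);
}

// A checker that reports "converged" either when the objective value has
// stabilized or when the iteration budget is exhausted. The caller must
// compare the final iteration count against maxIter to tell the cases apart.
final class ValueOrMaxIterChecker implements ConvergenceChecker<Double> {
    private final double absTol;
    private final int maxIter;

    ValueOrMaxIterChecker(double absTol, int maxIter) {
        this.absTol = absTol;
        this.maxIter = maxIter;
    }

    @Override
    public boolean converged(int iteration, Double previous, Double current) {
        if (iteration >= maxIter) {
            return true; // budget exhausted: treat as "done", not as a failure
        }
        return Math.abs(current - previous) <= absTol;
    }
}
```

As noted above, an iteration count equal to the maximum is then the caller's signal that true convergence may not have occurred.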


On 19 November 2012 21:47, Bruce A Johnson < [hidden email]> wrote:
> On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
>
>> There are numerous examples when the optimization might not have
>> converged to the stopping condition, but the minimum discovered point
>> is better than the starting point that was initially provided. The
>> user should have the ability to at least decide if it is good enough,
>> or use it as a starting point into a different optimization run. This
>> is the behavior that most optimizers have, including MATLAB.
>>
>> This functionality important when you have a time constraint, where any improvement in result is better than no information at all. There are also nonconvex regions of a function where the convergence rate is very slow, and you might want to stop.
>>
>> A simple modification for now is to just let
>> org.apache.commons.math3.exception.TooManyEvaluationsException contain
>> within it the best found solution so far.
>>
>
> I agree that there are many situations where one is constrained by the amount of resources one can throw at a problem and the user would like the best solution given those resources (here, the maximum number of iterations) so I fully support a solution that lets us get the best solution that has been achieved. I wonder if it is possible, instead of using the exception, to specify a convergence checker that checks not only the tolerances, but could instead return success if the maximum number of iterations has been exceeded. I realize this will look inappropriate to some, but using the convergence checker in this way would be entirely up to the person calling it. If you can get back the number of iterations used, then presumably if that is equal to the max you'll know it hasn't really converged.
>
Or it just happened to converge at the same time as reaching max iter
count ... I don't think it's possible to distinguish those two cases
without further information.
>
> Bruce
>
>
>
>
>



Hello.
On Mon, Nov 19, 2012 at 04:47:08PM -0500, Bruce A Johnson wrote:
> On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
>
> > There are numerous examples when the optimization might not have
> > converged to the stopping condition, but the minimum discovered point
> > is better than the starting point that was initially provided. The
> > user should have the ability to at least decide if it is good enough,
> > or use it as a starting point into a different optimization run. This
> > is the behavior that most optimizers have, including MATLAB.
> >
> > This functionality important when you have a time constraint, where any improvement in result is better than no information at all. There are also nonconvex regions of a function where the convergence rate is very slow, and you might want to stop.
> >
> > A simple modification for now is to just let
> > org.apache.commons.math3.exception.TooManyEvaluationsException contain
> > within it the best found solution so far.
> >
>
> I agree that there are many situations where one is constrained by the amount of resources one can throw at a problem and the user would like the best solution given those resources (here, the maximum number of iterations) so I fully support a solution that lets us get the best solution that has been achieved. I wonder if it is possible, instead of using the exception, to specify a convergence checker that checks not only the tolerances, but could instead return success if the maximum number of iterations has been exceeded. I realize this will look inappropriate to some, but using the convergence checker in this way would be entirely up to the person calling it. If you can get back the number of iterations used, then presumably if that is equal to the max you'll know it hasn't really converged.
Your proposal is brilliant: problem solved (IMHO)!
The convergence checkers are there to allow users to define their notion
of convergence; and the API provides the current iteration count.
We could add this feature in CM's "Simple...Checker" implementations by
adding one constructor (and an if-block in the "converged" method) without
touching the optimizers.
The user who passes such a checker will indeed be aware of the implication:
the algorithm might quietly terminate before reaching the optimal solution.
Shall I
1. resolve the issue as "Not a Problem" (the user should implement his own
custom checker) or
2. add the constructors in the checkers defined within CM and resolve as
"Fixed"?
Regards,
Gilles



That would solve the problem. It seems a bit messy though, since you now have two conflicting stopping conditions: one you provide in the checker implementation, the other in the function call. I am not clear why, if you are throwing an exception, you don't provide more information inside it, so the user can adjust their logic based on it.
Also, it seems like some classes, like CMAESOptimizer, also take the number of iterations in their constructor.
On Nov 19, 2012, at 7:36 PM, Gilles Sadowski < [hidden email]> wrote:
> Hello.
>
> On Mon, Nov 19, 2012 at 04:47:08PM 0500, Bruce A Johnson wrote:
>> On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
>>
>>> There are numerous examples when the optimization might not have
>>> converged to the stopping condition, but the minimum discovered point
>>> is better than the starting point that was initially provided. The
>>> user should have the ability to at least decide if it is good enough,
>>> or use it as a starting point into a different optimization run. This
>>> is the behavior that most optimizers have, including MATLAB.
>>>
>>> This functionality important when you have a time constraint, where any improvement in result is better than no information at all. There are also nonconvex regions of a function where the convergence rate is very slow, and you might want to stop.
>>>
>>> A simple modification for now is to just let
>>> org.apache.commons.math3.exception.TooManyEvaluationsException contain
>>> within it the best found solution so far.
>>>
>>
>> I agree that there are many situations where one is constrained by the amount of resources one can throw at a problem and the user would like the best solution given those resources (here, the maximum number of iterations) so I fully support a solution that lets us get the best solution that has been achieved. I wonder if it is possible, instead of using the exception, to specify a convergence checker that checks not only the tolerances, but could instead return success if the maximum number of iterations has been exceeded. I realize this will look inappropriate to some, but using the convergence checker in this way would be entirely up to the person calling it. If you can get back the number of iterations used, then presumably if that is equal to the max you'll know it hasn't really converged.
>
> Your proposal is brilliant: problem solved (IMHO)!
> The convergence checkers are there to allow users to define their notion
> of convergence; and the API provides the current iteration count.
>
> We could add this feature in CM's "Simple...Checker" implementations by
> adding one constructor (and an ifblock in the "converged" method) without
> touching the optimizers.
> The user who passes such a checker will indeed be aware of the implication:
> the algorithm might quietly terminate before reaching the optimal solution.
>
> Shall I
> 1. resolve the issue as "Not a Problem" (the user should implement his own
> custom checker) or
> 2. add the constructors in the checkers defined within CM and resolve as
> "Fixed"?
>
>
> Regards,
> Gilles
>



Hi.
[Please do not top-post.]
On Mon, Nov 19, 2012 at 09:17:39PM -0500, Konstantin Berlin wrote:
> That would solve the problem. Seems a bit messy though, since you now have two conflicting stopping conditions, one you provide in the checker implementation, the other in the function call. I am not clear why if you are throwing an exception you don't provide more information inside it, so the user can adjust their logic based on it.
I rather think that it is a straightforward usage of the current API.
[And it is much cleaner to always get the result as the output of the
"optimize".]
Why "conflicting"? One is assimilated to a solution (as you requested) and
the other will generate an exception (signalling a failure).
As already pointed out, the exception is not the best place to store this
information: in CM, exceptions are assumed to signal incompatibilities between
the input to a method and that method's expectations.
> Also, it seems like some classes, like CMAESOptimizer, also take number iterations in their constructor.
IMO, that's a bug:
https://issues.apache.org/jira/browse/MATH-873
[See my remark in the other thread that was started to deal with your request
(with subject "Making last iteration data available when iterative algorithms
fail to converge").]
Best regards,
Gilles
>
> On Nov 19, 2012, at 7:36 PM, Gilles Sadowski < [hidden email]> wrote:
>
> > Hello.
> >
> > On Mon, Nov 19, 2012 at 04:47:08PM 0500, Bruce A Johnson wrote:
> >> On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
> >>
> >>> There are numerous examples when the optimization might not have
> >>> converged to the stopping condition, but the minimum discovered point
> >>> is better than the starting point that was initially provided. The
> >>> user should have the ability to at least decide if it is good enough,
> >>> or use it as a starting point into a different optimization run. This
> >>> is the behavior that most optimizers have, including MATLAB.
> >>>
> >>> This functionality important when you have a time constraint, where any improvement in result is better than no information at all. There are also nonconvex regions of a function where the convergence rate is very slow, and you might want to stop.
> >>>
> >>> A simple modification for now is to just let
> >>> org.apache.commons.math3.exception.TooManyEvaluationsException contain
> >>> within it the best found solution so far.
> >>>
> >>
> >> I agree that there are many situations where one is constrained by the amount of resources one can throw at a problem and the user would like the best solution given those resources (here, the maximum number of iterations) so I fully support a solution that lets us get the best solution that has been achieved. I wonder if it is possible, instead of using the exception, to specify a convergence checker that checks not only the tolerances, but could instead return success if the maximum number of iterations has been exceeded. I realize this will look inappropriate to some, but using the convergence checker in this way would be entirely up to the person calling it. If you can get back the number of iterations used, then presumably if that is equal to the max you'll know it hasn't really converged.
> >
> > Your proposal is brilliant: problem solved (IMHO)!
> > The convergence checkers are there to allow users to define their notion
> > of convergence; and the API provides the current iteration count.
> >
> > We could add this feature in CM's "Simple...Checker" implementations by
> > adding one constructor (and an ifblock in the "converged" method) without
> > touching the optimizers.
> > The user who passes such a checker will indeed be aware of the implication:
> > the algorithm might quietly terminate before reaching the optimal solution.
> >
> > Shall I
> > 1. resolve the issue as "Not a Problem" (the user should implement his own
> > custom checker) or
> > 2. add the constructors in the checkers defined within CM and resolve as
> > "Fixed"?
> >
> >
> > Regards,
> > Gilles
> >



I am ok with option 2.
I guess my issue goes to my problem with the general API. The number
of iterations is a stopping condition, just like all the other
conditions that are for some reason called convergence conditions. The
number-of-iterations condition is singled out as "bad", hence it
throws an exception, while the others don't throw an exception, and I
guess are considered "good".
However, a stopping condition, even if you exclude the number of
iterations condition, does not imply convergence, i.e. that you found
a min point. In a convex solver you have to look at the first- and
second-order optimality conditions in order to declare "success". A
typical stop of the algorithm could be that it has not made enough
progress rather than that it found the min, but with the current framework
this is also considered "good", while the user does not have a clue
that something went wrong unless they take the solution and start
checking it themselves.
That is why, if you look at MATLAB's fminunc, it reports about five
different flags for termination.
This probably goes along with the point in the 873 bug that talks
about internal vs external criteria.
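As a rough illustration of the fminunc-style reporting being suggested (the enum and its members are made up for the sketch, not a proposed API):

```java
// Illustrative termination reasons, loosely modeled on MATLAB fminunc's exit
// flags. Returning such a status alongside the best point found would let the
// caller distinguish "true" convergence from merely running out of budget or
// of progress.
enum TerminationStatus {
    GRADIENT_TOLERANCE_MET,     // first-order optimality condition satisfied
    STEP_SIZE_TOO_SMALL,        // iterates stopped moving
    FUNCTION_CHANGE_TOO_SMALL,  // objective stopped improving
    MAX_EVALUATIONS_REACHED,    // budget exhausted; result may be far from a minimum
    FUNCTION_NOT_FINITE         // objective returned NaN or infinity
}
```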
On Nov 19, 2012, at 10:06 PM, Gilles Sadowski
< [hidden email]> wrote:
> Hi.
>
> [Please do not toppost.]
>
> On Mon, Nov 19, 2012 at 09:17:39PM 0500, Konstantin Berlin wrote:
>> That would solve the problem. Seems a bit messy though, since you now have two conflicting stopping conditions, one you provide in the checker implementation, the other in the function call. I am not clear why if you are throwing an exception you don't provide more information inside it, so the user can adjust their logic based on it.
>
> I rather think that it is a straightforward usage of the current API.
> [And it is much cleaner to always get the result as the output of the
> "optimize".]
>
> Why "conflicting"? One is assimilated to a solution (as you requested) and
> the other will generate an exception (signalling a failure).
>
> As already pointed out, the exception is not the best place to store this
> information: in CM, exceptions are assumed to signal incompatibilites of the
> input to a method and that method's expectations.
>
>> Also, it seems like some classes, like CMAESOptimizer, also take number iterations in their constructor.
>
> IMO, that's a bug:
> https://issues.apache.org/jira/browse/MATH873> [See my remark in the other thread that was started to deal with your request
> (with subject "Making last iteration data available when iterative algorithms
> fail to converge").]
>
>
> Best regards,
> Gilles
>
>>
>> On Nov 19, 2012, at 7:36 PM, Gilles Sadowski < [hidden email]> wrote:
>>
>>> Hello.
>>>
>>> On Mon, Nov 19, 2012 at 04:47:08PM 0500, Bruce A Johnson wrote:
>>>> On Nov 17, 2012, at 6:57 PM, Konstantin Berlin wrote:
>>>>
>>>>> There are numerous examples when the optimization might not have
>>>>> converged to the stopping condition, but the minimum discovered point
>>>>> is better than the starting point that was initially provided. The
>>>>> user should have the ability to at least decide if it is good enough,
>>>>> or use it as a starting point into a different optimization run. This
>>>>> is the behavior that most optimizers have, including MATLAB.
>>>>>
>>>>> This functionality important when you have a time constraint, where any improvement in result is better than no information at all. There are also nonconvex regions of a function where the convergence rate is very slow, and you might want to stop.
>>>>>
>>>>> A simple modification for now is to just let
>>>>> org.apache.commons.math3.exception.TooManyEvaluationsException contain
>>>>> within it the best found solution so far.
>>>>>
>>>>
>>>> I agree that there are many situations where one is constrained by the amount of resources one can throw at a problem and the user would like the best solution given those resources (here, the maximum number of iterations) so I fully support a solution that lets us get the best solution that has been achieved. I wonder if it is possible, instead of using the exception, to specify a convergence checker that checks not only the tolerances, but could instead return success if the maximum number of iterations has been exceeded. I realize this will look inappropriate to some, but using the convergence checker in this way would be entirely up to the person calling it. If you can get back the number of iterations used, then presumably if that is equal to the max you'll know it hasn't really converged.
>>>
>>> Your proposal is brilliant: problem solved (IMHO)!
>>> The convergence checkers are there to allow users to define their notion
>>> of convergence; and the API provides the current iteration count.
>>>
>>> We could add this feature in CM's "Simple...Checker" implementations by
>>> adding one constructor (and an ifblock in the "converged" method) without
>>> touching the optimizers.
>>> The user who passes such a checker will indeed be aware of the implication:
>>> the algorithm might quietly terminate before reaching the optimal solution.
>>>
>>> Shall I
>>> 1. resolve the issue as "Not a Problem" (the user should implement his own
>>> custom checker) or
>>> 2. add the constructors in the checkers defined within CM and resolve as
>>> "Fixed"?
>>>
>>>
>>> Regards,
>>> Gilles
>>>
>



On Tue, Nov 20, 2012 at 05:08:23AM -0500, Konstantin Berlin wrote:
> I am ok with option 2.
Thanks. I'll add new constructors if nobody has objections. Please note that
by using "custom" checkers (even those defined in CM), you make some
algorithms deviate from their "standard" behaviour.
>
> I guess my issue goes to my problem with the general API. The number
> of iterations is a stopping condition, as well as all the other
> conditions that are for some reason called convergence conditions. The
> number of iterations condition is singled out as "bad", hence it
> throws an exception, while others don't through an exception, and I
> guess are considered "good".
Reaching the maximal count is "bad" because it means that the algorithm
could not fulfill the (other) input requirements.
The "maxEval" parameter was intended as a safeguard to prevent the
algorithm from _unexpectedly_ running too long. If the user knows that it
takes a long time to find the solution with the required accuracy, he can
increase "maxEval".
As I said previously, your request introduces a conflict: It becomes
possible that none of the convergence requirements is met.
> However, a stopping condition, even if you exclude the number of
> iterations condition, does not imply convergence, aka that you found
> an min point. In a convex solver you have to look at the first and
> second order optimality condition in order to declare "success". A
> typical stop of the algorithm could be that it has not made enough
> progress rather than found the min, but with the current framework
> this is also considered "good", with the user does not have a clue
> that something went bad unless they take the solution and start
> checking themselves.
But that's part of the behaviour inherent to some algorithms: each will
terminate as it wishes (i.e. when its internal convergence criteria are satisfied).
>
> That is why if you look at matlab fminunc, it contains about 5
> different flags for termination.
That would probably require a new API.
Can you give some examples of usage?
Concrete proposals are always welcome and would hopefully open a discussion.
> This probably goes along with the point in the 873 bug that talks
> about internal vs external criteria.
Not sure. There, "internal" or "external" (for lack of better words) are
both "legitimate". The number of iterations is not (IMHO), because it's
sort of "unstable": If you increase the number of evaluations (or
iterations), you can get another solution.
I think that this difference in nature is a property that could serve to
define an API (as was done with the current one).
Regards,
Gilles



On Nov 20, 2012, at 7:56 AM, Gilles Sadowski < [hidden email]> wrote:
> On Tue, Nov 20, 2012 at 05:08:23AM 0500, Konstantin Berlin wrote:
>> I am ok with option 2.
>
> Thanks. I'll add new constructors if nobody has objections. Please note that
> by using "custom" checkers (even those defined in CM), you make some
> algorithms deviate from their "standard" behaviour.
>
>>
>> I guess my issue goes to my problem with the general API. The number
>> of iterations is a stopping condition, as well as all the other
>> conditions that are for some reason called convergence conditions. The
>> number of iterations condition is singled out as "bad", hence it
>> throws an exception, while others don't through an exception, and I
>> guess are considered "good".
>
> Reaching the maximal count is "bad" because it means that the algorithm
> could not fulfill the (other) input requirements.
> The "maxEval" parameters was intended as a safeguard to prevent the
> algorithm from _unexpectedly_ running too long. If the user knows that it
> takes a long time to find the solution with the required accuracy, he can
> increase "maxEval".
My point is that termination based on a criterion that you actually gave yourself is not an exception. It is expected behavior
and should be treated as such. There are other conditions that also prevent you from converging to a correct solution of the requested
tolerance. Like I mentioned, one such possible event is that you are not making enough progress. That does not imply that you are some
requested epsilon away from the minimum, but only says that you are getting there too slowly. All tolerance conditions are in some ways about
a time constraint, otherwise you would always set them to maximum precision. I think the exception should not be thrown. Alternatively, an exception should
be thrown if the target function returns NaNs or itself throws an exception. I don't think these conditions are checked for right now.
I agree that for some methods you can't tell, while for convex solvers you can look at the first-order optimality condition and so on. This information should be provided to the user on termination of the optimization and not hidden.
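The first-order check mentioned here can also be done by the caller, independently of the optimizer's own stopping condition. A minimal sketch (the helper name and the choice of the infinity norm are assumptions for illustration):

```java
import java.util.function.Function;

final class OptimalityCheck {
    // At an unconstrained minimum the gradient should (approximately) vanish,
    // so a small gradient infinity-norm at the returned point is evidence of
    // genuine convergence rather than an early stop.
    static boolean firstOrderOptimal(Function<double[], double[]> gradient,
                                     double[] x, double tol) {
        double norm = 0.0;
        for (double g : gradient.apply(x)) {
            norm = Math.max(norm, Math.abs(g));
        }
        return norm <= tol;
    }
}
```

For example, for f(x) = x² with gradient 2x, the check passes at x = 0 and fails at x = 1.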
>
> As I said previously, your request introduces a conflict: It becomes
> possible that none of the convergence requirements is met.
>
>> However, a stopping condition, even if you exclude the number of
>> iterations condition, does not imply convergence, aka that you found
>> an min point. In a convex solver you have to look at the first and
>> second order optimality condition in order to declare "success". A
>> typical stop of the algorithm could be that it has not made enough
>> progress rather than found the min, but with the current framework
>> this is also considered "good", with the user does not have a clue
>> that something went bad unless they take the solution and start
>> checking themselves.
>
> But that's part of the behaviour inherent to some algorithm. However it will
> terminate as it wishes (i.e. satisfying its internal convergence criteria).
>
>>
>> That is why if you look at matlab fminunc, it contains about 5
>> different flags for termination.
>
> That would probably require a new API.
> Can you give some examples of usage?
> Concrete proposals are always welcome and would hopefully open a discussion.
I think the information provided on return by the optimizers should be reworked for the better. This would require some thinking, so I will hold off on proposing something now, due to limited time that I have to fully dedicate to this problem. I think MATLAB should be a good guide on how to organize optimizations, since it has a very well regarded library. I would be happy to comment on any proposals from people who might have some time to write them.
On this note I would like to say that the directory structure of the optimizers is somewhat of a puzzle. The "general" directory should be renamed to "convex", or something like it. In any case, nonlinear least-squares methods are not general methods, but a special case of Newton-based convex optimization. I think the library can benefit from better organization here.
Perhaps something like derivative-based vs. direct vs. linear.
>
>> This probably goes along with the point in the 873 bug that talks
>> about internal vs external criteria.
>
> Not sure. There, "internal" or "external" (for lack of better words) are
> both "legitimate". The number of iterations is not (IMHO), because it's
> sortof "unstable": If you increase the number of evaluations (or
> iterations), you can get another solution.
> I think that this difference in nature is a property that could serve to
> define an API (as was done with the current one).
>
There are other "unstable" termination conditions, like I mentioned above. Throwing an exception in this one case only is misleading. But in any case, termination of the algorithm based on a stopping condition is expected behavior.
My thoughts,
Konstantin



On Tue, Nov 20, 2012 at 11:59:07AM -0500, Konstantin Berlin wrote:
>
> On Nov 20, 2012, at 7:56 AM, Gilles Sadowski < [hidden email]> wrote:
>
> > On Tue, Nov 20, 2012 at 05:08:23AM 0500, Konstantin Berlin wrote:
> >> I am ok with option 2.
> >
> > Thanks. I'll add new constructors if nobody has objections. Please note that
> > by using "custom" checkers (even those defined in CM), you make some
> > algorithms deviate from their "standard" behaviour.
> >
> >>
> >> I guess my issue goes to my problem with the general API. The number
> >> of iterations is a stopping condition, as well as all the other
> >> conditions that are for some reason called convergence conditions. The
> >> number of iterations condition is singled out as "bad", hence it
> >> throws an exception, while others don't through an exception, and I
> >> guess are considered "good".
> >
> > Reaching the maximal count is "bad" because it means that the algorithm
> > could not fulfill the (other) input requirements.
> > The "maxEval" parameters was intended as a safeguard to prevent the
> > algorithm from _unexpectedly_ running too long. If the user knows that it
> > takes a long time to find the solution with the required accuracy, he can
> > increase "maxEval".
>
> My point is that termination based on a criteria that you actually gave yourself is not an exception. It is expected behavior
> and should be treated as such.
The current interpretation of "maxEval" is: If the algorithm convergence
criteria are not satisfied after "maxEval" evaluations, it is considered a
failure (and an exception is raised).
I consider this safer than always returning a value without the user knowing
whether it fulfills the requirements or not.
> There are other conditions that also prevent you from converging to a correct solution of the requested
> tolerance. Like I mentioned, one such possible event is that you are not making enough progress. That does not imply that you are some
> requested epsilon away from the minimum, but only says that you are getting there too slowly.
And what happens when you call CM's "optimize" method in such cases?
> All tolerance conditions in some ways are about
> a time constraint, otherwise you would always set them to maximum precision. I think the exception should not be thrown.
Given the _definition_ of "maxEval", the exception must be thrown.
> Alternatively an exception should
> be thrown if the target functions gets NaNs or throws an exception.
> I don't think these conditions are checked for right now.
Did you try?
CM does not interfere with the user's function; it's up to the user to ensure
that the function behaves nicely. If it returns NaN, nasty things will occur at
some point, as usual. If it throws an exception, it will abort the execution, as
usual.
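The earlier "fail fast on NaN" suggestion could nonetheless be implemented by the user as a wrapper around the objective. The interface below mirrors CM's MultivariateFunction but is re-declared so the example stands alone, and the guard class itself is hypothetical:

```java
// Minimal stand-in for org.apache.commons.math3.analysis.MultivariateFunction.
interface MultivariateFunction {
    double value(double[] point);
}

// Wraps an objective so that a non-finite value aborts immediately with a
// clear error, instead of letting NaN propagate silently through an optimizer.
final class FiniteValueGuard implements MultivariateFunction {
    private final MultivariateFunction inner;

    FiniteValueGuard(MultivariateFunction inner) { this.inner = inner; }

    @Override
    public double value(double[] point) {
        double v = inner.value(point);
        if (Double.isNaN(v) || Double.isInfinite(v)) {
            throw new IllegalStateException("Objective returned non-finite value: " + v);
        }
        return v;
    }
}
```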
>
> I agree that for some methods you can't tell, while for convex solvers you can look at firstorder optimization condition and so on. This information should be provided to the user on termination of the optimization and not hidden.
An example?
>
> >
> > As I said previously, your request introduces a conflict: It becomes
> > possible that none of the convergence requirements is met.
> >
> >> However, a stopping condition, even if you exclude the number of
> >> iterations condition, does not imply convergence, aka that you found
> >> an min point. In a convex solver you have to look at the first and
> >> second order optimality condition in order to declare "success". A
> >> typical stop of the algorithm could be that it has not made enough
> >> progress rather than found the min, but with the current framework
> >> this is also considered "good", with the user does not have a clue
> >> that something went bad unless they take the solution and start
> >> checking themselves.
> >
> > But that's part of the behaviour inherent to some algorithm. However it will
> > terminate as it wishes (i.e. satisfying its internal convergence criteria).
> >
> >>
> >> That is why if you look at matlab fminunc, it contains about 5
> >> different flags for termination.
> >
> > That would probably require a new API.
> > Can you give some examples of usage?
> > Concrete proposals are always welcome and would hopefully open a discussion.
>
> I think the information provided on return by the optimizers should be reworked for the better. This would require some thinking, so I will hold off on proposing something now, due to limited time that I have to fully dedicate to this problem. I think MATLAB should be a good guide on how to organize optimizations, since it has a very well regarded library. I would be happy to comment on any proposals from people who might have some time to write them.
If you are not satisfied with the API, the initiative to make it better
is on you. :)
>
> On this note I would like to say that the directory structure of optimizers are somewhat of a puzzle. The "general" directory should be renamed to "convex", or something like it. In any case nonlinear leastsquares methods are not general methods, but a special case of a newtonbased convex optimization. I think the library can benefit from better organization here.
You are perfectly right; it has already been pointed out several times.
This could be done in release 4.0. Would you open a ticket on the bug
tracking system?
>
> Perhaps something like derivative based vs. direct vs. linear.
"derivative based" cannot be a Java package name... ;)
>
> >
> >> This probably goes along with the point in the 873 bug that talks
> >> about internal vs external criteria.
> >
> > Not sure. There, "internal" or "external" (for lack of better words) are
> > both "legitimate". The number of iterations is not (IMHO), because it's
> > sortof "unstable": If you increase the number of evaluations (or
> > iterations), you can get another solution.
> > I think that this difference in nature is a property that could serve to
> > define an API (as was done with the current one).
> >
>
> There are other "unstable" termination conditions, like I mentioned above. Throwing exception in this one case only is misleading.
Please provide a concrete example of misleading behaviour.
> But in any case, termination of the algorithm based on a stopping condition is expected behavior.
Fortunately, it can be easily implemented as proposed. So, we keep the
current semantics of "maxEval", while providing the feature you need.
Regards,
Gilles


