On 07/01/2016 16:16, Carlos M. Casas Cuadrado wrote:

> Hi all,

Hi Carlos,

> there is an interface for univariate solvers and several implementations
> of it. Is it planned to add an interface for multivariate solvers (for
> zeroes of functions R^n -> R^n, i.e. a set of n multivariate functions)?

No. For multivariate functions, it is often not realistic to search
directly for multidimensional zeros: one coordinate may be zero while
the others are not. So the classical way is either to separate the
variables or to use an optimizer and minimize the norm of the output
vector. If there really is a point where all output coordinates are
zero, an optimizer applied to the norm should find it.
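To make the norm-minimization idea concrete, here is a minimal self-contained Java sketch (not Commons Math code; the example system F and all names are mine for illustration). It searches for a zero of F: R^2 -> R^2 by minimizing ||F(x)||^2 with a naive finite-difference gradient descent:

```java
// Find a zero of F: R^2 -> R^2 by minimizing the squared norm of F.
// Illustrative only; in practice you would plug the squared norm into
// one of the library's optimizers instead of this hand-rolled descent.
public class NormMinimizer {

    // Example system: F1 = x^2 + y^2 - 4, F2 = x - y
    // (a common zero exists at x = y = sqrt(2)).
    static double[] f(double[] p) {
        return new double[] { p[0] * p[0] + p[1] * p[1] - 4.0, p[0] - p[1] };
    }

    // Cost to minimize: ||F(p)||^2, which is zero exactly at a common root.
    static double squaredNorm(double[] p) {
        double s = 0.0;
        for (double c : f(p)) {
            s += c * c;
        }
        return s;
    }

    // Plain gradient descent with forward-difference gradients.
    static double[] minimize(double[] start, double step, int iterations) {
        double[] p = start.clone();
        double h = 1.0e-7;
        for (int it = 0; it < iterations; it++) {
            double base = squaredNorm(p);
            double[] grad = new double[p.length];
            for (int i = 0; i < p.length; i++) {
                double saved = p[i];
                p[i] = saved + h;
                grad[i] = (squaredNorm(p) - base) / h;
                p[i] = saved;
            }
            for (int i = 0; i < p.length; i++) {
                p[i] -= step * grad[i];
            }
        }
        return p;
    }
}
```

Starting from (1, 1), the descent converges to (sqrt(2), sqrt(2)), where both output coordinates vanish.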

>

> Another issue: is it foreseen to add any method capable of optimising
> a non-linear cost function with non-linear equality and inequality
> constraints? Even if it's just a simple gradient method that would
> only find local minima under very specific conditions...

This is something we have wanted for years ...
Up to now, we have not implemented it. There are several workarounds,
but they are only workarounds, not perfect solutions.

The first workaround is for simple constraints (boundary constraints,
linear constraints, equality constraints). For these, instead of
solving the raw problem you can wrap it in a transformation layer
that takes care of the constraint and presents to the optimizer
an unconstrained view of the transformed problem. As an example, if
the initial problem has a variable e that should remain between 0 and 1,
the transformed problem will instead use a variable transformedE between
minus infinity and plus infinity, with a mapping function from (0; 1) to
(-inf; +inf) such as logit. There are also possible mappings for
semi-infinite intervals. This works quite well, especially when the
optimum is not at the constraint. Numerical problems arise when you
come close to it, which obviously happens if the optimum is on the
constraint.
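The mapping trick can be sketched in a few lines of Java (illustrative only, not the library's adapter classes; all names are mine). The optimizer sees the unbounded variable transformedE, while the cost function is always evaluated at e = sigmoid(transformedE), which is guaranteed to stay in (0, 1):

```java
// Bounded-to-unbounded mapping via logit/sigmoid, so an unconstrained
// optimizer can handle a variable restricted to the open interval (0, 1).
public class BoundedMapping {

    // logit: (0, 1) -> (-inf, +inf)
    static double toUnbounded(double e) {
        return Math.log(e / (1.0 - e));
    }

    // sigmoid (inverse of logit): (-inf, +inf) -> (0, 1)
    static double toBounded(double t) {
        return 1.0 / (1.0 + Math.exp(-t));
    }

    // Wrap a cost defined on (0, 1) so it can be evaluated anywhere on R:
    // this is the "unconstrained view" handed to the optimizer.
    static double unconstrainedCost(java.util.function.DoubleUnaryOperator cost,
                                    double transformedE) {
        return cost.applyAsDouble(toBounded(transformedE));
    }
}
```

Note that logit blows up at 0 and 1, which is the numerical trouble mentioned above when the optimum sits on the constraint.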

The second workaround is for non-linear constraints. Here, it is even
more awkward. The principle in this case is to use an optimizer based
on a direct method (i.e. one that uses only function values and not
derivatives), and to implement the constraint as an additive penalty
on the cost (which may be linear, or finite, or even infinite depending
on your problem). This is typically used with methods like Torczon's
multidirectional search or CMAES. In fact, CMAES is really well suited
for this kind of application, as it was designed to handle problems
with a large number of local minima and discontinuities. I have seen a
presentation by the CMAES inventor in Toulouse where he showed slides
with optimizations of cost curves looking exactly like white noise.
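Here is a minimal sketch of the penalty idea on a hypothetical one-dimensional problem (illustrative Java, not the library API; the naive grid search merely stands in for a real direct-search optimizer such as CMAES or Torczon's method):

```java
// Fold a non-linear inequality constraint g(x) <= 0 into the cost as an
// additive penalty, so a derivative-free optimizer can minimize it directly.
public class PenalizedCost {

    // Toy problem: minimize f(x) = (x - 3)^2 subject to g(x) = x - 2 <= 0.
    // The unconstrained minimum is at x = 3; the constrained one is at x = 2.
    static double f(double x) { return (x - 3.0) * (x - 3.0); }
    static double g(double x) { return x - 2.0; }

    // Linear penalty proportional to the violation; it could also be made
    // infinite (Double.POSITIVE_INFINITY when violated), depending on the problem.
    static double penalized(double x, double weight) {
        double violation = Math.max(0.0, g(x));
        return f(x) + weight * violation;
    }

    // Naive grid search standing in for a direct-search optimizer:
    // it only ever evaluates the penalized cost, never its derivatives.
    static double argmin(double lo, double hi, int samples, double weight) {
        double best = lo;
        double bestCost = Double.POSITIVE_INFINITY;
        for (int i = 0; i <= samples; i++) {
            double x = lo + (hi - lo) * i / samples;
            double c = penalized(x, weight);
            if (c < bestCost) {
                bestCost = c;
                best = x;
            }
        }
        return best;
    }
}
```

With a sufficiently large weight, the penalty pushes the minimizer back from the unconstrained optimum at 3 onto the constraint boundary at 2.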

So these are workarounds. We would love to have a real implementation
of non-linear constrained optimization (KKT conditions, Lagrange
multipliers, all this stuff). As always, contributions are welcome ...

Best regards,

Luc

>

> Regards,

> Carlos.

>
