doc/docs/NLopt_Algorithms.md (5 additions, 1 deletion)

@@ -97,7 +97,7 @@ The local-search portion of MLSL can use any of the other algorithms in NLopt, a
LDS-based MLSL is specified as `NLOPT_G_MLSL_LDS`, while the original non-LDS MLSL (using pseudo-random numbers, currently via the [Mersenne twister](https://en.wikipedia.org/wiki/Mersenne_twister) algorithm) is indicated by `NLOPT_G_MLSL`. In both cases, you must specify the [local optimization](NLopt_Reference.md#localsubsidiary-optimization-algorithm) algorithm (which can be gradient-based or derivative-free) via `nlopt_set_local_optimizer`.

**Note**: If you do not set a maximum number of evaluations or a maximum runtime, the algorithm will run indefinitely. If you do not set a stopping tolerance for your local-optimization algorithm, MLSL defaults to ftol_rel=10<sup>−15</sup> and xtol_rel=10<sup>−7</sup> for the local searches. Note that it is perfectly reasonable to set a relatively large tolerance for these local searches, run MLSL, and then at the end run another local optimization with a lower tolerance, using the MLSL result as a starting point, to "polish off" the optimum to high precision.

By default, each iteration of MLSL samples 4 random new trial points, but this can be changed with the [nlopt_set_population](NLopt_Reference.md#stochastic-population) function.
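
As a rough illustration (a sketch, not taken from the NLopt manual), the pieces above fit together as follows in C. The 2-D objective `myfunc` is a made-up example, and BOBYQA is just one possible choice of local algorithm:

```c
#include <math.h>
#include <stdio.h>
#include <nlopt.h>

/* Hypothetical multimodal 2-D objective, purely for illustration. */
static double myfunc(unsigned n, const double *x, double *grad, void *data)
{
    (void)n; (void)grad; (void)data;  /* BOBYQA below needs no gradient */
    return sin(3 * x[0]) + x[0] * x[0] + cos(2 * x[1]) + x[1] * x[1];
}

int main(void)
{
    double lb[2] = {-5.0, -5.0}, ub[2] = {5.0, 5.0};

    nlopt_opt opt = nlopt_create(NLOPT_G_MLSL_LDS, 2);  /* LDS-based MLSL */
    nlopt_set_min_objective(opt, myfunc, NULL);
    nlopt_set_lower_bounds(opt, lb);
    nlopt_set_upper_bounds(opt, ub);

    /* Subsidiary local optimizer with a deliberately coarse tolerance;
       nlopt_set_local_optimizer copies its settings into opt, so the
       local object can be destroyed afterwards. */
    nlopt_opt local = nlopt_create(NLOPT_LN_BOBYQA, 2);
    nlopt_set_ftol_rel(local, 1e-4);
    nlopt_set_local_optimizer(opt, local);
    nlopt_destroy(local);

    nlopt_set_population(opt, 4);   /* 4 trial points per iteration (the default) */
    nlopt_set_maxeval(opt, 10000);  /* without this (or a maxtime), MLSL never stops */

    double x[2] = {1.0, 1.0}, minf;
    if (nlopt_optimize(opt, x, &minf) > 0)
        printf("minimum %g at (%g, %g)\n", minf, x[0], x[1]);
    /* One could now "polish" x with a stricter standalone local run. */
    nlopt_destroy(opt);
    return 0;
}
```
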
@@ -123,6 +123,8 @@ Some references on StoGO are:
Only bound-constrained problems are supported by this algorithm.

**Note**: If you do not set a maximum number of evaluations or a maximum runtime, the algorithm will run indefinitely.
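
A sketch of one way to cap the runtime (reusing the bounds from the MLSL example above; `myfunc_grad` is an assumed objective that also fills `grad` when it is non-NULL, since StoGO is gradient-based):

```c
/* Sketch only: myfunc_grad must compute the gradient when grad != NULL. */
nlopt_opt opt = nlopt_create(NLOPT_GD_STOGO, 2);
nlopt_set_min_objective(opt, myfunc_grad, NULL);
nlopt_set_lower_bounds(opt, lb);   /* bound constraints are mandatory */
nlopt_set_upper_bounds(opt, ub);
nlopt_set_maxtime(opt, 10.0);      /* stop after 10 s of wall-clock time */
double x[2] = {1.0, 1.0}, minf;
nlopt_optimize(opt, x, &minf);
nlopt_destroy(opt);
```
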
### AGS

This algorithm is adapted from [this repo](https://github.com/sovrasov/glob_search_nlp_solver).

@@ -145,6 +147,8 @@ References:
- [Implementation](https://github.com/sovrasov/multicriterial-go) of AGS for constrained multi-objective problems.

**Note**: If you do not set a maximum number of evaluations or a maximum runtime, the algorithm will run indefinitely.

### ISRES (Improved Stochastic Ranking Evolution Strategy)

This is my implementation of the "Improved Stochastic Ranking Evolution Strategy" (ISRES) algorithm for nonlinearly-constrained global optimization (or at least semi-global; although it has heuristics to escape local optima, I'm not aware of a convergence proof), based on the method described in: