Commit 1d41a65 (parent 7d09398)

Doc: Mention MLSL/AGS/STOGO stop criterion

Closes #187

File tree: 1 file changed (+5, −1 lines)
doc/docs/NLopt_Algorithms.md

Lines changed: 5 additions & 1 deletion
```diff
@@ -97,7 +97,7 @@ The local-search portion of MLSL can use any of the other algorithms in NLopt, a
 
 LDS-based MLSL is specified as `NLOPT_G_MLSL_LDS`, while the original non-LDS MLSL (using pseudo-random numbers, currently via the [Mersenne twister](https://en.wikipedia.org/wiki/Mersenne_twister) algorithm) is indicated by `NLOPT_G_MLSL`. In both cases, you must specify the [local optimization](NLopt_Reference.md#localsubsidiary-optimization-algorithm) algorithm (which can be gradient-based or derivative-free) via `nlopt_opt_set_local_optimizer`.
 
-**Note**: If you do not set a stopping tolerance for your local-optimization algorithm, MLSL defaults to ftol_rel=10<sup>−15</sup> and xtol_rel=10<sup>−7</sup> for the local searches. Note that it is perfectly reasonable to set a relatively large tolerance for these local searches, run MLSL, and then at the end run another local optimization with a lower tolerance, using the MLSL result as a starting point, to "polish off" the optimum to high precision.
+**Note**: If you do not set a maximum number of evaluations or a maximum runtime, the algorithm will run indefinitely. If you do not set a stopping tolerance for your local-optimization algorithm, MLSL defaults to ftol_rel=10<sup>−15</sup> and xtol_rel=10<sup>−7</sup> for the local searches. Note that it is perfectly reasonable to set a relatively large tolerance for these local searches, run MLSL, and then at the end run another local optimization with a lower tolerance, using the MLSL result as a starting point, to "polish off" the optimum to high precision.
 
 By default, each iteration of MLSL samples 4 random new trial points, but this can be changed with the [nlopt_set_population](NLopt_Reference.md#stochastic-population) function.
```

```diff
@@ -123,6 +123,8 @@ Some references on StoGO are:
 
 Only bound-constrained problems are supported by this algorithm.
 
+**Note**: If you do not set a maximum number of evaluations or a maximum runtime, the algorithm will run indefinitely.
+
 ### AGS
 
 This algorithm is adapted from [this repo](https://github.com/sovrasov/glob_search_nlp_solver).
```
```diff
@@ -145,6 +147,8 @@ References:
 
 - [Implementation](https://github.com/sovrasov/multicriterial-go) of AGS for constrained multi-objective problems.
 
+**Note**: If you do not set a maximum number of evaluations or a maximum runtime, the algorithm will run indefinitely.
+
 ### ISRES (Improved Stochastic Ranking Evolution Strategy)
 
 This is my implementation of the "Improved Stochastic Ranking Evolution Strategy" (ISRES) algorithm for nonlinearly-constrained global optimization (or at least semi-global; although it has heuristics to escape local optima, I'm not aware of a convergence proof), based on the method described in:
```
