configurable tolerance for gradient stopping condition (#121) #585
Conversation
src/api/optimize.c
Outdated
@@ -556,6 +556,9 @@ static nlopt_result nlopt_optimize_(nlopt_opt opt, double *x, double *minf)
     stop.ftol_abs = opt->ftol_abs;
     stop.xtol_rel = opt->xtol_rel;
     stop.xtol_abs = opt->xtol_abs;
+    stop.tolg = (opt->tolg > 0.)
+        ? opt->tolg
+        : 1e-8; /* SGJ: was 1e-6, but this sometimes stops too soon */
Let's use the get_param/set_param API for this instead of adding it for all algorithms;
that way it will also be accessible from the language bindings.
And please add a documentation entry.
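For illustration, a minimal sketch of what this suggestion amounts to, using NLopt's generic-parameter functions `nlopt_set_param` / `nlopt_get_param`; the two wrapper functions here are hypothetical names, not code from the PR:

```c
#include <nlopt.h>

/* User side (equally reachable from the language bindings):
 * override the gradient tolerance by name. */
void example_set_tolg(nlopt_opt opt)
{
    nlopt_set_param(opt, "tolg", 1e-10);
}

/* Core side: read the parameter back, with 0. as a "not set" sentinel
 * that the algorithm later replaces by its own default. */
double example_get_tolg(nlopt_opt opt)
{
    return nlopt_get_param(opt, "tolg", 0.);
}
```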
Indeed, that's what I was looking for!
Move 'tolg' to generic parameters
src/api/optimize.c
Outdated
@@ -735,7 +735,7 @@ static nlopt_result nlopt_optimize_(nlopt_opt opt, double *x, double *minf)
     case NLOPT_LD_TNEWTON_PRECOND:
     case NLOPT_LD_TNEWTON_PRECOND_RESTART:
 #ifdef NLOPT_LUKSAN
-        return luksan_pnet(ni, f, f_data, lb, ub, x, minf, &stop, opt->vector_storage, 1 + (algorithm - NLOPT_LD_TNEWTON) % 2, 1 + (algorithm - NLOPT_LD_TNEWTON) / 2);
+        return luksan_pnet(ni, f, f_data, lb, ub, x, minf, &stop, opt->vector_storage, 1 + (algorithm - NLOPT_LD_TNEWTON) % 2, 1 + (algorithm - NLOPT_LD_TNEWTON) / 2, opt->get_param("tolg", 0.));
Maybe we could set the default value here and simplify the rest, instead of using 0, which only means "use the default that will be set later".
I considered your idea; what stopped me from choosing it was:
- using a default here won't cover the case of the user entering an invalid value (a negative or zero value is currently switched back to the default)
- I find that the default value belongs more legitimately to the algorithm than to the generic caller
- it is technically cumbersome to define a variable inside a `case` (you need braces; see the sketch after this comment)

None of these items is a real issue on its own; their sum oriented me toward my current proposal, but I am fine with the other way if you prefer it.
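A sketch of that third point (hypothetical code, not from the PR): a `case` label cannot be followed directly by a declaration in C prior to C23, so naming the default locally inside the `switch` forces an extra block:

```c
#include <nlopt.h>

/* Hypothetical helper, only to show the syntax constraint. */
static double default_tolg(nlopt_opt opt, nlopt_algorithm algorithm)
{
    switch (algorithm) {
    case NLOPT_LD_TNEWTON: {  /* braces open a block so the local can be declared */
        double tolg = nlopt_get_param(opt, "tolg", 1e-8);
        return tolg;
    }
    default:
        return 0.;
    }
}
```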
ok, but it doesn't build though
(shame on me!) my apologies...
#121
The Luksan algorithms (BFGS, truncated Newton, shifted variable metric) use a hard-coded value for their gradient stopping condition.
This PR exposes that tolerance through the API, with the current hard-coded value as the default.
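For reference, a sketch of how the exposed parameter could be used from C; the quadratic objective is a placeholder, while `"tolg"` is the parameter name this PR introduces:

```c
#include <nlopt.h>

/* Placeholder objective: f(x) = x^2, with its gradient. */
static double myfunc(unsigned n, const double *x, double *grad, void *data)
{
    if (grad) grad[0] = 2.0 * x[0];
    return x[0] * x[0];
}

int main(void)
{
    double x[1] = { 3.0 }, minf;
    nlopt_opt opt = nlopt_create(NLOPT_LD_TNEWTON, 1); /* a Luksan algorithm */
    nlopt_set_min_objective(opt, myfunc, NULL);
    /* Tighten the gradient stopping tolerance; omitting this call keeps
     * the previous hard-coded value as the default. */
    nlopt_set_param(opt, "tolg", 1e-10);
    nlopt_optimize(opt, x, &minf);
    nlopt_destroy(opt);
    return 0;
}
```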