
Commit a19fc51

[Documentation] lvar/uvar and lcon/ucon explained (#166)
* lvar/uvar and lcon/ucon explained
* documentation update
* typo fix
* typo fix
1 parent 4798cc2 commit a19fc51

File tree

3 files changed: +332 −2 lines changed

docs/src/guide.jl

Lines changed: 35 additions & 1 deletion
@@ -21,26 +21,57 @@ using ExaModels
# Now, all the functions necessary for creating a model are imported into `Main`.

# ## ExaCore

# An `ExaCore` object can be created simply by (Step 1):

c = ExaCore()

# This is where our optimization model information will be progressively stored. This object is not yet an `NLPModel`, but it will store all the information necessary to build one.

# ## Variables

# Now, let's create the optimization variables. From the problem definition, we can see that we will need $N$ scalar variables. We will choose $N=10$ and create the variable $x\in\mathbb{R}^{N}$ with the following command:

N = 10
x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))

# This creates the variable `x`, which we will be able to refer to when we create constraints and objectives. It also records the necessary information in the `ExaCore` object so that an optimization model can later be created from it. Observe that we have used the keyword argument `start` to specify the initial guess for the solution. The variable lower and upper bounds can be specified in a similar manner. For example, to set the lower bound of the variable `x` to 0.0 and the upper bound to 10.0:
# ```julia
# x = variable(c, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N), lvar = 0.0, uvar = 10.0)
# ```
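# The bounds above are scalars applied to every entry of `x`. If each entry needs its own bound, it seems plausible that `lvar` and `uvar` also accept vectors of length `N`; this is an assumption about the API, not something demonstrated in this guide:
# ```julia
# ## hypothetical sketch: per-element bounds, assuming lvar/uvar accept vectors
# x = variable(c, N; lvar = zeros(N), uvar = fill(10.0, N))
# ```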

# ## Objective

# The objective can be set as follows:

objective(c, 100 * (x[i-1]^2 - x[i])^2 + (x[i-1] - 1)^2 for i = 2:N)

# !!! note
#     Note that the terms here are summed, without explicitly using the `sum( ... )` syntax.

# ## Constraints

# The constraints can be set as follows:

constraint(
    c,
    3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
    x[i]exp(x[i] - x[i+1]) - 3 for i = 1:(N-2)
)

# Note that `ExaModels` always assumes that constraints are doubly-bounded inequalities. That is, the constraint above is treated as
# ```math
# g^\flat \leq \left[g^{(m)}(x; q_j)\right]_{j\in [J_m]} + \sum_{n\in [N_m]}\sum_{k\in [K_n]}h^{(n)}(x; s^{(n)}_{k}) \leq g^\sharp
# ```
# where $g^\flat$ and $g^\sharp$ are the lower and upper bounds of the constraint, respectively. In this case, both bounds are zero, i.e., $g^\flat = g^\sharp = 0$, so the constraint is an equality.

# You can use the keyword arguments `lcon` and `ucon` to specify the lower and upper bounds of the constraints, respectively. For example, to set the lower bound of the constraint to -1 and the upper bound to 1:

constraint(
    c,
    3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
    x[i]exp(x[i] - x[i+1]) - 3 for i = 1:(N-2);
    lcon = -1.0, ucon = 1.0
)

# To create a single-bounded constraint, set `lcon` to `-Inf` or `ucon` to `Inf`. For example, to keep the lower bound at -1 and remove the upper bound:

constraint(
    c,
    3x[i+1]^3 + 2 * x[i+2] - 5 + sin(x[i+1] - x[i+2])sin(x[i+1] + x[i+2]) + 4x[i+1] -
    x[i]exp(x[i] - x[i+1]) - 3 for i = 1:(N-2);
    lcon = -1.0, ucon = Inf
)
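# Setting `lcon` and `ucon` to the same value yields an equality constraint with that right-hand side. A minimal sketch (the linear constraint here is illustrative only, not part of the original problem):
# ```julia
# ## hypothetical sketch: x[i] + x[i+1] = 1, expressed as a doubly-bounded constraint
# constraint(c, x[i] + x[i+1] for i = 1:(N-1); lcon = 1.0, ucon = 1.0)
# ```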

# ## ExaModel

# Finally, we are ready to create an `ExaModel` from the data we have collected in the `ExaCore`. Since the `ExaCore` includes all the necessary information, we can do this simply by:

m = ExaModel(c)

@@ -52,8 +83,11 @@ result = ipopt(m)
println("Status: $(result.status)")
println("Number of iterations: $(result.iter)")

# ## Solutions

# The solution values for the variable `x` can be queried by:

sol = solution(result, x)

# This returns the primal solution of the variable `x` as a vector. Dual solutions can be queried similarly, using the `multipliers` function.

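# As a sketch of the dual-solution query (this assumes `multipliers` takes the result and a saved constraint object, mirroring how `solution` takes the result and a variable):
# ```julia
# ## hypothetical sketch: `con` is the object returned by an earlier `constraint(...)` call
# con = constraint(c, x[i] for i = 1:N; lcon = 0.0, ucon = 1.0)
# result = ipopt(ExaModel(c))
# y = multipliers(result, con)
# ```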
# ExaModels provides several APIs similar to this:

docs/src/parameters.md

Lines changed: 296 additions & 0 deletions
@@ -0,0 +1,296 @@
```@meta
EditURL = "parameters.jl"
```

# [Parameters](@id parameters)

Parameters act like fixed variables. Internally, ExaModels keeps track of where parameters appear in the model, making it possible to efficiently modify their values without rebuilding the entire model.

### Creating Parametric Models

Let's modify the example in [Getting Started](@ref guide) to use parameters. Suppose we want to make the penalty coefficient in the objective function adjustable.

First, let's create a core:

````julia
using ExaModels, NLPModelsIpopt
c_param = ExaCore()
````

````
An ExaCore

Float type: ...................... Float64
Array type: ...................... Vector{Float64}
Backend: ......................... Nothing

number of objective patterns: .... 0
number of constraint patterns: ... 0

````

Adding parameters is very similar to adding variables: just pass a vector of initial values.

````julia
θ = parameter(c_param, [100.0, 1.0]) # [penalty_coeff, offset]
````

````
Parameter

θ ∈ R^{2}

````

Define the variables as before:

````julia
N = 10
x_p = variable(c_param, N; start = (mod(i, 2) == 1 ? -1.2 : 1.0 for i = 1:N))
````

````
Variable

x ∈ R^{10}

````

Now we can use the parameters in our objective function just like variables:

````julia
objective(c_param, θ[1] * (x_p[i-1]^2 - x_p[i])^2 + (x_p[i-1] - θ[2])^2 for i = 2:N)
````

````
Objective

min (...) + ∑_{p ∈ P} f(x,θ,p)

where |P| = 9

````

Add the same constraints as before:

````julia
constraint(
    c_param,
    3x_p[i+1]^3 + 2 * x_p[i+2] - 5 + sin(x_p[i+1] - x_p[i+2])sin(x_p[i+1] + x_p[i+2]) + 4x_p[i+1] -
    x_p[i]exp(x_p[i] - x_p[i+1]) - 3 for i = 1:(N-2)
)
````

````
Constraint

s.t. (...)
    g♭ ≤ [g(x,θ,p)]_{p ∈ P} ≤ g♯

where |P| = 8

````

Create the model as before:

````julia
m_param = ExaModel(c_param)
````

````
An ExaModel{Float64, Vector{Float64}, ...}

  Problem name: Generic
  All variables: ████████████████████ 10     All constraints: ████████████████████ 8
           free: ████████████████████ 10                free: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
          lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                lower: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
          upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                upper: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
        low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0              low/upp: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
          fixed: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0                fixed: ████████████████████ 8
         infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               infeas: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
           nnzh: (-36.36% sparsity)   75               linear: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0
                                                    nonlinear: ████████████████████ 8
                                                         nnzj: ( 70.00% sparsity)   24

````

Solve with original parameters:

````julia
result1 = ipopt(m_param)
println("Original objective: $(result1.objective)")
````

````
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.8.0.

Number of nonzeros in equality constraint Jacobian...: 24
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 75

Total number of variables............................: 10
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 8
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0

iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 2.0570000e+03 2.48e+01 2.73e+01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.0953147e+03 1.49e+01 8.27e+01 -1.0 2.20e+00 - 1.00e+00 1.00e+00f 1
2 3.2865521e+02 4.28e+00 1.36e+02 -1.0 1.43e+00 - 1.00e+00 1.00e+00f 1
3 1.3995370e+01 3.09e-01 2.18e+01 -1.0 5.63e-01 - 1.00e+00 1.00e+00f 1
4 6.2325715e+00 1.73e-02 8.47e-01 -1.0 2.10e-01 - 1.00e+00 1.00e+00f 1
5 6.2324586e+00 1.15e-05 8.16e-04 -1.7 3.35e-03 - 1.00e+00 1.00e+00h 1
6 6.2324586e+00 8.35e-12 7.97e-10 -5.7 2.00e-06 - 1.00e+00 1.00e+00h 1

Number of Iterations....: 6

                                   (scaled)                 (unscaled)
Objective...............: 7.8692659500473017e-01   6.2324586324374636e+00
Dual infeasibility......: 7.9746955363607132e-10   6.3159588647976857e-09
Constraint violation....: 8.3546503049092280e-12   8.3546503049092280e-12
Variable bound violation: 0.0000000000000000e+00   0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00   0.0000000000000000e+00
Overall NLP error.......: 7.9746955363607132e-10   6.3159588647976857e-09

Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 7
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 7
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 6
Total seconds in IPOPT = 0.441

EXIT: Optimal Solution Found.
Original objective: 6.232458632437464

````

Now change the penalty coefficient and solve again:

````julia
set_parameter!(c_param, θ, [200.0, 1.0]) # Double the penalty coefficient
result2 = ipopt(m_param)
println("Modified penalty objective: $(result2.objective)")
````

````
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.8.0.

Number of nonzeros in equality constraint Jacobian...: 24
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 75

Total number of variables............................: 10
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 8
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0

iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 4.0898000e+03 2.48e+01 2.70e+01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 2.1810502e+03 1.49e+01 8.27e+01 -1.0 2.20e+00 - 1.00e+00 1.00e+00f 1
2 6.5137192e+02 4.27e+00 1.36e+02 -1.0 1.43e+00 - 1.00e+00 1.00e+00f 1
3 2.4064340e+01 3.08e-01 2.18e+01 -1.0 5.62e-01 - 1.00e+00 1.00e+00f 1
4 8.6476680e+00 1.72e-02 8.45e-01 -1.0 2.10e-01 - 1.00e+00 1.00e+00f 1
5 8.6474398e+00 1.15e-05 8.07e-04 -1.7 3.39e-03 - 1.00e+00 1.00e+00h 1
6 8.6474398e+00 8.42e-12 7.91e-10 -5.7 2.03e-06 - 1.00e+00 1.00e+00h 1

Number of Iterations....: 6

                                   (scaled)                 (unscaled)
Objective...............: 5.4592422674820063e-01   8.6474397516914987e+00
Dual infeasibility......: 7.9051456536755353e-10   1.2521750715422049e-08
Constraint violation....: 8.4190432403374871e-12   8.4190432403374871e-12
Variable bound violation: 0.0000000000000000e+00   0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00   0.0000000000000000e+00
Overall NLP error.......: 7.9051456536755353e-10   1.2521750715422049e-08

Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 7
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 7
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 6
Total seconds in IPOPT = 0.003

EXIT: Optimal Solution Found.
Modified penalty objective: 8.647439751691499

````

Try a different offset parameter:

````julia
set_parameter!(c_param, θ, [200.0, 0.5]) # Change the offset in the objective
result3 = ipopt(m_param)
println("Modified offset objective: $(result3.objective)")
````

````
This is Ipopt version 3.14.17, running with linear solver MUMPS 5.8.0.

Number of nonzeros in equality constraint Jacobian...: 24
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 75

Total number of variables............................: 10
variables with only lower bounds: 0
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 8
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0

iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 4.0810500e+03 2.48e+01 2.69e+01 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 2.1767809e+03 1.49e+01 8.26e+01 -1.0 2.20e+00 - 1.00e+00 1.00e+00f 1
2 6.5050886e+02 4.27e+00 1.36e+02 -1.0 1.43e+00 - 1.00e+00 1.00e+00f 1
3 2.4276149e+01 3.07e-01 2.18e+01 -1.0 5.61e-01 - 1.00e+00 1.00e+00f 1
4 8.8465512e+00 1.72e-02 8.43e-01 -1.0 2.09e-01 - 1.00e+00 1.00e+00f 1
5 8.8451636e+00 1.15e-05 8.04e-04 -1.7 3.40e-03 - 1.00e+00 1.00e+00h 1
6 8.8451630e+00 8.47e-12 7.88e-10 -5.7 2.05e-06 - 1.00e+00 1.00e+00h 1

Number of Iterations....: 6

                                   (scaled)                 (unscaled)
Objective...............: 5.5805444714793528e-01   8.8451629872947741e+00
Dual infeasibility......: 7.8812124187921384e-10   1.2491721683785540e-08
Constraint violation....: 8.4678930534209940e-12   8.4678930534209940e-12
Variable bound violation: 0.0000000000000000e+00   0.0000000000000000e+00
Complementarity.........: 0.0000000000000000e+00   0.0000000000000000e+00
Overall NLP error.......: 7.8812124187921384e-10   1.2491721683785540e-08

Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 7
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 7
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 6
Total seconds in IPOPT = 0.003

EXIT: Optimal Solution Found.
Modified offset objective: 8.845162987294774

````
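
Because `set_parameter!` updates the values in place, the same `ExaModel` can be reused across many solves. A minimal sketch of a parameter sweep, reusing the `c_param`, `θ`, and `m_param` objects defined above (the penalty values chosen here are illustrative):

````julia
for penalty in (50.0, 100.0, 200.0)
    set_parameter!(c_param, θ, [penalty, 1.0])  # update the penalty, keep the offset at 1.0
    result = ipopt(m_param)
    println("penalty = $penalty: objective = $(result.objective)")
end
````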

---

*This page was generated using [Literate.jl](https://github.com/fredrikekre/Literate.jl).*

docs/src/simd.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ The mathematical statement of the problem formulation is as follows.
\begin{aligned}
\min_{x^\flat\leq x \leq x^\sharp}
& \sum_{l\in[L]}\sum_{i\in [I_l]} f^{(l)}(x; p^{(l)}_i)\\
\text{s.t.}\; &g^\flat \leq \left[g^{(m)}(x; q_j)\right]_{j\in [J_m]} +\sum_{n\in [N_m]}\sum_{k\in [K_n]}h^{(n)}(x; s^{(n)}_{k}) \leq g^\sharp,\quad \forall m\in[M]
\end{aligned}
```
where $f^{(\ell)}(\cdot,\cdot)$, $g^{(m)}(\cdot,\cdot)$, and
