Add parameters #148
Conversation
I believe the test failure is unrelated; there seems to be a segmentation fault after the tests pass with oneAPI. I've verified that tests pass (and exit normally) on CUDA and AMD machines, but I don't have a oneAPI machine to debug that.
Good job, what you have implemented corresponds exactly to what I had in mind when opening #147. I have a first round of comments, but the PR overall looks good to me. I think we need to discuss an appropriate name for the parameter θ (I would prefer to avoid special characters in the source code), but I am open to discussion.
The documentation failure seems unrelated; it wants CUDA for this doctest.
Once #150 is merged, we should be able to more reliably test it on oneAPI |
In the long term, I think we'd need
Thanks @klamike for the great contribution. @frapac and I talked about this a few days ago, but I was quite surprised that someone could implement this feature. Thanks again for the great contribution. Once this is merged, maybe we should talk about how to implement
@sshin23 To the best of my knowledge, there is no other package with a similar scope.
Looks complete once the AC OPF test is implemented. @klamike, could you include a few timing results comparing the native iterator vs. parameters? e.g.,
julia> @benchmark (constraint(core, ci*x[i]^2 for (ci,i) in $itr)) setup=(core=ExaCore();x=variable(core,N))
BenchmarkTools.Trial: 10000 samples with 1 evaluation per sample.
Range (min … max): 10.459 μs … 2.625 ms ┊ GC (min … max): 0.00% … 97.01%
Time (median): 25.042 μs ┊ GC (median): 0.00%
Time (mean ± σ): 35.316 μs ± 69.982 μs ┊ GC (mean ± σ): 27.94% ± 14.66%
█
▅▃█▄▂▂▂▂▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂▂▂▂▂▂▂▁▂▂▂▁▂▂▂▂▂▁▂▂▂▂▂▂▂▂▂▂▂▂ ▂
10.5 μs Histogram: frequency by time 393 μs <
Memory estimate: 450.06 KiB, allocs estimate: 52.
julia> @benchmark (p=parameter(core,$c);constraint(core, p[i]*x[i]^2 for i in $r)) setup=(core=ExaCore();x=variable(core,N))
BenchmarkTools.Trial: 10000 samples with 1 evaluation per sample.
Range (min … max): 5.333 μs … 2.773 ms ┊ GC (min … max): 0.00% … 98.35%
Time (median): 17.333 μs ┊ GC (median): 0.00%
Time (mean ± σ): 26.428 μs ± 77.636 μs ┊ GC (mean ± σ): 29.77% ± 11.98%
█▂
▄██▄▂▂▂▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂ ▂
5.33 μs Histogram: frequency by time 373 μs <
Memory estimate: 385.86 KiB, allocs estimate: 45.
julia> @benchmark (constraint(core, p[i]*x[i]^2 for i in $r)) setup=(core=ExaCore();x=variable(core,N);p=parameter(core,c))
BenchmarkTools.Trial: 10000 samples with 1 evaluation per sample.
Range (min … max): 4.250 μs … 4.786 ms ┊ GC (min … max): 0.00% … 98.91%
Time (median): 15.334 μs ┊ GC (median): 0.00%
Time (mean ± σ): 23.522 μs ± 95.209 μs ┊ GC (mean ± σ): 32.62% ± 11.21%
█
▄█▇▃▂▂▁▂▂▂▁▂▁▁▂▁▁▁▁▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂▂ ▂
4.25 μs Histogram: frequency by time 382 μs <
Memory estimate: 289.83 KiB, allocs estimate: 43.

setup code

julia> N = 10000
10000
julia> r = 1:N
1:10000
julia> c = randn(N)
10000-element Vector{Float64}:
-0.6352719903799927
-0.12775893123446538
-0.24148166078449262
-1.6000072348190741
-1.174734326510433
-0.15526451279613748
0.8432186018943468
0.7863513028511108
0.189521661646511
⋮
-0.253292836923081
-1.5877525419989467
0.5995938282419363
-0.596106071874957
-0.6036390173372269
0.304000167443382
-0.19262677690139543
-1.36375097726357
julia> itr = zip(c, r)
zip([-0.6352719903799927, -0.12775893123446538, -0.24148166078449262, -1.6000072348190741, -1.174734326510433, -0.15526451279613748, 0.8432186018943468, 0.7863513028511108, 0.189521661646511, -1.3184799952362174 … -0.16566209264835513, -0.9681662718985207, -0.253292836923081, -1.5877525419989467, 0.5995938282419363, -0.596106071874957, -0.6036390173372269, 0.304000167443382, -0.19262677690139543, -1.36375097726357], 1:10000)

Is this what you meant, @sshin23? Going to look at how long it takes to solve 10k ACOPFs next. Do you have some benchmarking infra already set up that I can tap into? Also, can you take a look at the test failure? It's due to
This PR adds parameters to ExaModels, which act like fixed variables to the user but look like scalars to the solver. This allows users to modify parameter values between solves without having to rebuild the model and without adding any extra variables or constraints. It also makes it possible to implement kernels for batches of problems.
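Roughly, the intended usage looks something like the sketch below. The parameter and constraint calls mirror the benchmark snippets later in this thread; the objective call and, in particular, the exact set_parameter! signature shown here are assumptions rather than code copied from the PR.

using ExaModels

core = ExaCore()
x = variable(core, 10)                 # decision variables, as before
p = parameter(core, ones(10))          # parameters: fixed from the user's point of view

objective(core, p[i] * x[i]^2 for i in 1:10)
constraint(core, x[i] - p[i] for i in 1:10)

m = ExaModel(core)
# ... solve m with an NLP solver ...

# Assumed signature: update the parameter values in place and re-solve
# without rebuilding the model or adding variables/constraints.
set_parameter!(m, p, 2 .* ones(10))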
Main changes:
- AbstractNode calls go from two-argument (i, x) to three-argument (i, x, θ). In fact, in general, everywhere x is passed, θ is now passed right after it; that is why there are so many tiny changes :) Note that all user-facing functions where the m::ExaModel is passed, e.g. obj, cons, etc., are unchanged; they just take x and always use the current value of m.θ.
- New parameter, ParameterNode, and ParameterSource, which act like variable, Var, and VarSource respectively, and set_parameter! for updating the values.
- ExaCore and ExaModel now have an additional field θ, storing the initial/current value of the parameters.

Future work:

- MOI.Parameter in the MOI extension. I am hesitant to touch it due to Unsafe use of MOI #100.
- Decide on naming (Par, ParSource, etc.), then rename θ.

Closes #147
cc @frapac