From 7d633d1291450f15e99582019408992b20cfb1a1 Mon Sep 17 00:00:00 2001 From: "Documenter.jl" Date: Wed, 27 Nov 2024 20:22:10 +0000 Subject: [PATCH] build based on 60b9261 --- previews/PR314/.documenter-siteinfo.json | 2 +- previews/PR314/backend/index.html | 14 +- previews/PR314/generic/index.html | 2 +- previews/PR314/index.html | 2 +- previews/PR314/mixed/index.html | 2 +- .../{d8cc26de.svg => 8289e1fc.svg} | 464 +++++++++--------- previews/PR314/performance/index.html | 56 +-- previews/PR314/predefined/index.html | 2 +- previews/PR314/reference/index.html | 2 +- previews/PR314/sparse/index.html | 2 +- previews/PR314/sparsity_pattern/index.html | 6 +- previews/PR314/tutorial/index.html | 2 +- 12 files changed, 278 insertions(+), 278 deletions(-) rename previews/PR314/performance/{d8cc26de.svg => 8289e1fc.svg} (73%) diff --git a/previews/PR314/.documenter-siteinfo.json b/previews/PR314/.documenter-siteinfo.json index 1f37afba..0c71f97d 100644 --- a/previews/PR314/.documenter-siteinfo.json +++ b/previews/PR314/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-27T20:21:54","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-27T20:22:01","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/previews/PR314/backend/index.html b/previews/PR314/backend/index.html index 8751c93a..0351705f 100644 --- a/previews/PR314/backend/index.html +++ b/previews/PR314/backend/index.html @@ -57,9 +57,9 @@ return g end

Finally, we use the homemade backend to compute the gradient.

nlp = ADNLPModel(sum, ones(3), gradient_backend = NewADGradient)
 grad(nlp, nlp.meta.x0)  # returns the gradient at x0 using `NewADGradient`
3-element Vector{Float64}:
- 0.8569226471150723
- 0.6828735091612729
- 0.4646794332656391

Change backend

Once an instance of an ADNLPModel has been created, it is possible to change the backends without re-instantiating the model.

using ADNLPModels, NLPModels
+ 0.39759258682585186
+ 0.23118596814884873
+ 0.9667460681111437

Change backend

Once an instance of an ADNLPModel has been created, it is possible to change the backends without re-instantiating the model.

using ADNLPModels, NLPModels
 f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
 x0 = 3 * ones(2)
 nlp = ADNLPModel(f, x0)
@@ -128,10 +128,10 @@
            jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0               jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0     
 

Then, the gradient will return a vector of Float64.

x64 = rand(2)
 grad(nlp, x64)
2-element Vector{Float64}:
-  158.69683444036966
- -104.53645842270369

It is now possible to move to a different type, for instance Float32, while keeping the instance nlp.

x0_32 = ones(Float32, 2)
+  44.01630697475976
+ -30.9642965850053

It is now possible to move to a different type, for instance Float32, while keeping the instance nlp.

x0_32 = ones(Float32, 2)
 set_adbackend!(nlp, gradient_backend = ADNLPModels.ForwardDiffADGradient, x0 = x0_32)
 x32 = rand(Float32, 2)
 grad(nlp, x32)
2-element Vector{Float64}:
-  16.451425552368164
- -12.616652488708496
+ -79.80408477783203 + 96.12127685546875 diff --git a/previews/PR314/generic/index.html b/previews/PR314/generic/index.html index 7f9dc42d..bb705be3 100644 --- a/previews/PR314/generic/index.html +++ b/previews/PR314/generic/index.html @@ -1,2 +1,2 @@ -Support multiple precision · ADNLPModels.jl
+Support multiple precision · ADNLPModels.jl
diff --git a/previews/PR314/index.html b/previews/PR314/index.html index 7b816f6b..cbadb498 100644 --- a/previews/PR314/index.html +++ b/previews/PR314/index.html @@ -127,4 +127,4 @@ output[2] = x[2] end nvar, ncon = 3, 2 -nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))source

Check the Tutorial for more details on the usage.

License

This content is released under the MPL2.0 License.

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

Contents

+nls = ADNLSModel!(F!, x0, nequ, c!, zeros(ncon), zeros(ncon))source

Check the Tutorial for more details on the usage.

License

This content is released under the MPL2.0 License.

Bug reports and discussions

If you think you found a bug, feel free to open an issue. Focused suggestions and requests can also be opened as issues. Before opening a pull request, please start an issue or a discussion on the topic.

If you want to ask a question not suited for a bug report, feel free to start a discussion here. This forum is for general discussion about this repository and the JuliaSmoothOptimizers organization, so questions about any of our packages are welcome.

Contents

diff --git a/previews/PR314/mixed/index.html b/previews/PR314/mixed/index.html index 9841fb5f..6290d38c 100644 --- a/previews/PR314/mixed/index.html +++ b/previews/PR314/mixed/index.html @@ -101,4 +101,4 @@ }

Note that the backends used for the gradient and the Jacobian are now NLPModel. So, a call to grad on nlp

grad(nlp, x0)
2-element Vector{Float64}:
  -12.847999999999999
   -3.5199999999999996

would call grad on model

neval_grad(model)
1

Moreover, as expected, the ADNLPModel nlp also implements the missing methods, e.g.

jprod(nlp, x0, v)
1-element Vector{Float64}:
- 2.0
+ 2.0 diff --git a/previews/PR314/performance/d8cc26de.svg b/previews/PR314/performance/8289e1fc.svg similarity index 73% rename from previews/PR314/performance/d8cc26de.svg rename to previews/PR314/performance/8289e1fc.svg index 422e5ce3..49f28504 100644 --- a/previews/PR314/performance/d8cc26de.svg +++ b/previews/PR314/performance/8289e1fc.svg @@ -1,275 +1,275 @@ (SVG path coordinate changes omitted: the performance profile figure was regenerated with the new benchmark timings) diff --git a/previews/PR314/performance/index.html index 655ab510..a49c2b6b 100644 --- a/previews/PR314/performance/index.html +++ b/previews/PR314/performance/index.html @@ -267,33 +267,33 @@ stats[back][stats[back].name .== name, :time] = [median(b.times)] stats[back][stats[back].name .== name, :allocs] = [median(b.allocs)] end -end
[ Info:  camshape with 1000 vars and 2003 cons
-[ Info:  catenary with 999 vars and 332 cons
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:20
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:22
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
-┌ Warning: catenary: number of variables adjusted to be a multiple of 3
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
-┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
-@ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
-[ Info:  chain with 1000 vars and 752 cons
-[ Info:  channel with 1000 vars and 1000 cons
-[ Info:  clnlbeam with 999 vars and 664 cons
-[ Info:  controlinvestment with 1000 vars and 500 cons
-[ Info:  elec with 999 vars and 333 cons
-[ Info:  hovercraft1d with 998 vars and 668 cons
-[ Info:  marine with 1007 vars and 488 cons
-[ Info:  polygon with 1000 vars and 125251 cons
-[ Info:  polygon1 with 1000 vars and 500 cons
-[ Info:  polygon2 with 1000 vars and 1 cons
-[ Info:  polygon3 with 1000 vars and 1000 cons
-[ Info:  robotarm with 1009 vars and 1002 cons
-[ Info:  structural with 3540 vars and 3652 cons
using Plots, SolverBenchmark
+end
[ Info:  camshape with 1000 vars and 2003 cons
+[ Info:  catenary with 999 vars and 332 cons
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:20
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.PureJuMP ~/.julia/packages/OptimizationProblems/9qr9C/src/PureJuMP/catenary.jl:22
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
+┌ Warning: catenary: number of variables adjusted to be a multiple of 3
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:4
+┌ Warning: catenary: number of variables adjusted to be greater or equal to 6
+└ @ OptimizationProblems.ADNLPProblems ~/.julia/packages/OptimizationProblems/9qr9C/src/ADNLPProblems/catenary.jl:6
+[ Info:  chain with 1000 vars and 752 cons
+[ Info:  channel with 1000 vars and 1000 cons
+[ Info:  clnlbeam with 999 vars and 664 cons
+[ Info:  controlinvestment with 1000 vars and 500 cons
+[ Info:  elec with 999 vars and 333 cons
+[ Info:  hovercraft1d with 998 vars and 668 cons
+[ Info:  marine with 1007 vars and 488 cons
+[ Info:  polygon with 1000 vars and 125251 cons
+[ Info:  polygon1 with 1000 vars and 500 cons
+[ Info:  polygon2 with 1000 vars and 1 cons
+[ Info:  polygon3 with 1000 vars and 1000 cons
+[ Info:  robotarm with 1009 vars and 1002 cons
+[ Info:  structural with 3540 vars and 3652 cons
using Plots, SolverBenchmark
 costnames = ["median time (in ns)", "median allocs"]
 costs = [
   df -> df.time,
@@ -302,4 +302,4 @@
 
 gr()
 
-profile_solvers(stats, costs, costnames)
Example block output +profile_solvers(stats, costs, costnames)Example block output diff --git a/previews/PR314/predefined/index.html b/previews/PR314/predefined/index.html index 46ee832c..124c039c 100644 --- a/previews/PR314/predefined/index.html +++ b/previews/PR314/predefined/index.html @@ -55,4 +55,4 @@ SparseADJacobian, SparseReverseADHessian, ForwardDiffADGHjvprod, -} +} diff --git a/previews/PR314/reference/index.html b/previews/PR314/reference/index.html index f8c299f7..6bafbfba 100644 --- a/previews/PR314/reference/index.html +++ b/previews/PR314/reference/index.html @@ -115,4 +115,4 @@ get_nln_nnzj(nlp::AbstractNLPModel, nvar, ncon)

For a given ADBackend of a problem with nvar variables and ncon constraints, return the number of nonzeros in the Jacobian of nonlinear constraints. If b is the ADModelBackend then b.jacobian_backend is used.

source
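As a hedged sketch of the second signature above, one might query the nonzero count of a small constrained model as follows; the objective and constraint here are illustrative choices, not taken from this manual.

```julia
using ADNLPModels

f(x) = (x[1] - 1)^2
c(x) = [x[1] * x[2]]  # one nonlinear constraint involving both variables
nlp = ADNLPModel(f, ones(2), c, zeros(1), zeros(1))

# Number of nonzeros in the Jacobian of the nonlinear constraints,
# delegated to the model's jacobian_backend:
nnzj = ADNLPModels.get_nln_nnzj(nlp, nlp.meta.nvar, nlp.meta.ncon)
```

With a dense Jacobian backend this is simply `nvar * ncon`; with a sparse backend it reflects the detected sparsity pattern.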
ADNLPModels.get_residual_nnzhMethod
get_residual_nnzh(b::ADModelBackend, nvar)
 get_residual_nnzh(nls::AbstractNLSModel, nvar)

Return the number of nonzero elements in the residual Hessians.

source
ADNLPModels.get_residual_nnzjMethod
get_residual_nnzj(b::ADModelBackend, nvar, nequ)
 get_residual_nnzj(nls::AbstractNLSModel, nvar, nequ)

Return the number of nonzero elements in the residual Jacobians.

source
ADNLPModels.get_sparsity_patternMethod
S = get_sparsity_pattern(model::ADModel, derivative::Symbol)

Retrieve the sparsity pattern of a Jacobian or Hessian from an ADModel. For the Hessian, only the lower triangular part of its sparsity pattern is returned. The user can reconstruct the upper triangular part by exploiting symmetry.

To compute the sparsity pattern, the model must use a sparse backend. Supported backends include SparseADJacobian, SparseADHessian, and SparseReverseADHessian.

Input arguments

  • model: An automatic differentiation model (either AbstractADNLPModel or AbstractADNLSModel).
  • derivative: The type of derivative for which the sparsity pattern is needed. The supported values are :jacobian, :hessian, :jacobian_residual and :hessian_residual.

Output argument

  • S: A sparse matrix of type SparseMatrixCSC{Bool,Int} indicating the sparsity pattern of the requested derivative.
source
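The symmetry remark above can be exploited as in the following sketch, which assumes the model's Hessian backend is one of the sparse backends listed above (the objective is the Rosenbrock function used elsewhere in this manual).

```julia
using ADNLPModels, SparseArrays

f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
nlp = ADNLPModel(f, ones(2))  # assumes the default Hessian backend is sparse

S = ADNLPModels.get_sparsity_pattern(nlp, :hessian)  # lower triangular part only
S_full = S .| transpose(S)                           # rebuild the symmetric pattern
```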
ADNLPModels.set_adbackend!Method
set_adbackend!(nlp, new_adbackend)
-set_adbackend!(nlp; kwargs...)

Replace the current adbackend value of nlp by new_adbackend or instantiate a new one with kwargs, see ADModelBackend. By default, the setter with kwargs will reuse existing backends.

source
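The keyword form can be sketched as follows, mirroring the example from the backend page; `ForwardDiffADGradient` is only one possible choice of backend.

```julia
using ADNLPModels

f(x) = 100 * (x[2] - x[1]^2)^2 + (x[1] - 1)^2
nlp = ADNLPModel(f, 3 * ones(2))

# Keyword form: rebuild only the gradient backend and reuse the others.
set_adbackend!(nlp, gradient_backend = ADNLPModels.ForwardDiffADGradient)
```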
+set_adbackend!(nlp; kwargs...)

Replace the current adbackend value of nlp by new_adbackend or instantiate a new one with kwargs, see ADModelBackend. By default, the setter with kwargs will reuse existing backends.

source diff --git a/previews/PR314/sparse/index.html b/previews/PR314/sparse/index.html index 0206dd0a..41584d85 100644 --- a/previews/PR314/sparse/index.html +++ b/previews/PR314/sparse/index.html @@ -187,4 +187,4 @@ jprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod_lin: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jtprod_nln: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 hprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jhess: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 jhprod: ⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅⋅ 0 -

The section "providing the sparsity pattern for sparse derivatives" illustrates this feature with a more advanced application.

Acknowledgements

The package SparseConnectivityTracer.jl is used to compute the sparsity pattern of Jacobians and Hessians. The evaluation of the number of directional derivatives and the seeds required to compute compressed Jacobians and Hessians is performed using SparseMatrixColorings.jl. As of release v0.8.1, it has replaced ColPack.jl. We acknowledge Guillaume Dalle (@gdalle), Adrian Hill (@adrhill), Alexis Montoison (@amontoison), and Michel Schanen (@michel2323) for the development of these packages.

+

The section "providing the sparsity pattern for sparse derivatives" illustrates this feature with a more advanced application.

Acknowledgements

The package SparseConnectivityTracer.jl is used to compute the sparsity pattern of Jacobians and Hessians. The evaluation of the number of directional derivatives and the seeds required to compute compressed Jacobians and Hessians is performed using SparseMatrixColorings.jl. As of release v0.8.1, it has replaced ColPack.jl. We acknowledge Guillaume Dalle (@gdalle), Adrian Hill (@adrhill), Alexis Montoison (@amontoison), and Michel Schanen (@michel2323) for the development of these packages.

diff --git a/previews/PR314/sparsity_pattern/index.html b/previews/PR314/sparsity_pattern/index.html index 99e4197b..81e648ea 100644 --- a/previews/PR314/sparsity_pattern/index.html +++ b/previews/PR314/sparsity_pattern/index.html @@ -29,7 +29,7 @@ @elapsed begin nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend) -end
3.067071666

ADNLPModel will automatically prepare an AD backend for computing sparse Jacobian and Hessian. We disabled the Hessian computation here to focus the measurement on the Jacobian computation. The keyword argument show_time = true can also be passed to the problem's constructor to get more detailed information about the time used to prepare the AD backend.
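As a minimal sketch of that keyword, `show_time = true` can be passed to the constructor of a small illustrative model (the objective and constraint below are made up for the example):

```julia
using ADNLPModels

f(x) = sum(abs2, x)
c!(cx, x) = (cx[1] = x[1] + x[2]; cx)
x0 = ones(2)

# show_time = true reports the time spent preparing each AD backend.
nlp = ADNLPModel!(f, x0, c!, zeros(1), zeros(1); show_time = true)
```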

using NLPModels
+end
2.967382671

ADNLPModel will automatically prepare an AD backend for computing sparse Jacobian and Hessian. We disabled the Hessian computation here to focus the measurement on the Jacobian computation. The keyword argument show_time = true can also be passed to the problem's constructor to get more detailed information about the time used to prepare the AD backend.

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -78,7 +78,7 @@
 
   jac_back = ADNLPModels.SparseADJacobian(n, f, N - 1, c!, J)
   nlp = ADNLPModel!(f, xi, lvar, uvar, [1], [1], T[1], c!, lcon, ucon; hessian_backend = ADNLPModels.EmptyADbackend, jacobian_backend = jac_back)
-end
1.934235989

We recover the same Jacobian.

using NLPModels
+end
1.686572144

We recover the same Jacobian.

using NLPModels
 x = sqrt(2) * ones(n)
 jac_nln(nlp, x)
49999×100000 SparseArrays.SparseMatrixCSC{Float64, Int64} with 199996 stored entries:
 ⎡⠙⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⎤
@@ -90,4 +90,4 @@
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠲⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⠀⠀⎥
 ⎢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢦⡀⠀⎥
-⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⎦

The same can be done for the Hessian of the Lagrangian.

+⎣⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠳⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⠦⎦

The same can be done for the Hessian of the Lagrangian.

diff --git a/previews/PR314/tutorial/index.html b/previews/PR314/tutorial/index.html index 5a072a32..6a40ca26 100644 --- a/previews/PR314/tutorial/index.html +++ b/previews/PR314/tutorial/index.html @@ -1,2 +1,2 @@ -Tutorial · ADNLPModels.jl
+Tutorial · ADNLPModels.jl