src/controller/nonlinmpc.jl (2 additions & 2 deletions)
```diff
@@ -566,7 +566,7 @@ This method is really intricate and I'm not proud of it. That's because of 3 elements:
 
 - These functions are used inside the nonlinear optimization, so they must be type-stable
   and as efficient as possible. All the function outputs and derivatives are cached and
-  updated in-place if required to use the efficient [`value_and_jacobian!`](@extref DifferentiationInterface DifferentiationInterface.value_and_jacobian!)`.
+  updated in-place if required to use the efficient [`value_and_jacobian!`](@extref DifferentiationInterface DifferentiationInterface.value_and_jacobian!).
 - The `JuMP` NLP syntax forces splatting for the decision variable, which implies use
   of `Vararg{T,N}` (see the [performance tip](@extref Julia Be-aware-of-when-Julia-avoids-specializing)
   ) and memoization to avoid redundant computations. This is already complex, but it's even
```
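For context, `value_and_jacobian!` is DifferentiationInterface.jl's combined in-place operator; the change above only removes a stray trailing backtick from its `@extref` link. A minimal sketch of the cache-and-update-in-place pattern the docstring describes, assuming a recent DifferentiationInterface version with a ForwardDiff backend (the function `f!`, sizes, and caches are illustrative, not the package's actual internals):

```julia
using DifferentiationInterface   # also re-exports the AutoForwardDiff backend type
import ForwardDiff

f!(y, x) = (y .= x .^ 2; nothing)   # hypothetical in-place vector function

x = [1.0, 2.0, 3.0]
y = zeros(3)            # preallocated output cache
J = zeros(3, 3)         # preallocated Jacobian cache
backend = AutoForwardDiff()

prep = prepare_jacobian(f!, y, backend, x)          # one-time preparation
value_and_jacobian!(f!, y, J, prep, backend, x)     # updates y and J in-place
```

Preparing the Jacobian once and reusing `y` and `J` keeps repeated evaluations inside the solver allocation-free, which is presumably the efficiency the docstring is after.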
```diff
@@ -577,7 +577,7 @@ This method is really intricate and I'm not proud of it. That's because of 3 elements:
 Inspired from: [User-defined operators with vector outputs](@extref JuMP User-defined-operators-with-vector-outputs)
 """
 function get_optim_functions(mpc::NonLinMPC, ::JuMP.GenericModel{JNT}) where JNT<:Real
-    # ----- common cache for Jfunc, gfuncs, geqfuncs called with floats -------------------
+    # ----------- common cache for Jfunc, gfuncs and geqfuncs ----------------------------
```
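The memoization that the splatting bullet refers to is the pattern from the JuMP tutorial cited on the `Inspired from:` line. Condensed from that tutorial (the vector-valued `foo` below is a stand-in, not the package's actual cost or constraint functions):

```julia
# Memoize a vector-valued function so each output can be registered as a
# separate splatted scalar operator without recomputing the whole vector.
function memoize(foo::Function, n_outputs::Int)
    last_x, last_f = nothing, nothing
    last_dx, last_dfdx = nothing, nothing
    function foo_i(i, x::T...) where {T<:Real}
        if T == Float64                      # plain evaluations
            if x !== last_x
                last_x, last_f = x, foo(x...)
            end
            return last_f[i]::T
        else                                 # dual numbers, during AD
            if x !== last_dx
                last_dx, last_dfdx = x, foo(x...)
            end
            return last_dfdx[i]::T
        end
    end
    return [(x...) -> foo_i(i, x...) for i in 1:n_outputs]
end

foo(x...) = [sum(x) + 1, sum(x)^2]   # hypothetical expensive vector function
foo_1, foo_2 = memoize(foo, 2)
foo_1(1.0, 2.0)                      # evaluates foo once
foo_2(1.0, 2.0)                      # cache hit: foo is not recomputed
```

Two caches are kept because the operators are called both with `Float64` values and with the AD tool's dual numbers, and mixing them would corrupt the cache.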
src/estimator/mhe/construct.jl (40 additions & 39 deletions)
```diff
@@ -1314,97 +1314,98 @@ Also return vectors with the nonlinear inequality constraint functions `gfuncs`,
 This method is really intricate and I'm not proud of it. That's because of 3 elements:
 
 - These functions are used inside the nonlinear optimization, so they must be type-stable
-  and as efficient as possible.
+  and as efficient as possible. All the function outputs and derivatives are cached and
+  updated in-place if required to use the efficient [`value_and_jacobian!`](@extref DifferentiationInterface DifferentiationInterface.value_and_jacobian!).
 - The `JuMP` NLP syntax forces splatting for the decision variable, which implies use
   of `Vararg{T,N}` (see the [performance tip](@extref Julia Be-aware-of-when-Julia-avoids-specializing))
   and memoization to avoid redundant computations. This is already complex, but it's even
   worse knowing that most automatic differentiation tools do not support splatting.
-- The signature of gradient and hessian functions is not the same for univariate (`nZ̃ == 1`)
-  and multivariate (`nZ̃ > 1`) operators in `JuMP`. Both must be defined.
 
 Inspired from: [User-defined operators with vector outputs](@extref JuMP User-defined-operators-with-vector-outputs)
```
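The bullet removed in this hunk described a real `JuMP` convention: univariate and multivariate user-defined operators take derivative callbacks with different signatures, so code that registers operators of runtime-dependent dimension must define both. A minimal illustration with hypothetical operators (not the estimator's actual functions):

```julia
using JuMP

model = Model()
@variable(model, x[1:2])

# Univariate operator (the nZ̃ == 1 case): derivative callbacks return values.
f_uni(z) = z^2
∇f_uni(z) = 2z
∇²f_uni(z) = 2.0
@operator(model, op_uni, 1, f_uni, ∇f_uni, ∇²f_uni)

# Multivariate operator (the nZ̃ > 1 case): the gradient fills a vector
# in-place and the hessian fills the lower triangle of a matrix in-place.
f_multi(z...) = z[1]^2 + z[2]^2
function ∇f_multi(g::AbstractVector{T}, z::T...) where {T<:Real}
    g[1] = 2z[1]
    g[2] = 2z[2]
    return
end
function ∇²f_multi(H::AbstractMatrix{T}, z::T...) where {T<:Real}
    H[1, 1] = 2.0
    H[2, 2] = 2.0
    return
end
@operator(model, op_multi, 2, f_multi, ∇f_multi, ∇²f_multi)

@objective(model, Min, op_uni(x[1]) + op_multi(x[1], x[2]))
```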