I would like to ask for help. I don’t know the name of the method I’m trying to find, but I think it may be related to what is called sensitivity analysis.
I have two input parameters to my optimisation problem: A and B in the first case, or B and C in the second, as described below. I provide an initial guess for the vector X, and at the minimum found by IPOPT the optimum values of X are returned as the solution to my problem.
IPOPT is not aware of the input pairs specifically (A and B, or B and C), because I don't know how to register them with the solver, but my functions use them as constant scalars when solving the problem. I have a function G(A, B) whose minimum I solve for using IPOPT (subject to some equality constraints). At convergence I obtain the solution to my problem, the vector X, so X = X(A, B). This works well and my code behaves as expected.
For downstream analysis in another code, I now need partial derivatives at the optimum point, specifically dX/dA at constant B and dX/dB at constant A. I had wondered/hoped that, as part of the solution mechanism, I might be able to obtain these derivative vectors at convergence in IPOPT. Note, I do not want a full sensitivity study; it's the raw derivative vectors themselves that I need. Is it possible to obtain these partial derivative vectors from IPOPT in this scenario?
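For context, here is a minimal sketch of what I mean by these derivative vectors. The problem, Hessian, Jacobian, and right-hand sides below are a toy stand-in of my own (not my real G), solved with plain NumPy rather than IPOPT: differentiating the KKT conditions of min G(x; A, B) subject to c(x) = 0 with respect to a parameter gives a linear system for dX/dparameter.

```python
# Toy stand-in problem (NOT my real G):
#   min (x0 - A)^2 + (x1 - B)^2   s.t.  x0 + x1 = 1
# Its solution X(A, B) = [(A - B + 1)/2, (B - A + 1)/2] is known in closed
# form, so the KKT-based sensitivities can be checked by hand.
import numpy as np

def kkt_sensitivities(A, B):
    # At the optimum, the Lagrangian L = G + lam * c satisfies
    # grad_x L = 0 and c = 0. Differentiating both w.r.t. a parameter p:
    #   [H  J^T] [dx/dp  ]     [d(grad_x L)/dp]
    #   [J   0 ] [dlam/dp] = - [dc/dp         ]
    H = 2.0 * np.eye(2)             # Hessian of L w.r.t. x (constant here)
    J = np.array([[1.0, 1.0]])      # constraint Jacobian dc/dx
    K = np.block([[H, J.T], [J, np.zeros((1, 1))]])
    # Mixed derivatives for this toy G: d(grad_x L)/dA = [-2, 0] and
    # d(grad_x L)/dB = [0, -2]; the constraint is parameter-free, so dc/dp = 0.
    rhs_A = -np.array([-2.0, 0.0, 0.0])
    rhs_B = -np.array([0.0, -2.0, 0.0])
    dX_dA = np.linalg.solve(K, rhs_A)[:2]   # drop the multiplier sensitivity
    dX_dB = np.linalg.solve(K, rhs_B)[:2]
    return dX_dA, dX_dB
```

For this toy problem the closed form gives dX/dA = [1/2, -1/2] and dX/dB = [-1/2, 1/2], which the linear solve reproduces; what I am asking is whether the equivalent vectors for my real problem can be obtained from IPOPT itself at convergence.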
Secondly, I solve another type of optimisation problem in IPOPT; in this case my input parameters are B and C. I have a function H(B, C) whose minimum I solve for using IPOPT (again with some equality constraints). At convergence I obtain the solution to my problem: the vector X and the scalar A. Here A is the same parameter as in the first example, but it is now an output rather than an input, so X = X(B, C) and A = A(B, C).
As before, I now need partial derivatives at the optimum point, specifically dX/dA at constant B and dX/dB at constant A. In this second scenario, A(B, C) is obtained at the optimum point as an output, whereas B was an input parameter. Is it possible to obtain these partial derivatives from IPOPT in this second scenario?
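To illustrate what I mean in this second scenario: since A is itself a function A(B, C) of the inputs, the quantity dX/dA at constant B is, as I understand it, the chain-rule ratio (dX/dC) / (dA/dC) evaluated at fixed B. A toy sketch, where X_of and A_of are made-up closed-form stand-ins for the outputs my real solver produces:

```python
# Chain rule at constant B: with C the only varying input,
# dX/dA|_B = (dX/dC|_B) / (dA/dC|_B).
import numpy as np

def X_of(B, C):
    # Toy stand-in for the optimiser output X(B, C) (not my real problem).
    return np.array([B + C, B * C])

def A_of(B, C):
    # Toy stand-in for the scalar output A(B, C).
    return C ** 2

def dX_dA_const_B(B, C, h=1e-6):
    # Central differences in C at fixed B, then the chain-rule ratio.
    dX_dC = (X_of(B, C + h) - X_of(B, C - h)) / (2 * h)
    dA_dC = (A_of(B, C + h) - A_of(B, C - h)) / (2 * h)
    return dX_dC / dA_dC
```

This re-solve-and-difference route is what I would fall back on, but each evaluation means another full IPOPT solve, which is why I am asking whether the derivative vectors are available from the solver directly.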
I hope this makes sense, and I'm happy to answer any questions to clarify. I have been using IPOPT for some time, so I have a good handle on the fact that my code works, but these derivative vectors with respect to the inputs (or, in the second scenario, with respect to another scalar output) are a new requirement.