Nonlinear Constraints with Gradients
This example shows how to solve a nonlinear problem with nonlinear constraints using derivative information.
Ordinarily, minimization routines use numerical gradients calculated by finite-difference approximation. This procedure systematically perturbs each variable in order to calculate function and constraint partial derivatives. Alternatively, you can provide a function to compute partial derivatives analytically. Typically, when you provide derivative information, solvers work more accurately and efficiently.
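To make the perturbation procedure concrete, here is a minimal sketch (not part of the original example) of a forward finite-difference estimate of the partial derivatives of a simple sample function:
g = @(x) x(1)^2 + 3*x(2);              % sample smooth function (illustration only)
x = [2 -1];                            % test point
h = 1e-6;                              % perturbation size
dfdx1 = (g([x(1)+h, x(2)]) - g(x))/h   % approximately 2*x(1) = 4
dfdx2 = (g([x(1), x(2)+h]) - g(x))/h   % approximately 3
A solver performs one such extra evaluation per variable for every gradient it needs, which is the cost that supplying analytic derivatives avoids.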
Objective Function and Nonlinear Constraint
The problem is to solve

$$\min_x f(x) = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1x_2 + 2x_2 + 1\right)$$

subject to the constraints

$$x_1 x_2 - x_1 - x_2 \le -1.5$$
$$x_1 x_2 \ge -10.$$

Because the fmincon solver expects the constraints to be written in the form $c(x) \le 0$, write your constraint function to return the following value:

$$c(x) = \begin{bmatrix} 1.5 + x_1 x_2 - x_1 - x_2 \\ -x_1 x_2 - 10 \end{bmatrix}.$$
Objective Function with Gradient
The objective function is

$$f(x) = e^{x_1}\left(4x_1^2 + 2x_2^2 + 4x_1x_2 + 2x_2 + 1\right).$$

Compute the gradient of $f(x)$ with respect to the variables $x_1$ and $x_2$:

$$\nabla f(x) = \begin{bmatrix} f(x) + e^{x_1}\left(8x_1 + 4x_2\right) \\ e^{x_1}\left(4x_1 + 4x_2 + 2\right) \end{bmatrix}.$$
The objfungrad helper function at the end of this example returns both the objective function value and its gradient in the second output gradf. Set @objfungrad as the objective function.
fun = @objfungrad;
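As an optional sanity check (a sketch, not part of the original example), you can compare the analytic gradient returned by objfungrad with a forward finite-difference estimate at a test point:
xtest = [-1,1];
h = 1e-6;
[f0,g] = objfungrad(xtest);   % analytic value and gradient
gFD = [(objfungrad([xtest(1)+h,xtest(2)]) - f0)/h, ...
    (objfungrad([xtest(1),xtest(2)+h]) - f0)/h];
max(abs(g - gFD))             % should be small, roughly 1e-5 or less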
Constraint Function with Gradient
The helper function confungrad is the nonlinear constraint function; it appears at the end of this example.
The derivative information for the inequality constraints has each column correspond to one constraint. In other words, the gradient of the constraints is in the following format:

$$\begin{bmatrix} \dfrac{\partial c_1}{\partial x_1} & \dfrac{\partial c_2}{\partial x_1} \\[4pt] \dfrac{\partial c_1}{\partial x_2} & \dfrac{\partial c_2}{\partial x_2} \end{bmatrix} = \begin{bmatrix} x_2 - 1 & -x_2 \\ x_1 - 1 & -x_1 \end{bmatrix}.$$

Set @confungrad as the nonlinear constraint function.
nonlcon = @confungrad;
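If you want to confirm the output shapes before running the solver, a quick check (a sketch using the same helper function; not part of the original example) is to call the constraint function with all four outputs at a test point:
[c,ceq,DC,DCeq] = confungrad([-1,1])
% c is 1-by-2 (two inequality constraints), DC is 2-by-2 with one
% column per constraint, and ceq and DCeq are empty.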
Set Options to Use Derivative Information
Indicate to the fmincon solver that the objective and constraint functions provide derivative information. To do so, use optimoptions to set the SpecifyObjectiveGradient and SpecifyConstraintGradient option values to true.
options = optimoptions('fmincon',...
    'SpecifyObjectiveGradient',true,'SpecifyConstraintGradient',true);
Solve Problem
Set the initial point to [-1,1].
x0 = [-1,1];
The problem has no bounds or linear constraints, so set those argument values to [].
A = []; b = []; Aeq = []; beq = []; lb = []; ub = [];
Call fmincon to solve the problem.
[x,fval] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
Local minimum found that satisfies the constraints. Optimization completed because the objective function is non-decreasing in feasible directions, to within the value of the optimality tolerance, and constraints are satisfied to within the value of the constraint tolerance.
x = 1×2
-9.5473 1.0474
fval = 0.0236
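As a follow-up check (a sketch; not part of the original example), you can evaluate the nonlinear constraints at the returned solution. Both inequality constraint values should be near zero, indicating that both constraints are active at the minimizer:
[c,ceq] = confungrad(x)   % both entries of c are approximately 0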
The solution is the same as in the example Nonlinear Inequality Constraints, which solves the problem without using derivative information. The advantage of using derivatives is that solving the problem takes fewer function evaluations and can be more robust, although this advantage is not obvious in this small example. Using even more derivative information, as in fmincon Interior-Point Algorithm with Analytic Hessian, gives further benefits, such as fewer solver iterations.
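To see the function-evaluation savings yourself, one possible sketch (assuming the variables defined earlier in this example are still in the workspace) is to compare the funcCount field of the fmincon output structure with and without the gradient options:
optsNoGrad = optimoptions('fmincon');   % default: finite-difference gradients
[~,~,~,outNoGrad] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,optsNoGrad);
[~,~,~,outGrad] = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options);
fprintf('Function evaluations: %d (no gradients) vs. %d (gradients)\n',...
    outNoGrad.funcCount,outGrad.funcCount)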
Helper Functions
This code creates the objfungrad helper function.
function [f,gradf] = objfungrad(x)
f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
% Gradient of the objective function:
if nargout > 1
    gradf = [ f + exp(x(1))*(8*x(1) + 4*x(2)), ...
        exp(x(1))*(4*x(1) + 4*x(2) + 2)];
end
end
This code creates the confungrad helper function.
function [c,ceq,DC,DCeq] = confungrad(x)
c(1) = 1.5 + x(1)*x(2) - x(1) - x(2); % Inequality constraints
c(2) = -x(1)*x(2) - 10;
% No nonlinear equality constraints
ceq = [];
% Gradient of the constraints:
if nargout > 2
    DC = [x(2)-1, -x(2);
          x(1)-1, -x(1)];
    DCeq = [];
end
end