Matlab provides the function fminunc to solve unconstrained optimization problems.
A Basic call of fminunc
Without any extra options the syntax is
[x,fval]=fminunc('objfun',x0)
where
objfun: name of a function file in which the objective function is coded
x0: (column) vector of starting values
x (1st output): optimal solution vector (column)
fval (2nd output): optimal function value
Example:
Minimize the objective function f(x1,x2,x3) = (x1^2+x2^2)^2 - x1^2 - x2 + x3^2.
(1) You first have to code the objective function. Open a new M-file in the editor and type in:
function f=objfun(x)
f=(x(1)^2+x(2)^2)^2-x(1)^2-x(2)+x(3)^2;
Save the file under the same name as the function -- here objfun.m. Once the file is saved under this name you have access to the function and can retrieve its value for any input vector x. For example, if you want to know the value at (1,1,1), type objfun([1;1;1]) (in the command window or a script file) and execute. The answer in the command window is 3.
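In the command window this looks as follows:
>> objfun([1;1;1])
ans =
     3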
(2) Now we can apply fminunc with a properly chosen starting value to find a minimum. We choose x0=[1;1;1] and execute the following commands in the command window:
>> x0=[1;1;1];[x,fval] = fminunc('objfun',x0)
Warning: Gradient must be provided for trust-region method;
   using line-search method instead.
> In C:\MATLABR12\toolbox\optim\fminunc.m at line 211
Optimization terminated successfully:
 Current search direction is a descent direction, and magnitude of
 directional derivative in search direction less than 2*options.TolFun
x =
   0.49998491345499
   0.50000453310525
  -0.00000383408095
fval =
  -0.49999999985338
The warning printed below the command line indicates that no gradient information was provided, which may lead to suboptimal performance.
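Note that, depending on your MATLAB version, the objective function can also be passed as a function handle instead of a string (compare the Examples in the help description below):
>> [x,fval] = fminunc(@objfun,x0)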
B Call of fminunc with gradient information supplied
Optimization programs usually perform better if gradient information is exploited. This requires two modifications:
(1) The objective file must be coded such that the gradient can be retrieved as a second output. For the function above this requires the following extension of the function file:
function [f,gradf]=objfun(x)
f=(x(1)^2+x(2)^2)^2-x(1)^2-x(2)+x(3)^2;
gradf=[4*x(1)*(x(1)^2+x(2)^2)-2*x(1);4*x(2)*(x(1)^2+x(2)^2)-1;2*x(3)];
The 2nd output argument, gradf, is the gradient vector of f, written as a column vector.
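Before supplying the gradient to fminunc it is worth verifying the formula. A minimal sketch (the test point x and step size h are arbitrary choices) compares the analytic gradient with a central finite-difference approximation:
>> x=[1;1;1]; h=1e-6; gfd=zeros(3,1);
>> for i=1:3, e=zeros(3,1); e(i)=h; gfd(i)=(objfun(x+e)-objfun(x-e))/(2*h); end
>> [f,gradf]=objfun(x); [gradf gfd]
The two columns should agree to several digits. (Alternatively, the DerivativeCheck option listed in the help description below lets fminunc perform such a comparison itself.)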
(2) The program has to be 'told' that it shall exploit gradient information. This is done by setting the corresponding optimization option and passing the resulting options structure to the program. The general syntax is
>> options=optimset('GradObj','on');
>> [x,fval]=fminunc('objfun',x0,options)
For the Example, now with gradient information supplied, we execute in the command window:
>> options=optimset('GradObj','on');
>> x0=[1;1;1];[x,fval]=fminunc('objfun',x0,options)
Optimization terminated successfully:
 Relative function value changing by less than OPTIONS.TolFun
x =
   0.50045437772043
   0.49981153795642
   0.00003452966310
fval =
  -0.49999989244986
As you can see, the values differ slightly from those obtained before. Setting the gradient to zero shows that the exact minimizers are (0.5, 0.5, 0) and (-0.5, 0.5, 0) with minimal value -0.5, so both runs are correct to well within the default tolerances.
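If you want diagnostic information about a run, fminunc can return further output arguments (see the help description below), e.g.:
>> [x,fval,exitflag,output] = fminunc('objfun',x0,options);
>> exitflag
>> output.iterations
Here exitflag > 0 indicates that fminunc converged to a solution, and output.iterations contains the number of iterations taken.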
Matlab's HELP DESCRIPTION
FMINUNC Finds the minimum of a function of several variables.

X=FMINUNC(FUN,X0) starts at X0 and finds a minimum X of the function FUN. FUN accepts input X and returns a scalar function value F evaluated at X. X0 can be a scalar, vector or matrix.

X=FMINUNC(FUN,X0,OPTIONS) minimizes with the default optimization parameters replaced by values in the structure OPTIONS, an argument created with the OPTIMSET function. See OPTIMSET for details. Used options are Display, TolX, TolFun, DerivativeCheck, Diagnostics, GradObj, HessPattern, LineSearchType, Hessian, HessMult, HessUpdate, MaxFunEvals, MaxIter, DiffMinChange and DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, TypicalX. Use the GradObj option to specify that FUN also returns a second output argument G that is the partial derivatives of the function df/dX, at the point X. Use the Hessian option to specify that FUN also returns a third output argument H that is the 2nd partial derivatives of the function (the Hessian) at the point X. The Hessian is only used by the large-scale method, not the line-search method.

X=FMINUNC(FUN,X0,OPTIONS,P1,P2,...) passes the problem-dependent parameters P1,P2,... directly to the function FUN, e.g. FUN would be called using feval as in: feval(FUN,X,P1,P2,...). Pass an empty matrix for OPTIONS to use the default values.

[X,FVAL]=FMINUNC(FUN,X0,...) returns the value of the objective function FUN at the solution X.

[X,FVAL,EXITFLAG]=FMINUNC(FUN,X0,...) returns a string EXITFLAG that describes the exit condition of FMINUNC. If EXITFLAG is:
  > 0 then FMINUNC converged to a solution X.
    0 then the maximum number of function evaluations was reached.
  < 0 then FMINUNC did not converge to a solution.

[X,FVAL,EXITFLAG,OUTPUT]=FMINUNC(FUN,X0,...) returns a structure OUTPUT with the number of iterations taken in OUTPUT.iterations, the number of function evaluations in OUTPUT.funcCount, the algorithm used in OUTPUT.algorithm, the number of CG iterations (if used) in OUTPUT.cgiterations, and the first-order optimality (if used) in OUTPUT.firstorderopt.

[X,FVAL,EXITFLAG,OUTPUT,GRAD]=FMINUNC(FUN,X0,...) returns the value of the gradient of FUN at the solution X.

[X,FVAL,EXITFLAG,OUTPUT,GRAD,HESSIAN]=FMINUNC(FUN,X0,...) returns the value of the Hessian of the objective function FUN at the solution X.

Examples
FUN can be specified using @:
  X = fminunc(@myfun,2)
where MYFUN is a MATLAB function such as:
  function F = myfun(x)
  F = sin(x) + 3;
To minimize this function with the gradient provided, modify MYFUN so the gradient is the second output argument:
  function [f,g] = myfun(x)
  f = sin(x) + 3;
  g = cos(x);
and indicate the gradient value is available by creating an options structure with OPTIONS.GradObj set to 'on' (using OPTIMSET):
  options = optimset('GradObj','on');
  x = fminunc('myfun',2,options);
FUN can also be an inline object:
  x = fminunc(inline('sin(x)+3'),2);

See also OPTIMSET, FMINSEARCH, FMINBND, FMINCON, @, INLINE.