General-Purpose Unconstrained Non-Linear Optimization
An algorithm for general-purpose unconstrained non-linear optimization. The algorithm is of quasi-Newton type with BFGS updating of the inverse Hessian and soft line search with a trust region type monitoring of the input to the line search algorithm. The interface of ‘ucminf’ is designed for easy interchange with ‘optim’.
ucminf(par, fn, gr = NULL, ..., control = list(), hessian = 0)
par: Initial estimate of minimum for fn.

fn: Objective function to be minimized.

gr: Gradient of objective function. If NULL, a finite-difference approximation is used.

...: Optional arguments passed to the objective and gradient functions.

control: A list of control parameters. See ‘Details’.
hessian: Integer value: 0 no Hessian estimate is computed (the default); 1 a numerical estimate of the Hessian at the computed minimizer is returned; 2 the final approximation of the inverse Hessian based on the BFGS updates is returned (component ‘invhessian.lt’); 3 as 2, but the corresponding Hessian is returned as well. If a TRUE or FALSE value is given, it is interpreted as 1 or 0, respectively.
The algorithm is documented in Nielsen (2000) (see References below), together with a comparison to the Fortran subroutine ‘MINF’ and the Matlab function ‘fminunc’. The implementation of ‘ucminf’ in R uses the original Fortran version of the algorithm.
The interface in R is designed so that it is very easy to switch between using ‘ucminf’ and ‘optim’. The arguments par, fn, gr, and hessian are all the same (with a few extra options for hessian in ‘ucminf’). The difference is that there is no method argument in ‘ucminf’ and that some of the components in the control argument are different due to differences in the algorithms.
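For instance, here is a minimal sketch (not part of the package's own examples) of the same minimization written for both optimizers; the quadratic objective fq and the tolerance values are illustrative assumptions:

library(ucminf)
## Illustrative objective: a simple quadratic bowl with minimum at (1, 2)
fq <- function(x) sum((x - c(1, 2))^2)
## The calls differ only in the optimizer-specific pieces
ucminf(par = c(0, 0), fn = fq, control = list(grtol = 1e-8))
optim(par = c(0, 0), fn = fq, method = "BFGS", control = list(reltol = 1e-8))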
The algorithm can be given an initial estimate of the Hessian for the optimization and it is possible to get the final approximation of the Hessian based on the series of BFGS updates. This extra functionality may be useful for optimization in a series of related problems.
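A sketch of such a warm start is given below (the shifted starting point is an assumption for illustration; it relies on the hessian argument and the invhessian.lt control component described later in this section):

library(ucminf)
fR <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
## First fit: request the BFGS-based inverse-Hessian approximation (hessian = 2)
fit1 <- ucminf(par = c(2, 0.5), fn = fR, hessian = 2)
## Related fit: start near the previous solution and reuse the lower triangle
## of its inverse-Hessian approximation as the initial estimate
fit2 <- ucminf(par = fit1$par + 0.1, fn = fR,
               control = list(invhessian.lt = fit1$invhessian.lt))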
The functions fn and gr can return Inf or NaN if the functions cannot be evaluated at the supplied value, but the functions must be computable at the initial value. The functions are not allowed to return NA. Any names given to par will be copied to the vectors passed to fn and gr. No other attributes of par are copied over.
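For example (a sketch with an assumed objective fpos), a function that is undefined for non-positive arguments can signal this by returning Inf, as long as the starting value is feasible:

library(ucminf)
## Defined only for x > 0; return Inf (never NA) outside the domain
fpos <- function(x) if (x[1] <= 0) Inf else x[1] - log(x[1])
## Any names given to 'par' are passed on to 'fpos'
ucminf(par = c(x = 2), fn = fpos)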
The control argument is a list that can supply any of the following components (an illustrative call combining several of them is sketched after the list):
trace: If trace is positive, detailed tracing information is printed for each iteration.

grtol: The algorithm stops when ||F'(x)||_inf <= grtol, that is, when the largest absolute value of the gradient is less than grtol. Default value is grtol = 1e-6.

xtol: The algorithm stops when ||x - x_p||_2 <= xtol*(xtol + ||x||_2), where x_p and x are the previous and current estimates of the minimizer. Thus the algorithm stops when the last relative step length is sufficiently small. Default value is xtol = 1e-12.

stepmax: Initial maximal allowed step length (radius of the trust region). The value is updated during the optimization. Default value is stepmax = 1.

maxeval: The maximum number of function evaluations. A function evaluation is counted as one evaluation of the objective function and of the gradient function. Default value is maxeval = 500.

grad: Either ‘forward’ or ‘central’. Controls the type of finite-difference approximation used for the gradient if no gradient function is given in the input argument ‘gr’. Default value is grad = 'forward'.

gradstep: Vector of length 2. The step length in the finite-difference approximation for the gradient. The step length is |x_i|*gradstep[1] + gradstep[2]. Default value is gradstep = c(1e-6, 1e-8).
invhessian.lt: A vector with an initial approximation to the lower triangle of the inverse Hessian. If not given, the inverse Hessian is initialized as the identity matrix. If H0 is the initial Hessian matrix, then the lower triangle of the inverse of H0 can be found as invhessian.lt = solve(H0)[lower.tri(H0, diag = TRUE)].
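As referenced above, a sketch of a call combining several of these control components (the values are illustrative, not recommendations):

library(ucminf)
fR <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
ucminf(par = c(2, 0.5), fn = fR,
       control = list(trace    = 1,            # print tracing information
                      grtol    = 1e-8,         # tighter gradient tolerance
                      xtol     = 1e-14,        # tighter step tolerance
                      stepmax  = 0.1,          # smaller initial trust region
                      maxeval  = 1000,         # allow more evaluations
                      grad     = "central",    # central finite differences
                      gradstep = c(1e-6, 1e-8)))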
par: Computed minimizer.

value: Objective function value at computed minimizer.
convergence: Flag for the reason of termination: 1 stopped by small gradient (grtol); 2 stopped by small step (xtol); 3 stopped by the function evaluation limit (maxeval); 4 stopped by a zero step from the line search. Negative values indicate that the computation did not start, for example because of invalid control values.
message: String with the reason of termination.

hessian, invhessian: Estimate of the Hessian or the inverse Hessian at the computed minimizer. The type of estimate is given by the input argument ‘hessian’.

invhessian.lt: The lower triangle of the final approximation to the inverse Hessian based on the series of BFGS updates during optimization.
info: Information about the search: the largest absolute element of the gradient, the length of the last step, the final maximal allowed step length, and the number of function evaluations.
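A sketch of how these components might be inspected after a fit (Rosenbrock objective as in the example below; hessian = 2 so that invhessian.lt is returned):

library(ucminf)
fR  <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
fit <- ucminf(par = c(2, 0.5), fn = fR, hessian = 2)
fit$par            # computed minimizer
fit$value          # objective value at the minimizer
fit$convergence    # numeric flag for the reason of termination
fit$message        # text describing the reason of termination
fit$info           # summary of the search
fit$invhessian.lt  # lower triangle of the final inverse-Hessian approximation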
‘UCMINF’ algorithm design and Fortran code by Hans Bruun Nielsen.
Implementation in R by Stig B. Mortensen, stigbm@gmail.com.
Modifications by Douglas Bates <bates@stat.wisc.edu>, Nov. 2010, to support nested optimization and correct issues with printing on Windows.
Nielsen, H. B. (2000) ‘UCMINF - An Algorithm For Unconstrained, Nonlinear Optimization’, Report IMM-REP-2000-18, Department of Mathematical Modelling, Technical University of Denmark. http://orbit.dtu.dk/recid/200975.
The original Fortran source code can be found at http://www2.imm.dtu.dk/projects/hbn_software/ucminf.f. The code has been slightly modified in this package to be suitable for use with R.
The general structure of the implementation in R is based on the package ‘FortranCallsR’ by Diethelm Wuertz.
## Rosenbrock Banana function
fR <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
gR <- function(x) c(-400 * x[1] * (x[2] - x[1] * x[1]) - 2 * (1 - x[1]),
                    200 * (x[2] - x[1] * x[1]))
# Find minimum and show trace
ucminf(par = c(2, .5), fn = fR, gr = gR, control = list(trace = 1))