
auglag

Augmented Lagrangian Algorithm


Description

The Augmented Lagrangian method adds additional terms to the unconstrained objective function, designed to emulate a Lagrange multiplier.

Usage

auglag(x0, fn, gr = NULL, lower = NULL, upper = NULL, hin = NULL,
  hinjac = NULL, heq = NULL, heqjac = NULL,
  localsolver = c("COBYLA"), localtol = 1e-06, ineq2local = FALSE,
  nl.info = FALSE, control = list(), ...)

Arguments

x0

starting point for searching the optimum.

fn

objective function that is to be minimized.

gr

gradient of the objective function; will be calculated numerically if not provided and the selected local solver requires derivatives.

lower, upper

lower and upper bound constraints.

hin, hinjac

defines the inequality constraints, hin(x) >= 0.

heq, heqjac

defines the equality constraints, heq(x) = 0.

localsolver

available local solvers: COBYLA, LBFGS, MMA, or SLSQP.

localtol

tolerance applied in the selected local solver.

ineq2local

logical; should the inequality constraints be handled by the local solver? Not possible at the moment.

nl.info

logical; should the original NLopt info be shown.

control

list of options, see nl.opts for help; a small usage example follows the argument list.

...

additional arguments passed to the function.
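
The control list is interpreted by nl.opts. For instance, the outer iteration limit (the maxeval option referred to under iter in the Value section) can be raised as shown here, with x0, fn, hin, and heq as defined in the Examples section:

auglag(x0, fn, hin = hin, heq = heq,
       control = list(maxeval = 2000))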

Details

This method combines the objective function and the nonlinear inequality/equality constraints (if any) into a single function: essentially, the objective plus a ‘penalty’ for any violated constraints.

This modified objective function is then passed to another optimization algorithm with no nonlinear constraints. If the constraints are violated by the solution of this sub-problem, then the size of the penalties is increased and the process is repeated; eventually, the process must converge to the desired solution (if it exists).
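
As an illustration only (not the NLopt implementation), the following sketch applies this outer loop to the objective and equality constraint of the first example below, with optim() standing in for the subsidiary solver; the inequality constraint is omitted here, so the minimum found, roughly (1.8, 1.4), differs from the fully constrained one.

fn  <- function(x) (x[1] - 2)^2 + (x[2] - 1)^2   # objective
heq <- function(x) x[1] - 2*x[2] + 1             # equality constraint, heq(x) = 0

x      <- c(1, 1)    # starting point
lambda <- 0          # Lagrange multiplier estimate
rho    <- 10         # penalty parameter

for (k in 1:25) {
    # modified objective: f(x) + lambda * heq(x) + (rho / 2) * heq(x)^2
    augfn <- function(x) fn(x) + lambda * heq(x) + 0.5 * rho * heq(x)^2
    # solve the unconstrained sub-problem
    x <- optim(x, augfn, method = "BFGS")$par
    # update the multiplier; enlarge the penalty while still infeasible
    lambda <- lambda + rho * heq(x)
    if (abs(heq(x)) < 1e-8) break
    rho <- 10 * rho
}
x    # approximately c(1.8, 1.4)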

Since all of the actual optimization is performed in this subsidiary optimizer, the subsidiary algorithm that you specify determines whether the optimization is gradient-based or derivative-free.

The local solvers available at the moment are “COBYLA” (for the derivative-free approach) and “LBFGS”, “MMA”, or “SLSQP” (for smooth functions). The tolerance for the local solver is set through the localtol argument.
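
For example, with x0, fn, hin, and heq as in the first example below, a gradient-based local solver with a tighter local tolerance can be requested like this:

auglag(x0, fn, hin = hin, heq = heq,
       localsolver = "SLSQP", localtol = 1e-8)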

There is a variant that only uses penalty functions for equality constraints while inequality constraints are passed through to the subsidiary algorithm to be handled directly; in this case, the subsidiary algorithm must handle inequality constraints. (At the moment, this variant has been turned off because of problems with the NLOPT library.)

Value

List with components:

par

the optimal solution found so far.

value

the function value corresponding to par.

iter

number of (outer) iterations, see maxeval.

global_solver

the global NLOPT solver used.

local_solver

the local NLOPT solver used, one of COBYLA, LBFGS, MMA, or SLSQP.

convergence

integer code indicating successful completion (> 0) or a possible error number (< 0).

message

character string produced by NLopt and giving additional information.

Note

Birgin and Martinez provide their own free implementation of the method as part of the TANGO project; other implementations can be found in semi-free packages like LANCELOT.

Author(s)

Hans W. Borchers

References

Andrew R. Conn, Nicholas I. M. Gould, and Philippe L. Toint, “A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds,” SIAM J. Numer. Anal., vol. 28, no. 2, pp. 545-572 (1991).

E. G. Birgin and J. M. Martinez, “Improving ultimate convergence of an augmented Lagrangian method,” Optimization Methods and Software, vol. 23, no. 2, pp. 177-195 (2008).

See Also

alabama::auglag, Rsolnp::solnp

Examples

x0 <- c(1, 1)
fn <- function(x) (x[1]-2)^2 + (x[2]-1)^2
hin <- function(x) -0.25*x[1]^2 - x[2]^2 + 1    # hin >= 0
heq <- function(x) x[1] - 2*x[2] + 1            # heq == 0
gr <- function(x) nl.grad(x, fn)
hinjac <- function(x) nl.jacobian(x, hin)
heqjac <- function(x) nl.jacobian(x, heq)

auglag(x0, fn, gr = NULL, hin = hin, heq = heq) # with COBYLA
# $par:     0.8228761 0.9114382
# $value:   1.393464
# $iter:    1001

auglag(x0, fn, gr = NULL, hin = hin, heq = heq, localsolver = "SLSQP")
# $par:     0.8228757 0.9114378
# $value:   1.393465
# $iter     173

##  Example from the alabama::auglag help page
fn <- function(x) (x[1] + 3*x[2] + x[3])^2 + 4 * (x[1] - x[2])^2
heq <- function(x) x[1] + x[2] + x[3] - 1
hin <- function(x) c(6*x[2] + 4*x[3] - x[1]^3 - 3, x[1], x[2], x[3])

auglag(runif(3), fn, hin = hin, heq = heq, localsolver="lbfgs")
# $par:     2.380000e-09 1.086082e-14 1.000000e+00
# $value:   1
# $iter:    289

##  Powell problem from the Rsolnp::solnp help page
x0 <- c(-2, 2, 2, -1, -1)
fn1  <- function(x) exp(x[1]*x[2]*x[3]*x[4]*x[5])
eqn1 <- function(x)
    c(x[1]*x[1] + x[2]*x[2] + x[3]*x[3] + x[4]*x[4] + x[5]*x[5],
      x[2]*x[3] - 5*x[4]*x[5],
      x[1]*x[1]*x[1] + x[2]*x[2]*x[2])

auglag(x0, fn1, heq = eqn1, localsolver = "mma")
# $par: -3.988458e-10 -1.654201e-08 -3.752028e-10  8.904445e-10  8.926336e-10
# $value:   1
# $iter:    1001

