Maximum Likelihood Estimation
Estimate parameters by the method of maximum likelihood.
Usage

mle2(minuslogl, start, method, optimizer, fixed = NULL, data = NULL,
     subset = NULL, default.start = TRUE, eval.only = FALSE, vecpar = FALSE,
     parameters = NULL, parnames = NULL, skip.hessian = FALSE,
     hessian.opts = NULL, use.ginv = TRUE, trace = FALSE,
     browse_obj = FALSE, gr = NULL, optimfun, ...)

calc_mle2_function(formula, parameters, links, start, parnames,
                   use.deriv = FALSE, data = NULL, trace = FALSE)
Arguments

minuslogl
    Function to calculate the negative log-likelihood, or a formula.
start
    Named list. Initial values for the optimizer.
method
    Optimization method to use. See optim.
optimizer
    Optimization function to use. Currently available choices are
    "optim" (the default), "nlm", "nlminb", "constrOptim", "optimx",
    and "optimize". If "optimx" is used, (1) the optimx package must
    be loaded explicitly and (2) itnmax, rather than maxit, must be
    used to set the maximum number of iterations in the control list.
fixed
    Named list. Parameter values to keep fixed during optimization.
data
    List of data to pass to the negative log-likelihood function;
    must be specified if minuslogl is given as a formula.
subset
    Logical vector for subsetting the data (STUB: not yet implemented).
default.start
    Logical: use the default argument values of minuslogl as starting
    values?
eval.only
    Logical: return the value of minuslogl evaluated at the starting
    parameters, rather than optimizing?
vecpar
    Logical: is the first argument a vector of all parameters? (For
    compatibility with optim.)
parameters
    List of linear models for the parameters. These MUST BE SPECIFIED
    IN THE SAME ORDER as the start vector (this is a bug/restriction
    that I hope to fix soon, but in the meantime beware).
links
    (Unimplemented) specify transformations of the parameters.
parnames
    List (or vector?) of parameter names.
gr
    Gradient function.
...
    Further arguments to pass to the optimizer.
formula
    A formula for the likelihood (see Details).
trace
    Logical: print parameter values as they are tested?
browse_obj
    Logical: drop into browser() within the objective function?
skip.hessian
    Bypass the Hessian calculation?
hessian.opts
    Options for the Hessian calculation, passed through to the
    hessian function.
use.ginv
    Use a generalized inverse (ginv) to compute the approximate
    variance-covariance matrix?
optimfun
    User-supplied optimization function. Must take exactly the same
    arguments, and return exactly the same structure, as optim.
use.deriv
    (Experimental, not yet implemented): construct symbolic
    derivatives based on the formula?
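For orientation, here is a minimal sketch of a call that exercises several of these arguments; the data and objective function are invented for illustration:

library(bbmle)
## hypothetical data and negative log-likelihood, for illustration only
set.seed(101)
z <- rnorm(100, mean = 2, sd = 1)
nll <- function(mu = 0, lsd = 0)
    -sum(dnorm(z, mean = mu, sd = exp(lsd), log = TRUE))
## supply starting values, keep the log-sd fixed, skip the Hessian
fit_basic <- mle2(nll, start = list(mu = 0, lsd = 0),
                  fixed = list(lsd = 0), skip.hessian = TRUE)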
Details

The optim optimizer is used to find the minimum of the negative
log-likelihood. An approximate covariance matrix for the parameters is
obtained by inverting the Hessian matrix at the optimum.
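A quick numerical check of this relationship (a sketch, assuming LL and fit as defined in the Examples below; numDeriv is an extra dependency used only here):

library(numDeriv)
## Hessian of the negative log-likelihood at the MLE
H <- hessian(function(p) LL(p[1], p[2]), coef(fit))
solve(H)   ## approximate covariance matrix by inversion...
vcov(fit)  ## ...should be close to the one stored in the fit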
The minuslogl argument can also specify a formula, rather than an
objective function, of the form x ~ ddistn(param1, ..., paramn). In
this case ddistn is taken to be a probability or density function,
which must have (literally) x as its first argument (although this
argument may be interpreted as a matrix of multivariate responses) and
which must have a log argument that can be used to request the
log-probability or log-probability-density.
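For example, a constant-mean Poisson model for counts y can be written as a formula (this mirrors the fit0 call in the Examples below):

y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
d <- data.frame(y)
fit0 <- mle2(y ~ dpois(lambda = ymean), start = list(ymean = mean(y)), data = d)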
If a formula is specified, then parameters can contain a list of
linear models for the parameters. If a formula is given and
non-trivial linear models are given in parameters for some of the
variables, then model matrices will be generated using model.matrix.

start can be given:

- as a list of lists, with each sublist giving the starting values for
  a particular parameter;
- just for the higher-level parameters, in which case all of the
  additional parameters generated by model.matrix will be given
  starting values of zero (unless a no-intercept formula with -1 is
  specified, in which case all the starting values for that parameter
  will be set equal); see the sketch after this list;
- (to be implemented!) as an exhaustive (flat) list of starting values
  (in the order given by model.matrix).
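A sketch of the second form, using the negative binomial model and the data frame d2 constructed in the Examples below: lymax varies by the two-level factor g, but only the higher-level parameters receive starting values; the extra coefficient created by model.matrix starts at zero.

fit3 <- mle2(y ~ dnbinom(mu = exp(lymax)/(1 + x/exp(lhalf)), size = exp(logk)),
             parameters = list(lymax ~ g), data = d2,
             start = list(lymax = 0, lhalf = 0, logk = 0))
## coef(fit3) includes an additional group coefficient for lymax,
## generated by model.matrix and started at zero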
The trace argument applies only when a formula is specified. If you
specify a function, you can build in your own print() or cat()
statement to trace its progress. (You can also specify a value for
trace as part of a control list for optim(): see optim.)
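For instance (a sketch; the control list is passed through to optim, whose trace option takes a non-negative integer):

## print optim's own progress information
fit_tr <- mle2(LL, control = list(trace = 1))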
The skip.hessian argument is useful if the function is crashing with a
"non-finite finite-difference value" error when trying to evaluate the
Hessian, but using it will preclude many subsequent confidence-interval
calculations. (You will know the Hessian is failing if you use
method="Nelder-Mead" and still get a finite-difference error.)
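A sketch of the workaround:

## bypass the Hessian; vcov() and other Wald-based summaries that
## depend on it will be unavailable
fit_noH <- mle2(LL, skip.hessian = TRUE)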
If convergence fails, see the manual page of the relevant optimizer
(optim by default, but possibly nlm, nlminb, optimx, or constrOptim if
you have set the value of optimizer) for the meanings of the error
codes/messages.
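A sketch for inspecting the raw optimizer output (the details slot of the fitted object holds the value returned by the underlying optimizer; for optim, a convergence code of 0 indicates success):

fit@details$convergence  ## 0 means optim reports successful convergence
fit@details$message      ## any accompanying message from the optimizer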
Value

An object of class "mle2".
Note

Do not use a higher-level variable named .i in parameters; this name
is reserved for internal use.
The minuslogl function should return the negative log-likelihood,
-log L (not the log-likelihood, log L, nor the deviance, -2 log L). It
is the user's responsibility to ensure that the likelihood is correct,
and that asymptotic likelihood inference is valid (e.g. that there are
"enough" data and that the estimated parameter values do not lie on
the boundary of the feasible parameter space).
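A sketch of the sign convention, for a Gaussian likelihood with illustrative data:

z <- rnorm(100)  ## illustrative data
## correct: the negative log-likelihood, -log L
nll <- function(mu = 0) -sum(dnorm(z, mean = mu, log = TRUE))
## wrong: the log-likelihood (sign flipped)...
## llik <- function(mu = 0) sum(dnorm(z, mean = mu, log = TRUE))
## ...or the deviance, -2 log L
## dev <- function(mu = 0) -2 * sum(dnorm(z, mean = mu, log = TRUE))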
If lower, upper, control$parscale, or control$ndeps are specified for
optim fits, they must be named vectors.
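For example (mirroring the bounded fit in the Examples below; the parscale values here are illustrative):

## lower bounds must be named
fit1 <- mle2(LL, method = "L-BFGS-B", lower = c(ymax = 0, xhalf = 0))
## likewise for parscale (and ndeps) inside the control list
fit1s <- mle2(LL, method = "L-BFGS-B", lower = c(ymax = 0, xhalf = 0),
              control = list(parscale = c(ymax = 15, xhalf = 6)))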
The requirement that data be specified when using the formula
interface is relatively new: it saves many headaches on the
programming side when evaluating the likelihood function later on
(e.g. for profiling or constructing predictions). Since data.frame
uses the names of its arguments as column names by default, it is
probably the easiest way to package objects that are lying around in
the global workspace for use in mle2 (provided they are all of the
same length).
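For example:

## package loose workspace vectors into a data frame; the columns
## are named "x" and "y" automatically
x <- 0:10
y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
d <- data.frame(x, y)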
When optimizer is set to "optimx" and multiple optimization methods
are used (i.e. the method argument has more than one element, or
all.methods=TRUE is set in the control options), the best (minimum
negative log-likelihood) solution will be saved, regardless of the
reported convergence status (and future operations such as profiling
on the fit will use only the method that found the best result).
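A sketch (assuming the optimx package is installed and loaded; the particular method vector here is illustrative):

library(optimx)
## compare several methods; the best solution is retained
fitx <- mle2(LL, optimizer = "optimx",
             method = c("BFGS", "Nelder-Mead", "nlminb"))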
Examples

x <- 0:10
y <- c(26, 17, 13, 12, 20, 5, 9, 8, 5, 4, 8)
d <- data.frame(x, y)

## in general it is best practice to use the `data' argument,
## but variables can also be drawn from the global environment
LL <- function(ymax = 15, xhalf = 6)
    -sum(stats::dpois(y, lambda = ymax/(1 + x/xhalf), log = TRUE))

## uses default parameters of LL
(fit <- mle2(LL))
fit1F <- mle2(LL, fixed = list(xhalf = 6))
coef(fit1F)
coef(fit1F, exclude.fixed = TRUE)

(fit0 <- mle2(y ~ dpois(lambda = ymean), start = list(ymean = mean(y)), data = d))
anova(fit0, fit)
summary(fit)
logLik(fit)
vcov(fit)
p1 <- profile(fit)
plot(p1, absVal = FALSE)
confint(fit)

## use bounded optimization
## the lower bounds are really > 0, but we use >= 0 to stress-test
## profiling; note lower must be named
(fit1 <- mle2(LL, method = "L-BFGS-B", lower = c(ymax = 0, xhalf = 0)))
p1 <- profile(fit1)
plot(p1, absVal = FALSE)

## a better parameterization:
LL2 <- function(lymax = log(15), lxhalf = log(6))
    -sum(stats::dpois(y, lambda = exp(lymax)/(1 + x/exp(lxhalf)), log = TRUE))
(fit2 <- mle2(LL2))
plot(profile(fit2), absVal = FALSE)
exp(confint(fit2))
vcov(fit2)
cov2cor(vcov(fit2))

mle2(y ~ dpois(lambda = exp(lymax)/(1 + x/exp(lhalf))),
     start = list(lymax = 0, lhalf = 0), data = d,
     parameters = list(lymax ~ 1, lhalf ~ 1))

## Not run:
## try bounded optimization with nlminb and constrOptim
(fit1B <- mle2(LL, optimizer = "nlminb", lower = c(lymax = 1e-7, lhalf = 1e-7)))
p1B <- profile(fit1B)
confint(p1B)
(fit1C <- mle2(LL, optimizer = "constrOptim", ui = c(lymax = 1, lhalf = 1),
               ci = 2, method = "Nelder-Mead"))

set.seed(1001)
lymax <- c(0, 2)
lhalf <- 0
x <- sort(runif(200))
g <- factor(sample(c("a", "b"), 200, replace = TRUE))
y <- rnbinom(200, mu = exp(lymax[g])/(1 + x/exp(lhalf)), size = 2)
d2 <- data.frame(x, g, y)
fit3 <- mle2(y ~ dnbinom(mu = exp(lymax)/(1 + x/exp(lhalf)), size = exp(logk)),
             parameters = list(lymax ~ g), data = d2,
             start = list(lymax = 0, lhalf = 0, logk = 0))
## End(Not run)