MM-type Estimators for Linear Regression
Computes fast MM-type estimators for linear (regression) models.
Usage:

lmrob(formula, data, subset, weights, na.action, method = "MM",
      model = TRUE, x = !control$compute.rd, y = FALSE,
      singular.ok = TRUE, contrasts = NULL, offset = NULL,
      control = NULL, init = NULL, ...)
Arguments:

formula: a symbolic description of the model to be fit. See lm and formula.

data: an optional data frame, list or environment (or object coercible by as.data.frame to a data frame) containing the variables in the model.

subset: an optional vector specifying a subset of observations to be used in the fitting process.

weights: an optional vector of weights to be used in the fitting process (in addition to the robustness weights computed in the fitting process).

na.action: a function which indicates what should happen when the data contain NAs. The default is set by the na.action setting of options.

method: string specifying the estimator-chain (MM is equivalent to SM); see Details.

model, x, y: logicals. If TRUE the corresponding components of the fit (the model frame, the model matrix, the response) are returned.

singular.ok: logical. If FALSE (the default in S but not in R) a singular fit is an error.

contrasts: an optional list. See the contrasts.arg of model.matrix.default.

offset: this can be used to specify an a priori known component to be included in the linear predictor during fitting. An offset term can be included in the formula instead or as well, and if both are specified their sum is used.

control: a list specifying control parameters; use the function lmrob.control() and see its help page.

init: an optional argument to specify or supply the initial estimate. See Details.

...: additional arguments can be used to specify control parameters directly instead of (but not in addition to!) via control.
Details:

This function computes an MM-type regression estimator as described in Yohai (1987) and Koller and Stahel (2011). By default it uses a bi-square redescending score function, and it returns a highly robust and highly efficient estimator (with 50% breakdown point and 95% asymptotic efficiency for normal errors). The computation is carried out by a call to lmrob.fit().

The argument setting of lmrob.control is provided to set alternative defaults as suggested in Koller and Stahel (2011) (setting = "KS2011"; nowadays rather use its extension setting = "KS2014"). For further details, see lmrob.control.
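For illustration, a minimal sketch (assuming the robustbase package and its bundled coleman data set; object names are illustrative) of two equivalent ways to request the KS2014 defaults, either via an explicit control object or directly through the ... arguments:

library(robustbase)
data(coleman)
ctrl <- lmrob.control(setting = "KS2014")
fit1 <- lmrob(Y ~ ., data = coleman, control = ctrl)       # explicit control object
fit2 <- lmrob(Y ~ ., data = coleman, setting = "KS2014")   # same defaults passed via '...'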
init: The initial estimator may be specified using the argument init. This can be a string, a function, or a list. A string can be used to specify built-in internal estimators (currently "S" and "M-S"; see See Also below). A function must take arguments x, y, control, mf (where mf stands for model.frame) and return a list containing at least the initial coefficients as component coefficients and the initial scale estimate as scale. Alternatively, a list can supply the initial coefficients and scale as components coefficients and scale. See also the Examples.

Note that if the init argument is a function or a list, the method argument must not contain the initial estimator, e.g., use MDM instead of SMDM.
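As a sketch (again using the bundled coleman data; the object names are illustrative only), an initial estimate supplied as a list together with a method string that omits the initial-estimator letter:

library(robustbase)
data(coleman)
## compute an S-estimate by hand and pass it on as 'init'
sInit  <- lmrob.S(model.matrix(Y ~ ., coleman), coleman$Y, lmrob.control())
fitMDM <- lmrob(Y ~ ., data = coleman, method = "MDM",
                init = list(coefficients = sInit$coefficients,
                            scale        = sInit$scale))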
The default, equivalent to init = "S", uses as initial estimator an S-estimator (Rousseeuw and Yohai, 1984), which is computed using the Fast-S algorithm of Salibian-Barrera and Yohai (2006) by calling lmrob.S(). That function, since March 2012, by default uses nonsingular subsampling, which makes the Fast-S algorithm feasible for categorical data as well; see Koller (2012). Note that convergence problems may still show up as warnings, e.g.,

  S refinements did not converge (to refine.tol=1e-07) in 200 (= k.max) steps

and can often simply be remedied by increasing (i.e., weakening) refine.tol or by increasing the allowed number of iterations k.max; see lmrob.control.
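If such warnings appear, one possible remedy (a sketch only; the values below are arbitrary illustrations, not recommendations) is to loosen the refinement tolerance or raise the iteration limit:

library(robustbase)
data(coleman)
ctl  <- lmrob.control(refine.tol = 1e-5,  # weaker than the 1e-7 default
                      k.max      = 500)   # allow more refinement steps
fitS <- lmrob(Y ~ ., data = coleman, control = ctl)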
method: The following chain of estimates is customizable via the method argument. There are currently two types of estimates available:

"M": corresponds to the standard M-regression estimate.

"D": stands for the Design Adaptive Scale estimate as proposed in Koller and Stahel (2011).

The method argument takes a string that specifies the estimates to be calculated as a chain. Setting method = "SMDM" will result in an initial S-estimate, followed by an M-estimate, a Design Adaptive Scale estimate, and a final M-step. For methods involving a D-step, the default value of psi (see lmrob.control) is changed to "lqq".
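A short sketch (coleman data again; purely illustrative) of an SMDM chain; inspecting the control component stored in the fit shows the psi function that was actually used:

library(robustbase)
data(coleman)
fitSMDM <- lmrob(Y ~ ., data = coleman, method = "SMDM")
fitSMDM$control$psi   # "lqq" for chains containing a D-step, unless set explicitly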
By default, standard errors are computed using the formulas of Croux, Dhaene and Hoorelbeke (2003) (lmrob.control option cov = ".vcov.avar1"). This method, however, works only for MM-estimates (i.e., method = "MM" or method = "SM"). For other method arguments, the covariance matrix estimate used is based on the asymptotic normality of the estimated coefficients (cov = ".vcov.w"), as described in Koller and Stahel (2011). The variance-covariance computation can be skipped with cov = "none" and (re)done later by, e.g., vcov(<obj>, cov = ".vcov.w").
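A sketch of skipping the covariance computation at fit time and recomputing it afterwards (coleman data; illustrative object names):

library(robustbase)
data(coleman)
fit <- lmrob(Y ~ ., data = coleman, cov = "none")  # no var-cov matrix computed at fit time
V   <- vcov(fit, cov = ".vcov.w")                  # Wald-type estimate, computed later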
As of robustbase version 0.91-0 (April 2014), the computation of robust standard errors for method = "SMDM" has been changed. The old behaviour can be restored by setting the control parameter cov.corrfact = "tauold".
Value:

An object of class lmrob; a list including the following components:
coefficients: The estimate of the coefficient vector.

scale: The scale as used in the M estimator.

residuals: Residuals associated with the estimator.

converged: TRUE if the IRWLS iterations have converged.

iter: number of IRWLS iterations.

rweights: the “robustness weights” ψ(r_i/S) / (r_i/S).

fitted.values: Fitted values associated with the estimator.

rank: the numeric rank of the fitted linear model.

cov: The estimated covariance matrix of the regression coefficients.

df.residual: the residual degrees of freedom.

weights: the specified weights (missing if none were used).

na.action: (where relevant) information returned by model.frame on the special handling of NAs.

offset: the offset used (missing if none were used).

contrasts: (only where relevant) the contrasts used.

xlevels: (only where relevant) a record of the levels of the factors used in fitting.

call: the matched call.

terms: the terms object used.

model: if requested (the default), the model frame used.

x: if requested, the model matrix used.

y: if requested, the response used.
In addition, non-null fits will have components assign and qr relating to the linear fit, for use by extractor functions such as summary.
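For instance, a small illustrative sketch (coleman data, arbitrary object name) applying the usual extractors to a fitted "lmrob" object:

library(robustbase)
data(coleman)
m <- lmrob(Y ~ ., data = coleman)
coef(m)                                # coefficient vector
vcov(m)                                # estimated covariance matrix
head(weights(m, type = "robustness"))  # robustness weights
summary(m)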
Author(s): (mainly) Matias Salibian-Barrera and Manuel Koller
References:

Croux, C., Dhaene, G. and Hoorelbeke, D. (2003) Robust standard errors for robust estimators, Discussion Papers Series 03.16, K.U. Leuven, CES.

Koller, M. (2012) Nonsingular subsampling for S-estimators with categorical predictors, ArXiv e-prints, https://arxiv.org/abs/1208.5595; extended version published as Koller and Stahel (2017), see lmrob.control.

Koller, M. and Stahel, W. A. (2011) Sharpening Wald-type inference in robust regression for small samples. Computational Statistics & Data Analysis 55(8), 2504–2515.

Maronna, R. A. and Yohai, V. J. (2000) Robust regression with both continuous and categorical predictors. Journal of Statistical Planning and Inference 89, 197–214.

Rousseeuw, P. J. and Yohai, V. J. (1984) Robust regression by means of S-estimators. In Robust and Nonlinear Time Series, J. Franke, W. Härdle and R. D. Martin (eds.), Lecture Notes in Statistics 26, 256–272, Springer Verlag, New York.

Salibian-Barrera, M. and Yohai, V. J. (2006) A fast algorithm for S-regression estimates. Journal of Computational and Graphical Statistics 15(2), 414–427.

Yohai, V. J. (1987) High breakdown-point and high efficiency estimates for regression. The Annals of Statistics 15, 642–656.

Yohai, V., Stahel, W. A. and Zamar, R. (1991) A procedure for robust estimation and inference in linear regression. In Stahel and Weisberg (eds), Directions in Robust Statistics and Diagnostics, Part II, Springer, New York, 365–374. doi:10.1007/978-1-4612-4444-8_20.
See Also:

lmrob.control; for the algorithms lmrob.S, lmrob.M.S and lmrob.fit; and for methods, summary.lmrob (for the extra “statistics”, notably R^2 (“R squared”)), predict.lmrob, print.lmrob, plot.lmrob, and weights.lmrob.
Examples:

data(coleman)
set.seed(0)
## Default for a very long time:
summary( m1 <- lmrob(Y ~ ., data=coleman) )

## Nowadays **strongly recommended** for routine use:
summary(m2 <- lmrob(Y ~ ., data=coleman, setting = "KS2014") )
## ------------------

plot(residuals(m2) ~ weights(m2, type="robustness")) ##-> weights.lmrob()
abline(h=0, lty=3)

data(starsCYG, package = "robustbase")
## Plot simple data and fitted lines
plot(starsCYG)
lmST <- lm(log.light ~ log.Te, data = starsCYG)
(RlmST <- lmrob(log.light ~ log.Te, data = starsCYG))
abline(lmST, col = "red")
abline(RlmST, col = "blue")
## --> Least Sq.: negative slope ;  robust: slope ~= 2.2   % checked in ../tests/lmrob-data.R
summary(RlmST)  # -> 4 outliers; rest perfect
vcov(RlmST)
stopifnot(all.equal(fitted(RlmST),
                    predict(RlmST, newdata = starsCYG), tol = 1e-14))
## FIXME: setting = "KS2011"  or  setting = "KS2014"  **FAIL** here

##--- 'init' argument -----------------------------------
## 1) string
set.seed(0)
m3 <- lmrob(Y ~ ., data=coleman, init = "S")
stopifnot(all.equal(m1[-18], m3[-18]))
## 2) function
initFun <- function(x, y, control, ...) { # no 'mf' needed
    init.S <- lmrob.S(x, y, control)
    list(coefficients = init.S$coef, scale = init.S$scale)
}
set.seed(0)
m4 <- lmrob(Y ~ ., data=coleman, method = "M", init = initFun)
## 3) list
m5 <- lmrob(Y ~ ., data=coleman, method = "M",
            init = list(coefficients = m3$init$coef, scale = m3$scale))
stopifnot(all.equal(m4[-17], m5[-17]))