Leverage extractor for HLfit objects
Description:

This gets “leverages” or “hat values” from an object. However, there is hidden complexity in what this may mean, so care must be taken in selecting the proper arguments for a given use (see Details). To get the full hat matrix, see get_matrix(., which="hat_matrix").
Usage:

## S3 method for class 'HLfit'
hatvalues(model, type = "projection", which = "resid", force = FALSE, ...)
Arguments:

model
An object of class HLfit, as returned by the fitting functions in spaMM.

type
Character: "projection" for the diagonal elements of the projection (“hat”) matrix, or "std" for the standardizing leverages (see Details).

which
Character: "resid" for leverages of the observations, "ranef" for leverages of the random effects, or "both" for both.

force
Boolean: forces recomputation of the leverages even if they are available in the object, for checking purposes.

...
For consistency with the generic.
Details:

Leverages may have distinct meanings depending on context. The textbook version for linear models is that the leverages q_i are the diagonal elements of a projection matrix (the “hat matrix”), and that they may be used to standardize (“studentize”) residuals as follows: if the residual variance φ is known, the variance of each fitted residual \hat{e}_i is φ(1-q_i), so the standardized residuals \hat{e}_i/√(φ(1-q_i)) all have variance 1. This standardization of variance no longer holds exactly with estimated φ, but if an unbiased (REML) estimator of φ is used, the studentized residuals may still have practically unit expected variance.
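The textbook standardization can be checked in base R on a plain linear model. This sketch (using the built-in cars dataset, not spaMM) applies the formula with the unbiased residual-variance estimate, and simply reproduces base R's rstandard():

```r
## Sketch of textbook studentization for a plain linear model (base R only).
fit <- lm(dist ~ speed, data = cars)
q   <- hatvalues(fit)                # q_i: diagonal of the hat matrix
phi <- summary(fit)$sigma^2          # unbiased (REML-type) estimate of phi
std <- residuals(fit) / sqrt(phi * (1 - q))
## agrees with base R's internally standardized residuals:
all.equal(unname(std), unname(rstandard(fit)))  # TRUE
```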
The need to distinguish “standardizing” from “projection” leverages arises from generalizations of this standardization to other contexts. Indeed, when a simple linear model is fitted by ML, the variance of the fitted residuals is less than φ, but \hat{φ} is biased downward, so residuals standardized only by √(\hat{φ}), without any leverage correction, have expected variance closer to unity than if corrected by the above leverages. This suggests another definition of leverages, under which they are zero in this case, contrary to those derived from the projection matrix.
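Both facts in this paragraph can be seen in a minimal base-R sketch (again the cars dataset, no mixed effects): the ML variance estimate is smaller than the unbiased one, and residuals standardized by the ML estimate alone already have unit mean square, so no leverage correction is needed here:

```r
fit <- lm(dist ~ speed, data = cars)
e <- residuals(fit)
n <- length(e); p <- length(coef(fit))
phi_ML   <- sum(e^2) / n        # ML estimate: biased downward
phi_REML <- sum(e^2) / (n - p)  # unbiased (REML) estimate
phi_ML < phi_REML               # TRUE
mean(e^2) / phi_ML              # exactly 1: unit mean square without leverage correction
```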
Leverages also appear in expressions for the derivatives, with respect to the dispersion parameters, of the logdet(Hessian) term of Laplace approximations for marginal or restricted likelihood (Lee et al. 2006). This provides a basis for generalizing the concept of standardizing leverages to ML and REML fits of mixed-effect models. In particular, in an ML fit, one considers leverages q*_i that are no longer the diagonal elements of the projection matrix for the mixed model (these q*_i are zero in the simple linear model). The generalized standardizing leverages may include corrections for non-Gaussian response, for non-Gaussian random effects, and for taking into account the variation of the GLM weights in the logdet(Hessian) derivatives. Which corrections are included depends on the precise method used to fit the model (e.g., EQL vs PQL vs REML). Standardizing leverages are also defined for the random effects.
These distinctions suggest breaking the usual synonymy between “leverages” and “hat values”: “hat values” better stands for the diagonal elements of a projection matrix, while “leverages” better stands for the standardizing values.
hatvalues(., type="std") returns the standardizing leverages. By contrast, hatvalues(., type="projection") always returns hat values from the fitted projection matrix. Note that these values typically differ between ML and REML fits because the fitted projection matrix differs between them.
Value:

A list with separate components resid (leverages of the observations) and ranef (leverages of the random effects) if which="both", and a vector otherwise.
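Assuming spaMM is installed, the two return shapes can be inspected on a small mixed-model fit; this sketch uses the wafers dataset shipped with spaMM (a Gaussian fit chosen here only for illustration):

```r
library(spaMM)
data("wafers")                                   # example data bundled with spaMM
fit <- fitme(y ~ X1 + (1|batch), data = wafers)  # simple mixed model (illustrative)
lev <- hatvalues(fit, which = "both")
str(lev)                # a list with components $resid and $ranef
length(hatvalues(fit))  # a plain numeric vector for the default which="resid"
```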
References:

Lee, Y., Nelder, J. A. and Pawitan, Y. (2006) Generalized linear models with random effects: unified analysis via h-likelihood. Chapman & Hall: London.
Examples:

if (spaMM.getOption("example_maxtime")>0.8) {
  data("Orthodont", package = "nlme")
  rnge <- (107:108)
  # all different:
  # hatvalues(rlfit <- fitme(distance ~ age+(age|Subject), data = Orthodont, method="REML"))[rnge]
  hatvalues(mlfit <- fitme(distance ~ age+(age|Subject), data = Orthodont))[rnge]
  hatvalues(mlfit, type="std")[rnge]
}