summary.rq

Summary methods for Quantile Regression


Description

Returns a summary list for a quantile regression fit. A null value will be returned if printing is invoked.

Usage

## S3 method for class 'rq'
summary(object, se = NULL, covariance = FALSE, hs = TRUE, U = NULL, gamma = 0.7, ...)
## S3 method for class 'rqs'
summary(object, ...)

Arguments

object

This is an object of class "rq" or "rqs" produced by a call to rq(), depending on whether one or more taus are specified.
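
As a brief illustrative sketch (not part of the original help text), a single tau yields an object of class "rq" while a vector of taus yields an object of class "rqs"; the model below uses the stackloss data also used in the Examples:

library(quantreg)
data(stackloss)
fit1 <- rq(stack.loss ~ stack.x, tau = 0.5)                  # class "rq"
fit2 <- rq(stack.loss ~ stack.x, tau = c(0.25, 0.5, 0.75))   # class "rqs"
summary(fit1)   # dispatches to summary.rq
summary(fit2)   # dispatches to summary.rqs, one summary per tau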

se

specifies the method used to compute standard errors. There are currently eight available methods (see the sketch following this list):

  1. "rank" which produces confidence intervals for the estimated parameters by inverting a rank test as described in Koenker (1994). This method involves solving a parametric linear programming problem, and for large sample sizes can be extremely slow, so by default it is only used when the sample size is less than 1000, see below. The default option assumes that the errors are iid, while the option iid = FALSE implements a proposal of Koenker Machado (1999). See the documentation for rq.fit.br for additional arguments.

  2. "iid" which presumes that the errors are iid and computes an estimate of the asymptotic covariance matrix as in KB(1978).

  3. "nid" which presumes local (in tau) linearity (in x) of the the conditional quantile functions and computes a Huber sandwich estimate using a local estimate of the sparsity. If the initial fitting was done with method "sfn" then use of se = "nid" is recommended. However, if the cluster option is also desired then se = "boot" can be used and bootstrapping will also employ the "sfn" method.

  4. "ker" which uses a kernel estimate of the sandwich as proposed by Powell(1991).

  5. "boot" which implements one of several possible bootstrapping alternatives for estimating standard errors including a variate of the wild bootstrap for clustered response. See boot.rq for further details.

  6. "BLB" which implements the bag of little bootstraps method proposed in Kleiner, et al (2014). The sample size of the little bootstraps is controlled by the parameter gamma, see below. At present only bsmethod = "xy" is sanction, and even that is experimental. This option is intended for applications with very large n where other flavors of the bootstrap can be slow.

  7. "conquer" which is invoked automatically if the fitted object was created with method = "conquer", and returns the multiplier bootstrap percentile confidence intervals described in He et al (2020).

  8. "extreme" which uses the subsampling method of Chernozhukov Fernandez-Val, and Kaji (2018) designed for inference on extreme quantiles.

If se = NULL (the default) and covariance = FALSE, and the sample size is less than 1001, then the "rank" method is used, otherwise the "nid" method is used.
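
The following sketch (an illustration added here, not part of the original help text) shows how the se method can be chosen explicitly; the stackloss fit is assumed:

fit <- rq(stack.loss ~ stack.x, tau = 0.5)
summary(fit)                 # n < 1001 and covariance = FALSE, so "rank" intervals
summary(fit, se = "nid")     # Huber sandwich with a local sparsity estimate
summary(fit, se = "boot")    # bootstrap standard errors, see boot.rq for bsmethod choices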

covariance

logical flag to indicate whether the full covariance matrix of the estimated parameters should be returned.

hs

Use the Hall-Sheather bandwidth rule for sparsity estimation; if FALSE, revert to the Bofinger bandwidth.

U

Resampling indices or gradient evaluations used for bootstrap, see boot.rq.

gamma

parameter controlling the effective sample size of the 'bag of little bootstraps' samples, which will be b = n^gamma, where n is the sample size of the original model.
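
As a hedged sketch of the "BLB" option (the simulated data and the choice gamma = 0.7 are purely illustrative):

n <- 50000
xbig <- rnorm(n)
ybig <- 1 + xbig + rt(n, df = 3)
bigfit <- rq(ybig ~ xbig, tau = 0.5, method = "fn")
summary(bigfit, se = "BLB", gamma = 0.7)   # little bootstrap samples of size b = n^0.7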

...

Optional arguments to summary, e.g. bsmethod to select a bootstrapping method; see boot.rq. When using the "rank" method for confidence intervals, which is the default method for sample sizes less than 1000, the type I error probability of the intervals can be controlled with the alpha parameter passed via "...", thereby controlling the width of the intervals plotted by plot.summary.rqs. Similarly, the arguments alpha, mofn and kex can be passed when invoking the "extreme" option for se to control the percentile interval reported, given by the estimated quantiles [alpha/2, 1 - alpha/2]; kex is a tuning parameter for the extreme value confidence interval construction, and mofn controls the size of the bootstrap subsamples. Default values for kex, mofn and alpha are 20, floor(n/5) and 0.1, respectively.
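
A small sketch (not part of the original help text) of passing such optional arguments through "...":

fit <- rq(stack.loss ~ stack.x, tau = 0.5)
summary(fit, alpha = 0.05)                    # 95% rank-inversion confidence intervals
summary(fit, se = "boot", bsmethod = "pwy")   # choose a particular bootstrap flavor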

Details

When the default summary method is used, it tries to estimate a sandwich form of the asymptotic covariance matrix, and this involves estimating the conditional density at each of the sample observations. Negative estimates can occur if there is crossing of the neighboring quantile surfaces used to compute the difference quotient estimate. A warning message is issued when such negative estimates exist, indicating the number of occurrences; if this number constitutes a large proportion of the sample size, then it would be prudent to consider an alternative inference method like the bootstrap. A large number of such occurrences relative to the sample size is sometimes also an indication that some additional nonlinearity in the covariates would be helpful, for instance quadratic effects. Note that the default se method is "rank" unless the sample size exceeds 1000, in which case the "nid" method is used; there are several options for alternative resampling methods. When summary.rqs is invoked, that is, when summary is called for an rqs object consisting of several taus, the B components of the returned objects can be used to construct a joint covariance matrix for the full object, as in the sketch below.
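
As a sketch of the last point (illustrative only, assuming each per-tau summary returned with se = "boot" carries the B component described under Value below):

fits <- rq(stack.loss ~ stack.x, tau = c(0.25, 0.5, 0.75))
sfits <- summary(fits, se = "boot")
# stack the bootstrap realizations for all taus column-wise;
# their sample covariance gives a joint covariance estimate for the full object
Ball <- do.call(cbind, lapply(sfits, function(s) s$B))
Vjoint <- cov(Ball)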

Value

a list is returned with the following components; when object is of class "rqs", a list of such lists is returned.

coefficients

a p by 4 matrix consisting of the coefficients, their estimated standard errors, their t-statistics, and their associated p-values, for most "se" methods. For methods "rank" and "extreme", potentially asymmetric confidence intervals are returned in lieu of standard errors and p-values.

cov

the estimated covariance matrix for the coefficients in the model, provided that covariance = TRUE is specified in the calling sequence. This option is not available when se = "rank".

Hinv

inverse of the estimated Hessian matrix, returned if covariance = TRUE and se %in% c("nid", "ker"). Note that for se = "boot" there is no way to split the estimated covariance matrix into its sandwich constituent parts.

J

Unscaled outer product of gradient matrix, returned if covariance = TRUE and se != "iid". The Huber sandwich is cov = tau (1 - tau) Hinv %*% J %*% Hinv. As for the Hinv component, there is no J component when se == "boot". (Note that to make the Huber sandwich you need to add the tau (1 - tau) mayonnaise yourself.)
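
The following sketch (added here for illustration) reassembles the sandwich from these components and compares it with the returned cov; it assumes a fit summarized with se = "nid" and covariance = TRUE:

fit <- rq(stack.loss ~ stack.x, tau = 0.25)
s <- summary(fit, se = "nid", covariance = TRUE)
tau <- 0.25
V <- tau * (1 - tau) * s$Hinv %*% s$J %*% s$Hinv   # add the mayonnaise by hand
all.equal(unname(V), unname(s$cov))                # should agree up to numerical tolerance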

B

Matrix of bootstrap realizations.

U

Matrix of bootstrap randomization draws.

References

Chernozhukov, Victor, Ivan Fernandez-Val, and Tetsuya Kaji (2018) Extremal Quantile Regression, in Handbook of Quantile Regression, Eds. Roger Koenker, Victor Chernozhukov, Xuming He, and Limin Peng, CRC Press.

Koenker, R. (2004) Quantile Regression.

Bilias, Y., Chen, S. and Ying, Z., Simple resampling methods for censored quantile regression, J. of Econometrics, 99, 373-386.

Kleiner, A., Talwalkar, A., Sarkar, P. and Jordan, M.I. (2014) A Scalable bootstrap for massive data, JRSS(B), 76, 795-816.

Powell, J. (1991) Estimation of Monotonic Regression Models under Quantile Restrictions, in Nonparametric and Semiparametric Methods in Econometrics, W. Barnett, J. Powell, and G. Tauchen (eds.), Cambridge U. Press.

See Also

rq, boot.rq

Examples

data(stackloss)
y <- stack.loss
x <- stack.x
summary(rq(y ~ x, method = "fn"))  # compute se's for the fit using the "nid" method
summary(rq(y ~ x, ci = FALSE), se = "ker")
# use the default "br" algorithm, and compute kernel method se's

