
rq.fit.pfnb

Quantile Regression Fitting via Interior Point Methods


Description

This is a lower level routine called by rq() to compute quantile regression parameters using the Frisch-Newton algorithm. It uses a form of preprocessing to accelerate the computations for situations in which several taus are required for the same model specification.

Usage

rq.fit.pfnb(x, y, tau, m0 = NULL, eps = 1e-06)

Arguments

x

The design matrix

y

The response vector

tau

The quantiles of interest; they must lie in (0,1), be sorted, and preferably be equally spaced.

m0

An initial reduced sample size; by default it is set to round((n * (log(p) + 1))^(2/3)). This choice could be explored further to aid performance in extreme cases (see the short illustration after this list).

eps

A tolerance parameter intended to bound the confidence band entries away from zero.
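The default value of m0 can be computed directly from the formula quoted above. The following is a minimal sketch; the values of n and p are illustrative assumptions, not defaults of the package.

n <- 100000                            # number of observations (assumed for illustration)
p <- 10                                # number of columns of the design matrix (assumed)
m0 <- round((n * (log(p) + 1))^(2/3))  # default initial reduced sample size
m0                                     # about 4780 for these values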

Details

The details of the Frisch-Newton algorithm are explained in Koenker and Portnoy (1997), as is the preprocessing idea, which is related to partial sorting and to algorithms such as kuantile for univariate quantiles that operate in O(n) time. The preprocessing idea of exploiting nearby quantile solutions to accelerate estimation of adjacent quantiles is proposed in Chernozhukov et al. (2020). This version calls a Fortran version of the preprocessing algorithm that accepts multiple taus. The preprocessing approach is also implemented for a single tau in rq.fit.pfn, which may be regarded as a prototype for this function since it is written entirely in R and is therefore easier to experiment with.
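A minimal usage sketch follows; the simulated data, sample size, and coefficient values are illustrative assumptions, not part of the package documentation.

library(quantreg)

set.seed(1)
n <- 5000
p <- 4
x <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))  # design matrix with an intercept column
y <- drop(x %*% c(1, 2, -1, 0.5)) + rt(n, df = 3)    # response with heavy-tailed errors
taus <- seq(0.1, 0.9, by = 0.1)                      # sorted, equally spaced quantiles

fit <- rq.fit.pfnb(x, y, tau = taus)
dim(fit$coefficients)                                # ncol(x) by length(taus)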

Value

Returns a list with the following elements:

  1. coefficients: a matrix of dimension ncol(x) by length(taus)

  2. nit: a 5 by m matrix of iteration counts: the first two coordinates of each column give the number of interior point iterations, the third gives the number of observations in the final globbed sample, and the last two give the number of fixups and bad-fixups, respectively. This is intended to aid fine tuning of the initial sample size, m0.

  3. info: an m-vector of convergence flags
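As a sketch of how these elements might be inspected, continuing the illustrative fit from the Details section (the reading of a zero flag as successful convergence is an assumption, not stated in this page):

fit$nit    # 5 x length(taus) matrix of iteration counts, useful for tuning m0
fit$info   # one convergence flag per tau (0 presumably indicates success)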

References

Koenker, R. and S. Portnoy (1997). The Gaussian Hare and the Laplacian Tortoise: Computability of squared-error vs. absolute-error estimators, with discussion, Statistical Science, 12, 279-300.

Chernozhukov, V., I. Fernandez-Val, and B. Melly (2020), 'Fast algorithms for the quantile regression process', Empirical Economics, forthcoming.

See Also

rq.fit.pfn, rq
