Overview of rms Package
rms is the package that goes along with the book Regression Modeling Strategies. rms does regression modeling, testing, estimation, validation, graphics, prediction, and typesetting by storing enhanced model design attributes in the fit. rms is a re-written version of the Design package that has improved graphics and duplicates very little code in the survival package.
The package is a collection of about 180 functions that assist and streamline modeling, especially for biostatistical and epidemiologic applications. It also contains functions for binary and ordinal logistic regression models and the Buckley-James multiple regression model for right-censored responses, and implements penalized maximum likelihood estimation for logistic and ordinary linear models. rms works with almost any regression model, but it was especially written to work with logistic regression, Cox regression, accelerated failure time models, ordinary linear models, the Buckley-James model, generalized least squares for longitudinal data (using the nlme package), generalized linear models, and quantile regression (using the quantreg package). rms requires the Hmisc package to be installed. Note that Hmisc has several functions useful for data analysis (especially data reduction and imputation).
Older references below pertaining to the Design package are relevant to rms.
To make use of automatic typesetting features you must have LaTeX or one of its variants installed. Some aspects of rms (e.g., latex) will not work correctly if options(contrasts=) other than c("contr.treatment", "contr.poly") are used.
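These are the default contrasts in R; if they have been changed in your session, they can be set back before fitting:

options(contrasts = c("contr.treatment", "contr.poly"))
getOption("contrasts")   # verify the current setting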
rms relies on a wealth of survival analysis functions written by Terry Therneau of Mayo Clinic. Front-ends have been written for several of Therneau's functions, and other functions have been slightly modified. Capabilities of rms include:
Ordinary linear regression models
Binary and ordinal logistic models (proportional odds and continuation ratio models; probit, log-log, and complementary log-log links), including ordinal cumulative probability models for continuous Y that efficiently handle thousands of distinct Y values using full likelihood methods
Bayesian binary and ordinal regression models, partial proportional odds model, and random effects
Cox model
Parametric survival models in the accelerated failure time class
Buckley-James least-squares linear regression model with possibly right-censored responses
Generalized linear model
Quantile regression
Generalized least squares
Bootstrap model validation to obtain unbiased estimates of model performance without requiring a separate validation sample
Automatic Wald tests of all effects in the model that are not parameterization-dependent (e.g., tests of nonlinearity of main effects when the variable does not interact with other variables, tests of nonlinearity of interaction effects, tests for whether a predictor is important, either as a main effect or as an effect modifier)
Graphical depictions of model estimates (effect plots, odds/hazard ratio plots, nomograms that allow model predictions to be obtained manually even when there are nonlinear effects and interactions in the model)
Various smoothed residual plots, including some new residual plots for verifying ordinal logistic model assumptions
Composing S functions to evaluate the linear predictor (X*beta hat), hazard function, survival function, quantile functions analytically from the fitted model
Typesetting of fitted model using LaTeX
Robust covariance matrix estimation (Huber or bootstrap)
Cubic regression splines with linear tail restrictions (natural splines)
Tensor splines
Interactions restricted to not be doubly nonlinear
Penalized maximum likelihood estimation for ordinary linear regression and logistic regression models. Different parts of the model may be penalized by different amounts, e.g., you may want to penalize interaction or nonlinear effects more than main effects or linear effects (a brief sketch appears after this list)
Estimation of hazard or odds ratios in presence of nonlinearity and interaction
Sensitivity analysis for an unmeasured binary confounder in a binary logistic model
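As a brief illustration of the penalization capability, here is a minimal sketch using hypothetical variables y, age, and sex (not code from the package's examples):

f <- lrm(y ~ rcs(age,4) + sex, x=TRUE, y=TRUE)  # x, y needed for pentrace
p <- pentrace(f, c(0, .5, 1, 2, 4, 8))          # search a grid of penalty factors
f.pen <- update(f, penalty=p$penalty)           # refit at the best penalty found
effective.df(f.pen)                             # effective d.f. by type of term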
rms was motivated by the following needs:
need to automatically print interesting Wald tests that can be constructed from the design
tests of linearity with respect to each predictor
tests of linearity of interactions
pooled interaction tests (e.g., all interactions involving race)
pooled tests of effects with higher order effects
a test of a main effect alone is not meaningful when that effect is involved in an interaction
pooled test of main effect + interaction effect is meaningful
test of 2nd-order interaction + any 3rd-order interaction containing those factors is meaningful
need to store transformation parameters with the fit
example: knot locations for spline functions
these are "remembered" when getting predictions, unlike standard S or R
for categorical predictors, save levels so that same dummy variables will be generated for predictions; check that all levels in out-of-data predictions were present when model was fitted
need for uniform re-insertion of observations deleted because of NAs when using predict without newdata or when using resid
need to easily plot the regression effect of any predictor
example: age is represented by a linear spline with knots at 40 and 60y; plot the effect of age on the log odds of disease, adjusting interacting factors to easily specified constants
vary 2 predictors: plot x1 on x-axis, separate curves for discrete x2 or 3d perspective plot for continuous x2
if a predictor is represented as a function in the model, plots should be with respect to the original variable:
f <- lrm(y ~ log(cholesterol) + age)
plot(Predict(f, cholesterol)) # cholesterol on x-axis, default range
ggplot(Predict(f, cholesterol)) # same using ggplot2
plotp(Predict(f, cholesterol)) # same directly using plotly
need to store summary of distribution of predictors with the fit
plotting limits (default: 10th smallest, 10th largest values or %-tiles)
effect limits (default: .25 and .75 quantiles for continuous vars.)
adjustment values for other predictors (default: median for continuous predictors, most frequent level for categorical ones)
discrete numeric predictors: list of possible values; example: if x=0,1,2,3,5 then by default don't plot a prediction at x=4
values are on the inner-most variable, e.g. cholesterol, not log(chol.)
allows estimation/plotting long after original dataset has been deleted
for Cox models, underlying survival also stored with fit, so original data not needed to obtain predicted survival curves
need to automatically print estimates of effects in presence of nonlinearity and interaction
example: age is quadratic, interacting with sex; the default effect is the inter-quartile-range hazard ratio (for a Cox model), for sex=reference level
user-controlled effects: summary(fit, age=c(30,50), sex="female") -> odds ratios for a logistic model, relative survival time for accelerated failure time survival models
effects for all variables (e.g. odds ratios) may be plotted with multiple-confidence-level bars
need for prettier and more concise effect names in printouts, especially for expanded nonlinear terms and interaction terms
use inner-most variable name to identify predictors
e.g., for pmin(x^2-3,10) refer to the factor with the legal S name x
need to recognize that an intercept is not always a simple concept
some models (e.g., Cox) have no intercept
some models (e.g., ordinal logistic) have multiple intercepts
need for automatic high-quality printing of fitted mathematical model (with dummy variables defined, regression spline terms simplified, interactions "factored"). Focus is on regression splines instead of nonparametric smoothers or smoothing splines, so that explicit formulas for fit may be obtained for use outside S. rms can also compose S functions to evaluate X*Beta from the fitted model analytically, as well as compose SAS code to do this.
need for automatic drawing of nomogram to represent the fitted model
need for automatic bootstrap validation of a fitted model, with only one S command (with respect to calibration and discrimination)
need for robust (Huber sandwich) estimator of covariance matrix, and be able to do all other analysis (e.g., plots, C.L.) using the adjusted covariances
need for robust (bootstrap) estimator of covariance matrix, easily used in other analyses without change
need for Huber sandwich and bootstrap covariance matrices adjusted for cluster sampling
need for routine reporting of how many observations were deleted
by missing values on each predictor (see na.delete
in Hmisc)
need for optional reporting of descriptive statistics for Y stratified by missing status of each X (see na.detail.response)
need for pretty, annotated survival curves, using the same commands for parametric and Cox models
need for ordinal logistic model (proportional odds model, continuation ratio model)
need for estimating and testing general contrasts without having to be conscious of variable coding or parameter order
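For example, with the contrast function a female minus male difference at age 50 can be estimated without knowing how sex was coded (a sketch assuming a hypothetical fit on age and sex):

f <- lrm(y ~ rcs(age,4) * sex)
contrast(f, list(sex='female', age=50),
            list(sex='male',   age=50))   # difference in log odds, C.L., Wald test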
rms will work with a wide variety of fitting functions, but it is meant especially for the following:
| Function | Purpose | Related S Functions |
|----------|---------|---------------------|
| ols | Ordinary least squares linear model | lm |
| lrm | Binary and ordinal logistic regression model | glm, cr.setup |
| orm | Ordinal regression model | lrm |
| blrm | Bayesian binary and ordinal regression | |
| psm | Accelerated failure time parametric survival model | survreg |
| cph | Cox proportional hazards regression | coxph |
| npsurv | Nonparametric survival estimates | survfit.formula |
| bj | Buckley-James censored least squares linear model | survreg |
| Glm | Version of glm for use with rms | glm |
| Gls | Version of gls for use with rms | gls |
| Rq | Version of rq for use with rms | rq |
The following generic functions work with fits with rms in effect:
| Function | Purpose | Related Functions |
|----------|---------|--------------------|
| print | Print parameters and statistics of fit | |
| coef | Fitted regression coefficients | |
| formula | Formula used in the fit | |
| specs | Detailed specifications of fit | |
| robcov | Robust covariance matrix estimates | |
| bootcov | Bootstrap covariance matrix estimates | |
| summary | Summary of effects of predictors | |
| plot.summary | Plot continuously shaded confidence bars for results of summary | |
| anova | Wald tests of most meaningful hypotheses | |
| contrast | General contrasts, C.L., tests | |
| plot.anova | Depict results of anova graphically | dotchart |
| Predict | Partial predictor effects | predict |
| plot.Predict | Plot predictor effects using lattice graphics | predict |
| ggplot | Similar to above but using ggplot2 | |
| plotp | Similar to above but using plotly | |
| bplot | 3-D plot of effects of varying two continuous predictors | image, persp, contour |
| gendata | Generate data frame with predictor combinations (optionally interactively) | expand.grid |
| predict | Obtain predicted values or design matrix | |
| fastbw | Fast backward step-down variable selection | step |
| residuals (or resid) | Residuals, influence statistics from fit | |
| which.influence | Which observations are overly influential | residuals |
| sensuc | Sensitivity of one binary predictor in lrm and cph models to an unmeasured binary confounder | |
| latex | LaTeX representation of fitted model or anova or summary table | |
| Function | S function analytic representation of a fitted regression model (X*Beta) | Function.transcan |
| hazard | S function analytic representation of a fitted hazard function (for psm) | rcspline.restate |
| Survival | S function analytic representation of fitted survival function (for psm, cph) | |
| Quantile | S function analytic representation of fitted function for quantiles of survival time (for psm, cph) | |
| nomogram | Draws a nomogram for the fitted model | latex, plot, ggplot, plotp |
| survest | Estimate survival probabilities (for psm, cph) | survfit |
| survplot | Plot survival curves (psm, cph, npsurv) | plot.survfit |
| validate | Validate indexes of model fit using resampling | val.prob |
| calibrate | Estimate calibration curve for model using resampling | |
| vif | Variance inflation factors for a fit | |
| naresid | Bring elements corresponding to missing data back into predictions and residuals | |
| naprint | Print summary of missing values | |
| pentrace | Find optimum penalty for penalized MLE | |
| effective.df | Print effective d.f. for each type of variable in model, for penalized fit or pentrace result | |
| rm.impute | Impute repeated measures data with non-random dropout (experimental, non-functional) | transcan, fit.mult.impute |
The following programs demonstrate how the pieces of the rms package work together. A (usually) one-time call to the function datadist requires a pass at the entire data frame to store distribution summaries for potential predictor variables. These summaries contain (by default) the .25 and .75 quantiles of continuous variables (for estimating effects such as odds ratios), the 10th smallest and 10th largest values (or .1 and .9 quantiles for small n) for plotting ranges for estimated curves, and the total range. For discrete numeric variables (those having <= 10 unique values), the list of unique values is also stored. Such summaries are used by the summary.rms, Predict, and nomogram.rms functions. You may save time and defer running datadist. In that case, the distribution summary is not stored with the fit object, but it can be gathered before running summary, plot, ggplot, or plotp.
d <- datadist(my.data.frame)   # or datadist(x1,x2)
options(datadist="d")          # omit this, or use options(datadist=NULL),
                               # if you have not run datadist yet
f <- ols(y ~ x1 * x2)
anova(f)
fastbw(f)
Predict(f, x2)
predict(f, newdata)
In the Examples section there are three detailed examples using a fitting function designed to be used with rms, lrm (logistic regression model). In Detailed Example 1 we create 3 predictor variables and two binary responses on 500 subjects. For the first binary response, dz, the true model involves only sex and age, and there is a nonlinear interaction between the two because the log odds is a truncated linear relationship in age for females and a quadratic function for males. For the second binary outcome, dz.bp, the true population model also involves systolic blood pressure (sys.bp) through a truncated linear relationship. First, nonparametric estimation of relationships is done using the Hmisc package's plsmo function, which uses lowess with outlier detection turned off for binary responses. Then parametric modeling is done using restricted cubic splines. This modeling does not assume that we know the true transformations for age or sys.bp, but only that these transformations are smooth (which is not actually the case in the population).
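A minimal sketch in the spirit of Detailed Example 1 follows (this is not the package's own example code; the sample size matches the description but the coefficients are invented):

require(rms)
set.seed(1)
n   <- 500
age <- rnorm(n, 50, 10)
sex <- factor(sample(c('female','male'), n, TRUE))
# hypothetical true log odds: truncated linear in age for females,
# quadratic in age for males
L   <- ifelse(sex=='female', .06*(pmin(age, 60) - 50), .002*(age - 50)^2 - .5)
dz  <- ifelse(runif(n) <= plogis(L), 1, 0)
plsmo(age, dz, group=sex, datadensity=TRUE)   # nonparametric (lowess) estimates
dd  <- datadist(age, sex); options(datadist='dd')
f   <- lrm(dz ~ rcs(age,4) * sex)             # smooth parametric fit with splines
ggplot(Predict(f, age, sex))                  # estimated age effect by sex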
For Detailed Example 2, suppose that a categorical variable treat has values "a", "b", and "c", an ordinal variable num.diseases has values 0,1,2,3,4, and that there are two continuous variables, age and cholesterol. age is fitted with a restricted cubic spline, while cholesterol is transformed using the transformation log(cholesterol - 10). Cholesterol is missing on three subjects, and we impute these using the overall median cholesterol. We wish to allow for interaction between treat and cholesterol. The following S program will fit a logistic model, test all effects in the design, estimate effects, and plot estimated transformations. The fit for num.diseases really considers the variable to be a 5-level categorical variable. The only difference is that a 3 d.f. test of linearity is done to assess whether the variable can be re-modeled "asis". Here we also show statements to attach the rms package and store predictor characteristics from datadist.
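A sketch of such a program (not the code in the package's Examples section; the data frame d and its contents are hypothetical):

require(rms)
# d is a hypothetical data frame containing y, treat, num.diseases, age, cholesterol
d$cholesterol <- impute(d$cholesterol)      # Hmisc: fill the 3 NAs with the median
dd <- datadist(d); options(datadist='dd')   # store predictor characteristics
f <- lrm(y ~ treat*log(cholesterol - 10) + scored(num.diseases) + rcs(age,4),
         data=d)
anova(f)             # Wald tests, including a 3 d.f. linearity test for num.diseases
summary(f)           # effect estimates (odds ratios) over default ranges
ggplot(Predict(f))   # plot estimated transformations of all predictors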
Detailed Example 3 shows some of the survival analysis capabilities of rms related to the Cox proportional hazards model. We simulate data for 2000 subjects with 2 predictors, age and sex. In the true population model, the log hazard function is linear in age and there is no age x sex interaction. In the analysis below we do not make use of the linearity in age. rms makes use of many of Terry Therneau's survival functions that are builtin to S.
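A sketch of a Cox model analysis along these lines (simulated data; the coefficients and censoring scheme are invented, not those of the package's example):

require(rms)
set.seed(2)
n   <- 2000
age <- 50 + 12*rnorm(n)
sex <- factor(sample(c('Female','Male'), n, TRUE))
# hypothetical true model: log hazard linear in age, no age x sex interaction
h   <- .02*exp(.04*(age - 50) + .8*(sex == 'Male'))
dt  <- -log(runif(n))/h              # exponential failure times
e   <- ifelse(dt <= 10, 1, 0)        # censor follow-up at 10 years
dt  <- pmin(dt, 10)
dd  <- datadist(age, sex); options(datadist='dd')
f   <- cph(Surv(dt, e) ~ rcs(age,4) + sex, x=TRUE, y=TRUE, surv=TRUE)
anova(f)                             # includes a test of linearity in age
survplot(f, sex, age=50)             # predicted survival curves by sex at age 50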
The following is a typical sequence of steps that would be used with rms in conjunction with the Hmisc transcan function to do single imputation of all NAs in the predictors (multiple imputation would be better but would be harder to do in the context of bootstrap model validation), fit a model, do backward stepdown to reduce the number of predictors in the model (with all the severe problems this can entail), and use the bootstrap to validate this stepwise model, repeating the variable selection for each re-sample. Here we take a shortcut, as the imputation is not repeated within the bootstrap.
In what follows we (atypically) have only 3 candidate predictors. In practice be sure to have the validate and calibrate functions operate on a model fit that contains all predictors that were involved in previous analyses that used the response variable. Here the imputation is necessary because backward stepdown would otherwise delete observations missing on any candidate variable.
Note that you would have to define x1, x2, x3, and y to run the following code.
xt <- transcan(~ x1 + x2 + x3, imputed=TRUE)
impute(xt) # imputes any NAs in x1, x2, x3
# Now fit original full model on filled-in data
f <- lrm(y ~ x1 + rcs(x2,4) + x3, x=TRUE, y=TRUE) #x,y allow boot.
fastbw(f)
# derives stepdown model (using default stopping rule)
validate(f, B=100, bw=TRUE) # repeats fastbw 100 times
cal <- calibrate(f, B=100, bw=TRUE) # also repeats fastbw
plot(cal)
Don't have a formula like y ~ age + age^2. In S you need to connect related variables using a function which produces a matrix, such as pol or rcs. This allows effect estimates (e.g., hazard ratios) to be computed as well as multiple d.f. tests of association.
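For example (a brief sketch with hypothetical y and age):

f1 <- lrm(y ~ pol(age, 2))   # quadratic in age, kept together as one 2 d.f. predictor
f2 <- lrm(y ~ rcs(age, 4))   # restricted cubic spline in age with 4 knots
anova(f2)                    # pooled test of association plus test of linearity
summary(f2)                  # inter-quartile-range odds ratio for age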
Don't use poly or strata inside formulas used in rms. Use pol and strat instead.
Almost never code your own dummy variables or
interaction variables in S. Let S do this
automatically. Otherwise, anova
can't do its
job.
Almost never transform predictors outside of the model formula, as then plots of predicted values vs. predictor values, and other displays, would not be made on the original scale. Use instead something like y ~ log(cell.count+1), which will allow cell.count to appear on x-axes. You can get fancier, e.g., y ~ rcs(log(cell.count+1),4) to fit a restricted cubic spline with 4 knots in log(cell.count+1).
For more complex transformations do something
like
f <- function(x) {
  # ... various 'if' statements, etc.
  log(pmin(x, 50000) + 1)
}
fit1 <- lrm(death ~ f(cell.count))
fit2 <- lrm(death ~ rcs(f(cell.count), 4))
Don't put $ inside variable names used in formulas. Either attach data frames or use data=.
Don't forget to use datadist. Try to use it at the top of your program so that all model fits can automatically take advantage of its distributional summaries for the predictors.
Don't validate or calibrate models which were reduced by dropping "insignificant" predictors. Proper bootstrap or cross-validation must repeat any variable selection steps for each re-sample. Therefore, validate or calibrate models which contain all candidate predictors, and if you must reduce models, specify the option bw=TRUE to validate or calibrate.
Dropping of "insignificant" predictors ruins much of the usual statistical inference for regression models (confidence limits, standard errors, P-values, chi-squares, ordinary indexes of model performance) and it also results in models which will have worse predictive discrimination.
Use require(rms).
Spline fits
Spanos A, Harrell FE, Durack DT (1989): Differential diagnosis of acute meningitis: An analysis of the predictive value of initial observations. JAMA 2700-2707.
Ohman EM, Armstrong PW, Christenson RH, et al. (1996): Cardiac troponin T levels for risk stratification in acute myocardial ischemia. New Eng J Med 335:1333-1341.
Bootstrap calibration curve for a parametric survival model:
Knaus WA, Harrell FE, Fisher CJ, Wagner DP, et al. (1993): The clinical evaluation of new drugs for sepsis: A prospective study design based on survival analysis. JAMA 270:1233-1241.
Splines, interactions with splines, algebraic form of fitted model from latex.rms
Knaus WA, Harrell FE, Lynn J, et al. (1995): The SUPPORT prognostic model: Objective estimates of survival for seriously ill hospitalized adults. Annals of Internal Medicine 122:191-203.
Splines, odds ratio chart from fitted model with nonlinear and interaction terms, use of transcan for imputation
Lee KL, Woodlief LH, Topol EJ, Weaver WD, Betriu A, Col J, Simoons M, Aylward P, Van de Werf F, Califf RM. Predictors of 30-day mortality in the era of reperfusion for acute myocardial infarction: results from an international trial of 41,021 patients. Circulation 1995;91:1659-1668.
Splines, external validation of logistic models, prediction rules using point tables
Steyerberg EW, Hargrove YV, et al (2001): Residual mass histology in testicular cancer: development and validation of a clinical prediction rule. Stat in Med 2001;20:3847-3859.
van Gorp MJ, Steyerberg EW, et al (2003): Clinical prediction rule for 30-day mortality in Bjork-Shiley convexo-concave valve replacement. J Clinical Epidemiology 2003;56:1006-1012.
Model fitting, bootstrap validation, missing value imputation
Krijnen P, van Jaarsveld BC, Steyerberg EW, Man in 't Veld AJ, Schalekamp, MADH, Habbema JDF (1998): A clinical prediction rule for renal artery stenosis. Annals of Internal Medicine 129:705-711.
Model fitting, splines, bootstrap validation, nomograms
Kattan MW, Eastham JA, Stapleton AMF, Wheeler TM, Scardino PT. A preoperative nomogram for disease recurrence following radical prostatectomy for prostate cancer. J Natl Ca Inst 1998; 90(10):766-771.
Kattan, MW, Wheeler TM, Scardino PT. A postoperative nomogram for disease recurrence following radical prostatectomy for prostate cancer. J Clin Oncol 1999; 17(5):1499-1507
Kattan MW, Zelefsky MJ, Kupelian PA, Scardino PT, Fuks Z, Leibel SA. A pretreatment nomogram for predicting the outcome of three-dimensional conformal radiotherapy in prostate cancer. J Clin Oncol 2000; 18(19):3252-3259.
Eastham JA, May R, Robertson JL, Sartor O, Kattan MW. Development of a nomogram which predicts the probability of a positive prostate biopsy in men with an abnormal digital rectal examination and a prostate specific antigen between 0 and 4 ng/ml. Urology. (In press).
Kattan MW, Heller G, Brennan MF. A competing-risk nomogram for sarcoma-specific death following local recurrence. Stat in Med 2003; 22:3515-3525.
Penalized maximum likelihood estimation, regression splines, web site to get predicted values
Smits M, Dippel DWJ, Steyerberg EW, et al. Predicting intracranial traumatic findings on computed tomography in patients with minor head injury: The CHIP prediction rule. Ann Int Med 2007; 146:397-405.
Nomogram with 2- and 5-year survival probability and median survival time (but watch out for the use of univariable screening)
Clark TG, Stewart ME, Altman DG, Smyth JF. A prognostic model for ovarian cancer. Br J Cancer 2001; 85:944-52.
Comprehensive example of parametric survival modeling with an extensive nomogram, time ratio chart, anova chart, survival curves generated using survplot, bootstrap calibration curve
Teno JM, Harrell FE, Knaus WA, et al. Prediction of survival for older hospitalized patients: The HELP survival model. J Am Geriatrics Soc 2000; 48: S16-S24.
Model fitting, imputation, and several nomograms expressed in tabular form
Hasdai D, Holmes DR, et al. Cardiogenic shock complicating acute myocardial infarction: Predictors of death. Am Heart J 1999; 138:21-31.
Ordinal logistic model with bootstrap calibration plot
Wu AW, Yasui U, Alzola CF et al. Predicting functional status outcomes in hospitalized patients aged 80 years and older. J Am Geriatric Society 2000; 48:S6-S15.
Propensity modeling in evaluating medical diagnosis, anova dot chart
Weiss JP, Gruver C, et al. Ordering an echocardiogram for evaluation of left ventricular function: Level of expertise necessary for efficient use. J Am Soc Echocardiography 2000; 13:124-130.
Simulations using rms to study the properties of various modeling strategies
Steyerberg EW, Eijkemans MJC, Habbema JDF. Stepwise selection in small data sets: A simulation study of bias in logistic regression analysis. J Clin Epi 1999; 52:935-942.
Steyerberg EW, Eijkemans MJC, Harrell FE, Habbema JDF. Prognostic modeling with logistic regression analysis: In search of a sensible strategy in small data sets. Med Decision Making 2001; 21:45-56.
Statistical methods and references related to rms, along with case studies which include the rms code which produced the analyses
Harrell FE, Lee KL, Mark DB (1996): Multivariable prognostic models: Issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Stat in Med 15:361-387.
Harrell FE, Margolis PA, Gove S, Mason KE, Mulholland EK et al. (1998): Development of a clinical prediction model for an ordinal outcome: The World Health Organization ARI Multicentre Study of clinical signs and etiologic agents of pneumonia, sepsis, and meningitis in young infants. Stat in Med 17:909-944.
Bender R, Benner, A (2000): Calculating ordinal regression models in SAS and S-Plus. Biometrical J 42:677-699.
The author is willing to help with problems. Send E-mail to fh@fharrell.com. To report bugs, please do the following:
If the bug occurs when running a function on a fit object (e.g., anova), attach a dump'd text version of the fit object to your note. If you used datadist but not until after the fit was created, also send the object created by datadist. Example: save(myfit, file="/tmp/myfit.rda") will create an R binary save file that can be attached to the E-mail.
If the bug occurs during a model fit (e.g., with lrm, ols, psm, cph), send the statement causing the error with a save'd version of the data frame used in the fit. If this data frame is very large, reduce it to a small subset which still causes the error.
GENERAL DISCLAIMER This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. In short: you may use this code any way you like, as long as you don't charge money for it, remove this notice, or hold anyone liable for its results. Also, please acknowledge the source and communicate changes to the author.
If this software is used in work presented for publication, kindly reference it using for example: Harrell FE (2009): rms: S functions for biostatistical/epidemiologic modeling, testing, estimation, validation, graphics, and prediction. Programs available from https://hbiostat.org/R/rms/. Be sure to reference other packages used as well as R itself.
Frank E Harrell Jr
Professor of Biostatistics
Vanderbilt University School of Medicine
Nashville, Tennessee
fh@fharrell.com
The primary resource for the rms package is Regression Modeling Strategies, second edition by FE Harrell (Springer-Verlag, 2015) and the web page https://hbiostat.org/R/rms/. See also the Statistics in Medicine articles by Harrell et al listed below for case studies of modeling and model validation using rms.
Several datasets useful for multivariable modeling with rms are found at https://hbiostat.org/data/.
To run several comprehensive examples, run the following command:
demo(all, 'rms')