General Contrasts of Regression Coefficients
This function computes one or more contrasts of the estimated regression coefficients in a fit from one of the functions in rms, along with standard errors, confidence limits, t or Z statistics, and P-values. General contrasts are handled by obtaining the design matrix for two sets of predictor settings (a, b) and subtracting the corresponding rows of the two design matrices to obtain a new contrast design matrix for testing the a - b differences. This allows quite general contrasts (e.g., the estimated difference in means between a 30 year old female and a 40 year old male). It can also be used to obtain a series of contrasts in the presence of interactions (e.g., female:male log odds ratios for several ages when the model contains an age by sex interaction). Another use of contrast is to obtain center-weighted (Type III test) and subject-weighted (Type II test) estimates in a model containing treatment by center interactions. For the latter case, specify type="average" and an optional weights vector to average the within-center treatment contrasts. The design contrast matrix computed by contrast.rms can be used by other functions.
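To illustrate the underlying computation, here is a minimal sketch (not part of contrast.rms itself) using the lrm fit f from the first example below, which has predictors age and sex. It assumes a model with a single intercept; the intercept column, added manually, cancels in the subtraction.

# Sketch of the a - b contrast computation done internally (illustrative only)
Xa <- cbind(Intercept=1, predict(f, data.frame(age=30, sex='female'), type='x'))
Xb <- cbind(Intercept=1, predict(f, data.frame(age=40, sex='male'),   type='x'))
cmat <- Xa - Xb                                   # contrast design matrix; intercept cancels
est  <- drop(cmat %*% coef(f))                    # a - b estimate
se   <- sqrt(drop(cmat %*% vcov(f) %*% t(cmat)))  # its standard error
c(estimate=est, se=se)   # agrees with contrast(f, list(sex='female', age=30),
                         #                        list(sex='male',   age=40))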
When the model was fitted by a Bayesian function such as blrm, highest posterior density intervals for contrasts are computed instead, along with the posterior probability that the contrast is positive. posterior.summary specifies whether the posterior mean, median, or mode is used for contrast point estimates.
contrast.rms also allows one to specify four settings to contrast, yielding contrasts that are double differences: the difference between the first two settings (a - b) and the last two (a2 - b2). This allows assessment of interactions.
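For instance, a sketch of a single double-difference contrast, using the age by sex logistic fit f from the first example below (the settings shown are illustrative): the female - male difference at age 50 minus the same difference at age 30, which is zero in the absence of an age by sex interaction.

# Double difference: (female - male at age 50) - (female - male at age 30)
contrast(f, a =list(sex='female', age=50), b =list(sex='male', age=50),
            a2=list(sex='female', age=30), b2=list(sex='male', age=30))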
If usebootcoef=TRUE, the fit was run through bootcov, and conf.type="individual", the confidence intervals are nonparametric bootstrap percentile, basic bootstrap, or BCa intervals, obtained from the contrasts evaluated on all bootstrap samples.
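A sketch of this usage, assuming the treatment-by-center ols model from the examples is refit with x=TRUE, y=TRUE so that bootcov can resample (the number of resamples and settings shown are illustrative):

f <- ols(y ~ treat*center, x=TRUE, y=TRUE)   # x, y needed by bootcov
b <- bootcov(f, B=1000)                      # stores bootstrap coefficients
contrast(b, list(treat='b'), list(treat='a'))                   # percentile intervals (default)
contrast(b, list(treat='b'), list(treat='a'), boot.type='bca')  # BCa intervals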
By omitting the b argument, contrast can be used to obtain an average or weighted average of a series of predicted values, along with a confidence interval for this average. This can be useful for "unconditioning" on one of the predictors (see the next-to-last example).
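For example, a sketch using the lrm fit f from the first example below: the average predicted logit for a 50 year old, weighted by the observed sex frequencies (no subtraction is done because b is omitted).

contrast(f, list(age=50, sex=levels(sex)),
         type='average', weights=table(sex))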
By specifying type="joint", and specifying at least as many contrasts as needed to span the space of a complex test, one can make multiple degree of freedom tests flexibly and simply. Redundant contrasts will be ignored in the joint test. See the examples below, which include an "incomplete interaction test" involving only two of three levels of a categorical variable (the test also tests the main effect).
When more than one contrast is computed, the list created by contrast.rms is suitable for plotting (with error bars or bands) with xYplot or Dotplot (see the last example before the type="joint" examples).
When fit is the result of a Bayesian model fit and fun is specified, contrast.rms operates altogether differently. a and b must both be specified, and a2, b2 must not be specified. fun is evaluated on the estimates separately for a and for b, and the subtraction is deferred. So even in the absence of interactions, when fun is nonlinear, the settings of the factors (predictors) will not cancel out, and estimates of differences will be covariate-specific (unless there are no covariates in the model besides the one being varied to get from a to b).
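A sketch of what this implies (the model and variable names here are hypothetical, not from the examples): with a nonlinear fun such as the result of Mean, the other predictors must be set explicitly, and to the same values, in both a and b, and the resulting difference applies only to those settings.

g <- blrm(y ~ x1 + sex)    # hypothetical Bayesian fit
M <- Mean(g)
contrast(g, list(x1=1, sex='female'), list(x1=0, sex='female'), fun=M)
contrast(g, list(x1=1, sex='male'),   list(x1=0, sex='male'),   fun=M)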
Usage

contrast(fit, ...)

## S3 method for class 'rms'
contrast(fit, a, b, a2, b2, ycut=NULL, cnames=NULL, fun=NULL,
         funint=TRUE, type=c("individual", "average", "joint"),
         conf.type=c("individual", "simultaneous"), usebootcoef=TRUE,
         boot.type=c("percentile", "bca", "basic"),
         posterior.summary=c('mean', 'median', 'mode'),
         weights="equal", conf.int=0.95, tol=1e-7, expand=TRUE, ...)

## S3 method for class 'contrast.rms'
print(x, X=FALSE, fun=function(u)u, jointonly=FALSE, prob=0.95, ...)
Arguments

fit: a fit of class "rms"

a: a list containing settings for all predictors that you do not wish to set to default (adjust-to) values. Usually you will specify two variables in this list, one set to a constant and one to a sequence of values, to obtain contrasts for the sequence of values of an interacting factor. The gendata function is used to turn these settings into predictor combinations, filling in default values for unspecified predictors.

b: another list that generates the same number of observations as a

a2: an optional third list of settings of predictors, used with b2 to form double difference contrasts

b2: an optional fourth list of settings of predictors. Mandatory if a2 is given.

ycut: used if the fit is a constrained partial proportional odds model fit, to specify the single value or vector of values (corresponding to the multiple contrasts) of the response variable to use in forming contrasts. When there is non-proportional odds, odds ratios will vary over levels of the response variable. When there are multiple contrasts and only one value is given for ycut, that value is used for all of the contrasts.

cnames: vector of character strings naming the contrasts when type is not "average". Usually cnames is not necessary, as contrast.rms tries to name the contrasts automatically.

fun: a function to evaluate on the linear predictor for each of a and b when the fit is Bayesian (see the details above)

type: set type="average" to average the individual contrasts (optionally using weights), or type="joint" to obtain a multiple degree of freedom joint test of all non-redundant contrasts instead of individual contrasts

conf.type: the default type of confidence interval computed for a given individual (1 d.f.) contrast is a pointwise confidence interval. Set conf.type="simultaneous" to obtain confidence intervals that are simultaneous over the set of requested contrasts.

usebootcoef: if the fit was run through bootcov, set usebootcoef=FALSE to base confidence intervals on the bootstrap covariance matrix estimate rather than on the bootstrap distribution of the contrasts

boot.type: set to "bca" or "basic" to obtain BCa or basic bootstrap confidence intervals instead of the default nonparametric percentile intervals

posterior.summary: by default the posterior mean is used. Specify posterior.summary='median' or posterior.summary='mode' to use the posterior median or mode for contrast point estimates.

weights: a numeric vector, used when type="average", to compute weighted averages of the contrasts

conf.int: confidence level for confidence intervals for the contrasts (HPD interval probability for Bayesian analyses)

tol: numerical tolerance used in matrix rank determination, e.g., for flagging redundant contrasts

expand: set to FALSE to prevent expand.grid from generating all possible combinations of the predictor settings in a and b (i.e., to use the settings in parallel)

...: additional arguments passed to other methods

x: result of contrast.rms, for the print method

X: set X=TRUE to have print display the design contrast matrix used in computing the contrasts

funint: set to FALSE if fun is not a function that uses the model's intercepts (i.e., it has no intercepts argument)

jointonly: set to TRUE to print only the joint test, omitting the individual contrasts

prob: highest posterior density interval probability, used when the fit was Bayesian and fun is given to print
Value

a list of class "contrast.rms" containing the elements Contrast, SE, Z, var, df.residual, Lower, Upper, Pvalue, X, cnames, and redundant, which denote the contrast estimates, standard errors, Z or t statistics, variance matrix, residual degrees of freedom (NULL if the model was not ols), lower and upper confidence limits, 2-sided P-values, design matrix, contrast names (or NULL), and a logical vector denoting which contrasts are redundant with the other contrasts. If there are any redundant contrasts, an asterisk is printed at the start of the corresponding lines when the results of contrast are printed. The object also contains ctype, indicating which method was used to compute confidence intervals.
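For example, a sketch of working with the returned object directly (using the treatment-by-center contrasts from the examples; the element names are those listed above):

k <- contrast(f, list(treat='b', center=lc), list(treat='a', center=lc))
k$Contrast                                           # vector of contrast estimates
k$SE                                                 # standard errors
as.data.frame(k[c('Contrast', 'Lower', 'Upper')])    # handy for plotting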
Frank Harrell
Department of Biostatistics
Vanderbilt University School of Medicine
fh@fharrell.com
Examples

set.seed(1)
age <- rnorm(200, 40, 12)
sex <- factor(sample(c('female','male'), 200, TRUE))
logit <- (sex=='male') + (age-40)/5
y <- ifelse(runif(200) <= plogis(logit), 1, 0)
f <- lrm(y ~ pol(age,2)*sex)
anova(f)

# Compare a 30 year old female to a 40 year old male
# (with or without age x sex interaction in the model)
contrast(f, list(sex='female', age=30), list(sex='male', age=40))

# Test for interaction between age and sex, duplicating anova
contrast(f, list(sex='female', age=30),
            list(sex='male',   age=30),
            list(sex='female', age=c(40,50)),
            list(sex='male',   age=c(40,50)), type='joint')

# Duplicate overall sex effect in anova with 3 d.f.
contrast(f, list(sex='female', age=c(30,40,50)),
            list(sex='male',   age=c(30,40,50)), type='joint')

# For a model containing two treatments, centers, and treatment
# x center interaction, get 0.95 confidence intervals separately
# by center
center <- factor(sample(letters[1 : 8], 500, TRUE))
treat  <- factor(sample(c('a','b'), 500, TRUE))
y      <- 8*(treat == 'b') + rnorm(500, 100, 20)
f <- ols(y ~ treat*center)
lc <- levels(center)
contrast(f, list(treat='b', center=lc),
            list(treat='a', center=lc))

# Get 'Type III' contrast: average b - a treatment effect over
# centers, weighting centers equally (which is almost always
# an unreasonable thing to do)
contrast(f, list(treat='b', center=lc),
            list(treat='a', center=lc),
         type='average')

# Get 'Type II' contrast, weighting centers by the number of
# subjects per center.  Print the design contrast matrix used.
k <- contrast(f, list(treat='b', center=lc),
                 list(treat='a', center=lc),
              type='average', weights=table(center))
print(k, X=TRUE)
# Note: if other variables had interacted with either treat
# or center, we may want to list settings for these variables
# inside the list()'s, so as to not use default settings

# For a 4-treatment study, get all comparisons with treatment 'a'
treat <- factor(sample(c('a','b','c','d'), 500, TRUE))
y     <- 8*(treat == 'b') + rnorm(500, 100, 20)
dd <- datadist(treat, center); options(datadist='dd')
f <- ols(y ~ treat*center)
lt <- levels(treat)
contrast(f, list(treat=lt[-1]),
            list(treat=lt[ 1]),
         cnames=paste(lt[-1], lt[1], sep=':'), conf.int=1 - .05 / 3)

# Compare each treatment with average of all others
for(i in 1 : length(lt)) {
  cat('Comparing with', lt[i], '\n\n')
  print(contrast(f, list(treat=lt[-i]),
                    list(treat=lt[ i]), type='average'))
}
options(datadist=NULL)

# Six ways to get the same thing, for a variable that
# appears linearly in a model and does not interact with
# any other variables.  We estimate the change in y per
# unit change in a predictor x1.  Methods 4, 5 also
# provide confidence limits.  Method 6 computes nonparametric
# bootstrap confidence limits.  Methods 2-6 can work
# for models that are nonlinear or non-additive in x1.
# For that case more care is needed in choice of settings
# for x1 and the variables that interact with x1.
## Not run: 
coef(fit)['x1']                                # method 1
diff(predict(fit, gendata(fit, x1=c(0,1))))    # method 2
g <- Function(fit)                             # method 3
g(x1=1) - g(x1=0)
summary(fit, x1=c(0,1))                        # method 4
k <- contrast(fit, list(x1=1), list(x1=0))     # method 5
print(k, X=TRUE)
fit <- update(fit, x=TRUE, y=TRUE)             # method 6
b <- bootcov(fit, B=500)
contrast(b, list(x1=1), list(x1=0))

# In a model containing age, race, and sex,
# compute an estimate of the mean response for a
# 50 year old male, averaged over the races using
# observed frequencies for the races as weights
f <- ols(y ~ age + race + sex)
contrast(f, list(age=50, sex='male', race=levels(race)),
         type='average', weights=table(race))

# For a Bayesian model get the highest posterior interval for the
# difference in two nonlinear functions of predicted values
# Start with the mean from a proportional odds model
g <- blrm(y ~ x)
M <- Mean(g)
contrast(g, list(x=1), list(x=0), fun=M)

# For the median we have to make sure that contrast can pass the
# per-posterior-draw vector of intercepts through
qu <- Quantile(g)
med <- function(lp, intercepts) qu(0.5, lp, intercepts=intercepts)
contrast(g, list(x=1), list(x=0), fun=med)

## End(Not run)

# Plot the treatment effect (drug - placebo) as a function of age
# and sex in a model in which age nonlinearly interacts with treatment
# for females only
set.seed(1)
n <- 800
treat <- factor(sample(c('drug','placebo'), n, TRUE))
sex   <- factor(sample(c('female','male'), n, TRUE))
age   <- rnorm(n, 50, 10)
y     <- .05*age + (sex=='female')*(treat=='drug')*.05*abs(age-50) + rnorm(n)
f <- ols(y ~ rcs(age,4)*treat*sex)
d <- datadist(age, treat, sex); options(datadist='d')

# Show separate estimates by treatment and sex
ggplot(Predict(f, age, treat, sex='female'))
ggplot(Predict(f, age, treat, sex='male'))
ages  <- seq(35, 65, by=5); sexes <- c('female','male')
w <- contrast(f, list(treat='drug',    age=ages, sex=sexes),
                 list(treat='placebo', age=ages, sex=sexes))
# add conf.type="simultaneous" to adjust for having done 14 contrasts
xYplot(Cbind(Contrast, Lower, Upper) ~ age | sex, data=w,
       ylab='Drug - Placebo')
w <- as.data.frame(w[c('age','sex','Contrast','Lower','Upper')])
ggplot(w, aes(x=age, y=Contrast)) + geom_point() + facet_grid(sex ~ .) +
  geom_errorbar(aes(ymin=Lower, ymax=Upper), width=0)
xYplot(Cbind(Contrast, Lower, Upper) ~ age, groups=sex, data=w,
       ylab='Drug - Placebo', method='alt bars')
options(datadist=NULL)

# Examples of type='joint' contrast tests
set.seed(1)
x1 <- rnorm(100)
x2 <- factor(sample(c('a','b','c'), 100, TRUE))
dd <- datadist(x1, x2); options(datadist='dd')
y  <- x1 + (x2=='b') + rnorm(100)

# First replicate a test statistic from anova()
f <- ols(y ~ x2)
anova(f)
contrast(f, list(x2=c('b','c')), list(x2='a'), type='joint')

# Repeat with a redundancy; compare a vs b, a vs c, b vs c
contrast(f, list(x2=c('a','a','b')), list(x2=c('b','c','c')), type='joint')

# Get a test of association of a continuous predictor with y
# First assume linearity, then cubic
f <- lrm(y>0 ~ x1 + x2)
anova(f)
contrast(f, list(x1=1), list(x1=0), type='joint')  # a minimum set of contrasts
xs <- seq(-2, 2, length=20)
contrast(f, list(x1=0), list(x1=xs), type='joint')
# All contrasts were redundant except for the first, because of
# linearity assumption

f <- lrm(y>0 ~ pol(x1,3) + x2)
anova(f)
contrast(f, list(x1=0), list(x1=xs), type='joint')
print(contrast(f, list(x1=0), list(x1=xs), type='joint'), jointonly=TRUE)
# All contrasts were redundant except for the first 3, because of
# cubic regression assumption

# Now do something that is difficult to do without cryptic contrast
# matrix operations: Allow each of the three x2 groups to have a different
# shape for the x1 effect where x1 is quadratic.  Test whether there is
# a difference in mean levels of y for x2='b' vs. 'c' or whether
# the shape or slope of x1 is different between x2='b' and x2='c' regardless
# of how they differ when x2='a'.  In other words, test whether the mean
# response differs between group b and c at any value of x1.
# This is a 3 d.f. test (intercept, linear, quadratic effects) and is
# a better approach than subsetting the data to remove x2='a' then
# fitting a simpler model, as it uses a better estimate of sigma from
# all the data.
f <- ols(y ~ pol(x1,2) * x2)
anova(f)
contrast(f, list(x1=xs, x2='b'),
            list(x1=xs, x2='c'), type='joint')
# Note: if using a spline fit, there should be at least one value of
# x1 between any two knots and beyond the outer knots.
options(datadist=NULL)