
diff.resamples

Inferential Assessments About Model Performance


Description

Methods for making inferences about differences between models

Usage

## S3 method for class 'resamples'
diff(
  x,
  models = x$models,
  metric = x$metrics,
  test = t.test,
  confLevel = 0.95,
  adjustment = "bonferroni",
  ...
)

## S3 method for class 'diff.resamples'
summary(object, digits = max(3, getOption("digits") - 3), ...)

compare_models(a, b, metric = a$metric[1])

Arguments

x

an object generated by resamples

models

a character vector naming the models to compare

metric

a character vector naming the metrics to compare

test

a function to compute differences. The output of this function should have scalar components named estimate and p.value (see the sketch following this argument list)

confLevel

confidence level to use for dotplot.diff.resamples. See Details below.

adjustment

any p-value adjustment method to pass to p.adjust.

...

further arguments to pass to test

object

an object generated by diff.resamples

digits

the number of significant digits to display when printing

a, b

two objects of class train, sbf or rfe with a common set of resampling indices in the control object.
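
Because test only needs to return components named estimate and p.value, alternatives to t.test can be supplied. Below is a minimal sketch of a rank-based substitute, under the assumption (implied by the description above) that the function is applied to the vector of paired per-resample differences for one model pair; wilcoxDiff is a hypothetical name, not part of caret:

## wilcox.test() only includes an `estimate` component when conf.int = TRUE,
## so it is set here to satisfy the requirement stated above
wilcoxDiff <- function(x, ...) {
  wilcox.test(x, conf.int = TRUE, ...)
}

## hypothetical usage, with `resamps` as in the Examples section:
## diff(resamps, test = wilcoxDiff)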

Details

The ideas and methods here are based on Hothorn et al. (2005) and Eugster et al. (2008).

For each metric, all pair-wise differences are computed and tested to assess if the difference is equal to zero.

When a Bonferroni correction is used, the confidence level is changed from confLevel to 1 - ((1 - confLevel)/p), where p is the number of pair-wise comparisons being made. For other correction methods, no such change is made.
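
For example, with three models there are choose(3, 2) = 3 pair-wise comparisons, so the default confLevel of 0.95 is raised to roughly 0.983:

p <- choose(3, 2)      # 3 pair-wise comparisons among 3 models
1 - ((1 - 0.95) / p)   # adjusted confidence level: 0.9833333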

compare_models is a shorthand function to compare two models using a single metric. It returns the results of t.test on the differences.
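
As a rough illustration of that equivalence (a sketch, not caret's internal code), the same comparison can be computed by hand from the per-resample values, assuming the resamples object stores them in columns named model~metric (as produced by the Examples section below) and that RMSE is one of the metrics:

## Not run: 
vals <- resamps$values
t.test(vals[["CART~RMSE"]] - vals[["CondInfTree~RMSE"]])

## End(Not run)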

Value

An object of class "diff.resamples" with elements:

call

the call

difs

a list with one element per metric being compared; each element is a matrix with differences in columns and resamples in rows

statistics

a list of results generated by test

adjustment

the p-value adjustment used

models

a character vector of the models that were compared

metrics

a character vector of the performance metrics that were used

or...

An object of class "summary.diff.resamples" with elements:

call

the call

table

a list of tables that show the differences and p-values

...or (for compare_models) an object of class htest resulting from t.test.
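
To make this structure concrete, the elements listed above can be inspected directly (continuing the Examples section below; RMSE is assumed to be one of the metrics):

## Not run: 
difs$difs$RMSE        # matrix of pair-wise differences, resamples in rows
difs$statistics       # list of test results for each comparison
summary(difs)$table   # tables of differences and p-values

## End(Not run)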

Author(s)

Max Kuhn

References

Hothorn et al. (2005). The design and analysis of benchmark experiments. Journal of Computational and Graphical Statistics, 14(3), 675-699.

Eugster et al. (2008). Exploratory and inferential analysis of benchmark experiments. Ludwig-Maximilians-Universität München, Department of Statistics, Technical Report 30.

See Also

resamples, dotplot.diff.resamples

Examples

## Not run: 
load(url("http://topepo.github.io/caret/exampleModels.RData"))

resamps <- resamples(list(CART = rpartFit,
                          CondInfTree = ctreeFit,
                          MARS = earthFit))

difs <- diff(resamps)

difs

summary(difs)

compare_models(rpartFit, ctreeFit)

## End(Not run)
