
compare_performance

Compare performance of different models


Description

compare_performance() computes indices of model performance for several models at once, which allows the indices to be compared across models.

Usage

compare_performance(..., metrics = "all", rank = FALSE, verbose = TRUE)

Arguments

...

Multiple model objects (also of different classes).

metrics

Can be "all", "common" or a character vector of metrics to be computed. See related documentation of object's class for details.

rank

Logical, if TRUE, models are ranked according to 'best' overall model performance. See 'Details'.

verbose

Logical. Toggle warnings on or off.
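As a usage sketch for the metrics argument: the metric names assumed here ("AIC", "BIC", "R2") are common options for linear models, but the available set depends on the model class.

lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
# compute only a subset of indices instead of "all"
compare_performance(lm1, lm2, metrics = c("AIC", "BIC", "R2"))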

Details

Ranking Models

When rank = TRUE, a new column Performance_Score is returned. This score ranges from 0% to 100%, with higher values indicating better model performance. Note that the scores do not necessarily sum to 100%; rather, the calculation normalizes all indices (i.e., rescales them to a range from 0 to 1) and takes the mean of all indices for each model. This is a rather quick heuristic, but it might be helpful as an exploratory index.

In particular when models are of different types (e.g., mixed models, classical linear models, logistic regression, ...), not all indices can be computed for every model. If an index cannot be calculated for a specific model type, that model gets an NA value for it. All indices that contain any NA values are excluded when calculating the performance score.
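The following is a minimal sketch of this idea, not the package's internal implementation; the helper performance_score() is hypothetical, and the sketch ignores that indices where smaller is better (such as AIC or RMSE) would need to be reversed before normalizing.

normalize01 <- function(x) (x - min(x)) / (max(x) - min(x))
performance_score <- function(indices) {
  # drop indices that could not be computed for every model (any NA)
  complete <- indices[, colSums(is.na(indices)) == 0, drop = FALSE]
  # rescale each remaining index to [0, 1] and average per model
  rowMeans(sapply(complete, normalize01))
}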

There is a plot()-method for compare_performance(), which creates a "spiderweb" plot where the different indices are normalized and larger values indicate better model performance. Hence, points closer to the center indicate worse fit indices (see the online documentation for details).
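For example (assumes the see package is installed, which provides the plot()-method; see 'Note'):

lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)
if (require("see")) {
  # spiderweb plot of the normalized indices
  plot(compare_performance(lm1, lm2))
}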

Value

A data frame with one row per model and one column per computed index (see metrics).
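Since the return value behaves like a data frame, it can be inspected with the usual tools, e.g.:

result <- compare_performance(
  lm(Sepal.Length ~ Species, data = iris),
  lm(Sepal.Length ~ Species + Petal.Length, data = iris)
)
names(result)         # which indices (columns) were computed
as.data.frame(result) # one row per model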

Note

There is also a plot()-method implemented in the see-package.

Examples

data(iris)
# three nested linear models of increasing complexity
lm1 <- lm(Sepal.Length ~ Species, data = iris)
lm2 <- lm(Sepal.Length ~ Species + Petal.Length, data = iris)
lm3 <- lm(Sepal.Length ~ Species * Petal.Length, data = iris)
compare_performance(lm1, lm2, lm3)
# rank models by overall performance (adds the Performance_Score column)
compare_performance(lm1, lm2, lm3, rank = TRUE)

if (require("lme4")) {
  m1 <- lm(mpg ~ wt + cyl, data = mtcars)
  m2 <- glm(vs ~ wt + mpg, data = mtcars, family = "binomial")
  m3 <- lmer(Petal.Length ~ Sepal.Length + (1 | Species), data = iris)
  compare_performance(m1, m2, m3)
}

performance: Assessment of Regression Models Performance

Version 0.7.1, GPL-3

Authors: Daniel Lüdecke [aut, cre] (https://orcid.org/0000-0002-8895-3206), Dominique Makowski [aut, ctb] (https://orcid.org/0000-0001-5375-9967), Mattan S. Ben-Shachar [aut, ctb] (https://orcid.org/0000-0002-4287-4801), Indrajeet Patil [aut, ctb] (https://orcid.org/0000-0003-1995-6531), Philip Waggoner [aut, ctb] (https://orcid.org/0000-0002-7825-7573), Vincent Arel-Bundock [ctb] (https://orcid.org/0000-0003-2042-7063)
