
test_performance

Test if Models are Different


Description

Testing whether models are different is a delicate and often complex procedure, with many limitations and prerequisites. Moreover, several tests are available, each coming with its own interpretation and set of strengths and weaknesses.

The test_performance() function is available to run the most relevant and appropriate tests based on the input (for instance, whether the models are nested or not). That said, it still requires the user to understand what the tests are and what they do in order to prevent their misinterpretation. See the details section for more information regarding the different tests and their interpretation.

Usage

test_bf(...)

test_likelihoodratio(..., estimator = "ML")

performance_lrt(..., estimator = "ML")

test_lrt(..., estimator = "ML")

test_performance(..., reference = 1)

test_vuong(...)

test_wald(...)

Arguments

...

Multiple model objects.

estimator

Applied when comparing regression models using test_likelihoodratio(). Corresponds to the different estimators for the standard deviation of the errors. If estimator = "ML" (default), which is for instance used by lrtest(...) in package lmtest, the scaling is done by n (the biased ML estimator) and this estimator is used under the null hypothesis. If estimator = "OLS", the same method as anova(..., test = "LRT") implemented in base R is used, i.e., scaling by n-k (the unbiased OLS estimator) and using this estimator under the alternative hypothesis. In moderately large samples the differences should be negligible, but OLS may perform slightly better in small samples with Gaussian errors.
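For instance, a minimal sketch of the correspondences described above (the model pair is purely illustrative):

m_a <- lm(Sepal.Length ~ Petal.Width, data = iris)
m_b <- lm(Sepal.Length ~ Petal.Width + Species, data = iris)

test_likelihoodratio(m_a, m_b, estimator = "ML")   # should match lmtest::lrtest(m_a, m_b)
test_likelihoodratio(m_a, m_b, estimator = "OLS")  # should match anova(m_a, m_b, test = "LRT")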

reference

This only applies when the models are non-nested, and determines which model should be taken as the reference against which all the other models are tested.
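For instance, a minimal sketch with illustrative non-nested models, taking the second model as the reference:

m_a <- lm(Sepal.Length ~ Petal.Width, data = iris)
m_b <- lm(Sepal.Length ~ Petal.Length, data = iris)
m_c <- lm(Sepal.Length ~ Species, data = iris)

test_performance(m_a, m_b, m_c, reference = 2)  # m_a and m_c are each tested against m_b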

Details

Nested vs. Non-nested Models

The "nesting" of models is an important concept in model comparison. Indeed, many tests only make sense when the models are "nested", i.e., when all the predictors of one model are contained within the predictors of a larger model (sometimes referred to as the encompassing model). For instance, model1 (y ~ x1 + x2) is "nested" within model2 (y ~ x1 + x2 + x3). Usually, one has a list of nested models, for instance m1 (y ~ 1), m2 (y ~ x1), m3 (y ~ x1 + x2) and m4 (y ~ x1 + x2 + x3), conventionally ordered from smallest to largest, although it is up to the user to reverse the order from largest to smallest. The tests then show whether a more parsimonious model, or conversely whether adding a predictor, results in a significant difference in the model's performance. In this case, models are usually compared sequentially: m2 is tested against m1, m3 against m2, m4 against m3, and so on (see the sketch below).
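A minimal sketch of such a sequence, using the built-in mtcars data (the predictor choices are purely illustrative):

m1 <- lm(mpg ~ 1, data = mtcars)
m2 <- lm(mpg ~ hp, data = mtcars)
m3 <- lm(mpg ~ hp + wt, data = mtcars)
m4 <- lm(mpg ~ hp + wt + am, data = mtcars)

test_likelihoodratio(m1, m2, m3, m4)  # sequential tests: m2 vs. m1, m3 vs. m2, m4 vs. m3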

Two models are considered "non-nested" if their predictors are different. For instance, model1 (y ~ x1 + x2) and model2 (y ~ x3 + x4). In the case of non-nested models, all models are usually compared against the same reference model (by default, the first in the list).

Nesting is detected via the insight::is_nested_models() function. Aside from nesting, note that for the tests to be valid, other requirements often have to be fulfilled as well. For instance, the outcome variables (the responses) must be the same: you cannot meaningfully test whether apples are significantly different from oranges!
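A quick check can be run before testing; a minimal sketch, reusing the nested models defined in the sketch above:

insight::is_nested_models(m1, m2, m3, m4)  # TRUE when each model is nested in the next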

Tests Description

  • Bayes factor for Model Comparison - test_bf(): If all models were fit from the same data, the returned BF shows the Bayes Factor (see bayestestR::bayesfactor_models()) for each model against the reference model (which depends on whether the models are nested or not). See the Bayes factors vignette in the bayestestR package for more details.

  • Wald's F-Test - test_wald(): The Wald test is a rough approximation of the Likelihood Ratio Test. However, it is more applicable than the LRT: you can often run a Wald test in situations where no other test can be run. Importantly, this test only makes statistical sense if the models are nested.

    This test is also available in base R through the anova() function. It returns an F-value column as a statistic and its associated p-value.

  • Likelihood Ratio Test (LRT) - test_likelihoodratio(): The LRT tests which model is a better (more likely) explanation of the data. It usually gives results close (if not equivalent) to those of the Wald test and, similarly, only makes sense for nested models. However, maximum-likelihood tests make stronger assumptions than method-of-moments tests like the F-test, and in turn are more efficient. Agresti (1990) suggests using the LRT instead of the Wald test for small sample sizes (under or about 30) or when the parameters are large.

    For regression models, this is similar to anova(..., test="LRT") or lmtest::lrtest(...), depending on the estimator argument. For lavaan models (SEM, CFA), the function calls lavaan::lavTestLRT().

  • Vuong's Test - test_vuong(): Vuong's (1989) test can be used both for nested and non-nested models, and actually consists of two tests.

    • The Test of Distinguishability (the Omega2 column and its associated p-value) indicates whether or not the models can possibly be distinguished on the basis of the observed data. If its p-value is significant, it means the models are distinguishable.

    • The Robust Likelihood Test (the LR column and its associated p-value) indicates whether each model fits better than the reference model. If the models are nested, the test works as a robust LRT. The code for this function is adapted from the nonnest2 package, and all credit goes to its authors. A short interpretation sketch follows this list.
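As an interpretation sketch for Vuong's test (the non-nested model pair is purely illustrative):

m_a <- lm(Sepal.Length ~ Petal.Width, data = iris)
m_b <- lm(Sepal.Length ~ Species, data = iris)

test_vuong(m_a, m_b)
# A significant Omega2 p-value suggests the two models can be distinguished
# on the observed data; the LR test then indicates whether m_b fits better
# or worse than the reference model (here, m_a).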

Value

A data frame containing the relevant indices.

References

  • Vuong, Q. H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica, 57, 307-333.

  • Merkle, E. C., You, D., & Preacher, K. (2016). Testing non-nested structural equation models. Psychological Methods, 21, 151-163.

See Also

compare_performance() to compare the performance indices of many different models.

Examples

# Nested Models
# -------------
m1 <- lm(Sepal.Length ~ Petal.Width, data = iris)
m2 <- lm(Sepal.Length ~ Petal.Width + Species, data = iris)
m3 <- lm(Sepal.Length ~ Petal.Width * Species, data = iris)

test_performance(m1, m2, m3)

test_bf(m1, m2, m3)
test_wald(m1, m2, m3) # Equivalent to anova(m1, m2, m3)

# Equivalent to lmtest::lrtest(m1, m2, m3)
test_likelihoodratio(m1, m2, m3, estimator = "ML")

# Equivalent to anova(m1, m2, m3, test='LRT')
test_likelihoodratio(m1, m2, m3, estimator = "OLS")

test_vuong(m1, m2, m3) # nonnest2::vuongtest(m1, m2, nested=TRUE)

# Non-nested Models
# -----------------
m1 <- lm(Sepal.Length ~ Petal.Width, data = iris)
m2 <- lm(Sepal.Length ~ Petal.Length, data = iris)
m3 <- lm(Sepal.Length ~ Species, data = iris)

test_performance(m1, m2, m3)
test_bf(m1, m2, m3)
test_vuong(m1, m2, m3) # nonnest2::vuongtest(m1, m2)

# Tweak the output
# ----------------
test_performance(m1, m2, m3, include_formula = TRUE)


# SEM / CFA (lavaan objects)
# --------------------------
# Lavaan Models
if (require("lavaan")) {
  structure <- " visual  =~ x1 + x2 + x3
                 textual =~ x4 + x5 + x6
                 speed   =~ x7 + x8 + x9

                  visual ~~ textual + speed "
  m1 <- lavaan::cfa(structure, data = HolzingerSwineford1939)

  structure <- " visual  =~ x1 + x2 + x3
                 textual =~ x4 + x5 + x6
                 speed   =~ x7 + x8 + x9

                  visual ~~ 0 * textual + speed "
  m2 <- lavaan::cfa(structure, data = HolzingerSwineford1939)

  structure <- " visual  =~ x1 + x2 + x3
                 textual =~ x4 + x5 + x6
                 speed   =~ x7 + x8 + x9

                  visual ~~ 0 * textual + 0 * speed "
  m3 <- lavaan::cfa(structure, data = HolzingerSwineford1939)

  test_likelihoodratio(m1, m2, m3)
}

# Different Model Types
# ---------------------
if (require("lme4") && require("mgcv")) {
  m1 <- lm(Sepal.Length ~ Petal.Length + Species, data = iris)
  m2 <- lmer(Sepal.Length ~ Petal.Length + (1 | Species), data = iris)
  m3 <- gam(Sepal.Length ~ s(Petal.Length, by = Species) + Species, data = iris)

  test_performance(m1, m2, m3)
}

