Permutation Tests for 'rma.uni' Objects
The function carries out permutation tests for objects of class "rma.uni".
permutest(x, ...)

## S3 method for class 'rma.uni'
permutest(x, exact=FALSE, iter=1000, permci=FALSE, progbar=TRUE,
          retpermdist=FALSE, digits, control, ...)
x: an object of class "rma.uni".

exact: logical indicating whether an exact permutation test should be carried out or not (the default is FALSE).

iter: integer specifying the number of iterations for the permutation test when not doing an exact test (the default is 1000).

permci: logical indicating whether permutation-based CIs should also be calculated (the default is FALSE).

progbar: logical indicating whether a progress bar should be shown (the default is TRUE).

retpermdist: logical indicating whether the permutation distributions of the test statistics should be returned (the default is FALSE).

digits: integer specifying the number of decimal places to which the printed results should be rounded (if unspecified, the default is to take the value from the object).

control: list of control values for the numerical comparisons (comptol) and for uniroot (i.e., tol and maxiter); see below.

...: other arguments.
For models without moderators, the permutation test is carried out by permuting the signs of the observed effect sizes or outcomes. The (two-sided) p-value of the permutation test is then equal to the proportion of times that the absolute value of the test statistic under the permuted data is as extreme or more extreme than under the actually observed data. See Follmann and Proschan (1999) for more details.
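As a rough sketch of this idea (only an illustration, not the internal implementation of permutest; it simply refits the model after randomly flipping the signs of the observed outcomes), one could approximate such a p-value as follows:

library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, data=dat)                  # model without moderators
zobs <- res$zval                              # observed test statistic
set.seed(1234)
zperm <- replicate(1000, {
   flip <- sample(c(-1,1), nrow(dat), replace=TRUE)   # random sign flips
   rma(flip * dat$yi, dat$vi)$zval                    # statistic under the permuted data
})
mean(abs(zperm) >= abs(zobs))                 # approximate two-sided p-value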
For models with moderators, the permutation test is carried out by permuting the rows of the model matrix (i.e., X). The (two-sided) p-value for a particular model coefficient is then equal to the proportion of times that the absolute value of the test statistic for the coefficient under the permuted data is as extreme or more extreme than under the actually observed data. Similarly, for the omnibus test, the p-value is the proportion of times that the test statistic for the omnibus test is as extreme or more extreme than the actually observed one. See Higgins and Thompson (2004) and Viechtbauer et al. (2015) for more details.
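A corresponding sketch for a model with moderators (again only an illustration of the principle, not how permutest works internally, and continuing with the data frame created above) permutes the rows of the moderator values jointly and refits the model each time:

res <- rma(yi, vi, mods = ~ ablat + year, data=dat)   # mixed-effects model
zobs <- res$zval[2]                                   # observed statistic for 'ablat'
set.seed(1234)
zperm <- replicate(1000, {
   permdat <- dat
   permdat[,c("ablat","year")] <- dat[sample(nrow(dat)), c("ablat","year")]  # permute rows of X
   rma(yi, vi, mods = ~ ablat + year, data=permdat)$zval[2]
})
mean(abs(zperm) >= abs(zobs))                         # approximate two-sided p-value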
If exact=TRUE, the function will try to carry out an exact permutation test. An exact permutation test requires fitting the model to each possible permutation once. However, the number of possible permutations increases rapidly with the number of outcomes/studies (i.e., k). For models without moderators, there are 2^k possible permutations of the signs. Therefore, for k=5, there are 32 possible permutations, for k=10, there are already 1024, and for k=20, there are over one million permutations of the signs.
For models with moderators, the increase in the number of possible permutations may be even more severe. The total number of possible permutations of the model matrix is k!. Therefore, for k=5, there are 120 possible permutations, for k=10, there are 3,628,800, and for k=20, there are over 10^18 permutations of the model matrix.
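These counts can be verified directly (2^k sign permutations without moderators versus k! row permutations with moderators):

k <- c(5, 10, 20)
2^k             # sign permutations:  32  1024  1048576
factorial(k)    # row permutations:  120  3628800  ~2.43e+18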
Therefore, going through all possible permutations may become infeasible. Instead of using an exact permutation test, one can set exact=FALSE (which is also the default). In that case, the function approximates the exact permutation-based p-value(s) by going through a smaller number (as specified by the iter argument) of random permutations. Consequently, running the function twice on the same data can yield (slightly) different p-values. Setting iter sufficiently large ensures that the results become stable. Note that if exact=FALSE and iter is actually larger than the number of iterations required for an exact permutation test, then an exact test will be carried out.
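Since the random permutations are drawn with R's random number generator, setting a seed before the call should make the approximate p-values reproducible (a hypothetical illustration with a larger number of iterations):

## Not run:
set.seed(1234)
permutest(res, iter=10000)
## End(Not run)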
For models with moderators, the exact permutation test actually only requires fitting the model to each unique permutation of the model matrix. The number of unique permutations will be smaller than k! when the model matrix contains recurring rows. This may be the case when only including categorical moderators (i.e., factors) in the model or when any quantitative moderators included in the model only take on a small number of unique values. When exact=TRUE, the function therefore uses an algorithm to restrict the test to only the unique permutations of the model matrix, which may make the use of the exact test feasible even when k is large.
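To get a sense of this reduction, consider a model with a single categorical moderator: the number of distinct row orderings of the model matrix is then the multinomial coefficient k!/(n1! n2! ...), where the nj are the group sizes. A small illustration using the alloc factor from dat.bcg (permutest determines the unique permutations itself; this is only for intuition):

tab <- table(dat.bcg$alloc)                  # group sizes of the factor
factorial(sum(tab)) / prod(factorial(tab))   # number of unique permutations
factorial(sum(tab))                          # compared to k! permutations in total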
When using random permutations, the function ensures that the very first permutation will always correspond to the original data. This avoids p-values equal to 0.
When permci=TRUE, the function also tries to obtain permutation-based CIs of the model coefficient(s). This is done by shifting the observed effect sizes or outcomes and finding the most extreme values for which the permutation-based test would just lead to non-rejection. This is computationally expensive and may take a long time to complete. For models with moderators, one can also set permci to a vector of indices specifying for which coefficient(s) a permutation-based CI should be obtained. When the algorithm fails to determine a particular CI bound, it will be shown as NA in the output.
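For example, permutation-based CIs could be requested for all coefficients of the mixed-effects model fitted in the sketch above or only for selected ones (computationally expensive; shown only for illustration):

## Not run:
permutest(res, permci=TRUE)       # CIs for all model coefficients
permutest(res, permci=c(2,3))     # CIs only for the 2nd and 3rd coefficients
## End(Not run)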
An object of class "permutest.rma.uni". The object is a list containing the following components:
pval: p-value(s) based on the permutation test.

QMp: p-value for the omnibus test of coefficients based on the permutation test.

zval.perm: values of the test statistics of the coefficients under the various permutations (only when retpermdist=TRUE).

b.perm: values of the model coefficients under the various permutations (only when retpermdist=TRUE).

QM.perm: values of the test statistic for the omnibus test of coefficients under the various permutations (only when retpermdist=TRUE).

ci.lb: lower bound of the confidence intervals for the coefficients (permutation-based when permci=TRUE).

ci.ub: upper bound of the confidence intervals for the coefficients (permutation-based when permci=TRUE).

...: some additional elements/values are passed on.
The results are formatted and printed with the print.permutest.rma.uni function. One can also use coef.permutest.rma.uni to obtain the table with the model coefficients, corresponding standard errors, test statistics, p-values, and confidence interval bounds.
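For instance (continuing with a fitted model res; not run here since the permutation test itself takes some time):

## Not run:
sav <- permutest(res)
sav          # formatted results via print.permutest.rma.uni
coef(sav)    # coefficient table via coef.permutest.rma.uni
## End(Not run)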
It is important to note that the p-values obtained with permutation tests cannot reach conventional levels of statistical significance (i.e., p ≤ .05) when k is very small. In particular, for models without moderators, the smallest possible (two-sided) p-value is .0625 when k=5 and .03125 when k=6. Therefore, the permutation test is only able to reject the null hypothesis at α=.05 when k is at least equal to 6. For models with moderators, the smallest possible (two-sided) p-value for a particular model coefficient is .0833 when k=4 and .0167 when k=5 (assuming that each row in the model matrix is unique). Therefore, the permutation test is only able to reject the null hypothesis at α=.05 when k is at least equal to 5. Consequently, permutation-based CIs can also only be obtained when k is sufficiently large.
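These minimum p-values follow directly from the number of possible permutations (two divided by the number of permutations of the exact test):

2 / 2^c(5,6)            # no moderators,   k = 5, 6: 0.0625  0.03125
2 / factorial(c(4,5))   # with moderators, k = 4, 5: 0.0833  0.0167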
When the number of permutations required for the exact test is so large as to be essentially indistinguishable from infinity (e.g., factorial(200)), the function will terminate with an error.
Determining whether a test statistic under the permuted data is as extreme or more extreme than under the actually observed data requires making >= or <= comparisons. To avoid problems due to the finite precision with which computers generally represent numbers, the function uses a numerical tolerance (control argument comptol, which is set by default equal to .Machine$double.eps^0.5) when making such comparisons (e.g., instead of sqrt(3)^2 - 3 >= 0, which may evaluate to FALSE, we can use sqrt(3)^2 - 3 >= 0 - .Machine$double.eps^0.5, which should evaluate to TRUE).
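The inline example can be run directly to see the issue:

sqrt(3)^2 - 3 >= 0                              # may evaluate to FALSE (finite precision)
sqrt(3)^2 - 3 >= 0 - .Machine$double.eps^0.5    # evaluates to TRUE with the tolerance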
When obtaining permutation-based CIs, the function makes use of uniroot. By default, the desired accuracy is set equal to .Machine$double.eps^0.25 and the maximum number of iterations to 100. The desired accuracy and the maximum number of iterations can be adjusted with the control argument (i.e., control=list(tol=value, maxiter=value)).
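For example (values chosen arbitrarily for illustration):

## Not run:
permutest(res, permci=TRUE, control=list(tol=.Machine$double.eps^0.5, maxiter=200))
## End(Not run)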
Wolfgang Viechtbauer wvb@metafor-project.org http://www.metafor-project.org/
Follmann, D. A., & Proschan, M. A. (1999). Valid inference in random effects meta-analysis. Biometrics, 55, 732–737.
Good, P. I. (2009). Permutation, parametric, and bootstrap tests of hypotheses (3rd ed.). New York: Springer.
Higgins, J. P. T., & Thompson, S. G. (2004). Controlling the risk of spurious findings from meta-regression. Statistics in Medicine, 23, 1663–1682.
Viechtbauer, W. (2010). Conducting meta-analyses in R with the metafor package. Journal of Statistical Software, 36(3), 1–48. https://www.jstatsoft.org/v036/i03.
Viechtbauer, W., López-López, J. A., Sánchez-Meca, J., & Marín-Martínez, F. (2015). A comparison of procedures to test for moderators in mixed-effects meta-regression models. Psychological Methods, 20, 360–374.
### calculate log risk ratios and corresponding sampling variances
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

### random-effects model
res <- rma(yi, vi, data=dat)
res

### permutation test (approximate and exact)
## Not run:
permutest(res)
permutest(res, exact=TRUE)
## End(Not run)

### mixed-effects model with two moderators (absolute latitude and publication year)
res <- rma(yi, vi, mods = ~ ablat + year, data=dat)
res

### permutation test (approximate only; exact not feasible)
## Not run:
permres <- permutest(res, iter=10000, retpermdist=TRUE)
permres

### histogram of permutation distribution for absolute latitude
### dashed vertical line: the observed value of the test statistic
### red curve: standard normal density
### blue curve: kernel density estimate of the permutation distribution
### note that the tail area under the permutation distribution is larger
### than under a standard normal density (hence, the larger p-value)
hist(permres$zval.perm[,2], breaks=120, freq=FALSE, xlim=c(-5,5), ylim=c(0,.4),
     main="Permutation Distribution", xlab="Value of Test Statistic", col="gray90")
abline(v=res$zval[2], lwd=2, lty="dashed")
abline(v=0, lwd=2)
curve(dnorm, from=-5, to=5, add=TRUE, lwd=2, col=rgb(1,0,0,alpha=.7))
lines(density(permres$zval.perm[,2]), lwd=2, col=rgb(0,0,1,alpha=.7))
## End(Not run)