Extract Most "Important" Predictors (Experimental)
Extract the most "important" predictors for regression and classification models.
topPredictors(object, n = 1L, ...)

## Default S3 method:
topPredictors(object, n = 1L, ...)

## S3 method for class 'train':
topPredictors(object, n = 1L, ...)
object: A fitted model object of appropriate class (e.g., "randomForest").

n: Integer specifying the number of predictors to return. Default is 1.

...: Additional optional arguments to be passed onto varImp.
This function uses the generic function varImp to calculate variable importance scores for each predictor. The scores are then sorted in decreasing order, and the names of the n highest-scoring predictors are returned.
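The core logic described above can be sketched in a few lines of base R. This is a simplified illustration, not the package implementation; `imp` stands in for the kind of importance table (one "Overall" score per predictor, predictor names as row names) that varImp typically returns.

```r
# Minimal sketch of the topPredictors logic:
# score each predictor, sort decreasing, return the top-n names.
top_predictors_sketch <- function(imp, n = 1L) {
  scores <- imp$Overall          # one importance score per predictor
  names(scores) <- rownames(imp) # predictor names carried on the scores
  names(sort(scores, decreasing = TRUE))[seq_len(n)]
}

# Toy importance table with three predictors
imp <- data.frame(Overall = c(0.2, 0.9, 0.5),
                  row.names = c("x1", "x2", "x3"))
top_predictors_sketch(imp, n = 2)  # "x2" "x3"
```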
## Not run:
#
# Regression example (requires randomForest package to run)
#

# Load required packages
library(ggplot2)
library(randomForest)

# Fit a random forest to the mtcars dataset
data(mtcars, package = "datasets")
set.seed(101)
mtcars.rf <- randomForest(mpg ~ ., data = mtcars, mtry = 5, importance = TRUE)

# Top four predictors
top4 <- topPredictors(mtcars.rf, n = 4)

# Construct partial dependence functions for top four predictors
pd <- NULL
for (i in top4) {
  tmp <- partial(mtcars.rf, pred.var = i)
  names(tmp) <- c("x", "y")
  pd <- rbind(pd, cbind(tmp, predictor = i))
}

# Display partial dependence functions
ggplot(pd, aes(x, y)) +
  geom_line() +
  facet_wrap(~ predictor, scales = "free") +
  theme_bw() +
  ylab("mpg")

## End(Not run)