
preProcess

Pre-Processing of Predictors


Description

Pre-processing transformations (centering, scaling, etc.) can be estimated from the training data and applied to any data set with the same variables.

Usage

preProcess(x, ...)

## Default S3 method:
preProcess(
  x,
  method = c("center", "scale"),
  thresh = 0.95,
  pcaComp = NULL,
  na.remove = TRUE,
  k = 5,
  knnSummary = mean,
  outcome = NULL,
  fudge = 0.2,
  numUnique = 3,
  verbose = FALSE,
  freqCut = 95/5,
  uniqueCut = 10,
  cutoff = 0.9,
  rangeBounds = c(0, 1),
  ...
)

## S3 method for class 'preProcess'
predict(object, newdata, ...)

Arguments

x

a matrix or data frame. Non-numeric predictors are allowed but will be ignored.

...

additional arguments to pass to fastICA, such as n.comp

method

a character vector specifying the type of processing. Possible values are "BoxCox", "YeoJohnson", "expoTrans", "center", "scale", "range", "knnImpute", "bagImpute", "medianImpute", "pca", "ica", "spatialSign", "corr", "zv", "nzv", and "conditionalX" (see Details below)

thresh

a cutoff for the cumulative percent of variance to be retained by PCA

pcaComp

the specific number of PCA components to keep. If specified, this over-rides thresh

na.remove

a logical; should missing values be removed from the calculations?

k

the number of nearest neighbors from the training set to use for imputation

knnSummary

function to average the neighbor values per column during imputation

outcome

a numeric or factor vector for the training set outcomes. This can be used to help estimate the Box-Cox transformation of the predictor variables (see Details below)

fudge

a tolerance value: Box-Cox transformation lambda values within +/-fudge will be coerced to 0 and within 1+/-fudge will be coerced to 1.

numUnique

how many unique values should the outcome have to estimate the Box-Cox transformation?

verbose

a logical: prints a log as the computations proceed

freqCut

the cutoff for the ratio of the most common value to the second most common value. See nearZeroVar.

uniqueCut

the cutoff for the percentage of distinct values out of the number of total samples. See nearZeroVar.

cutoff

a numeric value for the pair-wise absolute correlation cutoff. See findCorrelation.

rangeBounds

a two-element numeric vector specifying the closed interval for the range transformation

object

an object of class preProcess

newdata

a matrix or data frame of new data to be pre-processed

Details

In all cases, transformations and operations are estimated using the data in x and these operations are applied to new data using these values; nothing is recomputed when using the predict function.

The Box-Cox (method = "BoxCox"), Yeo-Johnson (method = "YeoJohnson"), and exponential transformations (method = "expoTrans") have been "repurposed" here: they are being used to transform the predictor variables. The Box-Cox transformation was developed for transforming the response variable, while another method, the Box-Tidwell transformation, was created to estimate transformations of predictor data. However, the Box-Cox method is simpler, more computationally efficient, and equally effective for estimating power transformations. The Yeo-Johnson transformation is similar to the Box-Cox model but can accommodate predictors with zero and/or negative values (while the predictor values for the Box-Cox transformation must be strictly positive). The exponential transformation of Manly (1976) can also be used for positive or negative data.
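
For illustration, a minimal sketch (using the built-in mtcars data, an assumption here and not part of this help page) of estimating a Yeo-Johnson transformation on a training set and applying it to new rows:

yj <- preProcess(mtcars[1:20, ], method = c("YeoJohnson", "center", "scale"))
head(predict(yj, mtcars[21:32, ]))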

method = "center" subtracts the mean of the predictor's data (again from the data in x) from the predictor values while method = "scale" divides by the standard deviation.
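
As a sketch, the statistics estimated from the training rows are re-used on new rows (mtcars is used purely for illustration):

cs <- preProcess(mtcars[1:20, ], method = c("center", "scale"))
cs$mean["mpg"]                      ## training-set mean used for centering
head(predict(cs, mtcars[21:32, ]))  ## new rows use the training means and standard deviations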

The "range" transformation scales the data to be within rangeBounds. If new samples have values larger or smaller than those in the training set, values will be outside of this range.
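
A short sketch (again with mtcars, chosen only for illustration) showing how new samples can fall outside rangeBounds:

rng    <- preProcess(mtcars[1:20, ], method = "range", rangeBounds = c(0, 1))
scored <- predict(rng, mtcars[21:32, ])
range(scored$hp)   ## values above the training maximum map above 1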

Predictors that are not numeric are ignored in the calculations.

method = "zv" identifies numeric predictor columns with a single value (i.e. having zero variance) and excludes them from further calculations. Similarly, method = "nzv" does the same by applying nearZeroVar to exclude "near zero-variance" predictors. The options freqCut and uniqueCut can be used to modify the filter.
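
As a sketch (the two added columns are hypothetical), constant and nearly constant predictors can be filtered out before the other steps:

nz_dat <- mtcars
nz_dat$all_same    <- 1                    ## zero variance
nz_dat$almost_same <- c(rep(0, 31), 1)     ## near-zero variance
nearZeroVar(nz_dat, freqCut = 95/5, uniqueCut = 10)  ## flags the offending columns
pp_nz <- preProcess(nz_dat, method = c("zv", "nzv", "center", "scale"))
names(predict(pp_nz, nz_dat))              ## the filtered columns are gone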

method = "corr" seeks to filter out highly correlated predictors. See findCorrelation.
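
A sketch of the correlation filter (the cutoff value here is arbitrary):

findCorrelation(cor(mtcars), cutoff = 0.9)    ## columns flagged for removal
pp_corr <- preProcess(mtcars, method = "corr", cutoff = 0.9)
ncol(predict(pp_corr, mtcars))                ## highly correlated columns are dropped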

For classification, method = "conditionalX" examines the distribution of each predictor conditional on the outcome. If there is only one unique value within any class, the predictor is excluded from further calculations (see checkConditionalX for an example). When outcome is not a factor, this calculation is not executed. This operation can be time consuming when used within resampling via train.
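
A sketch with simulated data (all objects below are hypothetical); checkConditionalX reports the offending column, and preProcess can drop it when given a factor outcome:

cls     <- factor(rep(c("a", "b"), each = 10))
sim_dat <- data.frame(x1 = c(rep(1, 10), rnorm(10)), x2 = rnorm(20))
checkConditionalX(sim_dat, cls)   ## x1 has a single value within class "a"
pp_cx <- preProcess(sim_dat, method = c("conditionalX", "center", "scale"), outcome = cls)
names(predict(pp_cx, sim_dat))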

The operations are applied in this order: zero-variance filter, near-zero variance filter, correlation filter, Box-Cox/Yeo-Johnson/exponential transformation, centering, scaling, range, imputation, PCA, ICA then spatial sign. This is a departure from versions of caret prior to version 4.76 (where imputation was done first) and is not backwards compatible if bagging was used for imputation.
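
For example, the methods can be listed in any order in the call; they are still applied in the sequence above, and verbose = TRUE prints a log (a sketch):

pp_all <- preProcess(mtcars, method = c("scale", "YeoJohnson", "center", "nzv"),
                     verbose = TRUE)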

If PCA is requested but centering and scaling are not, the values will still be centered and scaled. Similarly, when ICA is requested, the data are automatically centered and scaled.
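
A sketch of PCA extraction, either by the amount of variance retained or by a fixed number of components:

pp_var  <- preProcess(mtcars, method = "pca", thresh = 0.95)  ## centering/scaling added automatically
pp_comp <- preProcess(mtcars, method = "pca", pcaComp = 3)    ## pcaComp over-rides thresh
head(predict(pp_comp, mtcars))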

k-nearest neighbor imputation is carried out by finding the k closest samples (Euclidean distance) in the training set. Imputation via bagging fits a bagged tree model for each predictor (as a function of all the others). This method is simple, accurate and accepts missing values, but it has a much higher computational cost. Imputation via medians takes the median of each predictor in the training set and uses it to fill missing values. This method is simple, fast, and accepts missing values, but it treats each predictor independently and may be inaccurate.
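
A sketch of the imputation methods, using a copy of mtcars with some values set to missing (an assumption for illustration):

dat_na <- mtcars
dat_na$hp[c(3, 7, 11)] <- NA
knn_pp <- preProcess(dat_na, method = "knnImpute", k = 5)
med_pp <- preProcess(dat_na, method = "medianImpute")
bag_pp <- preProcess(dat_na, method = "bagImpute")   ## slower but can be more accurate
head(predict(med_pp, dat_na))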

A warning is thrown if both PCA and ICA are requested. ICA, as implemented by the fastICA package, automatically does a PCA decomposition prior to finding the ICA scores.
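
A sketch of ICA pre-processing; n.comp is passed through to fastICA (the fastICA package must be installed):

set.seed(1)
pp_ica <- preProcess(mtcars, method = "ica", n.comp = 3)
head(predict(pp_ica, mtcars))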

The function will throw an error if any numeric variable in x has fewer than two unique values, unless either method = "zv" or method = "nzv" is invoked.

Non-numeric data will not be pre-processed; their values will be left as-is in the data frame produced by the predict function. Note that when PCA or ICA is used, the non-numeric columns may be in different positions when predicted.

Value

preProcess results in a list with elements

call

the function call

method

a named list of operations and the variables used for each

dim

the dimensions of x

bc

Box-Cox transformation values, see BoxCoxTrans

mean

a vector of means (if centering was requested)

std

a vector of standard deviations (if scaling or PCA was requested)

rotation

a matrix of eigenvectors if PCA was requested

thresh

the value of thresh

ranges

a matrix of min and max values for each predictor when method includes "range" (and NULL otherwise)

numComp

the number of principal components required to capture the specified amount of variance

ica

contains values for the W and K matrices of the decomposition

median

a vector of medians (if median imputation was requested)

predict.preProcess will produce a data frame.
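
For example, the stored estimates can be inspected directly (a sketch, again using mtcars for illustration):

pp <- preProcess(mtcars, method = c("center", "scale", "pca"))
pp$method                 ## named list of operations and the variables they use
pp$std                    ## training-set standard deviations
pp$numComp                ## number of components retained
str(predict(pp, mtcars))  ## a data frame of processed values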

Author(s)

Max Kuhn, median imputation by Zachary Mayer

References

Kuhn and Johnson (2013), Applied Predictive Modeling, Springer, New York (chapter 4)

Kuhn (2008), Building Predictive Models in R Using the caret Package, Journal of Statistical Software, 28(5) (http://www.jstatsoft.org/article/view/v028i05/v28i05.pdf)

Box, G. E. P. and Cox, D. R. (1964) An analysis of transformations (with discussion). Journal of the Royal Statistical Society B, 26, 211-252.

Box, G. E. P. and Tidwell, P. W. (1962) Transformation of the independent variables. Technometrics 4, 531-550.

Manly, B. L. (1976) Exponential data transformations. The Statistician, 25, 37-42.

Yeo, I-K. and Johnson, R. (2000). A new family of power transformations to improve normality or symmetry. Biometrika, 87, 954-959.

See Also

Examples

data(BloodBrain)
# one variable (the third column) has a single unique value, so the first
# call below throws an error; the later calls drop that column
## Not run: 
preProc <- preProcess(bbbDescr)

preProc  <- preProcess(bbbDescr[1:100,-3])
training <- predict(preProc, bbbDescr[1:100,-3])
test     <- predict(preProc, bbbDescr[101:208,-3])

## End(Not run)

