Brier Score for Assessing Prediction Accuracy
Calculate Brier score for assessing the quality of the probabilistic predictions of binary events.
BrierScore(...)

## S3 method for class 'glm'
BrierScore(x, scaled = FALSE, ...)

## Default S3 method:
BrierScore(resp, pred, scaled = FALSE, ...)
x       a glm object
resp    the response variable
pred    the predicted values
scaled  logical, determining whether the scaled Brier score should be returned. Default is FALSE.
...     further arguments to be passed to other functions.
The Brier score is a proper score function that measures the accuracy of probabilistic predictions. It is applicable to tasks in which predictions must assign probabilities to a set of mutually exclusive discrete outcomes. The set of possible outcomes can be either binary or categorical in nature, and the probabilities assigned to this set of outcomes must sum to one (where each individual probability is in the range of 0 to 1).
It is calculated as

  1/n * sum((p_i - o_i)^2)

where p_i is the predicted probability and o_i is the observed outcome, which can only take the values 0 or 1.
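As a plain R illustration of the formula above (a minimal hand-computed sketch with made-up numbers, not taken from the package):

  obs  <- c(1, 0, 1, 1, 0)            # observed outcomes o_i, either 0 or 1
  prob <- c(0.9, 0.2, 0.6, 0.8, 0.1)  # predicted probabilities p_i
  mean((prob - obs)^2)                # 1/n * sum((p_i - o_i)^2), here 0.052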
The lower the Brier score is for a set of predictions, the better the predictions are calibrated. Note that the Brier score, in its most common formulation, takes on a value between zero and one, since one is the largest possible squared difference between a predicted probability (which must lie between zero and one) and the actual outcome (which can only be 0 or 1). (In the original (1950) formulation of the Brier score, the range is double, from zero to two.)
a numeric value
Andri Signorell <andri@signorell.net>
Brier, G. W. (1950) Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78, 1-3.
r.glm <- glm(Survived ~ ., data=Untable(Titanic), family=binomial)
BrierScore(r.glm)
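The default method can also be called directly on an observed 0/1 response and a vector of predicted probabilities. The following sketch continues the glm example; it assumes that Survived in the untabled data is a two-level factor whose second level is the event, so it is converted to 0/1 before being passed as resp:

d.tit <- Untable(Titanic)
r.glm <- glm(Survived ~ ., data=d.tit, family=binomial)

# default method: observed 0/1 outcomes and fitted probabilities
BrierScore(resp = as.numeric(d.tit$Survived) - 1, pred = fitted(r.glm))

# scaled variant of the score, as described in the arguments above
BrierScore(r.glm, scaled = TRUE)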