Prediction strength for estimating number of clusters
Computes the prediction strength of a clustering of a dataset into different numbers of components. The prediction strength is defined according to Tibshirani and Walther (2005), who recommend choosing as the optimal number of clusters the largest number of clusters that leads to a prediction strength above 0.8 or 0.9. See Details.
Various clustering methods can be used, see argument clustermethod. In Tibshirani and Walther (2005), only classification to the nearest centroid is discussed, but more methods are offered here, see argument classification.
prediction.strength(xdata, Gmin=2, Gmax=10, M=50,
                    clustermethod=kmeansCBI, classification="centroid",
                    centroidname=NULL, cutoff=0.8, nnk=1,
                    distances=inherits(xdata,"dist"), count=FALSE, ...)

## S3 method for class 'predstr'
print(x, ...)
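For instance, with the defaults above one picks the largest number of clusters whose mean prediction strength exceeds the cutoff. An illustrative call (the iris data are used purely as an example; the fpc package is assumed to be loaded):

library(fpc)
# stricter cutoff of 0.9, as also suggested by Tibshirani and Walther (2005)
ps <- prediction.strength(iris[, -5], Gmin=2, Gmax=5, M=20, cutoff=0.9)
ps$optimalk   # largest number of clusters with mean prediction strength above 0.9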
xdata: data (something that can be coerced into a matrix).
Gmin: integer. Minimum number of clusters. Note that the prediction strength for 1 cluster is trivially 1 and is included in the output automatically; Gmin should therefore be at least 2.
Gmax: integer. Maximum number of clusters.
M: integer. Number of times the dataset is divided into two halves.
clustermethod: an interface function (the function name, not a string containing the name, has to be provided!). This defines the clustering method. See the "Details" section of clusterboot for the required format of such interface functions.
classification: string. This determines how non-clustered points are classified to given clusters. Options are explained in classifnp (used if distances=FALSE) and classifdist (used if distances=TRUE); see also the illustrative call after this argument list.
centroidname: string. Name of the component of the clustermethod output that contains the cluster centroids; relevant for classification="centroid".
cutoff: numeric between 0 and 1. The optimal number of clusters is the maximum one with prediction strength above cutoff.
nnk: number of nearest neighbours if classification="knn"; see classifnp.
distances: logical. If TRUE, xdata is interpreted as a dissimilarity matrix and passed on to the clustering method as a "dist" object; classification is then based on dissimilarities.
count: logical. If TRUE, the current number of clusters and simulation run are printed during computation.
x: object of class predstr.
...: arguments to be passed on to the clustering method.
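The following sketch illustrates the classification-related arguments (it assumes, based on the nnk description, that classifnp offers a "knn" option; the data are arbitrary):

library(fpc)
x <- as.matrix(iris[, -5])
# nearest-neighbour classification with 3 neighbours instead of the default
# nearest-centroid classification; further arguments in ... would be passed
# on to the clustering method
prediction.strength(x, Gmin=2, Gmax=4, M=10,
                    clustermethod=kmeansCBI,
                    classification="knn", nnk=3)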
The prediction strength for a certain number of clusters k under a random partition of the dataset into halves A and B is defined as follows. Both halves are clustered with k clusters. Then the points of A are classified to the clusters of B. In the original paper this is done by assigning every observation in A to the closest cluster centroid in B (corresponding to classification="centroid"), but other methods are possible, see classifnp. A pair of points of A lying in the same A-cluster is defined to be correctly predicted if both points are classified into the same cluster of B. The same is done with the points of B relative to the clustering of A. The prediction strength of each of the two clusterings is the minimum (taken over all clusters) relative frequency of correctly predicted pairs of points of that cluster. The final mean prediction strength statistic is the mean over all 2M clusterings.
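The following is a minimal sketch (not the fpc implementation; all names are invented for illustration) of how the statistic for a single random split and a single k could be computed with kmeans and nearest-centroid classification:

ps_once <- function(data, k) {
  data <- as.matrix(data)
  n <- nrow(data)
  inA <- sample(rep(c(TRUE, FALSE), length.out = n))   # random split into halves A and B
  A <- data[inA, , drop = FALSE]
  B <- data[!inA, , drop = FALSE]
  kmA <- kmeans(A, k)
  kmB <- kmeans(B, k)
  # prediction strength of the clustering of 'half' judged with the centroids of the other half
  strength <- function(half, cl, othercenters) {
    # classify each point of 'half' to the nearest centroid of the other half
    dmat <- as.matrix(dist(rbind(othercenters, half)))
    dmat <- dmat[-(1:k), 1:k, drop = FALSE]
    pred <- apply(dmat, 1, which.min)
    # for every cluster of 'half': relative frequency of within-cluster pairs
    # whose members are predicted into the same cluster of the other half;
    # the clusterwise minimum is the prediction strength of this clustering
    min(sapply(unique(cl), function(j) {
      idx <- which(cl == j)
      if (length(idx) < 2) return(1)    # single-point clusters are trivially correct
      pairs <- combn(idx, 2)
      mean(pred[pairs[1, ]] == pred[pairs[2, ]])
    }))
  }
  c(strength(A, kmA$cluster, kmB$centers),
    strength(B, kmB$cluster, kmA$centers))
}
# averaging the values of ps_once() over M random splits for each k
# reproduces the logic of the mean prediction strength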
prediction.strength returns an object of class predstr, which is a list with the following components (see the example after the list):
predcorr: list of vectors of length M with the relative frequencies of correctly predicted pairs (clusterwise minimum); every list entry refers to one number of clusters.
mean.pred: means of predcorr for all numbers of clusters.
optimalk: optimal number of clusters.
cutoff: see above.
method: a string identifying the clustering method.
Gmax: see above.
M: see above.
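A sketch of inspecting the returned components (names as documented above; the data and settings are arbitrary):

ps <- prediction.strength(iris[, -5], Gmin=2, Gmax=4, M=10)
str(ps$predcorr)   # per-split clusterwise-minimum frequencies, one vector per number of clusters
ps$mean.pred       # their means for each number of clusters
ps$method          # name of the clustering method used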
Tibshirani, R. and Walther, G. (2005) Cluster Validation by Prediction Strength, Journal of Computational and Graphical Statistics, 14, 511-528.
options(digits=3)
set.seed(98765)
iriss <- iris[sample(150,20),-5]
prediction.strength(iriss,2,3,M=3)
prediction.strength(iriss,2,3,M=3,clustermethod=claraCBI)
# The examples are fast, but of course M should really be larger.
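One might additionally plot the mean prediction strength against the number of clusters to see where it falls below the cutoff (an illustrative sketch; it assumes mean.pred holds one value per number of clusters starting from 1):

ps <- prediction.strength(iriss,2,3,M=3)
plot(seq_along(ps$mean.pred), ps$mean.pred, type="b",
     xlab="number of clusters", ylab="mean prediction strength")
abline(h=ps$cutoff, lty=2)   # optimal k: largest k above this line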