Latent Class Model for Two Exchangeable Raters and One Item
This function computes a latent class model for ratings on an item based on exchangeable raters (Uebersax & Grove, 1990). Additionally, several measures of rater agreement are computed (see e.g. Gwet, 2010).
lc.2raters(data, conv=0.001, maxiter=1000, progress=TRUE)

## S3 method for class 'lc.2raters'
summary(object, ...)
data: Data frame with two columns containing the item responses (which must be ordered from 0 to K) of two (exchangeable) raters

conv: Convergence criterion

maxiter: Maximum number of iterations

progress: Optional logical indicating whether iteration progress should be displayed

object: Object of class lc.2raters

...: Further arguments to be passed
For two exchangeable raters who provide ratings on an item with K+1 categories 0, ..., K, a latent class model with K+1 classes is defined. Let P(X=x, Y=y | c) denote the probability that the first rating is x and the second rating is y given the true but unknown item category (class) c. Ratings are assumed to be locally independent, i.e.

P(X=x, Y=y | c) = P(X=x | c) \cdot P(Y=y | c) = p_{x|c} \cdot p_{y|c}
Note that P(X=x|c)=P(Y=x|c)=p_{x|c} holds due to the exchangeability of raters. The latent class model estimates true class proportions π_c and conditional item probabilities p_{x|c}.
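To illustrate these model equations, the following minimal R sketch computes the marginal probability P(X=x, Y=y) of an observed rating pair and the posterior classification probabilities P(c|x,y) under local independence. The values of pi.k and probs are made up for illustration and only mimic the structure of the corresponding output components of lc.2raters.

## sketch with assumed values for class proportions and conditional probabilities
pi.k <- c(0.55, 0.30, 0.15)                    # assumed class proportions pi_c
probs <- matrix( c(0.80, 0.15, 0.05,           # assumed p_{x|c}: rows are categories x=0,1,2,
                   0.15, 0.70, 0.20,           #   columns are latent classes c
                   0.05, 0.15, 0.75 ), nrow=3, byrow=TRUE )
x <- 1 ; y <- 2                                # observed ratings of the two raters
## P(X=x, Y=y) = sum_c pi_c * p_{x|c} * p_{y|c}
p.xy <- sum( pi.k * probs[ x+1, ] * probs[ y+1, ] )
## posterior classification probabilities P(c|x,y)
post <- pi.k * probs[ x+1, ] * probs[ y+1, ] / p.xy
round( post, 3 )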
A list with the following entries
classprob.1rater.like: Classification probability P(c|x) of latent category c given a manifest rating x (estimated by maximum likelihood)

classprob.1rater.post: Classification probability P(c|x) of latent category c given a manifest rating x (estimated by the posterior distribution)

classprob.2rater.like: Classification probability P(c|(x,y)) of latent category c given two manifest ratings x and y (estimated by maximum likelihood)

classprob.2rater.post: Classification probability P(c|(x,y)) of latent category c given two manifest ratings x and y (estimated by the posterior distribution)

f.yi.qk: Likelihood of each pair of ratings

f.qk.yi: Posterior of each pair of ratings

probs: Item response probabilities p_{x|c}

pi.k: Estimated class proportions π_c

pi.k.obs: Observed manifest class proportions

freq.long: Frequency table of ratings in long format

freq.table: Symmetrized frequency table of ratings
agree.stats: Measures of rater agreement, including the percentage agreement, Cohen's kappa, Aickin's alpha (Aickin, 1990) and Gwet's AC1 (Gwet, 2008); a computation sketch for the percentage agreement and Cohen's kappa is given after this list
data: Used dataset

N.categ: Number of categories
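The percentage agreement and Cohen's kappa contained in agree.stats can be illustrated with a small sketch based on a cross-classification table of the two raters; the frequencies in tab below are hypothetical and not taken from the package.

## sketch with hypothetical frequencies of a three-category rating
tab <- matrix( c(30,  5,  2,
                  4, 25,  6,
                  1,  7, 20 ), nrow=3, byrow=TRUE )  # rater 1 (rows) x rater 2 (columns)
p.obs <- tab / sum(tab)                          # observed joint proportions
agree0 <- sum( diag(p.obs) )                     # exact percentage agreement
p.exp <- sum( rowSums(p.obs) * colSums(p.obs) )  # agreement expected by chance
kappa <- ( agree0 - p.exp ) / ( 1 - p.exp )      # Cohen's kappa
round( c( agree0=agree0, kappa=kappa ), 3 )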
Aickin, M. (1990). Maximum likelihood estimation of agreement in the constant predictive probability model, and its relation to Cohen's kappa. Biometrics, 46, 293-302.
Gwet, K. L. (2008). Computing inter-rater reliability and its variance in the presence of high agreement. British Journal of Mathematical and Statistical Psychology, 61, 29-48.
Gwet, K. L. (2010). Handbook of Inter-Rater Reliability. Advanced Analytics, Gaithersburg. http://www.agreestat.com/
Uebersax, J. S., & Grove, W. M. (1990). Latent class analysis of diagnostic agreement. Statistics in Medicine, 9, 559-572.
See also the irr package for measures of rater agreement.
#############################################################################
# EXAMPLE 1: Latent class models for rating datasets data.si05
#############################################################################

data(data.si05)

#*** Model 1: one item with two categories
mod1 <- sirt::lc.2raters( data.si05$Ex1)
summary(mod1)

#*** Model 2: one item with five categories
mod2 <- sirt::lc.2raters( data.si05$Ex2)
summary(mod2)

#*** Model 3: one item with eight categories
mod3 <- sirt::lc.2raters( data.si05$Ex3)
summary(mod3)
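As a possible follow-up to the examples (assuming mod2 has been estimated as above), the documented components of the returned list can be inspected directly:

mod2$pi.k          # estimated latent class proportions pi_c
mod2$probs         # conditional item response probabilities p_{x|c}
mod2$agree.stats   # measures of rater agreement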