MCMCirtHier1d

Markov Chain Monte Carlo for Hierarchical One Dimensional Item Response Theory Model, Covariates Predicting Latent Ideal Point (Ability)


Description

This function generates a sample from the posterior distribution of a one dimensional item response theory (IRT) model, with multivariate Normal priors on the item parameters, and a Normal-Inverse Gamma hierarchical prior on subject ideal points (abilities). The user supplies item-response data, subject covariates, and priors. Note that this identification strategy obviates the constraints used on theta in MCMCirt1d. A sample from the posterior distribution is returned as an mcmc object, which can be subsequently analyzed with functions provided in the coda package.

Usage

MCMCirtHier1d(
  datamatrix,
  Xjdata,
  burnin = 1000,
  mcmc = 20000,
  thin = 1,
  verbose = 0,
  seed = NA,
  theta.start = NA,
  a.start = NA,
  b.start = NA,
  beta.start = NA,
  b0 = 0,
  B0 = 0.01,
  c0 = 0.001,
  d0 = 0.001,
  ab0 = 0,
  AB0 = 0.25,
  store.item = FALSE,
  store.ability = TRUE,
  drop.constant.items = TRUE,
  marginal.likelihood = c("none", "Chib95"),
  px = TRUE,
  px_a0 = 10,
  px_b0 = 10,
  ...
)

Arguments

datamatrix

The matrix of data. Must be 0, 1, or missing values. The rows of datamatrix correspond to subjects and the columns correspond to items.

Xjdata

A data.frame containing second-level predictor covariates for ideal points θ. Predictors are modeled as a linear regression on the mean vector of θ; the posterior sample contains regression coefficients β and common variance σ^2. See Rivers (2003) for a thorough discussion of identification of IRT models.
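
For orientation, a minimal toy sketch (names and values are purely illustrative, not part of the package): rows of datamatrix are subjects, columns are items, and Xjdata supplies one row of covariates per subject.

## illustrative shapes only: 4 subjects answering 3 binary items, with one NA
toydata <- matrix(c(1, 0, 1,
                    0, 0, NA,
                    1, 1, 1,
                    0, 1, 0),
                  nrow = 4, byrow = TRUE)
toyXj <- data.frame(group = c(1, 1, 0, 0),   # one row per subject
                    age   = c(34, 51, 29, 46))
## fit <- MCMCirtHier1d(toydata, Xjdata = toyXj)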

burnin

The number of burn-in iterations for the sampler.

mcmc

The number of Gibbs iterations for the sampler.

thin

The thinning interval used in the simulation. The number of Gibbs iterations must be divisible by this value.

verbose

A switch which determines whether or not the progress of the sampler is printed to the screen. If verbose is greater than 0, then every verbose-th iteration will be printed to the screen.

seed

The seed for the random number generator. If NA, the Mersenne Twister generator is used with default seed 12345; if an integer is passed, it is used to seed the Mersenne Twister. The user can also pass a list of length two to use the L'Ecuyer random number generator, which is suitable for parallel computation. The first element of the list is the L'Ecuyer seed, which is a vector of length six or NA (if NA, a default seed of rep(12345,6) is used). The second element of the list is a positive substream number. See the MCMCpack specification for more details.
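
For example, a sketch of the two forms described above (the substream number here is arbitrary):

seed.mt  <- 42                       # single integer: seeds the Mersenne Twister
seed.lec <- list(rep(12345, 6), 2)   # L'Ecuyer seed (length-six vector) plus a positive substream
## e.g. MCMCirtHier1d(datamatrix, Xjdata, seed = seed.lec, ...)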

theta.start

The starting values for the subject abilities (ideal points). This can either be a scalar or a column vector with dimension equal to the number of voters. If this takes a scalar value, then that value will serve as the starting value for all of the thetas. The default value of NA will choose the starting values based on an eigenvalue-eigenvector decomposition of the agreement score matrix formed from the datamatrix.

a.start

The starting values for the a difficulty parameters. This can either be a scalar or a column vector with dimension equal to the number of items. If this takes a scalar value, then that value will serve as the starting value for all a. The default value of NA will set the starting values based on a series of probit regressions that condition on the starting values of theta.

b.start

The starting values for the b discrimination parameters. This can either be a scalar or a column vector with dimension equal to the number of items. If this takes a scalar value, then that value will serve as the starting value for all b. The default value of NA will set the starting values based on a series of probit regressions that condition on the starting values of theta.

beta.start

The starting values for the β regression coefficients that predict the means of ideal points θ. This can either be a scalar or a column vector with length equal to the number of covariates. If this takes a scalar value, then that value will serve as the starting value for all of the betas. The default value of NA will set the starting values based on a linear regression of the covariates on (either provided or generated) theta.start.

b0

The prior mean of β. Can be either a scalar or a vector of length equal to the number of subject covariates. If a scalar, all means will be set to the passed value.

B0

The prior precision of β. This can either be a scalar or a square matrix with dimensions equal to the number of betas. If this takes a scalar value, then that value times an identity matrix serves as the prior precision of beta. A default proper but diffuse value of .01 ensures finite marginal likelihood for model comparison. A value of 0 is equivalent to an improper uniform prior for beta.
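
As a sketch, two equivalent ways of specifying this precision, assuming k second-level coefficients (k is hypothetical here; it depends on the columns supplied in Xjdata and any intercept term):

k <- 3
B0.scalar <- 0.01            # expands to 0.01 times a k x k identity matrix
B0.matrix <- diag(0.01, k)   # explicit k x k prior precision matrix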

c0

c_0/2 is the shape parameter for the inverse Gamma prior on σ^2 (the variance of θ). The amount of information in the inverse Gamma prior is something like that from c_0 pseudo-observations.

d0

d_0/2 is the scale parameter for the inverse Gamma prior on σ^2 (the variance of θ). In constructing the inverse Gamma prior, d_0 acts like the sum of squared errors from the c_0 pseudo-observations.
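
Combining the two parameters above, the prior on the second-level variance is

σ^2 \sim \mathcal{IG}(c_0/2, d_0/2)

so the defaults c0 = 0.001 and d0 = 0.001 give a diffuse prior.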

ab0

The prior mean of (a, b). Can be either a scalar or a 2-vector. If a scalar, both means will be set to the passed value. The prior mean is assumed to be the same across all items.

AB0

The prior precision of (a, b). This can either be a scalar or a 2 by 2 matrix. If this takes a scalar value, then that value times an identity matrix serves as the prior precision. The prior precision is assumed to be the same across all items.

store.item

A switch that determines whether or not to store the item parameters for posterior analysis. NOTE: In situations with many items storing the item parameters takes an enormous amount of memory, so store.item should only be TRUE if the chain is thinned heavily, or for applications with a small number of items. By default, the item parameters are not stored.

store.ability

A switch that determines whether or not to store the ability parameters for posterior analysis. NOTE: In situations with many individuals storing the ability parameters takes an enormous amount of memory, so store.ability should only be TRUE if the chain is thinned heavily, or for applications with a small number of individuals. By default, ability parameters are stored.

drop.constant.items

A switch that determines whether or not items that have no variation should be deleted before fitting the model. Default = TRUE.

marginal.likelihood

Should the marginal likelihood of the second-level model on ideal points be calculated using the method of Chib (1995)? It is stored as an attribute of the posterior mcmc object and is suitable for model comparison using BayesFactor.
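
A hedged sketch of how the stored value might be used for model comparison (posterior1 is the fit from the Examples; posterior1b stands for a hypothetical second fit, also run with marginal.likelihood = "Chib95"):

## Not run: 
attr(posterior1, "logmarglike")        # log marginal likelihood of the second-level model
BayesFactor(posterior1, posterior1b)   # pairwise Bayes factors via MCMCpack's BayesFactor()
## End(Not run)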

px

Use Parameter Expansion to reduce autocorrelation in the chain? PX introduces an unidentified parameter alpha for the residual variance in the latent data (Liu and Wu 1999). Default = TRUE.

px_a0

Prior shape parameter for the inverse-gamma distribution on alpha, the residual variance of the latent data. Default = 10.

px_b0

Prior scale parameter for the inverse-gamma distribution on alpha, the residual variance of the latent data. Default = 10.
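
Together with px_a0 above, this specifies the working-parameter prior (shape and scale, respectively):

α \sim \mathcal{IG}(px_a0, px_b0)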

...

further arguments to be passed

Details

If you are interested in fitting K-dimensional item response theory models, or would rather identify the model by placing constraints on the item parameters, please see MCMCirtKd.

MCMCirtHier1d simulates from the posterior distribution using standard Gibbs sampling with data augmentation (a Normal draw for the subject abilities, a multivariate Normal draw for the second-level coefficients on the subject ability predictors, an Inverse-Gamma draw for the second-level variance of subject abilities, a multivariate Normal draw for the item parameters, and a truncated Normal draw for the latent utilities). The simulation proper is done in compiled C++ code to maximize efficiency. Please consult the coda documentation for a comprehensive list of functions that can be used to analyze the posterior sample.

The model takes the following form. We assume that each subject has a subject ability (ideal point) denoted θ_j and that each item has a difficulty parameter a_i and discrimination parameter b_i. The observed choices by the subjects on the items form the observed data matrix, which is (I × J). We assume that each choice is dictated by an unobserved utility:

z_{i,j} = -a_i + b_i θ_j + \varepsilon_{i,j}

where the errors are assumed to be distributed standard Normal. This constitutes the measurement, or level-1, model. The subject abilities (ideal points) are modeled at the second level by a Normal linear regression on the subject covariates Xjdata, with common variance σ^2. The parameters of interest are the subject abilities (ideal points), the item parameters, and the second-level coefficients.
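
Writing x_j for subject j's row of Xjdata (whether an intercept term is included is not shown here), the two levels can be summarized as:

z_{i,j} = -a_i + b_i θ_j + \varepsilon_{i,j}, \quad \varepsilon_{i,j} \sim \mathcal{N}(0,1)

θ_j \sim \mathcal{N}(x_j' β, σ^2)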

We assume the following priors. For the subject abilities (ideal points):

θ_j \sim \mathcal{N}(μ_θ, T_0^{-1})

For the item parameters, the prior is:

\left[a_i, b_i\right]' \sim \mathcal{N}_2(ab_0, AB_0^{-1})

The model is identified by the proper priors on the item parameters and by the hierarchical prior on the subject abilities; as noted above, no constraints on the ability parameters are required.

As is the case with all measurement models, make sure that you have plenty of free memory, especially when storing the item parameters.

Value

An mcmc object that contains the sample from the posterior distribution. This object can be summarized by functions provided by the coda package. If marginal.likelihood = "Chib95" the object will have attribute logmarglike.
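
For example, a minimal post-estimation sketch using coda (posterior1 as in the Examples below):

## Not run: 
library(coda)
summary(posterior1)   # posterior means, standard deviations, and quantiles
plot(posterior1)      # trace and density plots for the stored parameters
## End(Not run)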

Author(s)

References

James H. Albert. 1992. “Bayesian Estimation of Normal Ogive Item Response Curves Using Gibbs Sampling.” Journal of Educational Statistics. 17: 251–269.

Siddhartha Chib. 1995. “Marginal Likelihood from the Gibbs Output.” Journal of the American Statistical Association. 90: 1313–1321.

Joshua Clinton, Simon Jackman, and Douglas Rivers. 2004. “The Statistical Analysis of Roll Call Data." American Political Science Review 98: 355–370.

Valen E. Johnson and James H. Albert. 1999. “Ordinal Data Modeling." Springer: New York.

Liu, Jun S. and Ying Nian Wu. 1999. “Parameter Expansion for Data Augmentation.” Journal of the American Statistical Association 94: 1264–1274.

Andrew D. Martin, Kevin M. Quinn, and Jong Hee Park. 2011. “MCMCpack: Markov Chain Monte Carlo in R.”, Journal of Statistical Software. 42(9): 1-21. https://www.jstatsoft.org/v42/i09/.

Daniel Pemstein, Kevin M. Quinn, and Andrew D. Martin. 2007. Scythe Statistical Library 1.0. http://scythe.lsa.umich.edu.

Martyn Plummer, Nicky Best, Kate Cowles, and Karen Vines. 2006. “Output Analysis and Diagnostics for MCMC (CODA)”, R News. 6(1): 7-11. https://CRAN.R-project.org/doc/Rnews/Rnews_2006-1.pdf.

Douglas Rivers. 2004. “Identification of Multidimensional Item-Response Models." Stanford University, typescript.

See Also

plot.mcmc, summary.mcmc, MCMCirtKd

Examples

## Not run: 
data(SupremeCourt)

Xjdata <- data.frame(presparty= c(1,1,0,1,1,1,1,0,0),
                     sex= c(0,0,1,0,0,0,0,1,0))

## Parameter Expansion reduces autocorrelation.
  posterior1 <- MCMCirtHier1d(t(SupremeCourt),
                   burnin=50000, mcmc=10000, thin=20,
                   verbose=10000,
                   Xjdata=Xjdata,
                   marginal.likelihood="Chib95",
                   px=TRUE)

## But, you can always turn it off.
  posterior2 <- MCMCirtHier1d(t(SupremeCourt),
                   burnin=50000, mcmc=10000, thin=20,
                   verbose=10000,
                   Xjdata=Xjdata,
                   #marginal.likelihood="Chib95",
                   px=FALSE)
## Note that the hierarchical model has greater autocorrelation than
## the naive IRT model.
  posterior0 <- MCMCirt1d(t(SupremeCourt),
                        theta.constraints=list(Scalia="+", Ginsburg="-"),
                        B0.alpha=.2, B0.beta=.2,
                        burnin=50000, mcmc=100000, thin=100, verbose=10000,
                        store.item=FALSE)

## Randomly 10% Missing -- this affects the expansion parameter, increasing
## the variance of the (unidentified) latent parameter alpha.

   scMiss <- SupremeCourt
   scMiss[matrix(as.logical(rbinom(nrow(SupremeCourt)*ncol(SupremeCourt), 1, .1)),
      dim(SupremeCourt))] <- NA

   posterior1.miss <- MCMCirtHier1d(t(scMiss),
                   burnin=80000, mcmc=10000, thin=20,
                   verbose=10000,
                   Xjdata=Xjdata,
                   marginal.likelihood="Chib95",
                   px=TRUE)

   
## End(Not run)
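
As a hedged follow-up to the autocorrelation comments above, coda's diagnostics can be used to compare the chains directly (posterior1 and posterior2 from the runs above):

## Not run: 
library(coda)
effectiveSize(posterior1)   # effective sample sizes; higher means less autocorrelation
effectiveSize(posterior2)   # with px=FALSE, typically smaller effective sizes
autocorr.diag(posterior1)   # lag-autocorrelations of the stored draws
## End(Not run)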
