Density Estimation with Dirichlet Process Prior and Normal Base
rDPGibbs implements a Gibbs sampler to draw from the posterior for a normal mixture problem with a Dirichlet Process prior. A natural conjugate base prior is used along with priors on the hyperparameters of this distribution. One interpretation of this model is as a normal mixture with a random number of components that can grow with the sample size.
rDPGibbs(Prior, Data, Mcmc)
Data:    list(y)
Prior:   list(Prioralpha, lambda_hyper)
Mcmc:    list(R, keep, nprint, maxuniq, SCALE, gridsize)
y_i ~ N(μ_i, Σ_i)
θ_i = (μ_i, Σ_i) ~ DP(G_0(λ), alpha)

G_0(λ):
μ_i | Σ_i ~ N(0, Σ_i (x) a^{-1})
Σ_i ~ IW(nu, nu*v*I)

λ(a, nu, v):
a  ~ uniform on grid[alim[1], alim[2]]
nu ~ uniform on grid[dim(data)-1 + exp(nulim[1]), dim(data)-1 + exp(nulim[2])]
v  ~ uniform on grid[vlim[1], vlim[2]]

alpha ~ (1 - (alpha - alphamin)/(alphamax - alphamin))^power
alpha = alphamin implies expected number of components = Istarmin
alpha = alphamax implies expected number of components = Istarmax

We parameterize the prior on Σ_i such that mode(Σ) = [nu/(nu+2)]*v*I. The support of nu enforces a valid IW density; nulim[1] > 0.
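The bounds alphamin and alphamax above are tied to Istarmin and Istarmax through the standard Dirichlet Process result that, with n observations, the expected number of unique components is sum_{i=1}^{n} alpha/(alpha + i - 1). The snippet below is only an illustrative sketch of that relationship; the function names are ours and are not part of bayesm.

## sketch of the DP identity linking alpha to the expected number of components
expectedComp = function(alpha, n) sum(alpha / (alpha + (1:n) - 1))

## solve for the alpha that yields a target expected number of components
alphaForIstar = function(Istar, n) {
  uniroot(function(a) expectedComp(a, n) - Istar, interval=c(1e-6, 100))$root
}

alphaForIstar(Istar=2,  n=200)   # small alpha: few components expected a priori
alphaForIstar(Istar=20, n=200)   # larger alpha: many components expected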
We use a structure for nmix that is compatible with the bayesm routines for finite mixtures of normals. This allows us to use the same summary and plotting methods.
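For example, assuming out is the return value of an earlier rDPGibbs call and y is the data matrix that was passed to it, the generic bayesm methods can be applied directly to the nmix component:

## sketch only: 'out' and 'y' are assumed to come from an earlier rDPGibbs run
summary(out$nmix)        # numerical summary of the normal mixture draws
plot(out$nmix, Data=y)   # density estimate plotted against the data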
The default choices of alim, nulim, and vlim determine the location and approximate size of the candidate "atoms" or possible normal components. The defaults are sensible given that we scale the data. Without scaling, you want to ensure that alim covers a wide enough range of values (remember a is a precision parameter) and that vlim allows v values big enough to propose Sigma matrices wide enough to cover the range of the data.
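As an illustration, here is a hedged sketch of widening these supports for unscaled data; the particular limits are placeholders, not recommendations.

## illustrative values only -- choose limits appropriate to the scale of your data
lambda_hyper = list(alim  = c(0.01, 20),   # wider support for the precision parameter a
                    nulim = c(0.01, 5),    # support for nu (on the exp() grid scale)
                    vlim  = c(0.1, 20))    # larger v allows wider proposed Sigma matrices
Prior = list(Prioralpha = list(Istarmin=1, Istarmax=10, power=0.8),
             lambda_hyper = lambda_hyper)
## out = rDPGibbs(Prior=Prior, Data=list(y=y), Mcmc=list(R=2000, SCALE=FALSE))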
A careful analyst should look at the posterior distributions of a, nu, and v to make sure that the supports set by alim, nulim, and vlim are appropriate. In other words, if we see the posterior bunched up at one end of these support ranges, we should widen the range and rerun.
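A quick check, assuming out holds a previous rDPGibbs result, is to histogram the hyperparameter draws returned by the sampler:

## if a posterior piles up at an endpoint of its support, widen the
## corresponding limit in lambda_hyper and rerun
par(mfrow=c(1,3))
hist(out$adraw,  main="a",  xlab="", breaks=20)
hist(out$nudraw, main="nu", xlab="", breaks=20)
hist(out$vdraw,  main="v",  xlab="", breaks=20)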
If you want to force the procedure to use many small atoms, then set nulim to consider only large values, set vlim to consider only small scaling constants, and set Istarmax to a large number. This will create a very "lumpy" density estimate, somewhat like classical kernel density estimates. Of course, this is not advised if you have a prior belief that densities are relatively smooth.
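For concreteness, a sketch of settings that push the sampler toward many small atoms follows; the specific numbers are illustrative assumptions, not defaults.

## kernel-like, "lumpy" settings (illustrative only)
Prioralpha   = list(Istarmin=1, Istarmax=100, power=0.8)   # allow many components
lambda_hyper = list(nulim = c(2, 5),        # only large nu values
                    vlim  = c(0.05, 0.5))   # only small scaling constants v
Prior = list(Prioralpha=Prioralpha, lambda_hyper=lambda_hyper)
## out = rDPGibbs(Prior=Prior, Data=list(y=y), Mcmc=list(R=5000))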
Data = list(y)

  y:   n x k matrix of observations on k-dimensional data
Prior = list(Prioralpha, lambda_hyper)  [optional]

  Prioralpha:     list(Istarmin, Istarmax, power)
    $Istarmin:    expected number of components at lower bound of support of alpha (def: 1)
    $Istarmax:    expected number of components at upper bound of support of alpha (def: min(50, 0.1*nrow(y)))
    $power:       power parameter for alpha prior (def: 0.8)

  lambda_hyper:   list(alim, nulim, vlim)
    $alim:        defines support of a distribution (def: c(0.01, 10))
    $nulim:       defines support of nu distribution (def: c(0.01, 3))
    $vlim:        defines support of v distribution (def: c(0.1, 4))
Mcmc = list(R, keep, nprint, maxuniq, SCALE, gridsize)  [only R required]

  R:          number of MCMC draws
  keep:       MCMC thinning parameter -- keep every keep'th draw (def: 1)
  nprint:     print the estimated time remaining for every nprint'th draw (def: 100, set to 0 for no print)
  maxuniq:    storage constraint on the number of unique components (def: 200)
  SCALE:      should the data be scaled by mean and standard deviation before posterior draws (def: TRUE)
  gridsize:   number of discrete points for hyperparameter priors (def: 20)
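Putting the pieces together, here is a sketch of a call with every Prior and Mcmc element spelled out at its documented default (y is assumed to be an n x k data matrix; only R is strictly required in Mcmc):

Prior = list(Prioralpha   = list(Istarmin=1, Istarmax=min(50, 0.1*nrow(y)), power=0.8),
             lambda_hyper = list(alim=c(0.01,10), nulim=c(0.01,3), vlim=c(0.1,4)))
Mcmc  = list(R=2000, keep=1, nprint=100, maxuniq=200, SCALE=TRUE, gridsize=20)
## out = rDPGibbs(Prior=Prior, Data=list(y=y), Mcmc=Mcmc)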
nmix Details

nmix is a list with 3 components. Several functions in the bayesm package that involve a Dirichlet Process or mixture-of-normals return nmix. Across these functions, a common structure is used for nmix in order to utilize generic summary and plotting functions.
  probdraw:   ncomp x R/keep matrix that reports the probability that each draw came from a particular component
  zdraw:      R/keep x nobs matrix that indicates which component each observation is assigned to in each draw
  compdraw:   a list of R/keep lists of ncomp lists; each of the inner-most lists has 2 elements: a vector of draws for mu and a matrix of draws for the Cholesky root of Sigma
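As an illustration, a hedged sketch of walking this structure is given below; out is assumed to hold an rDPGibbs result, and the $mu and $rooti element names are assumptions based on the common bayesm component format.

last  = length(out$nmix$compdraw)      # index of the final kept draw
comps = out$nmix$compdraw[[last]]      # list of normal components for that draw
length(comps)                          # number of unique components in that draw
comps[[1]]$mu                          # mean vector of the first component
comps[[1]]$rooti                       # Cholesky-root matrix stored for that component
table(out$nmix$zdraw[last, ])          # observations assigned to each component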
A list containing:

  nmix:        a list containing probdraw, zdraw, and compdraw (see nmix Details above)
  alphadraw:   R/keep x 1 vector of alpha draws
  nudraw:      R/keep x 1 vector of nu draws
  adraw:       R/keep x 1 vector of a draws
  vdraw:       R/keep x 1 vector of v draws
Peter Rossi, Anderson School, UCLA, perossichi@gmail.com.
if(nchar(Sys.getenv("LONG_TEST")) != 0) {R=2000} else {R=10}
set.seed(66)

## simulate univariate data from Chi-Sq
N = 200
chisqdf = 8
y1 = as.matrix(rchisq(N, df=chisqdf))

## set arguments for rDPGibbs
Data1 = list(y=y1)
Prioralpha = list(Istarmin=1, Istarmax=10, power=0.8)
Prior1 = list(Prioralpha=Prioralpha)
Mcmc = list(R=R, keep=1, maxuniq=200)

out1 = rDPGibbs(Prior=Prior1, Data=Data1, Mcmc=Mcmc)

if(0){
  ## plotting examples
  rgi = c(0,20)
  grid = matrix(seq(from=rgi[1], to=rgi[2], length.out=50), ncol=1)
  deltax = (rgi[2]-rgi[1]) / nrow(grid)
  plot(out1$nmix, Grid=grid, Data=y1)

  ## plot true density with histogram
  plot(range(grid[,1]), 1.5*range(dchisq(grid[,1], df=chisqdf)), type="n",
       xlab=paste("Chisq ; ", N, " obs", sep=""), ylab="")
  hist(y1, xlim=rgi, freq=FALSE, col="yellow", breaks=20, add=TRUE)
  lines(grid[,1], dchisq(grid[,1], df=chisqdf) / (sum(dchisq(grid[,1], df=chisqdf))*deltax),
        col="blue", lwd=2)
}

## simulate bivariate data from the "Banana" distribution (Meng and Barnard)
banana = function(A, B, C1, C2, N, keep=10, init=10) {
  R = init*keep + N*keep
  x1 = x2 = 0
  bimat = matrix(double(2*N), ncol=2)
  for (r in 1:R) {
    x1 = rnorm(1, mean=(B*x2+C1)/(A*(x2^2)+1), sd=sqrt(1/(A*(x2^2)+1)))
    x2 = rnorm(1, mean=(B*x2+C2)/(A*(x1^2)+1), sd=sqrt(1/(A*(x1^2)+1)))
    if (r>init*keep && r%%keep==0) {
      mkeep = r/keep
      bimat[mkeep-init,] = c(x1,x2)
    }
  }
  return(bimat)
}

set.seed(66)
nvar2 = 2
A = 0.5
B = 0
C1 = C2 = 3
y2 = banana(A=A, B=B, C1=C1, C2=C2, 1000)

Data2 = list(y=y2)
Prioralpha = list(Istarmin=1, Istarmax=10, power=0.8)
Prior2 = list(Prioralpha=Prioralpha)
Mcmc = list(R=R, keep=1, maxuniq=200)

out2 = rDPGibbs(Prior=Prior2, Data=Data2, Mcmc=Mcmc)

if(0){
  ## plotting examples
  rx1 = range(y2[,1])
  rx2 = range(y2[,2])
  x1 = seq(from=rx1[1], to=rx1[2], length.out=50)
  x2 = seq(from=rx2[1], to=rx2[2], length.out=50)
  grid = cbind(x1, x2)

  plot(out2$nmix, Grid=grid, Data=y2)

  ## plot true bivariate density
  tden = matrix(double(50*50), ncol=50)
  for (i in 1:50) {
    for (j in 1:50) {
      tden[i,j] = exp(-0.5*(A*(x1[i]^2)*(x2[j]^2) + (x1[i]^2) + (x2[j]^2) -
                            2*B*x1[i]*x2[j] - 2*C1*x1[i] - 2*C2*x2[j]))
    }
  }
  tden = tden / sum(tden)
  image(x1, x2, tden, col=terrain.colors(100), xlab="", ylab="")
  contour(x1, x2, tden, add=TRUE, drawlabels=FALSE)
  title("True Density")
}