One attractive method for constructing an MCMC algorithm is Gibbs sampling, introduced in Chapter 6. To slightly generalize our earlier discussion, suppose that we partition the parameter vector of interest into \(p\) components \(\theta = (\theta_1, \ldots, \theta_p)\), where each \(\theta_k\) may itself be a vector of parameters. The MCMC algorithm is implemented by sampling in turn from the \(p\) conditional posterior distributions, each component \(\theta_k\) being drawn given the current values of the remaining components.
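The cycling scheme above can be sketched in code. The following is a minimal illustration, not taken from the text: it applies Gibbs sampling with \(p = 2\) components to a bivariate normal target with zero means, unit variances, and correlation \(\rho\), for which each full conditional is itself normal.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, seed=0):
    """Gibbs sampler for an illustrative bivariate normal target.

    With zero means, unit variances, and correlation rho, the full
    conditionals are theta1 | theta2 ~ N(rho*theta2, 1 - rho^2) and
    theta2 | theta1 ~ N(rho*theta1, 1 - rho^2).
    """
    rng = np.random.default_rng(seed)
    theta1, theta2 = 0.0, 0.0          # arbitrary starting values
    sd = np.sqrt(1.0 - rho ** 2)        # conditional standard deviation
    draws = np.empty((n_iter, 2))
    for t in range(n_iter):
        # Sample each component in turn from its conditional
        # given the current value of the other component.
        theta1 = rng.normal(rho * theta2, sd)
        theta2 = rng.normal(rho * theta1, sd)
        draws[t] = theta1, theta2
    return draws

draws = gibbs_bivariate_normal(rho=0.8, n_iter=20000)
```

Discarding an initial burn-in portion of `draws` and computing sample moments recovers the target's means and correlation; the same cycling pattern extends directly to \(p > 2\) components.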