# Gibbs Sampling


One attractive method for constructing an MCMC algorithm is Gibbs sampling, introduced in Chapter 6. To slightly generalize our earlier discussion, suppose that we partition the parameter vector of interest into $$p$$ components $$\theta = (\theta_1, \ldots, \theta_p)$$, where $$\theta_k$$ may consist of a vector of parameters. The MCMC algorithm is implemented by sampling in turn from the $$p$$ conditional posterior distributions.
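To make the component-wise sampling scheme concrete, here is a minimal illustrative sketch (not code from the chapter) of a Gibbs sampler for a standard bivariate normal target with correlation `rho`, where each full conditional is itself normal: $$\theta_1 \mid \theta_2 \sim N(\rho\theta_2, 1-\rho^2)$$ and $$\theta_2 \mid \theta_1 \sim N(\rho\theta_1, 1-\rho^2)$$. The function name and parameters are chosen for illustration only.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=20000, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    The parameter vector theta = (theta1, theta2) is updated by drawing
    each component in turn from its full conditional distribution:
        theta1 | theta2 ~ N(rho * theta2, 1 - rho^2)
        theta2 | theta1 ~ N(rho * theta1, 1 - rho^2)
    """
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho ** 2)  # conditional standard deviation
    t1, t2 = 0.0, 0.0               # arbitrary starting values
    draws = []
    for _ in range(n_iter):
        t1 = rng.gauss(rho * t2, sd)  # sample theta1 given theta2
        t2 = rng.gauss(rho * t1, sd)  # sample theta2 given theta1
        draws.append((t1, t2))
    return draws

# Run the sampler, discard a burn-in period, and estimate the
# sample correlation, which should be close to the target rho.
draws = gibbs_bivariate_normal(rho=0.8)
kept = draws[2000:]
m1 = sum(a for a, _ in kept) / len(kept)
m2 = sum(b for _, b in kept) / len(kept)
cov = sum((a - m1) * (b - m2) for a, b in kept) / len(kept)
v1 = sum((a - m1) ** 2 for a, _ in kept) / len(kept)
v2 = sum((b - m2) ** 2 for _, b in kept) / len(kept)
corr = cov / math.sqrt(v1 * v2)
```

The same cycle-through-the-conditionals structure carries over when each block $$\theta_k$$ is a vector: one sweep of the algorithm replaces every block with a draw from its conditional posterior given the current values of all other blocks.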

## Keywords

Posterior Distribution · Grade Point Average · Gibbs Sampling · Posterior Density · Order Restriction

## Authors and Affiliations

• Jim Albert
  1. Bowling Green State University, Bowling Green, USA