4.19 There is a connection between the EM algorithm and Gibbs sampling, in that both have their basis in Markov chain theory. One way of seeing this is to show that the incomplete-data likelihood is a solution to the integral equation of successive substitution sampling (see Problems 4.5.9-4.5.11), and that Gibbs sampling can then be used to calculate the likelihood function. If L(θ|y) is the incomplete-data likelihood and L(θ|y, z) is the complete-data likelihood, define

L∗(θ|y) = L(θ|y) / ∫ L(θ|y) dθ,   L∗(θ|y, z) = L(θ|y, z) / ∫ L(θ|y, z) dθ.
(a) Show that L∗(θ|y) is the solution to

L∗(θ|y) = ∫ [ ∫ L∗(θ|y, z) k(z|θ′, y) dz ] L∗(θ′|y) dθ′,

where, as usual, k(z|θ, y) = L(θ|y, z)/L(θ|y).
(b) Show how the sequence θ(j) from the Gibbs iteration

θ(j) ∼ L∗(θ|y, z(j−1)),   z(j) ∼ k(z|θ(j), y),

will converge to a random variable with density L∗(θ|y) as j → ∞. How can this be used to compute the likelihood function L(θ|y)?
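To make the iteration in (b) concrete, here is a minimal Python sketch under an assumed toy model that is not part of the exercise: a classic genetic-linkage multinomial with observed counts y = (125, 18, 20, 34) and cell probabilities (1/2 + θ/4, (1−θ)/4, (1−θ)/4, θ/4). There the missing datum z is the portion of the first cell attributable to the θ/4 component, L∗(θ|y, z) is a Beta(z + y4 + 1, y2 + y3 + 1) density, and k(z|θ, y) is Binomial(y1, (θ/4)/(1/2 + θ/4)), so both Gibbs steps are direct draws.

```python
import numpy as np

# Illustrative sketch only (assumed linkage model, not part of the exercise).
# Two-step Gibbs iteration: z ~ k(z | theta, y), then theta ~ L*(theta | y, z).
rng = np.random.default_rng(0)
y1, y2, y3, y4 = 125, 18, 20, 34           # observed multinomial counts

n_iter, burn_in = 10_000, 1_000
theta = 0.5                                # arbitrary starting value
theta_draws = np.empty(n_iter)
for j in range(n_iter):
    # z^(j) ~ k(z | theta, y): Binomial(y1, (theta/4) / (1/2 + theta/4))
    z = rng.binomial(y1, (theta / 4) / (0.5 + theta / 4))
    # theta^(j) ~ L*(theta | y, z): the normalized complete-data likelihood,
    # here a Beta(z + y4 + 1, y2 + y3 + 1) density
    theta = rng.beta(z + y4 + 1, y2 + y3 + 1)
    theta_draws[j] = theta

# After burn-in the theta draws are approximately distributed as L*(theta | y),
# so a histogram or kernel density estimate of them recovers L(theta | y)
# up to its normalizing constant.
density, edges = np.histogram(theta_draws[burn_in:], bins=50, density=True)
```

A smoother estimate of L∗(θ|y) comes from averaging the complete-data densities L∗(θ|y, z(j)) over the retained z draws (Rao-Blackwellization), rather than smoothing the θ draws themselves.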
[Using the functions L(θ|y, z) and k(z|θ, y), the EM algorithm will get us the ML estimator from L(θ|y), whereas the Gibbs sampler will get us the entire function. This likelihood implementation of the Gibbs sampler was used by Casella and Berger (1994) and is also described by Smith and Roberts (1993). A version of the EM algorithm, where the Markov chain connection is quite apparent, was given by Baum and Petrie (1966) and Baum et al. (1970).]
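For contrast, under the same assumed linkage model as in the sketch above, the EM iteration uses the expectation of z under k(z|θ, y) in place of a draw from it, and it returns only the maximizer of L(θ|y) rather than the whole function.

```python
# Same assumed linkage model as in the Gibbs sketch above (not part of the exercise).
y1, y2, y3, y4 = 125, 18, 20, 34
theta = 0.5                                # arbitrary starting value
for _ in range(100):
    # E-step: E[z | theta, y] under Binomial(y1, (theta/4) / (1/2 + theta/4))
    ez = y1 * (theta / 4) / (0.5 + theta / 4)
    # M-step: maximize the expected complete-data log-likelihood in theta
    theta = (ez + y4) / (ez + y2 + y3 + y4)
# theta now approximates the ML estimator (about 0.627 for these counts):
# a single point, whereas the Gibbs draws trace out the whole likelihood.
```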
