Meeting 20: Approximate Inference

Reading: AIAMA 14.5

We saw last time that exact inference is computationally very difficult. One important difference between logic and probability is that it makes sense to approximate a probability. This opens up methods for approximating the posterior even in very complex networks. We will look at one of the two basic approaches, sampling (variational methods are the other), and at two classic sampling algorithms: likelihood weighting and Gibbs sampling.
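
To make "sampling" concrete, here is a minimal Monte Carlo sketch (a toy example of my own, not taken from the reading): draw many samples from a distribution and use the fraction of samples that satisfy an event as an estimate of its probability.

```python
import random

# Toy Monte Carlo estimate of P(X + Y >= 10) for two fair six-sided dice.
# The exact answer is 6/36 ≈ 0.167; the estimate converges to it as n grows.
def monte_carlo_estimate(n_samples=100_000):
    hits = 0
    for _ in range(n_samples):
        x = random.randint(1, 6)   # sample X from its distribution
        y = random.randint(1, 6)   # sample Y from its distribution
        if x + y >= 10:
            hits += 1
    return hits / n_samples

print(monte_carlo_estimate())      # prints something close to 0.167
```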

Questions you should be able to answer after reading are:

  1. What are Monte Carlo algorithms?
  2. What does it mean to sample from a distribution?
  3. What does it mean for a sample to be consistent?
  4. How does rejection sampling ensure consistent samples?
  5. Why is likelihood weighting more efficient than simple rejection sampling? (A toy comparison of the two follows this list.)
  6. What is a Markov chain and how is it used in sampling?
  7. How does Gibbs sampling ensure consistent samples?
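
For questions 4 and 5, it may help to see the two samplers side by side. Below is a minimal sketch on a made-up two-variable network (Rain → WetGrass; the numbers are mine, not from the reading): rejection sampling throws away every sample that disagrees with the evidence, while likelihood weighting fixes the evidence and weights each sample by how likely that evidence was.

```python
import random

# Hypothetical two-node network Rain -> WetGrass, used only for illustration.
P_RAIN = 0.2
P_WET_GIVEN = {True: 0.9, False: 0.1}   # P(WetGrass=true | Rain)

def rejection_sampling(n):
    """Estimate P(Rain=true | WetGrass=true) by discarding inconsistent samples."""
    kept, rainy = 0, 0
    for _ in range(n):
        rain = random.random() < P_RAIN
        wet = random.random() < P_WET_GIVEN[rain]
        if not wet:                  # sample disagrees with the evidence: reject it
            continue
        kept += 1
        rainy += rain
    return rainy / kept if kept else float("nan")

def likelihood_weighting(n):
    """Estimate the same posterior, but fix the evidence and weight each sample."""
    total_w, rainy_w = 0.0, 0.0
    for _ in range(n):
        rain = random.random() < P_RAIN   # sample the non-evidence variable
        w = P_WET_GIVEN[rain]             # weight by likelihood of the evidence
        total_w += w
        rainy_w += w * rain
    return rainy_w / total_w

# Both estimates approach the exact posterior 0.18 / 0.26 ≈ 0.692,
# but likelihood weighting never wastes samples on rejection.
print(rejection_sampling(100_000), likelihood_weighting(100_000))
```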