EM Algorithm for Gaussian Mixture Models: Lecture 17 in TTIC 31020, Lecture notes of Introduction to Machine Learning

An overview of the EM algorithm for Gaussian mixture models, as presented in Lecture 17 of TTIC 31020: Introduction to Machine Learning. The EM algorithm is a method for maximizing the likelihood of a model given observed data, and it is particularly useful for mixture models, where the data can be assumed to come from a mixture of multiple distributions. The notes cover the basic steps of the EM algorithm (initialization, the E-step and M-step, and convergence), the use of EM for missing data, and the importance of regularization to prevent overfitting.



Lecture 17: EM

TTIC 31020: Introduction to Machine Learning

Instructor: Greg Shakhnarovich

TTI–Chicago

November 3, 2010

The EM for Gaussian mixtures: summary

Initialize: random μ_c^old, Σ_c^old, and π_c^old = 1/k, for c = 1, ..., k. Iterate until convergence:

E-step: estimate responsibilities

\gamma_{ic} = \frac{\pi_c^{old} \, \mathcal{N}(x_i; \mu_c^{old}, \Sigma_c^{old})}{\sum_{l=1}^{k} \pi_l^{old} \, \mathcal{N}(x_i; \mu_l^{old}, \Sigma_l^{old})}

M-step: re-estimate mixture parameters:

\hat{\pi}_c^{new} = \frac{1}{N} \sum_{i=1}^{N} \gamma_{ic}, \qquad \hat{\mu}_c^{new} = \frac{1}{\sum_{i=1}^{N} \gamma_{ic}} \sum_{i=1}^{N} \gamma_{ic} \, x_i,

\hat{\Sigma}_c^{new} = \frac{1}{\sum_{i=1}^{N} \gamma_{ic}} \sum_{i=1}^{N} \gamma_{ic} \, (x_i - \hat{\mu}_c^{new})(x_i - \hat{\mu}_c^{new})^T.
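A minimal MATLAB sketch of one such iteration, transcribing the two updates above (variable names are ours, not from the lecture; mvnpdf requires the Statistics Toolbox):

% One EM iteration for a k-component Gaussian mixture.
% X: N-by-d data; mu: k-by-d means; Sigma: d-by-d-by-k covariances;
% pic: 1-by-k mixing weights (all holding the current "old" values).

% E-step: responsibilities gamma(i,c), normalized over the k components.
gamma = zeros(N, k);
for c = 1:k
    gamma(:,c) = pic(c) * mvnpdf(X, mu(c,:), Sigma(:,:,c));
end
gamma = gamma ./ sum(gamma, 2);

% M-step: re-estimate the mixture parameters from the responsibilities.
Nc = sum(gamma, 1);                   % effective count per component
pic = Nc / N;
for c = 1:k
    mu(c,:) = (gamma(:,c)' * X) / Nc(c);
    Xc = X - mu(c,:);                 % centered data (implicit expansion)
    Sigma(:,:,c) = (Xc' * (gamma(:,c) .* Xc)) / Nc(c);
end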

The EM algorithm in general

Observed data X, hidden variables Z.

  • E.g., missing data.

Initialize θ^old, and iterate until convergence:

E-step: compute the expected complete-data log-likelihood as a function of θ:

Q(\theta; \theta^{old}) = \mathbb{E}_{p(Z \mid X, \theta^{old})} \left[ \log p(X, Z; \theta) \mid X, \theta^{old} \right]

M-step: compute

\theta^{new} = \operatorname*{argmax}_{\theta} \, Q(\theta; \theta^{old})
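In MATLAB-style pseudocode, the generic loop might look as follows (a sketch; e_step, m_step, and loglik stand in for the model-specific computations and are not functions from the lecture):

theta = theta_init;                % initial parameter guess
ll_old = -Inf;
while true
    stats = e_step(X, theta);      % posterior expectations of Z given X and theta
    theta = m_step(X, stats);      % argmax of Q(theta; theta_old)
    ll = loglik(X, theta);         % observed-data log-likelihood
    if ll - ll_old < tol           % monotone ascent, so this gap is >= 0
        break
    end
    ll_old = ll;
end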

EM for missing data

Suppose some of the data is missing.

Examples:

  • A mixture of 4 Gaussians, two of which are given.
  • A set of points, some of which have missing coordinates.

E-step: compute the posterior and expectations only over the missing data.

M-step: maximize the likelihood over the complete data, i.e., observed and missing.
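For a single Gaussian with diagonal covariance, as in the example that follows, the coordinates are independent, so the E-step expectations of a missing coordinate take a simple closed form (a sketch under that diagonality assumption):

\mathbb{E}\left[ x_{i,\mathrm{miss}} \mid x_{i,\mathrm{obs}}; \theta^{old} \right] = \mu_{\mathrm{miss}}^{old}, \qquad \mathbb{E}\left[ x_{i,\mathrm{miss}}^2 \mid x_{i,\mathrm{obs}}; \theta^{old} \right] = \left( \mu_{\mathrm{miss}}^{old} \right)^2 + \left( \sigma_{\mathrm{miss}}^{old} \right)^2.

These are exactly the sufficient statistics the M-step needs to re-estimate the means and variances.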

Example: EM for missing data (continued)

Initial guess: θ^old = [0, 0, 1, 1]^T, i.e., μ = 0, Σ = I. After some calculus:

Q(\theta; \theta^{old}) = \sum_{i=1}^{3} \log \mathcal{N}(x_i; \mu, \Sigma) - \frac{1 + \mu_1^2}{2\sigma_1^2} - \frac{(4 - \mu_2)^2}{2\sigma_2^2} - \log 2\pi\sigma_1\sigma_2.

Maximizing w.r.t. θ we get

\theta^{new} = [0.75, 2, 0.938, 2]^T.

After three iterations, this converges to θ = [1, 2, 0.667, 2]^T.

Why does EM work?

Ultimately, we want to maximize the likelihood of the observed data:

\theta^{*} = \operatorname*{argmax}_{\theta} \, \log p(X; \theta).

Let log p^{(t)} denote log p(X; θ^new) after t iterations.

Can show (not today):

\log p^{(0)} \le \log p^{(1)} \le \dots \le \log p^{(t)} \le \dots

EM and maximum likelihood

\log p^{(0)} \le \log p^{(1)} \le \dots \le \log p^{(t)} \le \dots

The idea of the proof:

  • In each iteration, the E-step computes Q(θ; θ^old), which is a lower bound on log p(X; θ);
  • the M-step maximizes ("saturates") that lower bound;
  • in the subsequent E-step, the new bound is at least as high as the previous one.

EM monotonically increases the likelihood of the observed data.

  • As long as log p(X; θ^new) < ∞, EM necessarily converges
    • but possibly to a local maximum!
  • One popular solution: restart a number of times (with different initializations) and choose the run with the highest log p(X; θ^final), as in the sketch below.
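A minimal restart wrapper in MATLAB (a sketch; run_em is a hypothetical function, not from the lecture, returning fitted parameters and the final observed-data log-likelihood):

best_ll = -Inf;
for r = 1:num_restarts
    [params, ll] = run_em(X, k);   % each call starts from a fresh random initialization
    if ll > best_ll                % keep the run with the highest log p(X; theta_final)
        best_ll = ll;
        best_params = params;
    end
end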

Caveat: log-sum computation

Typically with mixture models, we need to compute

\log p(x; \theta) = \log \sum_{c=1}^{k} \pi_c \, p(x; \theta_c).

Problem: underflow (especially in high-dimensional spaces):

x = [-2000 -2006];
log(sum(exp(x)))
Warning: Log of zero.
ans = -Inf

Observation: a = log[exp(a)] = log[exp(a + B)] − B. Set the constant B so that the largest p(x; θ_c) saturates the positive precision of your machine:

B = 700 - max(x);
log(sum(exp(x + B))) - B
ans = -2.0000e+03
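The same trick packaged as a small helper (a sketch; the function name is ours, and shifting by the maximum is the standard choice of B):

function s = logsumexp(a)
% Numerically stable log(sum(exp(a))) for a vector of log-values a.
B = max(a);                     % largest term becomes exp(0) = 1
s = B + log(sum(exp(a - B)));   % exact up to floating-point rounding
end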

EM and overfitting

We can be very unlucky with the initial guess.

[Figure: the mixture fit after the 1st, 2nd, and 4th EM iterations.]

The problem:

\lim_{\sigma^2 \to 0} \mathcal{N}(x; \mu = x, \Sigma = \sigma^2 I) = \infty

i.e., a component centered exactly on a single training point can shrink its variance toward zero and drive the likelihood arbitrarily high.

Regularized EM

Impose a prior on θ.

Instead of maximizing the likelihood in the M-step, maximize the posterior:

\theta^{new} = \operatorname*{argmax}_{\theta} \; \mathbb{E}_{p(Z \mid X; \theta^{old})} \left[ \log p(X, Z; \theta) \mid X; \theta^{old} \right] + \log p(\theta)

A common prior on a covariance matrix: the Wishart distribution

p(\Sigma; S, n) \propto |\Sigma|^{-n/2} \exp\left( -\tfrac{1}{2} \operatorname{Tr}\left( \Sigma^{-1} S \right) \right)

Intuition: S is the covariance of n “hallucinated” observations.
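With such a prior, the regularized M-step covariance update blends the prior scatter with the data; a sketch of one common form (the slides do not spell this out, and the exact additive constants depend on the parametrization of the prior):

\hat{\Sigma}_c^{new} = \frac{S + \sum_{i=1}^{N} \gamma_{ic} \, (x_i - \hat{\mu}_c^{new})(x_i - \hat{\mu}_c^{new})^T}{n + \sum_{i=1}^{N} \gamma_{ic}}

The n "hallucinated" observations keep the denominator bounded away from zero, so no component's covariance can collapse.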

Model selection: setting k

So far we have assumed known k. Idea: select k that maximizes the likelihood.

[Figure: mixture fits to 2-D data for increasing k, alongside a plot of log p as a function of k.]


Overfitting mixture models

Place a separate, very narrow Gaussian component on every training example.

This solution yields infinite log-likelihood!

We need a criterion to penalize such models.

Occam’s razor: try to find the simplest among all possible explanations.

Reminder: AIC

\max_{\theta} \; \log p(X; \theta) - \frac{|\theta|}{2}

where |θ| stands for the number of parameters in θ.
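Applied to choosing k for a Gaussian mixture, a short MATLAB sketch (assuming ll(k) already holds the maximized log-likelihood for each candidate k; the parameter count covers mixing weights, means, and symmetric covariances):

d = size(X, 2);
for k = 1:kmax
    % |theta| = (k-1) weights + k*d means + k*d*(d+1)/2 covariance entries
    num_params(k) = (k - 1) + k*d + k*d*(d + 1)/2;
    score(k) = ll(k) - num_params(k)/2;   % penalized log-likelihood from above
end
[~, best_k] = max(score);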