Lecture 15: Generative models

TTIC 31020: Introduction to Machine Learning

Instructor: Greg Shakhnarovich

TTI–Chicago

October 29, 2010

Reminder: optimal classification

Expected classification error is minimized by

h(x) = argmax_c p(y = c | x),    where p(y = c | x) = p(x | y = c) p(y = c) / p(x).

The Bayes classifier:

h∗(x) = argmax_c [ p(x | y = c) p(y = c) / p(x) ]
      = argmax_c p(x | y = c) p(y = c)
      = argmax_c { log pc(x) + log Pc }.

Note: p(x) is equal for all c, and can be ignored.
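
As a quick illustration of this rule (not from the lecture), here is a minimal sketch of a Bayes classifier; the two class-conditional Gaussians and the priors are made-up example values.

```python
# Minimal sketch: Bayes classifier h(x) = argmax_c [log p_c(x) + log P_c].
# The class-conditional densities and priors are illustrative assumptions.
import numpy as np
from scipy.stats import norm

class_conditionals = {0: norm(loc=-1.0, scale=1.0),   # p(x | y = 0)
                      1: norm(loc=2.0, scale=1.5)}    # p(x | y = 1)
priors = {0: 0.6, 1: 0.4}                             # P_c

def bayes_classify(x):
    """Pick argmax_c [log p_c(x) + log P_c]; p(x) is the same for all c."""
    scores = {c: class_conditionals[c].logpdf(x) + np.log(priors[c])
              for c in class_conditionals}
    return max(scores, key=scores.get)

print(bayes_classify(0.0))   # prints 0 for these example values
```

Working in log space avoids numerical underflow when the densities are small.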

Bayes risk

[Figure: two class-conditional densities over x, with regions labeled "y = 2, h∗(x) = …" and "y = 1, h∗(x) = …".]

The risk (probability of error) of the Bayes classifier h∗ is called the Bayes risk R∗. This is the minimal achievable risk for the given p(x, y) with any classifier! In a sense, R∗ measures the inherent difficulty of the classification problem.

R∗ = 1 − ∫_x max_c { p(x | y = c) Pc } dx
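
For concreteness, the Bayes risk of a simple two-class problem can be approximated numerically; the two unit-variance Gaussians and equal priors below are assumed example values, not taken from the slides.

```python
# Approximate R* = 1 - ∫ max_c p(x | y = c) P_c dx on a fine grid.
# For two assumed 1-D Gaussians with means ±1, unit variance, and equal
# priors, the exact Bayes risk is Φ(-1) ≈ 0.1587.
import numpy as np
from scipy.stats import norm

p = [norm(-1.0, 1.0), norm(1.0, 1.0)]   # class-conditional densities
P = [0.5, 0.5]                          # priors P_c

xs = np.linspace(-10.0, 10.0, 200001)
dx = xs[1] - xs[0]
pointwise_max = np.maximum(P[0] * p[0].pdf(xs), P[1] * p[1].pdf(xs))
print(1.0 - pointwise_max.sum() * dx)   # ≈ 0.1587
```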

Two-category case

In the case of two classes y ∈ {±1}, the Bayes classifier is

h∗(x) = argmax_{c = ±1} δc(x) = sign( δ+1(x) − δ−1(x) ).

The decision boundary is given by δ+1(x) − δ−1(x) = 0.

  • Sometimes f(x) = δ+1(x) − δ−1(x) is referred to as a discriminant function.

With equal priors, this is equivalent to the (log-)likelihood ratio test:

h∗(x) = sign[ log ( p(x | y = +1) / p(x | y = −1) ) ]
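
A sketch of this two-class rule as a log-likelihood ratio test, under the equal-prior assumption; the two class-conditional Gaussians are illustrative, not from the slides.

```python
# h*(x) = sign( log p(x | y = +1) - log p(x | y = -1) ), equal priors assumed.
import numpy as np
from scipy.stats import norm

p_pos = norm(loc=2.0, scale=1.0)    # assumed p(x | y = +1)
p_neg = norm(loc=-2.0, scale=1.0)   # assumed p(x | y = -1)

def h_star(x):
    return int(np.sign(p_pos.logpdf(x) - p_neg.logpdf(x)))

print([h_star(x) for x in (-3.0, -0.5, 0.5, 3.0)])   # [-1, -1, 1, 1]
```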

Equal covariance Gaussian case

Consider the case pc(x) = N(x; μc, Σ), with equal priors for all classes.

δk(x) = log p(x | y = k)
      = −(d/2) log(2π) − (1/2) log|Σ|   [same for all k]   − (1/2)(x − μk)ᵀ Σ⁻¹ (x − μk)
      ∝ const − xᵀΣ⁻¹x + μkᵀΣ⁻¹x + xᵀΣ⁻¹μk − μkᵀΣ⁻¹μk

Now consider two classes k and q. Dropping the terms that are the same for both (including −xᵀΣ⁻¹x) and using xᵀΣ⁻¹μk = μkᵀΣ⁻¹x:

δk(x) ∝ 2 μkᵀΣ⁻¹x − μkᵀΣ⁻¹μk

δq(x) ∝ 2 μqᵀΣ⁻¹x − μqᵀΣ⁻¹μq

Linear discriminant

Two-class discriminants:

δk(x) − δq(x) ∝ 2 μkᵀΣ⁻¹x − μkᵀΣ⁻¹μk − 2 μqᵀΣ⁻¹x + μqᵀΣ⁻¹μq
              = 2 (μk − μq)ᵀ Σ⁻¹ x − μkᵀΣ⁻¹μk + μqᵀΣ⁻¹μq

This is affine (linear plus a constant) in x, so the decision boundary δk(x) − δq(x) = 0 is a hyperplane; hence the name linear discriminant.
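
The sketch below computes this affine discriminant for two assumed means and a shared covariance; the numbers are placeholders, not from the lecture, and equal priors are assumed as in the slides.

```python
# Equal-covariance linear discriminant:
# delta_k(x) - delta_q(x) = 2 (mu_k - mu_q)^T Sigma^{-1} x
#                           - mu_k^T Sigma^{-1} mu_k + mu_q^T Sigma^{-1} mu_q
# Parameter values are illustrative assumptions.
import numpy as np

mu_k = np.array([1.0, 2.0])
mu_q = np.array([-1.0, 0.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

w = 2.0 * Sigma_inv @ (mu_k - mu_q)                        # linear term in x
b = -mu_k @ Sigma_inv @ mu_k + mu_q @ Sigma_inv @ mu_q     # constant term

def classify(x):
    """Class k if w^T x + b > 0, otherwise class q."""
    return "k" if w @ x + b > 0 else "q"

print(classify(np.array([0.5, 1.0])))
```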

Generative models for classification

In generative models one explicitly models p(x, y) or, equivalently, pc(x) and Pc, and derives the discriminants from them. Typically, the model imposes a certain parametric form on the assumed distributions, and the parameters must be estimated from data.

  • Most popular choices: Gaussian for continuous data, multinomial for discrete data.
  • We will see non-parametric models later in this class. Often the classifier works well even when the data clearly do not conform to the assumptions.
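
As a concrete sketch of this recipe (not part of the lecture), the code below fits a Gaussian per class by maximum likelihood on synthetic data and classifies with the Bayes rule; the data and all parameter values are made up for illustration.

```python
# Generative Gaussian classifier: ML estimates of mu_c, Sigma_c, P_c from
# labeled data, then h(x) = argmax_c [log p_c(x) + log P_c].
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, size=(50, 2)),   # synthetic class 0
               rng.normal([3, 3], 1.0, size=(50, 2))])  # synthetic class 1
y = np.array([0] * 50 + [1] * 50)

params = {}
for c in np.unique(y):
    Xc = X[y == c]
    params[c] = {
        "prior": len(Xc) / len(X),                  # estimate of P_c
        "mean": Xc.mean(axis=0),                    # ML estimate of mu_c
        "cov": np.cov(Xc, rowvar=False, bias=True), # ML estimate of Sigma_c
    }

def classify(x):
    scores = {c: multivariate_normal.logpdf(x, p["mean"], p["cov"]) + np.log(p["prior"])
              for c, p in params.items()}
    return max(scores, key=scores.get)

print(classify(np.array([2.5, 2.8])))   # most likely class 1 for this point
```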

Gaussians with unequal covariances

What if we remove the restriction that Σc = Σ for all c?

Compute the ML estimates of μc and Σc for each class c.

We get discriminants (and decision boundaries) that are quadratic in x:

δc(x) = −(1/2) xᵀΣc⁻¹x + μcᵀΣc⁻¹x − ⟨const in x⟩

(as shown in PS1). The constant collects (1/2) μcᵀΣc⁻¹μc and (1/2) log|Σc|, which depend on c but not on x.

A quadratic form in x: xᵀAx.
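
A minimal sketch of this quadratic discriminant, with the class-dependent constant written out; the two sets of class parameters are assumed example values, and equal priors are assumed.

```python
# Quadratic discriminant with unequal covariances:
# delta_c(x) = -1/2 x^T Sigma_c^{-1} x + mu_c^T Sigma_c^{-1} x
#              - 1/2 mu_c^T Sigma_c^{-1} mu_c - 1/2 log|Sigma_c|
# (the class-independent -(d/2) log(2π) term is omitted; equal priors assumed).
import numpy as np

def quadratic_discriminant(mu, Sigma):
    Sigma_inv = np.linalg.inv(Sigma)
    const = -0.5 * mu @ Sigma_inv @ mu - 0.5 * np.log(np.linalg.det(Sigma))
    return lambda x: -0.5 * x @ Sigma_inv @ x + mu @ Sigma_inv @ x + const

# assumed example parameters for two classes
delta = {
    1: quadratic_discriminant(np.array([0.0, 0.0]), np.array([[1.0, 0.0], [0.0, 1.0]])),
    2: quadratic_discriminant(np.array([2.0, 0.0]), np.array([[3.0, 0.5], [0.5, 0.5]])),
}

x = np.array([1.0, -0.5])
print(max(delta, key=lambda c: delta[c](x)))   # class with the larger delta_c(x)
```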

Quadratic decision boundaries

What do quadratic boundaries look like in 2D?

Second-degree curves can be any conic section: an ellipse, a parabola, a hyperbola, or a degenerate case such as a pair of lines.

Can all of these arise from two Gaussian classes?
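
One way to explore this question empirically (a sketch, not from the slides) is to plot the zero level set of δ1(x) − δ2(x) for various pairs of Gaussians; the parameters below are arbitrary examples, and equal priors are assumed.

```python
# Visualize the quadratic boundary delta_1(x) - delta_2(x) = 0 induced by
# two Gaussians with unequal covariances (assumed example parameters).
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal

g1 = multivariate_normal([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
g2 = multivariate_normal([2.0, 0.0], [[3.0, 0.8], [0.8, 0.6]])

xx, yy = np.meshgrid(np.linspace(-4, 6, 300), np.linspace(-4, 4, 300))
grid = np.dstack([xx, yy])
diff = g1.logpdf(grid) - g2.logpdf(grid)    # delta_1 - delta_2 (equal priors)

plt.contour(xx, yy, diff, levels=[0.0])     # the zero level set is the boundary
plt.contour(xx, yy, g1.pdf(grid), alpha=0.3)
plt.contour(xx, yy, g2.pdf(grid), alpha=0.3)
plt.show()
```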

Sources of error in generative models

Reminder: there are three sources of error: noise (irreducible), structural error due to our choice of model class, and estimation error due to our choice of a particular model from that class.

In generative models, estimation error may be due to overfitting.