STAT 9220, Lecture 4
Statistical Inference: Populations, Samples, Models
Greg Rempala
Department of Biostatistics
Medical College of Georgia
Feb 3, 2009

4.1 Mathematical statistics – fundamentals

Let (Ω, F, P) be a probability space. P is called the population (often unknown). Let X be a random element, called a sample from P. Let (x_1, ..., x_n) be a data set, viewed as an outcome of an experiment whose sample space is Ω = R^n. We can define a random vector X = (X_1, ..., X_n) on ∏_{i=1}^n (R, B, P) whose realization is (x_1, ..., x_n) (i.i.d. samples).

Problem. Determine P based on X.

  • (i) (Measurement problems). We are looking for μ(θ). We take

    X̄ = (1/n) ∑_{i=1}^n X_i.

How close is X̄ to μ(θ)? (By the SLLN, X̄ → μ(θ).)

  • (ii) (Life-time testing problems). Let x_1, ..., x_n be observed lifetimes of some electronic components. They can be viewed as realizations of independent random variables X_1, ..., X_n. Suppose that X_1, ..., X_n have a common marginal c.d.f. F that is unknown. We want to find 1 − F(t) for any t > 0; a small empirical sketch follows below.
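
As a rough illustration of (ii) (not part of the lecture; the exponential lifetimes and the sample size are assumptions made only for this example), 1 − F(t) can be estimated by the empirical survival function, i.e., the fraction of observed lifetimes exceeding t:

    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical observed lifetimes x_1, ..., x_n
    lifetimes = rng.exponential(scale=100.0, size=50)

    def survival_estimate(t, x):
        """Empirical estimate of 1 - F(t): the fraction of observed lifetimes exceeding t."""
        x = np.asarray(x)
        return np.mean(x > t)

    for t in (50.0, 100.0, 200.0):
        print(f"estimated P(X > {t}) = {survival_estimate(t, lifetimes):.3f}")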

Definition 4.1.1. A set of probability measures {P_θ : θ ∈ Θ} is called a parametric family if and only if Θ ⊂ R^d for some fixed positive integer d and each P_θ is a known probability measure when θ is known.

Example 4.1.1. The normal family {P_θ = N(μ, σ^2) : θ = (μ, σ^2) ∈ R × (0, ∞)} is a parametric family.

Remark 4.1.2. The representation (4.1) of an exponential family, i.e., densities of the form f_θ(ω) = exp{[η(θ)]^⊤ T(ω) − ξ(θ)} h(ω) w.r.t. a σ-finite measure ν, is not unique. In fact, any transformation η̃(θ) = D η(θ) with a p × p nonsingular matrix D gives another representation (with T replaced by T̃(ω) = (D^⊤)^{−1} T(ω)).

In an exponential family, consider the reparameterization η = η(θ) of (4.1) and

    f_η(ω) = exp{η^⊤ T(ω) − ξ̃(η)} h(ω),  ω ∈ Ω.

Since ∫ f_η(ω) dν = 1, we have e^{ξ̃(η)} = ∫_Ω exp{η^⊤ T(ω)} h(ω) dν(ω), and so

    ξ̃(η) = log ∫_Ω exp{[η(θ)]^⊤ T(ω)} h(ω) dν(ω)

for every η = η(θ) and θ ∈ Θ. This is the canonical form for the family. The new parameter space, consisting of all η ∈ R^p such that

    log ∫_Ω exp{η^⊤ T(ω)} h(ω) dν(ω) < ∞,

is called the natural parameter space. An exponential family in canonical form is called a natural exponential family.

Example 4.1.3 (The binomial family). Let P_θ be the Binomial(θ, n) distribution; then {P_θ : θ ∈ (0, 1)} is an exponential family, since the p.d.f. of P_θ w.r.t. the counting measure ν is

    f_θ(x) = exp{ x log(θ/(1 − θ)) + n log(1 − θ) } (n choose x)

for x = 0, 1, ..., n, with T(x) = x, η(θ) = log(θ/(1 − θ)), ξ(θ) = −n log(1 − θ), and h(x) = (n choose x).

If we let η = log(θ/(1 − θ)) (the logit transform), then the family with p.d.f.'s

    f_η(x) = exp{ xη − n log(1 + e^η) } (n choose x)

for x = 0, 1, ..., n is a natural exponential family of full rank.
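
A quick numerical check of this canonical form (not part of the notes; n and θ below are arbitrary) is to compare exp{xη − n log(1 + e^η)} (n choose x) with the standard Binomial(θ, n) p.m.f.:

    import numpy as np
    from math import comb, exp, log

    n, theta = 10, 0.3
    eta = log(theta / (1 - theta))          # natural parameter (logit of theta)

    x = range(n + 1)
    # canonical form: exp{x*eta - n*log(1 + e^eta)} * C(n, x)
    f_canonical = np.array([exp(k * eta - n * log(1 + exp(eta))) * comb(n, k) for k in x])
    # standard form: C(n, x) * theta^x * (1 - theta)^(n - x)
    f_standard = np.array([comb(n, k) * theta**k * (1 - theta)**(n - k) for k in x])

    print(np.allclose(f_canonical, f_standard))   # True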

Theorem 4.1.1. Let P be a natural exponential family dominated by some measure on (R^p, B^p).
(i) Let T ∼ P. Then T has the p.d.f.

    f_η(y) = exp{η^⊤ y − ξ(η)}

w.r.t. a σ-finite measure.
(ii) If η_0 is an interior point of the natural parameter space, then the m.g.f. ψ_{η_0} of P_{η_0} ∘ T^{−1} is finite in a neighborhood of 0 and is given by

    ψ_{η_0}(t) = exp{ξ(η_0 + t) − ξ(η_0)}.

Furthermore, if f is a Borel function satisfying ∫ |f| dP_{η_0} < ∞, then the function

    ∫ f(ω) exp{η^⊤ T(ω)} h(ω) dν(ω)

is infinitely often differentiable in a neighborhood of η_0, and the derivatives may be computed by differentiation under the integral sign.

Remark 4.1.3. From the latter part of the above theorem it follows, in particular, that one may often obtain the mean and variance of a member of an exponential family by differentiating w.r.t. the natural parameter.
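
A sketch of this remark for the binomial family (my own illustration, using the standard facts that for a natural exponential family E T = ξ'(η) and Var T = ξ''(η)): differentiating ξ(η) = n log(1 + e^η) recovers the familiar mean nθ and variance nθ(1 − θ).

    import sympy as sp

    eta, n, theta = sp.symbols('eta n theta', positive=True)
    xi = n * sp.log(1 + sp.exp(eta))            # xi(eta) for the binomial family

    mean = sp.diff(xi, eta)                     # E T = xi'(eta)
    var = sp.diff(xi, eta, 2)                   # Var T = xi''(eta)

    # substitute the natural parameter eta = log(theta/(1 - theta)) and simplify
    sub = {eta: sp.log(theta / (1 - theta))}
    print(sp.simplify(mean.subs(sub)))          # n*theta
    print(sp.simplify(var.subs(sub)))           # n*theta*(1 - theta), possibly rearranged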

Example 4.1.6. Using the above theorem and the result in Example 4.1.3, we obtain the m.g.f. of the binomial distribution Binomial(p, n):

    ψ_η(t) = exp{ n log(1 + e^{η+t}) − n log(1 + e^η) } = ( (1 + e^η e^t) / (1 + e^η) )^n = (1 − p + p e^t)^n,

since p = e^η / (1 + e^η).

Example 4.1.7. Similarly, using the result in Example 4.1.5, we obtain the m.g.f. of the Poisson distribution Po(λ):

    ψ_η(t) = exp{ e^{η+t} − e^η } = exp{ e^η (e^t − 1) } = exp{ λ (e^t − 1) },

since λ = e^η.
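
A quick Monte Carlo sanity check of this m.g.f. (my own sketch; λ, t, and the sample size are arbitrary choices): the empirical average of e^{tX} for X ~ Po(λ) should be close to exp{λ(e^t − 1)}.

    import numpy as np

    rng = np.random.default_rng(1)
    lam, t, n_samples = 2.5, 0.3, 200_000

    x = rng.poisson(lam, size=n_samples)
    mgf_empirical = np.mean(np.exp(t * x))      # Monte Carlo estimate of E[exp(tX)]
    mgf_theory = np.exp(lam * (np.exp(t) - 1))

    print(mgf_empirical, mgf_theory)            # the two values should agree to a couple of decimals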

The following are some important examples of location-scale families.

  • The family {P(μ, I_k) : μ ∈ R^k} is a location family, where I_k is the k × k identity matrix.
  • The family {P(0, Σ) : Σ ∈ M_k} is a scale family.
  • The family of exponential distributions E(a, θ) is a location-scale family on R with location parameter a and scale parameter θ.
  • The family of uniform distributions U(0, θ) is a scale family on R with scale parameter θ.
  • The k-dimensional normal family is a location-scale family on R^k.
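
One way to read the location-scale structure computationally (a sketch based on the usual convention, not stated explicitly above, that a draw from a univariate location-scale family can be written as μ + σZ with Z from the standard member of the family):

    import numpy as np

    rng = np.random.default_rng(2)

    def location_scale_sample(mu, sigma, standard_sampler, size):
        """Draw from a location-scale family: X = mu + sigma * Z, with Z from the standard member."""
        z = standard_sampler(size)
        return mu + sigma * z

    # normal location-scale family: the standard member is N(0, 1)
    x = location_scale_sample(mu=5.0, sigma=2.0, standard_sampler=rng.standard_normal, size=100_000)
    print(x.mean(), x.std())   # roughly 5.0 and 2.0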

Examples of nonparametric families on (R^k, B^k):

(1) The joint c.d.f.s are continuous.

(2) The joint c.d.f.s have finite moments of order ≤ a fixed integer.

(3) The joint c.d.f.s have p.d.f.s (e.g., Lebesgue p.d.f.s).

(4) k = 1 and the c.d.f.s are symmetric.

(5) The family of all probability measures on (Rk, Bk).

Nonparametric methods: methods designed for nonparametric models

4.3 Statistics and their distributions

Our data set is a realization of a sample (random vector) X from an unknown population P.

Definition 4.3.1 (Statistic). A statistic T(X) is a measurable function T of X; T(X) is a known value whenever X is known.

Statistical analyses are based on various statistics, for various purposes. Note that X itself is a statistic, but it is a trivial statistic. The range of a nontrivial statistic T (X) is usually simpler than that of X.

For example, X may be a random n-vector and T(X) may be a random p-vector with p much smaller than n. Note that σ(T(X)) ⊂ σ(X), and the two σ-fields are the same if and only if T is one-to-one.

Usually σ(T (X)) simplifies σ(X), i.e., a statistic provides a reduction of the σ-field.

The “information” within the statistic T(X) concerning the unknown distribution of X is contained in the σ-field σ(T(X)). Suppose S is any other statistic for which σ(S(X)) = σ(T(X)). Then (by an earlier lemma) S is a measurable function of T, and T is a measurable function of S.

Thus, once the value of S (or T) is known, so is the value of T (or S). It is not the particular values of a statistic that contain the information, but the σ-field generated by the statistic.

The distribution of S^2. By Cochran's theorem, if P is N(μ, σ^2), then (n − 1)S^2/σ^2 has the chi-square distribution χ^2_{n−1} (see text, Example 2.18). An approximate distribution for S^2 can be obtained from the approximate joint distribution of X̄ and S^2.
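
A simulation sketch of this fact (my own check; μ, σ, n, and the number of replicates are arbitrary): for normal samples, (n − 1)S^2/σ^2 should have mean n − 1 and variance 2(n − 1), matching the χ^2_{n−1} distribution.

    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma, n, reps = 1.0, 2.0, 10, 100_000

    samples = rng.normal(mu, sigma, size=(reps, n))
    s2 = samples.var(axis=1, ddof=1)             # sample variance S^2 for each replicate
    scaled = (n - 1) * s2 / sigma**2             # should behave like chi-square with n-1 d.f.

    print(scaled.mean(), scaled.var())           # compare with n-1 = 9 and 2*(n-1) = 18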

Joint distribution of X̄ and S^2. If P is N(μ, σ^2), then X̄ and S^2 are independent (Cochran's theorem; see Example 2.18). Hence, the joint distribution of (X̄, S^2) is the product of the marginal distributions of X̄ and S^2 given above. Without the normality assumption, an approximate joint distribution can be obtained instead. Assume that μ = EX_1, σ^2 = Var(X_1), and E|X_1|^4 are finite. Let Y_i = (X_i − μ, (X_i − μ)^2), i = 1, ..., n. Note that Y_1, ..., Y_n are i.i.d. random 2-vectors with EY_1 = (0, σ^2) and variance-covariance matrix

    Σ = [ σ^2              E(X_1 − μ)^3
          E(X_1 − μ)^3     E(X_1 − μ)^4 − σ^4 ]

Note that Ȳ = n^{−1} ∑_{i=1}^n Y_i = (X̄ − μ, S̃^2), where S̃^2 = n^{−1} ∑_{i=1}^n (X_i − μ)^2. Applying the CLT (Corollary 1.2 in the text) to the Y_i's, we obtain that

    √n (X̄ − μ, S̃^2 − σ^2) →_d N_2(0, Σ).

Since S^2 = n/(n − 1) [S̃^2 − (X̄ − μ)^2] (check in discussion) and X̄ →_{a.s.} μ (the SLLN), an application of Slutsky's theorem leads to

    √n (X̄ − μ, S^2 − σ^2) →_d N_2(0, Σ).
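
A numerical sketch of this asymptotic result (my own check; the Exponential(1) population, n, and the number of replicates are assumptions made for the example): for an Exponential(1) population, μ = 1, σ^2 = 1, E(X_1 − μ)^3 = 2, and E(X_1 − μ)^4 = 9, so the sample covariance of √n(X̄ − μ, S^2 − σ^2) should be close to Σ = [[1, 2], [2, 8]].

    import numpy as np

    rng = np.random.default_rng(4)
    n, reps = 200, 50_000
    mu, sigma2 = 1.0, 1.0                        # mean and variance of the Exponential(1) population

    x = rng.exponential(1.0, size=(reps, n))
    xbar = x.mean(axis=1)
    s2 = x.var(axis=1, ddof=1)

    z = np.sqrt(n) * np.column_stack([xbar - mu, s2 - sigma2])
    print(np.cov(z, rowvar=False))               # should be roughly [[1, 2], [2, 8]]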

Example 4.3.2 (Order statistics). Let X = (X_1, ..., X_n) with i.i.d. random components, and let X_(i) be the i-th smallest value among X_1, ..., X_n. The statistic T(X) = (X_(1), ..., X_(n)) is called the order statistics; the order statistics are a very useful set of statistics in addition to the sample mean and variance. Suppose that X_i has a c.d.f. F with a Lebesgue p.d.f. f. Then the joint Lebesgue p.d.f. of X_(1), ..., X_(n) is

    g(x_1, x_2, ..., x_n) = n! f(x_1) f(x_2) ··· f(x_n)  if x_1 < x_2 < ··· < x_n,  and 0 otherwise.

The joint Lebesgue p.d.f. of X_(i) and X_(j), 1 ≤ i < j ≤ n, is

    g_{i,j}(x, y) = n! / [(i − 1)! (j − i − 1)! (n − j)!] · [F(x)]^{i−1} [F(y) − F(x)]^{j−i−1} [1 − F(y)]^{n−j} f(x) f(y)  if x < y,  and 0 otherwise,

and the Lebesgue p.d.f. of X_(i) is

    g_i(x) = n! / [(i − 1)! (n − i)!] · [F(x)]^{i−1} [1 − F(x)]^{n−i} f(x).
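
For uniform U(0, 1) samples the formula for g_i specializes to the Beta(i, n − i + 1) density, which gives a quick check (my own sketch; n and i are arbitrary):

    import numpy as np
    from math import factorial
    from scipy import stats

    n, i = 7, 3                                  # sample size and which order statistic
    x = np.linspace(0.01, 0.99, 5)

    # g_i(x) with F(x) = x and f(x) = 1 on (0, 1)
    g_i = factorial(n) / (factorial(i - 1) * factorial(n - i)) * x**(i - 1) * (1 - x)**(n - i)
    beta_pdf = stats.beta.pdf(x, i, n - i + 1)   # X_(i) ~ Beta(i, n - i + 1) for uniform samples

    print(np.allclose(g_i, beta_pdf))            # True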

A statistic T (X) provides a reduction of the σ-field σ(X).