Notes on Sufficiency and Completeness, Data Reduction Principles | STAT 9220, Study notes of Statistics

Material Type: Notes; Professor: Rempala; Class: Advanced Statistical Inference; Subject: Statistics; University: Medical College of Georgia; Term: Spring 2009;

STAT 9220
Lecture 5
Statistical Inference: Sufficiency and
Completeness, Data Reduction Principles
Greg Rempala
Department of Biostatistics
Medical College of Georgia
Feb 10, 2009


5.1 Sufficiency

Definition 5.1.1 (Sufficiency). Let X be a sample from an unknown population P ∈ P, where P is a family of populations. A statistic T (X) is said to be sufficient for P ∈ P (or for θ ∈ Θ when P = {Pθ : θ ∈ Θ} is a parametric family) if and only if the conditional distribution of X given T is known (does not depend on P or θ).

Once we observe X and compute a sufficient statistic T(X), the original data X contain no further information about the unknown population P (since the conditional distribution of X given T is unrelated to P) and can be discarded. A sufficient statistic T(X) contains all the information about P that is contained in X, and it provides a reduction of the data whenever T is not one-to-one. The concept of sufficiency depends on the given family P: if T is sufficient for P ∈ P, then T is also sufficient for P ∈ P0 ⊂ P, but not necessarily sufficient for P ∈ P1 ⊃ P.

Example 5.1.1. Suppose that X = (X1, ..., Xn) and X1, ..., Xn are i.i.d. from the binomial (Bernoulli) distribution with the p.d.f. (w.r.t. the counting measure)

fθ(z) = θ^z (1 − θ)^{1−z} I_{0,1}(z), z ∈ R, θ ∈ (0, 1).

For any realization x of X, x is a sequence of n ones and zeros, and one can check from Definition 5.1.1 that T(X) = Σ_{i=1}^n Xi is sufficient for θ: given T = t, the conditional distribution of X is uniform on the (n choose t) sequences containing exactly t ones, which does not depend on θ. In general, however, finding a sufficient statistic by means of the definition is not convenient.
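The direct computation of the conditional distribution in this example can be mirrored numerically. A minimal Python sketch (function names are illustrative) checks that P(X = x | T = t) = 1/(n choose t) for several values of θ:

```python
from math import comb

def conditional_prob(x, theta):
    """P(X = x | T = t) for an i.i.d. Bernoulli(theta) sample x, with T = sum(x)."""
    n, t = len(x), sum(x)
    joint = theta**t * (1 - theta)**(n - t)                   # P(X = x)
    marginal = comb(n, t) * theta**t * (1 - theta)**(n - t)   # P(T = t)
    return joint / marginal

x = [1, 0, 1, 1, 0]
probs = [conditional_prob(x, th) for th in (0.1, 0.5, 0.9)]
# the conditional probability is 1 / C(5, 3) no matter what theta is
assert all(abs(p - 1 / comb(5, 3)) < 1e-12 for p in probs)
```

The θ-dependent factors cancel in the ratio, which is exactly why X carries no extra information about θ beyond T.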

For families of populations having p.d.f.s, a simple way of finding sufficient statistics is to use the following factorization theorem.

Theorem 5.1.1 (The factorization theorem). Suppose that X is a sample from P ∈ P and P is a family of probability measures on (R^n, B^n) dominated by a σ-finite measure ν. Then T(X) is sufficient for P ∈ P if and only if there are nonnegative Borel functions h (which does not depend on P) on (R^n, B^n) and gP (which depends on P) on the range of T such that

(dP/dν)(x) = gP(T(x)) h(x).

Example 5.1.2. If P is an exponential family with p.d.f.s fθ(x) = exp{η(θ)⊤T(x) − ξ(θ)}h(x), then the factorization theorem applies with

gθ(t) = exp{η(θ)⊤t − ξ(θ)},

i.e., T is a sufficient statistic for θ ∈ Θ.
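For a concrete instance, take the Poisson(λ) family: the joint p.d.f. of an i.i.d. sample factors as exp{t log λ − nλ} · [∏ xi!]^{−1} with t = Σ xi. The sketch below (illustrative Python names) verifies this factorization numerically:

```python
import math

def joint_pdf(x, lam):
    """Joint p.m.f. of an i.i.d. Poisson(lam) sample."""
    return math.prod(math.exp(-lam) * lam**xi / math.factorial(xi) for xi in x)

def g(t, lam, n):
    """g_theta(t) = exp{t*log(lam) - n*lam}; natural parameter eta = log(lam)."""
    return math.exp(t * math.log(lam) - n * lam)

def h(x):
    """h(x) = 1 / prod(x_i!), free of the parameter."""
    return 1 / math.prod(math.factorial(xi) for xi in x)

x = [2, 0, 3, 1]
for lam in (0.5, 1.0, 4.0):
    # the joint p.m.f. equals g(T(x)) * h(x) with T(x) = sum(x)
    assert abs(joint_pdf(x, lam) - g(sum(x), lam, len(x)) * h(x)) < 1e-15
```

Since the data enter g only through Σ xi, the factorization theorem gives sufficiency of Σ Xi for λ.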

In the sequel we shall need the following notion of uniqueness. Consider a family of measures P. If a statement holds except for outcomes in an event A satisfying P(A) = 0 for all P ∈ P, then we say that the statement holds a.s. P.

5.2 Examples of Sufficient Statistics

Example 5.2.1 (Truncation families). Let φ(x) be a positive Borel function on (R, B) such that ∫_a^b φ(x) dx < ∞ for any a and b, −∞ < a < b < ∞. Let θ = (a, b), Θ = {(a, b) ∈ R^2 : a < b}, and

fθ(x) = c(θ) φ(x) I_(a,b)(x),

where c(θ) = [∫_a^b φ(x) dx]^{−1}. Then {fθ : θ ∈ Θ}, called a truncation family, is a parametric family dominated by the Lebesgue measure on R. Let X1, ..., Xn be i.i.d. random variables having the p.d.f. fθ. Then the joint p.d.f. of X = (X1, ..., Xn) is

∏_{i=1}^n fθ(xi) = [c(θ)]^n I_(a,∞)(x(1)) I_(−∞,b)(x(n)) ∏_{i=1}^n φ(xi),

where x(i) is the i-th smallest value of x1, ..., xn. Let T(X) = (X(1), X(n)), gθ(t1, t2) = [c(θ)]^n I_(a,∞)(t1) I_(−∞,b)(t2), and h(x) = ∏_{i=1}^n φ(xi). By the factorization theorem, T(X) is sufficient for θ ∈ Θ.
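In the uniform special case φ ≡ 1, c(θ) = (b − a)^{−1} and the joint density depends on the data only through (x(1), x(n)). A small Python sketch (names are illustrative) checks that two samples sharing the same minimum and maximum have identical likelihood for every (a, b):

```python
import math

def truncation_likelihood(x, a, b, phi=lambda t: 1.0):
    """Joint density c(theta)^n * I(a < min x) * I(max x < b) * prod(phi(x_i))
    for the truncation family with phi == 1 (the uniform case)."""
    n = len(x)
    c = 1.0 / (b - a)                       # c(theta) = [integral_a^b phi]^(-1) when phi == 1
    inside = a < min(x) and max(x) < b
    return (c**n) * math.prod(phi(xi) for xi in x) if inside else 0.0

x1 = [0.2, 0.5, 0.9, 0.3]
x2 = [0.2, 0.7, 0.9, 0.6]   # same min and max as x1
for (a, b) in [(0.0, 1.0), (0.1, 2.0), (-1.0, 1.5)]:
    # likelihood depends on the sample only through (min, max)
    assert truncation_likelihood(x1, a, b) == truncation_likelihood(x2, a, b)
```

This is the numerical face of the factorization: with φ ≡ 1, h(x) = 1, so gθ(T(x)) carries the entire θ-dependence through (x(1), x(n)).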

Example 5.2.2 (Order statistics). Let X = (X 1 ,... , Xn) and X 1 ,... , Xn be i.i.d. random variables having a distribution P ∈ P, where P is the family of distributions on R having Lebesgue p.d.f.s. Let X(1),... , X(n) be the order statistics discussed before. Note that the joint p.d.f. of X is f (x 1 ) · · · f (xn) = f (x(1)) · · · f (x(n)). Hence, T (X) = (X(1),... , X(n)) is sufficient for P ∈ P.

Definition (Minimal sufficiency). A sufficient statistic T(X) is minimal sufficient if, for every other sufficient statistic S(X), there is a measurable function ψ such that T = ψ(S) a.s. P. Hence, the minimal sufficient statistic is unique in the sense that two statistics that are one-to-one measurable functions of each other can be treated as one statistic.

Example 5.2.3. Let X1, ..., Xn be i.i.d. random variables from Pθ, the uniform distribution U(θ, θ + 1), θ ∈ R. Suppose that n > 1. The joint Lebesgue p.d.f. of (X1, ..., Xn) is

fθ(x) = ∏_{i=1}^n I_(θ,θ+1)(xi) = I_(x(n)−1, x(1))(θ), x = (x1, ..., xn) ∈ R^n,

where x(i) denotes the i-th smallest value of x1, ..., xn. By the factorization theorem, T = (X(1), X(n)) is sufficient for θ. Note that

x(1) = sup{θ : fθ(x) > 0} and x(n) = 1 + inf{θ : fθ(x) > 0}.

If S(X) is a statistic sufficient for θ, then by the factorization theorem there are Borel functions h and gθ such that fθ(x) = gθ(S(x))h(x). For x with h(x) > 0,

x(1) = sup{θ : gθ(S(x)) > 0 } and x(n) = 1 + inf{θ : gθ(S(x)) > 0 }.

Hence, there is a measurable function ψ such that T (x) = ψ(S(x)) when h(x) > 0. Since h > 0 a.s. P, we conclude that T is minimal sufficient.
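The identities x(1) = sup{θ : fθ(x) > 0} and x(n) = 1 + inf{θ : fθ(x) > 0} can be checked numerically by scanning a grid of θ values (an illustrative Python sketch):

```python
def f_theta(x, theta):
    """Joint Lebesgue density of an i.i.d. U(theta, theta+1) sample (up to support)."""
    return 1.0 if all(theta < xi < theta + 1 for xi in x) else 0.0

x = [0.3, 0.8, 0.55, 0.71]
grid = [i / 10000 for i in range(-20000, 20000)]     # theta grid on (-2, 2), step 1e-4
support = [th for th in grid if f_theta(x, th) > 0]

# sup{theta : f > 0} recovers x_(1), and 1 + inf{theta : f > 0} recovers x_(n)
assert abs(max(support) - min(x)) < 1e-3
assert abs(1 + min(support) - max(x)) < 1e-3
```

The support of θ ↦ fθ(x) is exactly the interval (x(n) − 1, x(1)), so its endpoints determine the minimal sufficient statistic.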

Minimal sufficient statistics exist under weak assumptions, e.g., when P contains distributions on R^k dominated by a σ-finite measure (Bahadur, 1957).

Theorem 5.2.1. Let P be a family of distributions on Rk.

(i) Suppose that P0 ⊂ P and that a.s. P0 implies a.s. P. If T is sufficient for P ∈ P and minimal sufficient for P ∈ P0, then T is minimal sufficient for P ∈ P.

(ii) Suppose that P contains p.d.f.s f0, f1, f2, ... w.r.t. a σ-finite measure. Let f∞(x) = Σ_{i=0}^∞ ci fi(x), where ci > 0 for all i and Σ_{i=0}^∞ ci = 1, and let Ti(X) = fi(X)/f∞(X) when f∞(X) > 0, i = 0, 1, 2, .... Then T(X) = (T0, T1, T2, ...) is minimal sufficient for P ∈ P. Furthermore, if {x : fi(x) > 0} ⊂ {x : f0(x) > 0} for all i, then we may replace f∞ by f0, in which case T(X) = (T1, T2, ...) is minimal sufficient for P ∈ P.

(iii) Suppose that P contains p.d.f.’s fP w.r.t. a σ-finite measure and that there exists a sufficient statistic T (X) such that, for any possible values x and y of X, fP (x) = fP (y)φ(x, y) for all P implies T (x) = T (y), where φ is a measurable function. Then T (X) is minimal sufficient for P ∈ P.
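Part (ii) can be illustrated with the two-point family {N(0, 1), N(1, 1)} for an i.i.d. sample: the single ratio T1 = f1/f0 reduces to exp{Σ xi − n/2}, a one-to-one function of Σ xi, so Σ Xi is minimal sufficient for this subfamily. A sketch (illustrative names):

```python
import math

def normal_joint(x, mu):
    """Joint density of an i.i.d. N(mu, 1) sample."""
    return math.prod(math.exp(-(xi - mu)**2 / 2) / math.sqrt(2 * math.pi) for xi in x)

x = [0.4, -1.2, 2.0, 0.1]
n = len(x)
T1 = normal_joint(x, 1.0) / normal_joint(x, 0.0)    # T1 = f1(x) / f0(x)

# the ratio collapses to exp(sum(x) - n/2), a one-to-one function of sum(x)
assert abs(T1 - math.exp(sum(x) - n / 2)) < 1e-12
```

All θ-free factors cancel in the ratio, which is why likelihood ratios isolate exactly the information-bearing part of the data.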

Example 5.2.4. Let P = {fθ : θ ∈ Θ} be an exponential family with p.d.f.s

fθ(x) = exp{[η(θ)]⊤T(x) − ξ(θ)}h(x).

Suppose that there exists Θ0 = {θ0, θ1, ..., θp} ⊂ Θ such that the vectors ηi = η(θi) − η(θ0), i = 1, ..., p, are linearly independent in R^p. (This is true if the family is of full rank.) We have shown that T(X) is sufficient for θ ∈ Θ. We now show that T is in fact minimal sufficient for θ ∈ Θ: by Theorem 5.2.1(ii) applied to the subfamily {fθ : θ ∈ Θ0}, the statistic S(X) = (T1, ..., Tp) with Ti(X) = fθi(X)/fθ0(X) = exp{ηi⊤T(X) − [ξ(θi) − ξ(θ0)]} is minimal sufficient for θ ∈ Θ0. Since the ηi are linearly independent, S is a one-to-one function of T, so T is minimal sufficient for θ ∈ Θ0 and, by Theorem 5.2.1(i), minimal sufficient for θ ∈ Θ.

5.3 Complete Statistics, Basu's Theorem

Definition 5.3.1. A statistic V (X) is ancillary if its distribution does not depend on the population P. V (X) is first-order ancillary if E[V (X)] is independent of P. A trivial ancillary statistic is the constant statistic V (X) ≡ c ∈ R.

If V (X) is a nontrivial ancillary statistic, then σ(V (X)) ⊂ σ(X) is a nontrivial σ-field that does not contain any information about P.

Hence, if S(X) is a statistic and V (S(X)) is a nontrivial ancillary statistic, it indicates that σ(S(X)) contains a nontrivial σ-field that does not contain any information about P and, hence, the “data” S(X) may be further reduced.

A sufficient statistic T appears to be most successful in reducing the data if no nonconstant function of T is ancillary or even first-order ancillary.

Definition 5.3.2 (Completeness). A statistic T (X) is said to be complete for P ∈ P if and only if, for any Borel f , E[f (T )] = 0 for all P ∈ P implies f = 0 a.s. P. T is said to be boundedly complete if and only if the previous statement holds for any bounded Borel f.

Remark 5.3.1. A complete statistic is boundedly complete.

If T is complete (or boundedly complete) and S = ψ(T ) for a measurable ψ, then S is complete (or boundedly complete).

Intuitively, a complete and sufficient statistic should be minimal sufficient (text, Exercise 48).

A minimal sufficient statistic is not necessarily complete; for example, the minimal sufficient statistic (X(1), X(n)) in Example 5.2.3 is not complete (text, Exercise 47).
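The non-completeness in Example 5.2.3 can be seen from the fact that E[X(n) − X(1)] = (n − 1)/(n + 1) for every θ, so f(T) = X(n) − X(1) − (n − 1)/(n + 1) has zero expectation for all θ without being zero a.s. A Monte Carlo sketch (illustrative names; tolerances are deliberately loose):

```python
import random

def mean_range(theta, n=5, reps=100_000, seed=0):
    """Monte Carlo estimate of E[X_(n) - X_(1)] for an i.i.d. U(theta, theta+1) sample."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        x = [theta + rng.random() for _ in range(n)]   # U(theta, theta+1) draws
        total += max(x) - min(x)
    return total / reps

# E[X_(n) - X_(1)] = (n-1)/(n+1) = 2/3 for n = 5, whatever theta is
for theta in (-3.0, 0.0, 10.0):
    assert abs(mean_range(theta) - 2 / 3) < 0.01
```

A nonconstant function of the minimal sufficient statistic with constant mean is exactly what completeness forbids, so (X(1), X(n)) cannot be complete.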

Finding a complete and sufficient statistic in an exponential family is simple, thanks to the following result.

Proposition 5.3.1. If P is an exponential family of full rank with p.d.f.s given by

fη(x) = exp{η⊤T(x) − ξ(η)}h(x),

then T (X) is complete and sufficient for η ∈ Ξ.

Proof. We have shown that T is sufficient. Suppose that there is a function f such that E[f(T)] = 0 for all η ∈ Ξ. Then

∫ f(t) exp{η⊤t − ξ(η)} dλ = 0 for all η ∈ Ξ,

where λ is a measure on (R^p, B^p) (involving h). Let η0 be an interior point of Ξ. Then

∫ f+(t) e^{η⊤t} dλ = ∫ f−(t) e^{η⊤t} dλ for all η ∈ N(η0), (∗)

where N(η0) = {η ∈ R^p : ‖η − η0‖ < ε} for some ε > 0 and f+, f− are the positive and negative parts of f. Equation (∗) says that, after normalization, the measures f+ dλ and f− dλ have the same moment generating function in a neighborhood of η0; by the uniqueness of moment generating functions, f+ dλ = f− dλ, i.e., f = 0 a.e. λ. Hence T is complete.

Theorem 5.3.1 (Basu's theorem). Suppose that T(X) is boundedly complete and sufficient for P ∈ P and that V(X) is ancillary. Then V(X) and T(X) are independent w.r.t. any P ∈ P.

Example 5.3.2. Let X1, ..., Xn be i.i.d. random variables from Pθ, the uniform distribution U(0, θ), θ > 0. The largest order statistic, X(n), is complete and sufficient for θ ∈ (0, ∞). The sufficiency of X(n) follows from the factorization theorem, since the joint Lebesgue p.d.f. of X1, ..., Xn is θ^{−n} I_(0,θ)(x(n)). From an earlier example, X(n) has the Lebesgue p.d.f. (n x^{n−1}/θ^n) I_(0,θ)(x) on R. Let f be a Borel function on [0, ∞) such that E[f(X(n))] = 0 for all θ > 0. Then

∫_0^θ f(x) x^{n−1} dx = 0 for all θ > 0.

Let G(θ) be the left-hand side of the previous equation. Applying the result on differentiation of an integral (see, e.g., Royden (1968, §5.3)), we obtain that G′(θ) = f(θ)θ^{n−1} a.e. m+, where m+ is the Lebesgue measure on ([0, ∞), B[0,∞)). Since G(θ) = 0 for all θ > 0, f(θ)θ^{n−1} = 0 a.e. m+ and, hence, f(x) = 0 a.e. m+. Therefore, X(n) is complete and sufficient for θ ∈ (0, ∞).
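The p.d.f. of X(n) used above can be checked against simulation via its c.d.f., P(X(n) ≤ t) = (t/θ)^n (an illustrative Monte Carlo sketch with loose tolerances):

```python
import random

def empirical_cdf_max(t, theta, n=4, reps=100_000, seed=1):
    """Monte Carlo estimate of P(X_(n) <= t) for an i.i.d. U(0, theta) sample of size n."""
    rng = random.Random(seed)
    hits = sum(max(rng.uniform(0, theta) for _ in range(n)) <= t for _ in range(reps))
    return hits / reps

theta, n = 2.0, 4
for t in (0.5, 1.0, 1.5):
    # P(X_(n) <= t) = (t/theta)^n, whose derivative is the density n t^(n-1)/theta^n
    assert abs(empirical_cdf_max(t, theta, n) - (t / theta)**n) < 0.01
```

The c.d.f. (t/θ)^n is just P(Xi ≤ t)^n by independence; differentiating in t gives the density n t^{n−1}/θ^n used in the completeness argument.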

Example 5.3.3. In Example 5.2.2, we showed that the vector of order statistics T(X) = (X(1), ..., X(n)) of i.i.d. random variables X1, ..., Xn is sufficient for P ∈ P, where P is the family of distributions on R having Lebesgue p.d.f.s. We now show that T(X) is also complete for P ∈ P. Let P0 be the family of Lebesgue p.d.f.s of the form

f(x) = C(θ1, ..., θn) exp{−x^{2n} + θ1 x + θ2 x^2 + · · · + θn x^n},

where θj ∈ R and C(θ1, ..., θn) is a normalizing constant such that ∫ f(x) dx = 1. Then P0 ⊂ P and P0 is an exponential family of full rank. Note that the joint distribution of X = (X1, ..., Xn) is also in an exponential family of full rank. Thus, by Proposition 5.3.1, U = (U1, ..., Un) is a complete statistic for P ∈ P0, where Uj = Σ_{i=1}^n Xi^j. Since a.s. P0 implies a.s. P, U(X) is also complete for P ∈ P. The result follows if we can show that there is a one-to-one correspondence between T(X) and U(X). Let V1 = Σ_{i=1}^n Xi, V2 = Σ_{i<j} Xi Xj, V3 = Σ_{i<j<k} Xi Xj Xk, ..., Vn = X1 · · · Xn. From the identities

Uk − V1 Uk−1 + V2 Uk−2 − · · · + (−1)^{k−1} Vk−1 U1 + (−1)^k k Vk = 0,

k = 1, ..., n, there is a one-to-one correspondence between U(X) and V(X) = (V1, ..., Vn). From the identity

(t − X1) · · · (t − Xn) = t^n − V1 t^{n−1} + V2 t^{n−2} − · · · + (−1)^n Vn,

there is a one-to-one correspondence between V(X) and T(X). This completes the proof: T(X) is sufficient and complete for P ∈ P. In fact, both U(X) and V(X) are also sufficient and complete for P ∈ P.
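The first set of identities linking the power sums Uk and the elementary symmetric functions Vk are Newton's identities, which can be verified numerically (illustrative sketch):

```python
from itertools import combinations
import math

x = [1.5, -2.0, 0.5, 3.0]
n = len(x)
# power sums U_k = sum(x_i^k); U[0] = n
U = [sum(xi**k for xi in x) for k in range(n + 1)]
# elementary symmetric functions V_k = sum over k-subsets of products; V[0] = 1
V = [sum(math.prod(c) for c in combinations(x, k)) for k in range(n + 1)]

# Newton's identities:
#   U_k - V_1 U_{k-1} + V_2 U_{k-2} - ... + (-1)^{k-1} V_{k-1} U_1 + (-1)^k k V_k = 0
for k in range(1, n + 1):
    s = sum((-1)**j * V[j] * U[k - j] for j in range(k)) + (-1)**k * k * V[k]
    assert abs(s) < 1e-9
```

Since each system can be solved recursively for the other, (U1, ..., Un) and (V1, ..., Vn) determine each other, which is the one-to-one correspondence invoked in the proof.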

Example 5.3.4. Suppose that X1, ..., Xn are i.i.d. random variables having the N(μ, σ^2) distribution, with μ ∈ R and a known σ > 0. It can easily be shown that the family {N(μ, σ^2) : μ ∈ R} is an exponential family of full rank with natural parameter η = μ/σ^2. By Proposition 5.3.1, the sample mean X̄ is complete and sufficient for η (and μ). Let S^2 be the sample variance. Since S^2 = (n − 1)^{−1} Σ_{i=1}^n (Zi − Z̄)^2, where Zi = Xi − μ is N(0, σ^2) and Z̄ = n^{−1} Σ_{i=1}^n Zi, S^2 is an ancillary statistic (σ^2 is known). By Basu's theorem, X̄ and S^2 are independent w.r.t. N(μ, σ^2) with μ ∈ R. Since σ^2 is arbitrary, X̄ and S^2 are independent w.r.t. N(μ, σ^2) for any μ ∈ R and σ^2 > 0.

Using the independence of X̄ and S^2, we now show that (n − 1)S^2/σ^2 has the chi-square distribution χ^2_{n−1}. Note that

n(X̄ − μ)^2/σ^2 + (n − 1)S^2/σ^2 = Σ_{i=1}^n (Xi − μ)^2/σ^2.

From the properties of normal distributions, n(X̄ − μ)^2/σ^2 has the chi-square distribution χ^2_1 with the m.g.f. (1 − 2t)^{−1/2}, t < 1/2, and Σ_{i=1}^n (Xi − μ)^2/σ^2 has the chi-square distribution χ^2_n with the m.g.f. (1 − 2t)^{−n/2}, t < 1/2. By the independence of X̄ and S^2, the m.g.f. of (n − 1)S^2/σ^2 is

(1 − 2t)^{−n/2} / (1 − 2t)^{−1/2} = (1 − 2t)^{−(n−1)/2}

for t < 1/2. This is the m.g.f. of the chi-square distribution χ^2_{n−1} and, therefore, the result follows. (An alternative proof is possible using Cochran's theorem.)
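Both conclusions of the example, the independence of X̄ and S^2 and the χ^2_{n−1} distribution of (n − 1)S^2/σ^2, can be checked by simulation (illustrative names; loose Monte Carlo tolerances):

```python
import random
import statistics

def simulate(mu=2.0, sigma=3.0, n=6, reps=50_000, seed=7):
    """Draw `reps` samples of size n from N(mu, sigma^2); return sample means and variances."""
    rng = random.Random(seed)
    xbars, s2s = [], []
    for _ in range(reps):
        x = [rng.gauss(mu, sigma) for _ in range(n)]
        xbars.append(statistics.fmean(x))
        s2s.append(statistics.variance(x))   # the (n-1)-divisor sample variance S^2
    return xbars, s2s

n, sigma = 6, 3.0
xbars, s2s = simulate(n=n, sigma=sigma)
scaled = [(n - 1) * s2 / sigma**2 for s2 in s2s]

# chi-square with n-1 = 5 degrees of freedom has mean 5 and variance 10
assert abs(statistics.fmean(scaled) - (n - 1)) < 0.1
assert abs(statistics.variance(scaled) - 2 * (n - 1)) < 0.5

# Basu: Xbar and S^2 are independent, so their sample correlation should be near 0
mx, ms = statistics.fmean(xbars), statistics.fmean(s2s)
cov = sum((a - mx) * (b - ms) for a, b in zip(xbars, s2s)) / (len(xbars) - 1)
corr = cov / (statistics.stdev(xbars) * statistics.stdev(s2s))
assert abs(corr) < 0.02
```

The near-zero correlation is only a necessary symptom of independence, but for jointly simulated (X̄, S^2) it is the easiest check; the moment match with χ^2_{n−1} mirrors the m.g.f. argument above.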