Study Notes: Advanced Statistical Inference (Statistics)
Prof. Rempala, Medical College of Georgia, Spring 2009
In elementary probability theory, the conditional probability of an event B given an event A is defined as
P(B|A) = P(A ∩ B)/P(A),
provided P(A) > 0. What if P(A) = 0? In statistics often A = {Y = c}, so if Y is continuous then P(A) = 0 and the elementary definition fails. Conditional expectation provides a way around this.
Definition 3.1.1. Let X be an integrable r.v. on (Ω, F, P).
(i) Let A be a σ-field with A ⊂ F. The conditional expectation of X w.r.t. A, denoted by E(X|A), is the a.s.-unique r.v. satisfying
(a) E(X|A) is measurable from (Ω, A) to (R, B);
(b) ∫_A E(X|A) dP = ∫_A X dP for every A ∈ A.
(ii) Let B ∈ F. The conditional probability of B given A is defined to be P(B|A) = E(I_B|A).
(iii) Let Y be measurable from (Ω, F, P) to (Λ, G). The conditional expectation of X given Y is defined to be E(X|Y) = E(X|σ(Y)).
Intuitively, σ(Y) is the "information contained in Y," and E(X|σ(Y)) is the "expectation of X given the information in Y."
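Example (a quick sanity check, added here; not in the original notes). If A = {∅, Ω}, the only A-measurable r.v.'s are constants, and (b) with A = Ω forces E(X|A) = EX a.s. At the other extreme, if A = F, then X itself satisfies (a) and (b), so E(X|A) = X a.s. More generally, if A = σ({B}) = {∅, B, Bᶜ, Ω} with 0 < P(B) < 1, then
E(X|A) = [∫_B X dP / P(B)] I_B + [∫_{Bᶜ} X dP / P(Bᶜ)] I_{Bᶜ},
an A-measurable r.v. whose integral over each of B and Bᶜ matches that of X.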
Theorem 3.1.1. Let Y be measurable from (Ω, F) to (Λ, G) and Z a function from (Ω, F) to R^k. Then Z is measurable from (Ω, σ(Y)) to (R^k, B^k) if and only if there is a measurable function h from (Λ, G) to (R^k, B^k) such that Z = h ◦ Y.
The function h in E(X|Y) = h ◦ Y is a Borel function on (Λ, G). For y ∈ Λ we define
E(X|Y = y) = h(y)
to be the conditional expectation of X given Y = y. Note that h is a function on Λ, whereas h ◦ Y = E(X|Y) is a function on Ω.

Proposition 3.1.1. Let X be a random variable in R^n and Y a random variable in R^m. Suppose that (X, Y) has a joint p.d.f. f(x, y) w.r.t. the product measure ν × λ, where ν and λ are σ-finite measures on (R^n, B^n) and (R^m, B^m), respectively. Let g(x, y) be a Borel function on R^{n+m} for which E|g(X, Y)| < ∞. Then
E[g(X, Y)|Y] = ∫ g(x, Y) f(x, Y) dν(x) / ∫ f(x, Y) dν(x)   a.s.
Proof. Denote the right-hand side by h(Y). By Fubini's theorem, h is Borel; then, by Proposition 1.5.1, h(Y) is Borel as well. Note that fY(y) = ∫ f(x, y) dν(x) is the p.d.f. of Y w.r.t. λ. For every B ∈ B^m,
∫_{Y^{−1}(B)} h(Y) dP = ∫_B h(y) dPY = ∫_B [ ∫ g(x, y) f(x, y) dν(x) / ∫ f(x, y) dν(x) ] fY(y) dλ(y)
= ∫_{R^n×B} g(x, y) f(x, y) d(ν × λ) = ∫_{R^n×B} g(x, y) dP(X,Y) = ∫_{Y^{−1}(B)} g(X, Y) dP.
For a random vector (X, Y) with a joint p.d.f. f(x, y) w.r.t. ν × λ, define the conditional p.d.f. of X given Y = y to be
fX|Y(x|y) = f(x, y)/fY(y),
where fY(y) = ∫ f(x, y) dν(x) is the marginal p.d.f. of Y w.r.t. λ. Then the proposition above states that
E[g(X, Y)|Y] = ∫ g(x, Y) fX|Y(x|Y) dν(x).
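Example (a worked check of the formula, added here; not in the original notes). Let (X, Y) have joint p.d.f. f(x, y) = x + y on [0, 1]² with ν = λ = Lebesgue measure. Then fY(y) = ∫₀¹ (x + y) dx = 1/2 + y, so fX|Y(x|y) = (x + y)/(1/2 + y), and
E(X|Y) = ∫₀¹ x (x + Y)/(1/2 + Y) dx = (1/3 + Y/2)/(1/2 + Y) = (2 + 3Y)/(3 + 6Y)   a.s.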
Let (Ω, F, P) be a probability space.
(i) Events A1, ..., Ak are independent if
P(A_{i1} ∩ · · · ∩ A_{is}) = P(A_{i1}) · · · P(A_{is})
for every subcollection {i1, ..., is} ⊂ {1, ..., k} (pairwise independence is not enough; see the example below).
(ii) Classes of events C1, ..., Ck are independent if all collections of events A1 ∈ C1, ..., Ak ∈ Ck are independent.
(iii) Random variables X1, ..., Xk are said to be independent if and only if σ(X1), ..., σ(Xk) are independent.
(iv) Any random vector whose law is the product measure P1 × P2 × · · · × Pk has independent components.
(v) The random variables X1, ..., Xk are independent if and only if
F_{(X1,...,Xk)}(x1, ..., xk) = F_{X1}(x1) · · · F_{Xk}(xk) for all (x1, ..., xk) ∈ R^k,
where F_{Xi} is the c.d.f. of Xi.
(vi) If X1, ..., Xk are independent and E|Xi| < ∞ for each i, then
E(X1 · · · Xk) = EX1 · · · EXk
(by Fubini's theorem).
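Example (standard, added to illustrate (i)). Toss two fair coins. Let A = {first coin shows heads}, B = {second coin shows heads}, C = {the two coins agree}. Any two of A, B, C are independent, since each pairwise intersection has probability 1/4 = (1/2)(1/2); but P(A ∩ B ∩ C) = P(both heads) = 1/4 ≠ 1/8 = P(A)P(B)P(C). Hence pairwise independence does not imply independence, and the product rule in (i) must hold for every subcollection.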
For c ∈ R^k, ||c|| denotes the Euclidean norm (||c||² = c^T c).

Definition 3.3.1. Let X, X1, ..., Xn, ... be random k-vectors defined on a probability space.
(i) We say that the sequence {Xn} converges to X almost surely (a.s.), and write Xn −→a.s. X, if and only if
P(lim_{n→∞} ||Xn − X|| = 0) = 1.
(ii) We say that {Xn} converges in probability to X, and write Xn −→P X, if and only if, for every fixed ε > 0,
P(||Xn − X|| > ε) → 0.
(iii) We say that {Xn} converges to X in Lp (in p-th moment), and write Xn −→Lp X, if and only if
lim_n E||Xn − X||^p = 0,
where p > 0 is a fixed constant.
(iv) Let F_{Xn} be the c.d.f. of Xn, n = 1, 2, ..., and F_X the c.d.f. of X. We say that {Xn} converges to X in distribution (or in law), and write Xn −→d X, if and only if, for each continuity point x of F_X,
lim_{n→∞} F_{Xn}(x) = F_X(x).
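Example (added to illustrate the definitions; not in the original notes). Recall the standard hierarchy: Xn −→a.s. X ⇒ Xn −→P X ⇒ Xn −→d X, and Xn −→Lp X ⇒ Xn −→P X; none of the converses holds in general. For a concrete case, let Y be a random k-vector and set Xn = X + n^{−1}Y. Then ||Xn − X|| = ||Y||/n → 0 for every ω, so Xn −→a.s. X (hence also in probability and in distribution); if moreover E||Y||^p < ∞, then E||Xn − X||^p = E||Y||^p / n^p → 0, so Xn −→Lp X.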
Theorem 3.3.3 (Slutsky's theorem). Let X, X1, ..., Y, Y1, ... be random variables on a probability space. Suppose that Xn −→d X and Yn −→P c, where c is a fixed real number. Then
(a) Xn + Yn −→d X + c;
(b) YnXn −→d cX;
(c) Xn/Yn −→d X/c if c ≠ 0.
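A standard application (added for illustration): let X1, X2, ... be i.i.d. with mean μ and variance σ² ∈ (0, ∞), and let X̄n and Sn denote the sample mean and sample standard deviation. By the CLT, √n(X̄n − μ) −→d N(0, σ²), and Sn −→P σ (by the WLLN and continuity); hence (c) gives the studentized statistic
√n(X̄n − μ)/Sn −→d N(0, 1).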
Definition 3.3.2. Two sequences of real numbers {an} and {bn} satisfy
an = O(bn) if and only if |an| ≤ c|bn| for all n and some constant c;
an = o(bn) if and only if an/bn → 0 as n → ∞.
The following conventions are often used. Let X1, X2, ... be random vectors and Y1, Y2, ... be random variables defined on a common probability space.
(i) Xn = O(Yn) a.s. if and only if Xn(ω) = O(Yn(ω)) a.s. (P);
(ii) Xn = o(Yn) a.s. if and only if Xn/Yn → 0 a.s.;
(iii) Xn = Op(Yn) if and only if, for any ε > 0, there is a constant Cε > 0 such that
sup_n P(||Xn|| ≥ Cε|Yn|) < ε;
(iv) Xn = op(Yn) if and only if Xn/Yn −→P 0.
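Examples (standard facts, added for illustration). If Xn −→d X, then Xn = Op(1): take Cε with P(||X|| ≥ Cε) < ε, choosing Cε a continuity point of the c.d.f. of ||X||. By the CLT, X̄n − μ = Op(n^{−1/2}) for i.i.d. Xi with finite variance. Finally, Xn −→P 0 is the same as Xn = op(1).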
Theorem 3.3.4. Let X1, X2, ... and Y be random k-vectors satisfying
an(Xn − c) −→d Y,
where c ∈ R^k and {an} is a sequence of positive numbers such that an → ∞ as n → ∞. Let g be a differentiable function from R^k to R. Then:
(i) an[g(Xn) − g(c)] −→d [∇g(c)]^T Y,
where ∇g(x) denotes the k-vector of partial derivatives of g at x.
(ii) Suppose g has continuous partial derivatives of order m > 1 in a neighborhood of c, with all the partial derivatives of order j, 1 ≤ j ≤ m − 1, vanishing at c, but with the mth-order partial derivatives not all vanishing at c. Then
(an)^m [g(Xn) − g(c)] −→d (1/m!) ∑_{i1=1}^k · · · ∑_{im=1}^k [∂^m g / (∂x_{i1} · · · ∂x_{im})]|_{x=c} Y_{i1} · · · Y_{im}.
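Example (a standard illustration of both parts; not in the original notes). Let X1, X2, ... be i.i.d. real r.v.'s with mean μ and variance σ² ∈ (0, ∞), so that √n(X̄n − μ) −→d Y with Y ∼ N(0, σ²) (here an = √n, c = μ, k = 1). Take g(x) = x².
If μ ≠ 0, then g′(μ) = 2μ ≠ 0 and (i) gives √n(X̄n² − μ²) −→d 2μY ∼ N(0, 4μ²σ²).
If μ = 0, then g′(0) = 0 while g″(0) = 2, so (ii) with m = 2 gives n X̄n² −→d (1/2) g″(0) Y² = Y², i.e., n X̄n²/σ² −→d χ²₁ (chi-square with one degree of freedom).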
Theorem 3.4.1. Let X, X1, X2, ... be random k-vectors.
(i) Xn −→d X ⇔ E h(Xn) → E h(X) for every bounded continuous function h: R^k → R.
(ii) Let ϕX, ϕX1, ϕX2, ... be the ch.f.'s of X, X1, X2, ..., respectively. Then Xn −→d X ⇔ lim_{n→∞} ϕXn(t) = ϕX(t) for all t ∈ R^k.
(iii) Xn −→d X ⇔ c^T Xn −→d c^T X for every c ∈ R^k.
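Example (added for illustration of (ii)). If Xn ∼ N(0, 1 + 1/n), then ϕXn(t) = e^{−t²(1+1/n)/2} → e^{−t²/2}, the ch.f. of N(0, 1), so Xn −→d N(0, 1). Part (iii) is the Cramér-Wold device; it reduces convergence in distribution of random vectors to the one-dimensional case.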
Theorem 3.4.2 (SLLN).
(i) Let {Xi} be i.i.d. random variables. Then
(1/n) ∑_{i=1}^n Xi −→a.s. a for some constant a ⇔ E|X1| < ∞,
in which case a = EX1.
(ii) Let {Xi} be independent (not necessarily identically distributed) random variables such that Var Xi < ∞ for every i. If
∑_{i=1}^∞ Var Xi / i² < ∞,
then
(1/n) ∑_{i=1}^n (Xi − EXi) −→a.s. 0.
If a.s. convergence is replaced by convergence in probability, we obtain the weak law of large numbers (WLLN).
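Example (a standard application, added for illustration): Monte Carlo integration. To approximate θ = ∫₀¹ g(x) dx, where ∫₀¹ |g(x)| dx < ∞, generate U1, U2, ... i.i.d. uniform on (0, 1). Since E|g(U1)| < ∞, part (i) yields
(1/n) ∑_{i=1}^n g(Ui) −→a.s. E g(U1) = θ.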
Theorem 3.5.1. For each n, let Xn1, ..., Xnn be independent random variables with
0 < σn² = Var(∑_{j=1}^n Xnj) < ∞.
If the Lindeberg condition holds, i.e., for every ε > 0,
lim_{n→∞} (1/σn²) ∑_{j=1}^n E[(Xnj − EXnj)² I({|Xnj − EXnj| > εσn})] = 0,   (3.1)
then
(1/σn) ∑_{j=1}^n (Xnj − EXnj) −→d N(0, 1).
Remark 3.5.1. Condition (3.1) is implied by Liapounov's condition:
lim_{n→∞} (1/σn^{2+δ}) ∑_{j=1}^n E|Xnj − EXnj|^{2+δ} = 0
for some δ > 0.
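Example (added to illustrate checking the condition). Let Xnj = Xj, where X1, X2, ... are i.i.d. with Var X1 = σ² > 0 and ρ = E|X1 − EX1|³ < ∞. Then σn² = nσ², and taking δ = 1,
(1/σn³) ∑_{j=1}^n E|Xj − EXj|³ = nρ/(n^{3/2}σ³) = ρ/(σ³√n) → 0,
so Liapounov's condition, hence (3.1), holds, and the theorem recovers the classical i.i.d. CLT.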