Sta230/Mth230 - Lecture 6
Colin Rundel
February 5, 2014
Chapter 3.1-3.3 Random Variables
We have been using random variables for a while now in a variety of forms, but it is good to explicitly define what we mean.
Random Variable - A real-valued function on the sample space Ω
Example: If Ω is the 36-element sample space resulting from rolling two fair six-sided dice (r and g), then the following are all random variables:
X(r, g) = r     Y(r, g) = |r − g|     Z(r, g) = r + g
Random variables are in essence a fancy way of describing an event.
Previous example:
Ω = {(r, g) : 1 ≤ r, g ≤ 6}     Y(r, g) = |r − g|
What is the event for P(Y = 1) in terms of ω ∈ Ω?
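As a sanity check, the event can be enumerated directly. Below is a minimal Python sketch (the names omega and event are purely illustrative) that lists the outcomes with |r − g| = 1 and computes P(Y = 1) under the equally-likely assumption:

# Enumerate the 36 equally likely outcomes of two fair dice
omega = [(r, g) for r in range(1, 7) for g in range(1, 7)]

# The event {Y = 1} = {(r, g) in omega : |r - g| = 1}
event = [(r, g) for (r, g) in omega if abs(r - g) == 1]

print(event)                    # (1,2), (2,1), (2,3), (3,2), ...
print(len(event) / len(omega))  # P(Y = 1) = 10/36 ≈ 0.278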
Range of a random variable - the set of all possible values
Distribution of a random variable - a specification of P(X ∈ A) for every set A. If X has a countable range then we can define f(x) = P(X = x), so that

P(X ∈ A) = ∑_{x ∈ A} f(x)
The expected value of a random variable is defined as follows:

Discrete Random Variable:

E[X] = ∑_{all x} x P(X = x)

Continuous Random Variable:

E[X] = ∫_{all x} x f(x) dx, where f is the density of X
This is a natural generalization of what we do when deciding if a casino game is fair.
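For instance, the expected value of Z = r + g from the dice example can be computed straight from the definition. A small Python sketch (variable names are illustrative only):

# E[Z] = sum over all outcomes of z * P(Z = z); each outcome has probability 1/36
omega = [(r, g) for r in range(1, 7) for g in range(1, 7)]
ez = sum((r + g) * (1 / 36) for (r, g) in omega)
print(ez)  # 7.0, the familiar mean of the sum of two fair dice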
Properties of expected value:

Constants - E(c) = c if c is constant
Indicators - E(I_A) = P(A) where I_A is an indicator function
Functions - E[g(X)] = ∑_{all x} g(x) P(X = x) if X is discrete, or ∫_{all x} g(x) f(x) dx if X is continuous
Constant Factors - E(cX) = cE(X)
Addition - E(X + Y) = E(X) + E(Y)
Multiplication - E(XY) = E(X)E(Y) if X and Y are independent.
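These properties are easy to check by simulation. A rough Python sketch, assuming X and Y are two independent fair dice (the seed and sample size are arbitrary):

import random

random.seed(1)
n = 200_000
x = [random.randint(1, 6) for _ in range(n)]
y = [random.randint(1, 6) for _ in range(n)]

def mean(v):
    return sum(v) / len(v)

# Addition: E(X + Y) = E(X) + E(Y), both should be near 7
print(mean([a + b for a, b in zip(x, y)]), mean(x) + mean(y))

# Multiplication (X, Y independent): E(XY) = E(X)E(Y), both near 12.25
print(mean([a * b for a, b in zip(x, y)]), mean(x) * mean(y))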
Another common property of random variables we are interested in is the Variance, which measures the expected squared deviation from the mean.

Var(X) = E[(X − E(X))²] = E[(X − μ)²]

One common simplification:

Var(X) = E[(X − μ)²]
       = E(X² − 2μX + μ²)
       = E(X²) − 2μE(X) + μ²
       = E(X²) − 2μ² + μ²
       = E(X²) − μ²

Standard Deviation: SD(X) = √Var(X)
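The shortcut Var(X) = E(X²) − μ² is easy to verify numerically. A minimal Python sketch for a single fair die (whose mean is 3.5 and variance 35/12 ≈ 2.917):

# Single fair die: P(X = x) = 1/6 for x = 1, ..., 6
support = range(1, 7)
p = 1 / 6

mu  = sum(x * p for x in support)             # E(X)   = 3.5
ex2 = sum(x**2 * p for x in support)          # E(X^2) = 91/6
var_direct   = sum((x - mu)**2 * p for x in support)
var_shortcut = ex2 - mu**2

print(mu, var_direct, var_shortcut)           # 3.5, 2.9166..., 2.9166...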
What is Var(aX + b) when a and b are constants?

Var(aX + b) = E[(aX + b − (aμ + b))²] = E[a²(X − μ)²] = a²E[(X − μ)²] = a²Var(X)

Which gives us:

Var(aX) = a²Var(X)
Var(X + c) = Var(X)
Var(c) = 0
More generally, the variance of a linear combination of n random variables is:
Var(∑_{i=1}^n c_i X_i) = ∑_{i=1}^n ∑_{j=1}^n Cov(c_i X_i, c_j X_j)
                       = ∑_{i=1}^n c_i² Var(X_i) + ∑_{i=1}^n ∑_{j≠i} c_i c_j Cov(X_i, X_j)
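A quick numerical check of this identity, sketched with NumPy (the coefficients and covariance matrix below are arbitrary choices, not from the lecture):

import numpy as np

rng = np.random.default_rng(0)
c = np.array([2.0, -1.0, 0.5])                  # arbitrary coefficients
Sigma = [[2.0, 0.5, 0.3],
         [0.5, 1.0, 0.2],
         [0.3, 0.2, 1.5]]                       # arbitrary covariance matrix
X = rng.multivariate_normal(np.zeros(3), Sigma, size=500_000)   # rows are samples

lhs = np.var(X @ c, ddof=1)        # Var(sum_i c_i X_i) estimated from the samples
S = np.cov(X, rowvar=False)        # sample covariance matrix
rhs = c @ S @ c                    # sum_i sum_j c_i c_j Cov(X_i, X_j)

print(lhs, rhs)                    # agree up to floating point; both ≈ 7.78 here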
Let X ∼ Bern(p), what are E(X) and Var(X)?
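The standard answers are E(X) = p and Var(X) = p(1 − p), since E(X) = 0·(1 − p) + 1·p = p and E(X²) = p. A throwaway simulation sketch (p = 0.3 and the sample size are arbitrary):

import random

random.seed(2)
p, n = 0.3, 200_000
x = [1 if random.random() < p else 0 for _ in range(n)]   # Bernoulli(p) draws

mean = sum(x) / n
var  = sum((xi - mean)**2 for xi in x) / n

print(mean, p)            # close to 0.3
print(var, p * (1 - p))   # close to 0.21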
Let X ∼ Binom(n, p), what are E(X) and Var(X)?

We can redefine X = ∑_{i=1}^n Y_i where Y_1, ..., Y_n ∼ Bern(p), and since we are sampling with replacement all Y_i and Y_j are independent.
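By linearity of expectation and independence of the Y_i, this gives E(X) = np and Var(X) = np(1 − p). A quick simulation check (n = 20, p = 0.4, and the replication count are arbitrary):

import random

random.seed(3)
n_trials, p, reps = 20, 0.4, 100_000

# Each Binomial(n, p) draw is a sum of n independent Bernoulli(p) variables
draws = [sum(1 for _ in range(n_trials) if random.random() < p) for _ in range(reps)]

mean = sum(draws) / reps
var  = sum((d - mean)**2 for d in draws) / reps

print(mean, n_trials * p)             # close to 8
print(var,  n_trials * p * (1 - p))   # close to 4.8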
Let's consider a simple case where we have an urn with m black marbles and N − m white marbles. Let B_i be an indicator variable for the ith draw being black:

B_i = 1 if the ith draw is black, 0 otherwise

In the case where N = 2 and m = 1, what is P(B_i = 1) for all i?
Ω = {BW, WB}
What about when N = 3 and m = 1?
Ω = {BW₁W₂, BW₂W₁, W₁BW₂, W₂BW₁, W₁W₂B, W₂W₁B}
Proposition
P(B_i = 1) = m/N for all i
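The proposition is easy to check empirically: randomly order the urn many times and look at how often each draw position is black. A small Python sketch using N = 3 and m = 1 as above (the replication count is arbitrary):

import random

random.seed(4)
N, m, reps = 3, 1, 100_000
urn = ["B"] * m + ["W"] * (N - m)

counts = [0] * N
for _ in range(reps):
    draw = random.sample(urn, N)       # one complete ordering of the draws
    for i, marble in enumerate(draw):
        if marble == "B":
            counts[i] += 1

print([c / reps for c in counts])      # each entry close to m/N = 1/3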
Let X ∼ Hypergeo(N, m, n), then X = B_1 + B_2 + · · · + B_n
Let X ∼ Hypergeo(N, m, n), what is E(X)?
Let X ∼ Hypergeo(N, m, n), what is Var(X)?
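Writing X = B_1 + · · · + B_n with P(B_i = 1) = m/N gives E(X) = nm/N; the B_i are not independent, and accounting for their covariances gives Var(X) = n(m/N)(1 − m/N)(N − n)/(N − 1). A simulation sketch of both (N = 20, m = 8, n = 5 are arbitrary choices):

import random

random.seed(5)
N, m, n, reps = 20, 8, 5, 100_000
urn = [1] * m + [0] * (N - m)          # 1 = black marble, 0 = white

# Each draw of X: n marbles taken without replacement, count the black ones
draws = [sum(random.sample(urn, n)) for _ in range(reps)]

mean = sum(draws) / reps
var  = sum((d - mean)**2 for d in draws) / reps

p = m / N
print(mean, n * p)                                 # close to 2.0
print(var,  n * p * (1 - p) * (N - n) / (N - 1))   # close to 0.947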
We start with $1 on the table and a coin.
At each step: toss the coin; if it shows heads, you take the money on the table and the game ends. If it shows tails, I double the money on the table and we toss again.
Let X be the amount you win; what is E(X)?
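This is the classic St. Petersburg-type game: assuming a fair coin, X takes the value 2^k with probability (1/2)^(k+1), so E(X) = ∑_{k≥0} 2^k (1/2)^(k+1) = 1/2 + 1/2 + · · · diverges, even though every individual payout is finite. A simulation sketch showing how the sample mean refuses to settle down (the play counts are arbitrary):

import random

random.seed(6)

def play():
    # Money doubles on each tails; you take whatever is on the table at the first heads
    money = 1
    while random.random() < 0.5:       # tails with probability 1/2 (fair coin assumed)
        money *= 2
    return money

for plays in (10**3, 10**4, 10**5, 10**6):
    winnings = [play() for _ in range(plays)]
    print(plays, sum(winnings) / plays)   # running mean tends to grow with the number of plays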