
STT 861 Theory of Prob and STT I Lecture Note - 10

2017-11-08

Solving a problem from the midterm exam; conditional distribution: definition, examples, and its expectation and variance.

Portal to all the other notes

Lecture 10 - Nov 08 2017

Exam question

Let $X \sim U(50, 90)$ and $Z \sim U(70, 90)$. Then $\mathrm{Law}(Z) = \mathrm{Law}(X \mid X > 70)$.

Q: Let $X \sim U(a, b)$ and $c \in (a, b)$. Find $\mathrm{Law}(X \mid X > c)$.

A: Let $F$ be the CDF of $X \mid X > c$, and let $x \in [c, b]$.

$$F(x) = P(X \leq x \mid X > c) = \frac{P(X \leq x,\, X > c)}{P(X > c)} = \frac{\frac{x - c}{b - a}}{\frac{b - c}{b - a}} = \frac{x - c}{b - c}$$

So $X \mid X > c$ is uniform on $(c, b)$.
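As a quick numerical sanity check (my addition, not part of the lecture), the following sketch draws from $U(50, 90)$, conditions on $X > 70$, and compares the empirical CDF with the $U(70, 90)$ one:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b, c = 50.0, 90.0, 70.0              # X ~ U(a, b), condition on X > c
x = rng.uniform(a, b, size=1_000_000)
x_given = x[x > c]                      # draws from Law(X | X > c)

# Empirical CDF of X | X > c vs. the U(c, b) CDF (t - c)/(b - c)
for t in [72.0, 75.0, 80.0, 85.0]:
    print(f"F({t}): empirical {np.mean(x_given <= t):.4f}, "
          f"theory {(t - c) / (b - c):.4f}")
```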


Conditional distribution - Chapter 5

Recall that the joint density of a vector $(X, Y)$ is $f_{X,Y}(x, y)$, where $x$ and $y$ are real numbers.

Definition (Theorem): the conditional law of $X$ given $Y = y$ is

$$f_{X|Y}(x \mid y) = \frac{f_{X,Y}(x, y)}{f_Y(y)}$$

Consequently, the CDF of $X$ given $Y = y$ is

$$F_{X|Y}(x \mid y) = P(X \leq x \mid Y = y) = \int_{-\infty}^{x} f_{X|Y}(u \mid y)\, du$$

The discrete case uses the same idea.

For $Y$, there is no change in notation.

For $X$, use sums instead of integrals.
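To make the discrete case concrete, here is a minimal sketch with a made-up joint pmf table (the numbers are arbitrary, not from the lecture): dividing each column of the joint pmf by the marginal of $Y$ gives the conditional pmf.

```python
import numpy as np

# Toy joint pmf P(X = x, Y = y): rows are x in {0, 1, 2}, columns are y in {0, 1}
joint = np.array([[0.10, 0.20],
                  [0.15, 0.25],
                  [0.05, 0.25]])

p_y = joint.sum(axis=0)        # marginal pmf of Y: [0.3, 0.7]
cond = joint / p_y             # P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y)

print(cond)
print(cond.sum(axis=0))        # each column sums to 1, as a conditional pmf should
```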

Example 1

Let $N \sim \mathrm{Poi}(\lambda)$ and $X \sim \mathrm{Bin}(N, \theta)$; here we see $\mathrm{Law}(X \mid N = k) = \mathrm{Bin}(k, \theta)$.

This type of example is called a mixture. A mixture arises when a parameter of one random variable is determined by the value of another random variable.

In our example, we would like to find the law of $X$ unconditional on $N$, i.e. to find $P(X = i)$ for any $i = 0, 1, 2, \dots$

$$\begin{aligned}
P(X = i) &= \sum_{k=i}^{\infty} P(X = i \text{ and } N = k) = \sum_{k=i}^{\infty} P(X = i \mid N = k)\, P(N = k) \\
&= \sum_{k=i}^{\infty} \binom{k}{i}\, \theta^i (1 - \theta)^{k-i}\, e^{-\lambda} \frac{\lambda^k}{k!} = \sum_{k=i}^{\infty} \frac{k!}{i!\,(k-i)!}\, \theta^i (1 - \theta)^{k-i}\, e^{-\lambda} \frac{\lambda^k}{k!} \\
&= \frac{e^{-\lambda} \theta^i}{i!} \sum_{k=i}^{\infty} \frac{(1 - \theta)^{k-i}}{(k-i)!}\, \lambda^k = \frac{e^{-\lambda} \theta^i}{i!} \sum_{k=0}^{\infty} \frac{(1 - \theta)^{k}}{k!}\, \lambda^{k+i} \\
&= \frac{e^{-\lambda} (\theta\lambda)^i}{i!} \sum_{k=0}^{\infty} \frac{\big((1 - \theta)\lambda\big)^{k}}{k!} = \frac{e^{-\lambda} (\theta\lambda)^i}{i!}\, e^{\lambda(1-\theta)} = e^{-\lambda\theta}\, \frac{(\lambda\theta)^i}{i!}
\end{aligned}$$

It is a Poisson distribution with parameter $\lambda' = \lambda\theta$, i.e. $X \sim \mathrm{Poi}(\lambda\theta)$.
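Here is a minimal simulation sketch (my addition, with arbitrary $\lambda$ and $\theta$) checking this conclusion: draw $N \sim \mathrm{Poi}(\lambda)$, then $X \mid N \sim \mathrm{Bin}(N, \theta)$, and compare the empirical pmf of $X$ with that of $\mathrm{Poi}(\lambda\theta)$.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
lam, theta = 6.0, 0.4                      # arbitrary parameters for the check

n = rng.poisson(lam, size=1_000_000)       # N ~ Poi(lambda)
x = rng.binomial(n, theta)                 # X | N = n ~ Bin(n, theta)

# Empirical pmf of X vs. the Poi(lambda * theta) pmf
mu = lam * theta
for i in range(6):
    print(f"P(X={i}): empirical {np.mean(x == i):.4f}, "
          f"Poi(λθ) {exp(-mu) * mu**i / factorial(i):.4f}")
```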

Home exercise

Read and understand the section of the book in Chapter 5 on Markov Chains.

Conditional expectations - Chapter 5.2

$$E(X \mid Y = y) = \int_{-\infty}^{\infty} x\, f_{X|Y}(x \mid y)\, dx$$

For the discrete case,

$$E(X \mid Y = y) = \sum_{k} x_k\, P_{X|Y}(x_k \mid y)$$
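Reusing the arbitrary joint pmf table from the sketch above (again, my own toy numbers), the discrete formula is just a weighted sum over the support of $X$, one value per $y$:

```python
import numpy as np

xs = np.array([0, 1, 2])           # support of X
joint = np.array([[0.10, 0.20],    # same toy joint pmf as above
                  [0.15, 0.25],
                  [0.05, 0.25]])

cond = joint / joint.sum(axis=0)   # P(X = x_k | Y = y)

# E(X | Y = y) = sum_k x_k * P(X = x_k | Y = y), one number per column y
print(xs @ cond)                   # [0.8333..., 1.0714...]
```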

Super important theorem (definition)

Let $X$ and $Y$ be given. Let

$$g(x) = E(Y \mid X = x)$$

$$v(x) = \mathrm{Var}(Y \mid X = x) = E(Y^2 \mid X = x) - \big(E(Y \mid X = x)\big)^2 = E\big((Y - g(x))^2 \,\big|\, X = x\big)$$

Theorem: $E(XY) = E(X g(X))$

(The idea of the “tower property”: $E(E(Z \mid X)) = E(Z)$. “Given $X$” means $X$ is treated as not random.)

Proof: use the tower property to prove the theorem.

$$E(XY) = E\big(E(XY \mid X)\big) = E\big(X\, E(Y \mid X)\big) = E\big(X g(X)\big)$$
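As a numerical illustration (my addition), take the pair from Example 1 with $N$ in the role of $X$ and the binomial count in the role of $Y$, so that $g(n) = E(X \mid N = n) = n\theta$; the theorem then predicts $E(NX) = E(N g(N)) = \theta E(N^2)$:

```python
import numpy as np

rng = np.random.default_rng(2)
lam, theta = 6.0, 0.4

n = rng.poisson(lam, size=1_000_000)   # plays the role of X in the theorem
x = rng.binomial(n, theta)             # plays the role of Y; g(n) = n * theta

print(np.mean(n * x))                  # E(NX), empirical
print(np.mean(n * (n * theta)))        # E(N g(N)), empirical
print(theta * (lam + lam**2))          # exact: theta * E(N^2), E(N^2) = lam + lam^2
```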

Notice: now recall $v(x)$. It turns out that

$$\mathrm{Var}(Y) = E(v(X)) + \mathrm{Var}(g(X)) = E\big(\mathrm{Var}(Y \mid X)\big) + \mathrm{Var}\big(E(Y \mid X)\big)$$

Explanation: The unconditional variance of Y is the expected conditional variance plus the variance of the conditional expectation.

Example 2

Let $X \mid N \sim \mathrm{Bin}(N, \theta)$ and $N \sim \mathrm{Poi}(\lambda)$.

(We found that $X \sim \mathrm{Poi}(\lambda\theta)$.)

From the theorem, ignoring the fact above, we apply $E(X) = E(E(X \mid N))$.

If we give some information and then take the information away, it is equivalent to not giving the information in the first place.

$$E(X) = E(E(X \mid N)) = E(N\theta) = \theta E(N) = \lambda\theta, \qquad E(\mathrm{Poi}(\lambda\theta)) = \lambda\theta$$

This verifies the theorem above.

For the variance,

$$\mathrm{Var}(X) = E(v(N)) + \mathrm{Var}(E(X \mid N)) = E\big(N\theta(1-\theta)\big) + \mathrm{Var}(N\theta) = \theta(1-\theta)\lambda + \theta^2\lambda = \lambda\theta$$

This matches $\mathrm{Var}(\mathrm{Poi}(\lambda\theta)) = \lambda\theta$, so it is also correct.
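To close the loop, a short simulation sketch (my addition, same arbitrary parameters as before) checks the mean and the variance decomposition at once:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, theta = 6.0, 0.4

n = rng.poisson(lam, size=1_000_000)
x = rng.binomial(n, theta)

print(np.mean(x), lam * theta)               # E(X) vs lambda * theta

# Var(X) = E(v(N)) + Var(g(N)), with v(n) = n θ(1-θ) and g(n) = n θ
e_v = np.mean(n * theta * (1 - theta))       # expected conditional variance
var_g = np.var(n * theta)                    # variance of conditional expectation
print(np.var(x), e_v + var_g, lam * theta)   # all three ≈ lambda * theta
```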


