
STT 861 Theory of Probability and Statistics I Lecture Note - 8

2017-10-25

Proof that the sample variance estimator is biased; Example 3.5.3 in the textbook; normal distribution, joint (multivariate) normal distribution, Gamma distribution.

Portal to all the other notes

Lecture 08 - Oct 25 2017

For the video recording

(This part is similar to the previous note. We recorded a video of this material, so the professor talked about it once again.)

Basically, it is the proof that the sample variance $\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2$ is biased.

Recall $\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i$.

Note that $\bar{x}$ is the expectation of a r.v. $\hat{X}$ which takes the value $x_i$ with probability $\frac{1}{n}$ for each $i$.

Therefore $E(\hat{X})=\frac{1}{n}\sum_{i=1}^{n}x_i=\bar{x}$, and $E(\hat{X}^2)=\frac{1}{n}\sum_{i=1}^{n}x_i^2$.

Now recall the formula: for any constant $c$, $Var(X)=E\left((X-c)^2\right)-\left(E(X)-c\right)^2$.

Use this with $X=\hat{X}$ defined above: $\hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2=Var(\hat{X})=\frac{1}{n}\sum_{i=1}^{n}(x_i-c)^2-(\bar{x}-c)^2$.

Now the question is: what is the bias of $\hat{\sigma}^2$? For this we replace each $x_i$ by $X_i$, where the $X_i$'s are i.i.d. with mean $\mu$ and variance $\sigma^2$. The resulting expression is $\hat{\sigma}^2(X)=\frac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2=\frac{1}{n}\sum_{i=1}^{n}(X_i-c)^2-(\bar{X}-c)^2$.

We are interested in $E(\hat{\sigma}^2(X))$. We want to know whether this equals $\sigma^2$ or not.

Now use the formula above with $c=\mu$ and $x_i=X_i$, and take $E$ on both sides: $E(\hat{\sigma}^2)=\frac{1}{n}\sum_{i=1}^{n}E\left((X_i-\mu)^2\right)-E\left((\bar{X}-\mu)^2\right)=\sigma^2-Var(\bar{X})=\sigma^2-\frac{\sigma^2}{n}$.

We proved that $E(\hat{\sigma}^2)=\left(1-\frac{1}{n}\right)\sigma^2$. It is biased, with bias $-\frac{1}{n}\sigma^2$.
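To make this concrete, here is a quick Monte Carlo sanity check (my own addition, not from the lecture; the parameters are arbitrary) showing that the $\frac{1}{n}$ estimator averages to $(1-\frac{1}{n})\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2, n, trials = 0.0, 4.0, 5, 200_000

# Draw many samples of size n and compute the 1/n variance estimator for each.
x = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
sigma2_hat = ((x - x.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)

print(sigma2_hat.mean())     # ≈ (1 - 1/n) * sigma2 = 3.2, not 4.0
print((1 - 1 / n) * sigma2)  # 3.2
```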

Example 3.5.3, Page 100

$(X,Y)$ has density $f(x,y)=2$ if $0\leq y\leq x\leq 1$, and $0$ otherwise.

First, compute the joint CDF of $(X,Y)$.

General definition: $F(u,v)=P(X\leq u,\ Y\leq v)=\int_{-\infty}^{u}\int_{-\infty}^{v}f(x,y)\,dy\,dx$.

If $u\geq v$, we integrate $y$ between $0$ and $\min(x,v)$: $\int_0^{\min(x,v)}2\,dy=2\min(x,v)$.

Now we integrate between $0$ and $u$ with respect to $x$: $\int_0^{u}2\min(x,v)\,dx=\int_0^{v}2x\,dx+\int_v^{u}2v\,dx=v^2+2v(u-v)$.

So we have proved that when $u\geq v$, $F(u,v)=2uv-v^2$.

If $u<v$ (i.e. $0<u<v$), we still compute the same integral; but now $y\leq x\leq u<v$, so the constraint $y\leq v$ is automatic: $F(u,v)=\int_0^{u}\int_0^{x}2\,dy\,dx=\int_0^{u}2x\,dx=u^2$.

This proves that $F(u,v)=u^2$ when $u<v$.

Next, compute the marginal densities of $X$ and $Y$.

In general, the marginal density is $f_X(x)=\frac{\partial}{\partial x}F(x,y)$ with $y$ fixed at the top of its range (here $y=1$). Similarly, $f_Y(y)=\frac{\partial}{\partial y}F(x,y)$ with $x$ fixed (here $x=1$).

If $x>y$, $F(x,y)=2xy-y^2$, so fixing $x=1$ gives $f_Y(y)=\frac{\partial}{\partial y}(2y-y^2)=2(1-y)$.

If $x<y$ (the relevant case for $y=1$), $F(x,y)=x^2$, so $f_X(x)=\frac{\partial}{\partial x}x^2=2x$.
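As a sanity check (my own addition, not from the lecture), the whole example can be verified symbolically, e.g. with sympy:

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', positive=True)
f = 2  # joint density on 0 <= y <= x <= 1

# Case u >= v: split the x-integral at v (the inner y-limit is min(x, v)).
F1 = sp.integrate(f, (y, 0, x), (x, 0, v)) + sp.integrate(f, (y, 0, v), (x, v, u))
print(sp.expand(F1))  # 2*u*v - v**2

# Case u < v: the constraint y <= v is automatic.
F2 = sp.integrate(f, (y, 0, x), (x, 0, u))
print(F2)  # u**2

# Marginals, integrating the joint density directly.
print(sp.integrate(f, (y, 0, x)))  # f_X(x) = 2*x on [0, 1]
print(sp.integrate(f, (x, y, 1)))  # f_Y(y) = 2 - 2*y on [0, 1]
```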

Special continuous distributions - Chapter 4

Definition: $X$ is normal with parameters $0$ and $1$ if it has the density $f(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}$.

Facts: $E(X)=0$, $Var(X)=1$.

Notation: $X\sim N(0,1)$.

Definition: $X$ is normal with parameters $\mu$ and $\sigma^2$ if it has the density $f(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$.

Facts: $E(X)=\mu$, $Var(X)=\sigma^2$.

Notation: $X\sim N(\mu,\sigma^2)$.

Proof: this comes from the $N(0,1)$ case by a change of variables.

Fact: let $X\sim N(0,1)$, let $\mu$ and $\sigma$ be fixed. Let $Y=\mu + \sigma X$. Then $Y\sim N(\mu, \sigma^2)$.

Fact: let $X$ and $X'$ be two independent r.v.'s, respectively $N(\mu,\sigma^2)$ and $N(\mu',\sigma'^2)$; then $X+X'\sim N(\mu+\mu',\ \sigma^2+\sigma'^2)$.

Moral of the story: the class of normal r.v.'s is stable under linear combinations.

Example 1

Let $X\sim N(1,1)$, $Y\sim N(0,4)$, $Z\sim N(-2,1)$. Assume they are independent. Let $V=2X+3+Y-4Z$. Find $E(V)$ and $Var(V)$.

A: By the above moral, $V$ should be normal.

$E(V)=2\cdot 1+3+0-4\cdot(-2)=13$, $Var(V)=2^2\cdot 1+1^2\cdot 4+(-4)^2\cdot 1=24$, so $V\sim N(13,24)$.
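A quick simulation (my own check; note that numpy's `normal` takes the standard deviation, not the variance) confirms the answer:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500_000

X = rng.normal(1, 1, N)    # N(1, 1)
Y = rng.normal(0, 2, N)    # N(0, 4): standard deviation 2
Z = rng.normal(-2, 1, N)   # N(-2, 1)

V = 2 * X + 3 + Y - 4 * Z
print(V.mean(), V.var())   # ≈ 13 and ≈ 24
```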

Example 2

Let $X\sim N(\mu,\sigma^2)$, and let $Z=\frac{X-\mu}{\sigma}$,

so $E(Z)=0$ and $Var(Z)=1$, because $Z$ is a linear transformation of $X$; by the stability fact above, $Z\sim N(0,1)$.

Joint normal distribution (multivariate normal)

Definition: the vector $(X_1,X_2,\dots,X_n)$ is normal with mean $\vec{\mu}\in\mathbb{R}^n$ and covariance matrix $C$ (here $\vec{\mu}=\{\mu_i\}_{i=1}^n$, $C=\{c_{ij}\}_{i,j=1}^n$, with $E(X_i)=\mu_i$ and $Cov(X_i,X_j)=c_{ij}$; sometimes people use the letter $Q$ or $\Sigma$ instead of $C$), if its joint density is $f(x)=\frac{1}{(2\pi)^{n/2}\sqrt{\det C}}\exp\left(-\frac{1}{2}(x-\mu)^T C^{-1}(x-\mu)\right)$.

(Notice: $x$ and $\mu$ are column vectors)

The stuff inside $\exp(\cdot)$ is the quadratic form $-\frac{1}{2}(x-\mu)^T C^{-1}(x-\mu)$, which is a scalar.

Simple case: $n=1$, $C=(\sigma^2)$, so $\det(C)=\sigma^2$ and $\frac{1}{\sqrt{\det(C)}}=\frac{1}{\sqrt{\sigma^2}}=\frac{1}{\sigma}$.

So $f(x)=\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$.

This matches the $N(\mu,\sigma^2)$ density $f(x)$ above.

When $n=2$ with independent $X_1$ and $X_2$, $C=\begin{pmatrix}\sigma_1^2 & 0\\ 0 & \sigma_2^2\end{pmatrix}$, and the joint density factors as $f(x_1,x_2)=f_{X_1}(x_1)f_{X_2}(x_2)$.

In general, if $X_1,X_2,\dots,X_n$ are independent normals, $N(\mu_i,\sigma_i^2)$, $i=1,2,\dots,n$, then $C=\mathrm{diag}(\sigma_1^2,\dots,\sigma_n^2)$ and $f(x)=\prod_{i=1}^{n}\frac{1}{\sigma_i\sqrt{2\pi}}e^{-\frac{(x_i-\mu_i)^2}{2\sigma_i^2}}$.

We recognize that if $Cov(X_i,X_j)=0$ and $(X_i,X_j)$ is bivariate normal, then $X_i$ and $X_j$ are independent.

In general, $Cov(X_i,X_j)=0$ does not imply that $X_i$ and $X_j$ are independent. But it does when $(X_i,X_j)$ is bivariate normal.
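A standard counterexample (my illustration, not from the lecture): take $X\sim N(0,1)$ and $Y=SX$, where $S=\pm 1$ is a random sign independent of $X$. Then $Y\sim N(0,1)$ and $Cov(X,Y)=0$, but $X$ and $Y$ are clearly dependent, and $(X,Y)$ is not bivariate normal (e.g. $X+Y=0$ with probability $\frac{1}{2}$):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500_000

X = rng.normal(0, 1, N)
S = rng.choice([-1, 1], N)  # random sign, independent of X
Y = S * X                   # Y ~ N(0,1), uncorrelated with X, but dependent

print(np.cov(X, Y)[0, 1])   # ≈ 0: uncorrelated
print(np.mean(X + Y == 0))  # ≈ 0.5: X + Y is certainly not normal
```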

Question: we want to create a multivariate normal vector $Y$ with covariance matrix $C$ out of a vector $X=(X_1,X_2,\dots,X_n)$ whose entries are i.i.d. $N(0,1)$ (``standard normals'').

(It is easy to generate such an $X$ using the Box-Muller transformation.)
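For reference, a minimal Box-Muller sketch (my own addition, not covered in class): a pair of independent uniforms yields a pair of independent standard normals.

```python
import numpy as np

def box_muller(n, rng):
    """Generate n i.i.d. N(0,1) samples from uniforms via Box-Muller."""
    m = (n + 1) // 2
    u1 = 1.0 - rng.random(m)  # in (0, 1], safe for the log
    u2 = rng.random(m)
    r = np.sqrt(-2.0 * np.log(u1))
    z = np.concatenate([r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)])
    return z[:n]

rng = np.random.default_rng(0)
z = box_muller(100_000, rng)
print(z.mean(), z.var())  # ≈ 0 and ≈ 1
```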

How to create a $Y$ from this $X$?

Answer: recall the notion of the square root of a matrix (here $C$ is positive definite).

(Linear algebra: $M^TM = MM^T = C$)

There exists a matrix $M$ which is ``$\sqrt{C}$’’.

Consider $Z=MX$.

We know (by stability under linear combinations) that $Z$ is multivariate normal.

Note that $E(Z)=M\,E(X)=0$.

What about $Cov(Z)$? Since $E(Z)=0$, $Cov(Z)=E(ZZ^T)=E(MX(MX)^T)=M\,E(XX^T)\,M^T=MIM^T=MM^T=C$.

So we see that we can take $Y=Z=MX$, where $M=``\sqrt{C}''$.

Exercise: for $n=2$, find a square root of a $2\times 2$ covariance matrix. There should be a place in the book where the author does this in hiding, for the purpose of creating a bivariate normal.
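A minimal sketch of the whole construction (my own addition; it uses the Cholesky factor, which is one convenient choice of $M$ with $MM^T=C$, rather than the symmetric square root):

```python
import numpy as np

rng = np.random.default_rng(3)

C = np.array([[2.0, 0.8],
              [0.8, 1.0]])       # an arbitrary positive-definite covariance

M = np.linalg.cholesky(C)        # lower-triangular M with M @ M.T == C

X = rng.standard_normal((2, 500_000))  # i.i.d. N(0,1) entries
Y = M @ X                              # multivariate normal with covariance C

print(np.cov(Y))                 # ≈ C
```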

Gamma distribution - Chapter 4.3

Definition: a r.v. $X$ is Gamma with shape parameter $\alpha$ and scale parameter $\theta$ if it has the density $f(x)=\frac{1}{c\,\theta^{\alpha}}x^{\alpha-1}e^{-x/\theta}$ for $x>0$ (and $0$ otherwise),

where the constant is $c=\Gamma(\alpha)$; if $\alpha=n$ is an integer, $\Gamma(n)=(n-1)!$.

Notation: $X\sim \Gamma(\alpha,\theta)$.

Fact: if $X_1,\dots,X_n$ are independent and $X_i\sim\Gamma(\alpha_i,\theta)$, then $X_1+\cdots+X_n\sim\Gamma(\alpha_1+\cdots+\alpha_n,\ \theta)$.

Consequently, applying the above with $\alpha_1=\alpha_2=\cdots=\alpha_n=1$ (note that $\Gamma(1,\theta)$ is exactly the $\mathrm{Exp}(\lambda=\frac{1}{\theta})$ distribution), the sum of $n$ i.i.d. r.v.'s $\sim\mathrm{Exp}(\lambda=\frac{1}{\theta})$ is a r.v. $\sim\Gamma(n,\theta)$.

Specifically, this proves exactly that the $n$th ``arrival'' time of a Poisson($\lambda$) process is a $\Gamma(n,\frac{1}{\lambda})$ r.v.
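A quick simulation of this last fact (my own addition; the values of $\lambda$ and $n$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
lam, n, trials = 2.0, 5, 200_000  # rate lambda; we look at the n-th arrival

# Sum of n i.i.d. Exp(lambda) interarrival times = the n-th arrival time.
T = rng.exponential(scale=1 / lam, size=(trials, n)).sum(axis=1)

# Gamma(shape=n, scale=1/lambda) has mean n/lambda and variance n/lambda^2.
print(T.mean(), n / lam)      # both ≈ 2.5
print(T.var(), n / lam**2)    # both ≈ 1.25
```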


