# Thread: unbiased estimator

1. ## unbiased estimator

Let $\displaystyle Y_1, \ Y_2, \ \dotso, \ Y_n$ be a random sample of size $\displaystyle n$ from a normal population with mean $\displaystyle \mu$ and variance $\displaystyle \sigma^2$. Assuming $\displaystyle n=2k$ for some integer $\displaystyle k$, one possible estimator for $\displaystyle \sigma^2$ is given by:

$\displaystyle \hat{\sigma}^2=\frac{1}{2k}\sum^k_{i=1} (Y_{2i}-Y_{2i-1})^2$

Show that $\displaystyle \hat{\sigma}^2$ is an unbiased estimator for $\displaystyle \sigma^2$.
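For concreteness, the estimator pairs consecutive observations and averages the squared within-pair differences. A minimal sketch (the helper name `sigma2_hat` is my own, not from the problem):

```python
import numpy as np

def sigma2_hat(y):
    """Pairwise estimator: (1/(2k)) * sum_{i=1}^{k} (Y_{2i} - Y_{2i-1})^2."""
    k = len(y) // 2
    d = y[1:2 * k:2] - y[0:2 * k:2]   # differences Y_{2i} - Y_{2i-1}, i = 1..k
    return np.sum(d ** 2) / (2 * k)

# For y = (0, 2, 1, 3): k = 2, both differences equal 2, so the estimate is 8/4 = 2
print(sigma2_hat(np.array([0.0, 2.0, 1.0, 3.0])))  # → 2.0
```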

Attempt:

So when I expand it out, it gives:

$\displaystyle \hat{\sigma}^2=\frac{1}{2k}\sum^k_{i=1} \left( Y_{2i}^2-2 Y_{2i} Y_{2i-1}+ Y_{2i-1}^2 \right)$ That term in the middle is throwing me off.

I know that I eventually have to evaluate

$\displaystyle E \bigg[\frac{1}{2k}\sum^k_{i=1} \left( Y_{2i}^2-2 Y_{2i} Y_{2i-1}+ Y_{2i-1}^2 \right) \bigg]$

using the fact that $\displaystyle E[Y^2] = V[Y]+(E[Y])^2 = \sigma^2+\mu^2$.

so taking that into consideration I would get:

$\displaystyle E \bigg[\frac{1}{2k}\sum^k_{i=1} \left( Y_{2i}^2-2 Y_{2i} Y_{2i-1}+ Y_{2i-1}^2 \right) \bigg] = E\bigg[ \frac{1}{2k} \sum^k_{i=1}Y_{2i}^2\bigg] -E\bigg[ \frac{1}{2k}\sum^k_{i=1}2 Y_{2i} Y_{2i-1}\bigg]$ $\displaystyle + E\bigg[ \frac{1}{2k} \sum^k_{i=1}Y_{2i-1}^2\bigg]$

$\displaystyle \frac{k}{2} E\bigg{[} \overline{Y}_{2i}^2\bigg{]} -E\bigg{[} (\mu_{2i} \cdot \mu_{2i-1})\bigg{]}$ $\displaystyle + \frac{k}{2} E\bigg{[} \overline{Y}_{2i-1}^2\bigg{]}$

I'm not sure if I should keep the subscripts $\displaystyle 2i$ and $\displaystyle 2i-1$, or if it should just be:

$\displaystyle \frac{k}{2} E\bigg{[} \overline{Y}^2\bigg{]} -E\bigg{[} \frac{1}{2} 2(\mu \cdot \mu)\bigg{]}$ $\displaystyle + \frac{k}{2} E\bigg{[} \overline{Y}^2\bigg{]}$ $\displaystyle = {k} E\bigg{[} \overline{Y}^2\bigg{]} -E\bigg{[}(\mu^2)\bigg{]}$ although this doesn't seem correct.

$\displaystyle =k\bigg{(} \frac{\sigma^2}{k}+\mu^2\bigg{)} -\bigg{(} {\sigma^2}+\mu^2\bigg{)}= \mu^2(k-1)$.

2. Originally Posted by lllll
[...]
$\displaystyle E \bigg[\frac{1}{2k}\sum^k_{i=1} \left( Y_{2i}^2-2 Y_{2i} Y_{2i-1}+ Y_{2i-1}^2 \right) \bigg] = E\bigg[ \frac{1}{2k} \sum^k_{i=1}Y_{2i}^2\bigg] -E\bigg[ \frac{1}{2k}\sum^k_{i=1}2 Y_{2i} Y_{2i-1}\bigg]$ $\displaystyle + E\bigg[ \frac{1}{2k} \sum^k_{i=1}Y_{2i-1}^2\bigg]$

Mr F says: What follows is wrong. See below.

$\displaystyle \frac{k}{2} E\bigg{[} \overline{Y}_{2i}^2\bigg{]} -E\bigg{[} (\mu_{2i} \cdot \mu_{2i-1})\bigg{]}$ $\displaystyle + \frac{k}{2} E\bigg{[} \overline{Y}_{2i-1}^2\bigg{]}$

I'm not sure if I should have the bottom indicator or if it should just be:

$\displaystyle \frac{k}{2} E\bigg{[} \overline{Y}^2\bigg{]} -E\bigg{[} \frac{1}{2} 2(\mu \cdot \mu)\bigg{]}$ $\displaystyle + \frac{k}{2} E\bigg{[} \overline{Y}^2\bigg{]}$ $\displaystyle = {k} E\bigg{[} \overline{Y}^2\bigg{]} -E\bigg{[}(\mu^2)\bigg{]}$ although this doesn't seem correct.

$\displaystyle =k\bigg{(} \frac{\sigma^2}{k}+\mu^2\bigg{)} -\bigg{(} {\sigma^2}+\mu^2\bigg{)}= \mu^2(k-1)$.
The correct continuation is:

$\displaystyle E \left[ \frac{1}{2k} \sum^k_{i=1}Y_{2i}^2\right] -E\left[ \frac{1}{2k}\sum^k_{i=1}2(Y_{2i} \cdot Y_{2i-1})\right] + E\left[ \frac{1}{2k} \sum^k_{i=1}Y_{2i-1}^2\right]$

$\displaystyle = \frac{1}{2k} \sum^k_{i=1} E \left[Y_{2i}^2\right] - \frac{1}{k}\sum^k_{i=1} E\left[ Y_{2i} \cdot Y_{2i-1} \right] + \frac{1}{2k} \sum^k_{i=1} E\left[ Y_{2i-1}^2\right]$

$\displaystyle = \frac{1}{2k} \cdot k \left( \sigma^2 + \mu^2 \right) - \frac{1}{k}\sum^k_{i=1} E[ Y_{2i}] \cdot E[Y_{2i-1}] + \frac{1}{2k} \cdot k \left( \sigma^2 + \mu^2 \right)$

Note 1: $\displaystyle Var(Y) = E(Y^2) - [E(Y)]^2 \Rightarrow \sigma^2 = E(Y^2) - \mu^2 \Rightarrow E(Y^2) = \sigma^2 + \mu^2$.

Note 2: The middle term reduces to this because the $\displaystyle Y_i$'s are independent.

$\displaystyle = \sigma^2 + \mu^2 - \frac{1}{k}\sum^k_{i=1} \mu \cdot \mu$

$\displaystyle = \sigma^2 + \mu^2 - \frac{1}{k} \cdot k \mu^2 = \sigma^2$.
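The algebra above can be spot-checked by simulation: the average of the estimator over many samples should land near $\displaystyle \sigma^2$, not near the $\displaystyle \mu^2(k-1)$ obtained in the attempt. A sketch with parameter values of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma2, k, reps = 5.0, 4.0, 10, 200_000

# reps independent samples, each of size n = 2k
y = rng.normal(mu, np.sqrt(sigma2), size=(reps, 2 * k))
d = y[:, 1::2] - y[:, 0::2]                  # Y_{2i} - Y_{2i-1}, i = 1..k
estimates = np.sum(d ** 2, axis=1) / (2 * k)

print(estimates.mean())   # close to sigma^2 = 4
```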

3. If I wanted to show that $\displaystyle \hat{\sigma}^2$ is a consistent estimator for $\displaystyle \sigma^2$, how would I go about it?

4. Originally Posted by lllll
If I wanted to show that $\displaystyle \hat{\sigma}^2$ is a consistent estimator for $\displaystyle \sigma^2$, how would I go about it?
From my reply in this thread http://www.mathhelpforum.com/math-he...tml#post232762 it's known that $\displaystyle Var((Y_1 - Y_2)^2) = 8 \sigma^4$.
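That fact follows because $\displaystyle Y_1 - Y_2 \sim N(0, 2\sigma^2)$, so $\displaystyle (Y_1-Y_2)^2$ is $\displaystyle 2\sigma^2$ times a $\displaystyle \chi^2_1$ variable and has variance $\displaystyle (2\sigma^2)^2 \cdot 2 = 8\sigma^4$. A quick numerical check (parameter values are my own):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma2 = 4.0  # so 8 * sigma^4 = 128

# one million independent pairs (Y1, Y2)
y1 = rng.normal(3.0, np.sqrt(sigma2), size=1_000_000)
y2 = rng.normal(3.0, np.sqrt(sigma2), size=1_000_000)

v = ((y1 - y2) ** 2).var()
print(v, 8 * sigma2 ** 2)   # empirical variance vs theoretical 128
```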

Therefore $\displaystyle Var(\hat{\sigma}^2) = \frac{1}{4k^2} \left( k [8 \sigma^4]\right) = \frac{2 \sigma^4}{k} = \frac{4 \sigma^4}{n}$.

Since $\displaystyle \hat{\sigma}^2$ is an unbiased estimator, it's sufficient to show that $\displaystyle \lim_{n \rightarrow +\infty} Var(\hat{\sigma}^2) = 0$ to prove consistency.
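The $\displaystyle 4\sigma^4/n$ rate can also be seen by simulating the estimator's variance at increasing sample sizes; it shrinks toward 0 as $\displaystyle n \rightarrow \infty$. A sketch with parameter values of my own:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2, reps = 4.0, 5_000

for k in (5, 50, 500):
    n = 2 * k
    y = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
    d = y[:, 1::2] - y[:, 0::2]
    est = np.sum(d ** 2, axis=1) / (2 * k)
    # empirical variance vs the theoretical 4 * sigma^4 / n
    print(n, est.var(), 4 * sigma2 ** 2 / n)
```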