Upper bound on an expected value

May 2010
Hi

I found a proof, but I cannot follow it. The critical part is the following:

\(\displaystyle \mathbb{E}\left[\frac 1 n \sum_{i=1}^n x_i \left(1-\frac 1 n \sum_{i=1}^n w_i +\left(\frac 1 n \sum_{i=1}^n w_i\right)^2-\left(\frac 1 n \sum_{i=1}^n w_i\right)^3+...\right)\right]-X\)
where
\(\displaystyle \mathbb{E}\left[x_i\right]=X, \mathbb{E}\left[w_i\right]=0\)

I need an approximation / upper bound in terms of n. The authors assert that it is of order \(\displaystyle n^{-1}\). Can anyone explain this?
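The only step I can make on my own is the first-order term: if the pairs \(\displaystyle (x_i,w_i)\) are i.i.d. (which I am assuming here), then \(\displaystyle \mathbb{E}\left[\frac 1 n \sum_{i=1}^n x_i\right]=X\) cancels against the \(-X\), and the next term is

\(\displaystyle \mathbb{E}\left[\left(\frac 1 n \sum_{i=1}^n x_i\right)\left(\frac 1 n \sum_{i=1}^n w_i\right)\right]=\frac 1{n^2}\sum_{i=1}^n\sum_{j=1}^n \mathbb{E}\left[x_i w_j\right]=\frac{\mathrm{Cov}(x_1,w_1)}{n}\)

because \(\displaystyle \mathbb{E}\left[x_i w_j\right]=X\cdot 0=0\) for \(\displaystyle i\neq j\). So that term is of order \(\displaystyle n^{-1}\), but I do not see how to handle the higher powers \(\displaystyle \left(\frac 1 n \sum_{i=1}^n w_i\right)^2, \left(\frac 1 n \sum_{i=1}^n w_i\right)^3,\dots\)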

Thanking you in anticipation!
 
May 2010
how about some more information?

Like the preceding line, and what it is you are trying to find the expected value of.
 
May 2010
What I'm trying to do is estimate the bias of a self-normalized importance sampler, that is
\(\displaystyle \hat{\mu}=\frac{\sum_{i=1}^n w(z_i) h(z_i)}{\sum_{i=1}^n w(z_i) }\)

For the sake of simplicity, let's write it as
\(\displaystyle \hat{\mu}=\frac{\sum_{i=1}^n x_i}{\sum_{i=1}^n y_i}\)

The bias is given by
\(\displaystyle Bias=\mathbb{E}[\hat{\mu}]-X\)

A Taylor expansion of \(\displaystyle \frac x y\) around the expected values \(\displaystyle X\) and \(\displaystyle 1\) leads to the equation given above.
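Written out, the step I mean is the following (with \(\displaystyle y_i=1+w_i\), so that \(\displaystyle \mathbb{E}[w_i]=0\), and assuming \(\displaystyle \left|\frac 1 n \sum_{i=1}^n w_i\right|<1\) so that the geometric series converges):

\(\displaystyle \hat{\mu}=\frac{\frac 1 n \sum_{i=1}^n x_i}{1+\frac 1 n \sum_{i=1}^n w_i}=\frac 1 n \sum_{i=1}^n x_i\left(1-\frac 1 n \sum_{i=1}^n w_i+\left(\frac 1 n \sum_{i=1}^n w_i\right)^2-\left(\frac 1 n \sum_{i=1}^n w_i\right)^3+\dots\right)\)

which is exactly the expression in my first post.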

I'm interested in an estimate depending on \(\displaystyle n\). There are some books asserting that
\(\displaystyle |Bias|\leq \frac C n\)
for some constant \(\displaystyle C \geq 0\).

Can you give me a more detailed explanation?
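
Edit: in case it helps, here is a quick numerical check I put together (Python, with a toy example I made up myself: target \(\displaystyle N(0,1)\), proposal \(\displaystyle N(1,1)\), \(\displaystyle h(z)=z\), so the true value is \(\displaystyle 0\)). The measured bias does seem to shrink roughly like \(\displaystyle 1/n\), i.e. \(\displaystyle n\cdot Bias\) stays roughly constant, although of course this is only a sanity check and not a proof.

[code]
# Toy numerical check (not a proof): bias of the self-normalized importance
# sampler for a made-up example.  Target p = N(0,1), proposal q = N(1,1),
# h(z) = z, so the true value is E_p[h] = 0 and the weights are
# w(z) = p(z)/q(z) = exp(1/2 - z), which satisfy E_q[w] = 1.
import numpy as np

rng = np.random.default_rng(0)

def snis_estimate(n):
    """One self-normalized importance sampling estimate from n proposal draws."""
    z = rng.normal(loc=1.0, scale=1.0, size=n)   # draws from the proposal q
    w = np.exp(0.5 - z)                          # importance weights p(z)/q(z)
    return np.sum(w * z) / np.sum(w)             # sum_i w(z_i) h(z_i) / sum_i w(z_i)

reps = 20000                                     # replications to average out the Monte Carlo noise
for n in [100, 200, 400, 800, 1600]:
    est = np.array([snis_estimate(n) for _ in range(reps)])
    bias = est.mean() - 0.0                      # true value is 0 for this example
    print(f"n = {n:5d}   bias ~ {bias:+.5f}   n * bias ~ {n * bias:+.3f}")
[/code]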