Hello,

I was wondering how to derive the asymptotic expansion of the following function:

where

is the probability function of the i-th event, say.

I don't even know where to start, or if asymptotic expansion can be done.

Printable View

- February 15th 2010, 11:19 PMEmpSciAsymptotic expansion of a discrete function
Hello,

I was wondering how to derive the asymptotic expansion of the following function:

where

is the probability function of the i-th event, say.

I don't even know where to start, or if asymptotic expansion can be done. - February 16th 2010, 12:47 AMLaurent
- February 16th 2010, 10:22 AMEmpSci
Well, if you have a set of N elements, and p_i is the probability of the i-th element in this set, then that's how you could relate p_i. In fact, p_i=P(X=some value in the set). If you want, you can disregard N, I just need to see how such an expansion could be done.

- February 19th 2010, 11:38 PMEmpSci
I believe I should be more explicit in what I am looking for.

I have the following discrete function:

where is the probability of the object in the indexed set . Consider this probability distribution to be a theoretical distribution.

Now, from some experiments, say that I have collected the empirical probability distribution in order to build the following function:

Obviously, these two functions are mathematical expectations on and , wherein the first function, , is the theoretical expectation, while the second function, , is the observed expectation. Therefore, I wanted to calculate the error involved in terms of the asymptotic expansion of the difference in observed and theoretical probabilities.

That is, given the error term, , I am looking for an asymptotic expansion on in order to provide an asymptotic argument on the discrepancy between the two expectations above.

Hence, I am looking to express the error in terms of some asymptotic series so that it can be the asymptotic expansion of function . In other words, I want to find that series so that:

I don't even know how to start looking for . Any insights would be encouraging for me to explore further. - February 20th 2010, 03:30 AMLaurent
- February 20th 2010, 02:10 PMEmpSci
Thank you, Laurent.

Apparently, in the second "more precise" expansion, the linear terms seem to disappear. Does that imply that the discrepancy is negligible? - February 20th 2010, 03:42 PMLaurent
- February 20th 2010, 04:09 PMEmpSci
I mean, how would you interpret the right-hand side of the inequality? Could you write that using Big-O notation such as the following:

- February 20th 2010, 09:38 PMEmpSci
Ok, here's what I came up with.

The forward difference that you consider:

is the first-order Newton series approximation of the xlogx function:

Then, from this, we approximated function E = S - H as:

Now, we may consider approximating the expansion using n-th order Newton series approximations. That is, define the n-th order forward difference as:

where C(n,i) is the combination "n choose i".

Then, we to obtain the n-th order derivative approximation of f, we have:

Or, we may write:

And, the general expression for our discrepancy function E = S - H becomes:

as , naturally.

So, for instance, for , and using the function , we have the second-order approximation of E equal to:

How would you comment on this approach? I am really interested on your insights and suggestions. - February 20th 2010, 11:40 PMEmpSci
Also, by definition is equivalent to stating that .

Now, shouldn't the little-o in our functions above be equal to:

for the n-th order approximation, so that it complies with the definition of asymptotic equivalence? - February 21st 2010, 04:56 AMLaurent
I don't know about this Newton approximation, but there is obviously something wrong with your formulas: I guess you have to sum the approximation terms for up to the order you want. For Taylor approximation, for instance, , and not just , of course.

Anyway, I would be very surprised if you need third order approximation or beyond. - February 21st 2010, 01:48 PMEmpSci
Yes, by definition, I need to sum up the terms. Thanks for your input, though a comment of yours on how to interpret, say, the second-order approximation would also be appreciated.