I have a question regarding confidence levels.
I am studying the favorite-longshot bias in betting out of curiosity. The bias means that favorites are underbet and therefore offer a better rate of return than bets on longshots.
The methodology for studying the bias was developed by Snyder (1978). He divided the odds into groups based on arbitrary intervals, then examined how many winners there were in each odds group relative to the total number of observations in that group. Specifically, he defines the rate of return as
RR = [W(O+1)-N] / N
where W is the ex-post number of winning bets/selections in each group, N is the total number of bets/selections in that group, and O denotes the odds.
I have a dataset of ~2,000 soccer matches with pre-match odds (in decimal format, e.g. 2.00). I have no trouble calculating the rate of return for each odds group. However, I do not understand how to calculate t-statistics for the rates of return the way Snyder (1978) does.
So my question is: how can I calculate t-statistics for the rates of return, to test whether they differ significantly from 0 (or from the negative of the bookmaker's commission)?
The reference is:
Snyder, W. W. (1978). Horse racing: Testing the efficient markets model. The Journal of Finance.
Thanks for the help!