This one really has me stumped.
I have a variable X that takes the values 0 and 1. I have a sample with n1 observations where X=0 and n2 observations where X=1. Ybar1 is the mean of Y for the n1 observations where X=0; Ybar2 is the mean of Y for the n2 observations where X=1. For the OLS regression Y = alpha + beta*X + e, I need to somehow express the estimates of alpha and beta in terms of n1 and n2.
This is what I think I know:
When X=0, the model reduces to Y = alpha + e, so for the n1 cases the fitted value is alpha — I think that means the estimate of alpha should equal Ybar1.
When X=1, the model is Y = alpha + beta + e, so for the n2 cases the fitted value is alpha + beta — so the estimates should satisfy alpha + beta = Ybar2.
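If I push that reasoning through the usual OLS formulas (this is my sketch, so I may have it wrong — I'm using the standard slope formula Sxy/Sxx and the fact that Xbar = n2/n for a 0/1 dummy, with n = n1 + n2):

```latex
% For a 0/1 dummy X, \bar{X} = n_2/n where n = n_1 + n_2.
\hat{\beta} = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}
                   {\sum_{i=1}^{n} (X_i - \bar{X})^2}
            = \bar{Y}_2 - \bar{Y}_1,
\qquad
\hat{\alpha} = \bar{Y} - \hat{\beta}\,\bar{X} = \bar{Y}_1 .
```

So if this is right, the two conditions above (alpha = Ybar1 and alpha + beta = Ybar2) pin down both estimates.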
I asked my professor for help and he wrote: "You can show it with matrices or with the scalar version of the bivariate regression model. If you choose the latter, split your data and do the computations where cases 1...n1 are the n1 cases where X is coded as 0 and cases n1+1...n are the n2 cases where X is coded as 1. You are basically splitting up the summation of the covariances and variances over these two subsets of data cases" — but I honestly do not know what any of that really means.
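If I understand the hint correctly, he means computing the covariance and variance sums by summing separately over the X=0 and X=1 cases. I tried checking this numerically with a small made-up dataset (the n1, n2, and Y values here are invented just to test the idea):

```python
import numpy as np

# Hypothetical dataset: n1 cases with X=0, n2 cases with X=1
rng = np.random.default_rng(0)
n1, n2 = 5, 7
y1 = rng.normal(2.0, 1.0, n1)   # Y values for the cases where X = 0
y2 = rng.normal(5.0, 1.0, n2)   # Y values for the cases where X = 1

x = np.concatenate([np.zeros(n1), np.ones(n2)])
y = np.concatenate([y1, y2])

# OLS slope and intercept from the covariance/variance sums;
# these sums split cleanly over the two subsets of cases
xbar, ybar = x.mean(), y.mean()
sxy = np.sum((x - xbar) * (y - ybar))
sxx = np.sum((x - xbar) ** 2)
beta = sxy / sxx
alpha = ybar - beta * xbar

# Check against the group means: alpha should equal Ybar1,
# and beta should equal Ybar2 - Ybar1
print(np.isclose(alpha, y1.mean()))
print(np.isclose(beta, y2.mean() - y1.mean()))
```

On this test data both checks come out True, which at least agrees with the idea that the intercept is the X=0 group mean and the slope is the difference in group means.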