1. Linear predictors

The best linear predictor of Y with respect to $\displaystyle X_1$ and $\displaystyle X_2$ is equal to $\displaystyle a + bX_1 + cX_2$, where a, b and c are chosen to minimise

$\displaystyle E[(Y - (a +bX_1 +cX_2))^2]$

I get
$\displaystyle E[Y^2 -2aY - 2bX_1Y - 2cX_2Y + a^2 + 2abX_1 +2acX_2 +2bcX_1X_2 + b^2X_1^2 + c^2X_2^2]$

but I cannot see where to go next.
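Since no one in the thread wrote it out, here is a quick symbolic check (using sympy; this is my addition, not part of the original posts) that the expansion above is term-by-term correct:

```python
import sympy as sp

Y, X1, X2, a, b, c = sp.symbols('Y X1 X2 a b c')

# the squared error before expanding
expr = (Y - (a + b*X1 + c*X2))**2
expanded = sp.expand(expr)

# the ten terms exactly as written in the post above
manual = (Y**2 - 2*a*Y - 2*b*X1*Y - 2*c*X2*Y + a**2
          + 2*a*b*X1 + 2*a*c*X2 + 2*b*c*X1*X2
          + b**2*X1**2 + c**2*X2**2)

assert sp.simplify(expanded - manual) == 0  # the expansion checks out
```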

2. Differentiate?

(I don't actually know that that's the next step, but you'll have to do it sooner or later.)

To elaborate, you need to differentiate with respect to a, b, and c and set each derivative equal to 0. That gives you three simultaneous equations to solve in three variables.
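As a sketch of that step (the sympy code and the moment names `EY`, `EX1Y`, etc. are my own, not from the thread): push the expectation through the expansion by linearity, treat the moments as known constants, then differentiate with respect to a, b, c and solve the resulting linear system.

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
# treat the moments of Y, X1, X2 as known constants (symbols here)
EY, EX1, EX2 = sp.symbols('EY EX1 EX2')
EY2, EX1sq, EX2sq = sp.symbols('EY2 EX1sq EX2sq')
EX1Y, EX2Y, EX1X2 = sp.symbols('EX1Y EX2Y EX1X2')

# E[(Y - a - b X1 - c X2)^2], written out moment by moment
mse = (EY2 - 2*a*EY - 2*b*EX1Y - 2*c*EX2Y + a**2
       + 2*a*b*EX1 + 2*a*c*EX2 + 2*b*c*EX1X2
       + b**2*EX1sq + c**2*EX2sq)

# set the three partial derivatives to zero and solve for a, b, c
sol = sp.solve([sp.diff(mse, v) for v in (a, b, c)], (a, b, c))
```

The equation from the derivative in a gives $\displaystyle a = E[Y] - bE[X_1] - cE[X_2]$; substituting that into the other two reduces the problem to two equations in b and c involving only covariances.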

3. Would you not do something like
$\displaystyle E[Y^2]-aE[Y]+......$

I know this is wrong, but I am unsure why.

4. I hope you mean $\displaystyle E[Y^2] - 2aE[Y] \dots$

I would finish that, then differentiate.

5. I think I can help, but please include all details of the problem. You haven't specified whether $\displaystyle X_1$ and $\displaystyle X_2$ are fixed, or whether they are random and we are conditioning on them (which ties into the fact that we are doing prediction), among other things. If they are fixed, then letting $\displaystyle \mu = \mbox E[Y]$ and $\displaystyle \sigma^2 = \mbox{Var}[Y]$ gives

$\displaystyle \displaystyle \mbox E[(Y - a - bX_1 - cX_2)^2] = \sigma^2 + (\mu - a - bX_1 - cX_2)^2$

so it suffices to minimize $\displaystyle (\mu - a - bX_1 - cX_2)^2$ with respect to a, b, c. But please post all details.

6. The question in my textbook is exactly as I stated it at the start of this thread.

7. Well, that can't be true, because the OP doesn't even have a question in it. If that is all the textbook says, you should provide more background. Is there any more context at all you can provide? Depending on what $\displaystyle \mbox E[Y|X_1, X_2]$ is, it could be as simple as choosing a, b, and c so that $\displaystyle \mbox E[Y|X_1, X_2] = a + bX_1 + cX_2$, or possibly much more complicated.

Just as an example, suppose X and Y are iid, with variance 1 and mean mu. Then $\displaystyle \mbox E[(Y - a - bX)^2] = 1 + \mbox E[(\mu - a - bX)^2]$ so that the best linear predictor of Y is equal to the best linear estimator of $\displaystyle \mu$ based on a sample of $\displaystyle X$. I don't think this exists in general. If you try to use calculus to solve this, however, it ends up telling you that $\displaystyle a = \mu$ and $\displaystyle b = 0$, which kind of defeats the purpose. So just taking derivatives isn't going to help you here.

I think you have to be missing details because, as stated, I don't think the question is solvable without further assumptions. In particular, you need some of the parameters to be known. If you can assume that you know some of the parameters, then you can follow the outline here:

http://math.tntech.edu/ISR/Introduct...newnode12.html

but I honestly don't see the point, since in practice you don't know the parameters.
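For what it's worth, here is a numerical sketch of the standard approach when the population moments are replaced by sample moments (the data-generating process below is invented purely for illustration): the slopes solve the covariance normal equations, and the result agrees with ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X1 = rng.normal(size=n)
X2 = 0.5 * X1 + rng.normal(size=n)            # correlated regressors
Y = 1.0 + 2.0 * X1 - 3.0 * X2 + rng.normal(size=n)

# sample-moment version of the best linear predictor:
# [b, c] solves Cov(X) [b, c]' = Cov(X, Y),  a = mean(Y) - b mean(X1) - c mean(X2)
X = np.column_stack([X1, X2])
cov = np.cov(X, rowvar=False)
cov_xy = np.array([np.cov(X1, Y)[0, 1], np.cov(X2, Y)[0, 1]])
b, c = np.linalg.solve(cov, cov_xy)
a = Y.mean() - b * X1.mean() - c * X2.mean()

# cross-check against ordinary least squares with an intercept column
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)
```

The two solutions coincide because the (n - 1) divisor in the sample covariances cancels when solving the linear system, so the moment formula and least squares give the same a, b, c.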