The best linear predictor of Y with respect to X_1 and X_2 is equal to a + bX_1 + cX_2, where a, b and c are chosen to minimise E[(Y - a - bX_1 - cX_2)^2].
I can expand the expectation, but cannot see where to go next.
I think I can help, but please include all details of the problem. You haven't specified whether X_1 and X_2 are fixed, or if we are conditioning on them but they are random, etc. (which ties into the fact that we are doing prediction), among other things. If they are fixed, then letting m = a + bX_1 + cX_2 and mu = E[Y] gives

E[(Y - m)^2] = Var(Y) + (mu - m)^2,

so it suffices to minimize (mu - m)^2 with respect to a, b, c. But please post all details.
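To make the fixed-design point concrete, here is a quick numerical sketch (the numbers for Var(Y), E[Y], and the fixed design points are made up for illustration). With X_1 = x_1 and X_2 = x_2 held fixed, the MSE decomposes as Var(Y) + (E[Y] - a - b*x1 - c*x2)^2, so any (a, b, c) with a + b*x1 + c*x2 = E[Y] attains the minimum value Var(Y) — the minimizer is not unique:

```python
# Illustration values (made up, not from the thread):
var_y = 2.0         # Var(Y)
mu_y = 5.0          # E[Y]
x1, x2 = 1.0, 3.0   # fixed design points

def mse(a, b, c):
    # E[(Y - a - b*x1 - c*x2)^2] = Var(Y) + (E[Y] - a - b*x1 - c*x2)^2
    # when x1, x2 are fixed constants
    return var_y + (mu_y - a - b * x1 - c * x2) ** 2

# Two different solutions of a + b*x1 + c*x2 = mu_y:
sol1 = (mu_y, 0.0, 0.0)   # a = mu_y, b = c = 0
sol2 = (0.0, 2.0, 1.0)    # 0 + 2*1 + 1*3 = 5 = mu_y

print(mse(*sol1), mse(*sol2))  # both equal Var(Y) = 2.0
```

So in the fixed case, minimizing pins down only the fitted value a + b*x1 + c*x2, not the individual coefficients.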
Well, that can't be true, because the OP doesn't even have a question in it. If that is all the textbook says, you should provide more background. Is there any more context at all you can provide? Depending on what it is, this could be as simple as choosing a, b, and c so that E[a + bX_1 + cX_2] = E[Y], or possibly much more complicated.
Just as an example, suppose X and Y are iid with variance 1 and mean mu. Then E[(Y - a - bX)^2] = 1 + b^2 + (a + b*mu - mu)^2, so that the best linear predictor of Y is equal to the best linear estimator of mu based on a sample of size one. I don't think this exists in general. If you try to use calculus to solve this, however, it ends up telling you that b = 0 and a = mu, which kind of defeats the purpose, since mu is unknown. So just taking derivatives isn't going to help you here.
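A quick numerical check of that example (gradient descent on the closed-form MSE; mu = 3 is an arbitrary choice just for the demo): minimizing 1 + b^2 + (a + b*mu - mu)^2 does indeed drive (a, b) to (mu, 0), i.e. the "optimal" intercept is the very mean you don't know.

```python
mu = 3.0  # arbitrary "unknown" mean, picked only for this illustration

def mse(a, b):
    # E[(Y - a - b*X)^2] when X, Y are iid with mean mu, variance 1:
    # Var(Y) + b^2 * Var(X) + (E[Y] - a - b*E[X])^2
    return 1.0 + b**2 + (a + b * mu - mu) ** 2

# Simple gradient descent on (a, b)
a, b = 0.0, 1.0
lr = 0.05
for _ in range(2000):
    grad_a = 2 * (a + b * mu - mu)
    grad_b = 2 * b + 2 * (a + b * mu - mu) * mu
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # converges to (mu, 0): the predictor requires knowing mu
```

Which is exactly the "defeats the purpose" point: the calculus answer is only usable if mu is already known.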
I think you have to be missing details because, as stated, I don't think the question is solvable without further assumptions. In particular, you need some of the parameters to be known. If you can assume that you know some of the parameters, then you can follow the outline here:
http://math.tntech.edu/ISR/Introduct...newnode12.html
but I honestly don't see the point since in practice you don't know the parameters.
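For completeness, the known-parameter outline boils down to the usual population formulas: slopes b = Var(X)^{-1} Cov(X, Y) and intercept a = E[Y] - b'E[X]. Here is a sketch on simulated data (all parameter values made up), using plug-in sample moments and cross-checked against least squares:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with made-up parameters (for illustration only)
n = 100_000
x1 = rng.normal(1.0, 1.0, n)
x2 = 0.5 * x1 + rng.normal(0.0, 1.0, n)
y = 2.0 + 1.5 * x1 - 0.7 * x2 + rng.normal(0.0, 1.0, n)

X = np.column_stack([x1, x2])

# Plug-in version of the known-parameter formulas:
#   slopes    b = Var(X)^{-1} Cov(X, Y)
#   intercept a = E[Y] - b . E[X]
S = np.cov(X, rowvar=False)  # 2x2 covariance matrix of (X1, X2)
s = np.array([np.cov(x1, y)[0, 1], np.cov(x2, y)[0, 1]])
b = np.linalg.solve(S, s)
a = y.mean() - b @ X.mean(axis=0)

# Cross-check against least squares with an intercept column
coef, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), X]), y, rcond=None)
print(np.allclose([a, *b], coef))  # the two routes agree
```

Of course, this only moves the problem: the plug-in moments are themselves estimates, which is exactly the objection above about not knowing the parameters in practice.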