Just a quick question about the analytical solution to the simple one-parameter least squares problem, i.e. $y = mx$.
If we assume the error $\sigma$ is the same for all points, it's easy to show that the solutions for $m$ and its variance are:

$$\hat{m} = \frac{\sum_i x_i y_i}{\sum_i x_i^2}, \qquad \operatorname{Var}(\hat{m}) = \frac{\sigma^2}{\sum_i x_i^2}$$
But what happens when we have different errors on the points, so that instead of a single $\sigma$ we have a $\sigma_i$ for each $y_i$? What do these formulas become?
Thanks for your help!
Thanks for your reply. I'm slightly confused though. The result you have is slightly different to mine, because you effectively have (sum of x)^2 rather than (sum of x^2). Also, are you saying that the variance does not change when the individual errors are used? Surely it must?
You have a set of $x_i$, $i = 1, \dots, n$, and the corresponding observed values $y_i$ of the dependent variable. We wish to find an estimate $\hat{m}$ of $m$ which minimises:

$$Q(m) = \sum_{i=1}^{n} \frac{(y_i - m x_i)^2}{\sigma_i^2}$$

where the $\sigma_i^2$ are the known variances of the $y_i$'s (which we assume are independent).
So we solve:

$$\frac{\partial Q}{\partial m} = -2 \sum_{i=1}^{n} \frac{x_i (y_i - m x_i)}{\sigma_i^2} = 0$$

Which has solution:

$$\hat{m} = \frac{\sum_{i=1}^{n} x_i y_i / \sigma_i^2}{\sum_{i=1}^{n} x_i^2 / \sigma_i^2}$$

which has variance:

$$\operatorname{Var}(\hat{m}) = \frac{1}{\sum_{i=1}^{n} x_i^2 / \sigma_i^2}$$
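As a quick consistency check, if all the $\sigma_i$ are equal to a common $\sigma$, these reduce to the equal-error formulas quoted in the original question:

$$\hat{m} = \frac{\sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2}, \qquad \operatorname{Var}(\hat{m}) = \frac{\sigma^2}{\sum_{i=1}^{n} x_i^2}$$

so the variance does change once the individual $\sigma_i$ enter as weights.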
Please check this; it is quite likely that I have an error somewhere.
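If it helps, here is a minimal numerical sketch of the weighted estimator above (it assumes NumPy is available; the true slope, the $x_i$ values, and the per-point errors are just illustrative choices). It refits many simulated data sets and compares the empirical spread of $\hat{m}$ with the predicted variance $1/\sum_i x_i^2/\sigma_i^2$:

import numpy as np

rng = np.random.default_rng(0)

m_true = 2.5                      # illustrative true slope
x = np.linspace(1.0, 10.0, 20)    # fixed x_i values (illustrative)
sigma = 0.5 + 0.1 * x             # per-point errors sigma_i (illustrative)

# Predicted variance of the weighted estimate: 1 / sum(x_i^2 / sigma_i^2)
var_pred = 1.0 / np.sum(x**2 / sigma**2)

# Monte Carlo: refit many simulated data sets and look at the spread of m_hat
m_hats = []
for _ in range(10000):
    y = m_true * x + rng.normal(0.0, sigma)   # y_i drawn with error sigma_i
    m_hats.append(np.sum(x * y / sigma**2) / np.sum(x**2 / sigma**2))

print("predicted variance:", var_pred)
print("empirical variance:", np.var(m_hats))

The two numbers should agree to within Monte Carlo noise, which is a handy way to convince yourself the formulas are right.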
Thank you both for your help with this. The results derived by Captain Black are the ones I was looking for, and they agree with what I have now derived myself, so it looks as if they are right!
Sorry for any confusion, Jake. I didn't really understand either my setup or derivation until now!