I am a bit confused about averaging differences and am hoping someone on here can point me in the right direction.
A particular system can be described using 3 parameters that have fixed, known values Xk, Yk and Zk.
When this system is modelled these 3 parameters are predicted to have values Xp, Yp, and Zp. The model in use is dependent upon two variables, A & B. My overall task is to find values of A & B for which:
Xp = Xk
Yp = Yk
Zp = Zk
In the first stage of this task, I look at how the difference between the predicted (subscript p) and known (subscript k) values of X, Y and Z varies as a function of A & B: only if the difference is zero do the values of the A & B pair get taken to the second stage of the task. It is here that my understanding falters and my question arises.
If I consider each of the parameters X, Y and Z separately, finding the difference between the predicted and known values and then identifying when it is zero is trivial. However, considering the parameters separately does not realistically represent the system, and it would 'be better' to consider all three at once - by finding the mean difference and identifying when this is zero. But consider the following:
                     X     Y     Z
Known values:        5     5     5
Predicted values:  -95     5   105
Difference:       -100     0   100
This gives a mean difference of zero, but the predictions are obviously far from what is required, so these A and B values should not be accepted.
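To make the cancellation concrete, here is a quick sketch in Python using the numbers from the table above (the variable names are just my own labels):

```python
# Known and predicted values for X, Y, Z from the example above
known     = [5, 5, 5]       # Xk, Yk, Zk
predicted = [-95, 5, 105]   # Xp, Yp, Zp

# Signed differences between predicted and known values
diffs = [p - k for p, k in zip(predicted, known)]   # [-100, 0, 100]

# The signed mean cancels to zero even though the fit is terrible
mean_diff = sum(diffs) / len(diffs)                 # 0.0

# For comparison, the mean absolute difference does not cancel
mean_abs_diff = sum(abs(d) for d in diffs) / len(diffs)

print(mean_diff)      # 0.0
print(mean_abs_diff)  # 66.66...
```

So the signed mean reports a perfect fit here, while the absolute version correctly flags a large error - which is exactly the behaviour I am unsure how to handle properly.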
Is there a way of finding the average difference between the predicted and known values of all three parameters at once that accounts for situations akin to the above?
Any help would be much appreciated!