So I have a series of experimentally determined 2x2 matrices (y0, y1, y2, etc.), and I know each of them is a product of some number of unknown 2x2 matrices (d1, d2, etc.) and some number of known matrices (k1, k2, etc.). The order of multiplication is determined by my test setup, and there is a certain amount of flexibility here. One example experiment could result in the following equations:
[y0] = [d1][k1][d2]
[y1] = [d1][k2][d2]
...
[yn] = [d1][kn][d2]
My FIRST question is: where do error terms come into these equations? The k matrices could have drifted a bit between when I first measured them and when I measured the y matrices, so I would expect that error to change an equation to the following:
[y] = [d1]([k] + [Error k])[d2]
There will also be some error in the measurement of y itself, yielding:
[y] = [d1]([k] + [Error k])[d2] + [Error y]
The d matrices could also have changed a bit between measurements of the y matrices, but I believe that error can simply be absorbed into [Error y].
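For concreteness, here is a small numerical sketch of this error model. All matrix values and noise levels below are made up purely for illustration (the actual d, k, and error magnitudes would come from the experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" matrices: d1, d2 are the unknowns, k is a known matrix.
d1 = np.array([[1.0, 0.2], [0.1, 0.9]])
d2 = np.array([[0.8, -0.1], [0.3, 1.1]])
k = np.array([[0.5, 0.0], [0.0, 2.0]])

# Error model: y = d1 (k + Error_k) d2 + Error_y,
# with small Gaussian perturbations standing in for drift and measurement noise.
sigma_k, sigma_y = 1e-3, 1e-3
Error_k = sigma_k * rng.standard_normal((2, 2))
Error_y = sigma_y * rng.standard_normal((2, 2))

y_ideal = d1 @ k @ d2                       # what y "should" be
y_meas = d1 @ (k + Error_k) @ d2 + Error_y  # what is actually measured
```

Note that the k-drift term enters the measured y as d1 @ Error_k @ d2, i.e. it gets "rotated" by the unknown matrices, whereas Error_y adds in directly.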
My SECOND question is: how would I go about solving for the d matrices in an arbitrary experiment of this type? Some experiments might not have enough data; others might be too computationally complex. If the general question is too difficult to answer, an answer for the particular example experiment above would be fine too!
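For the particular example above, here is one approach I can sketch (my own reformulation, so treat it as an assumption rather than a standard named method, and it assumes d1 is invertible). Left-multiplying y_i = d1 k_i d2 by d1^(-1) gives A y_i = k_i B with A = d1^(-1) and B = d2, which is linear in the entries of A and B. Stacking A y_i - k_i B = 0 over all experiments gives a homogeneous linear system whose null vector (via SVD) recovers A and B up to one overall scalar; that scalar ambiguity is inherent in the problem anyway, since replacing d1 with c*d1 and d2 with d2/c leaves every product unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth" standing in for the experiment (values are made up).
d1_true = np.array([[1.0, 0.3], [0.2, 1.1]])
d2_true = np.array([[0.9, -0.2], [0.1, 0.8]])
ks = [rng.standard_normal((2, 2)) for _ in range(3)]  # known matrices
ys = [d1_true @ k @ d2_true for k in ks]              # noiseless "measurements"

# Row-major vec identity: (A @ X @ B).ravel() == kron(A, B.T) @ X.ravel().
# So A y_i = k_i B becomes kron(I, y_i.T) vec(A) - kron(k_i, I) vec(B) = 0.
I2 = np.eye(2)
rows = [np.hstack([np.kron(I2, y.T), -np.kron(k, I2)]) for y, k in zip(ys, ks)]
M = np.vstack(rows)            # (4n) x 8 coefficient matrix
_, _, Vt = np.linalg.svd(M)
v = Vt[-1]                     # null vector = [vec(A); vec(B)]
A = v[:4].reshape(2, 2)
B = v[4:].reshape(2, 2)
d1_est = np.linalg.inv(A)      # recovered up to one overall scalar
d2_est = B
```

With noisy data the system has no exact null vector, but the right singular vector for the smallest singular value is still the least-squares choice, which connects back to the error terms above. It also shows when there is "enough data": you need the stacked matrix M to have rank 7, so a single (y, k) pair (4 equations, 8 unknowns) is not enough on its own.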
I have already done a lot of research into that particular example, which I will go into in a reply to this thread.
PS: I have asked another question on the university algebra board that deals with another example experiment; the link is below: