From an orientation sensor I get three Euler angles (denoted yaw, pitch, roll for the Z, Y, X rotations) in the world coordinate system every timestep. The coordinate system has the X-axis facing north, the Y-axis facing east, and the Z-axis facing down. Positive rotation follows the right-hand rule: thumb pointing along the axis, fingers curling in the direction of positive rotation. It is also clear that the three rotation components yaw, pitch, roll have to be applied in that same fixed order in the local coordinate system.
Mounting the sensor on top of an object, e.g. a cube, and rotating that cube around the Y-axis of course causes the pitch value to change. For visual representation, the rotation matrix for this absolute Euler orientation is computed every timestep and applied to an arbitrary 3D vector, e.g. [1, 0, 0]. See the attached image for the rotation matrix.
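To make the setup concrete, here is a minimal sketch of that per-timestep computation in numpy. The function name `euler_zyx_to_matrix` and the intrinsic Z-Y-X composition order are my assumptions based on your yaw/pitch/roll description, not something taken from your sensor's API:

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Rotation matrix for yaw (Z), pitch (Y), roll (X) Euler angles in radians.
    Assumes the intrinsic Z-Y-X convention described in the question
    (X north, Y east, Z down, right-hand rule)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx  # composed yaw-then-pitch-then-roll

# Apply the absolute orientation to a test vector, as in the question
v = np.array([1.0, 0.0, 0.0])
R = euler_zyx_to_matrix(np.radians(30), 0.0, 0.0)
print(R @ v)  # → roughly [0.866, 0.5, 0.0]: v rotated 30° about Z
```

If your sensor uses a different axis convention or composition order, the three elementary matrices stay the same but the multiplication order changes accordingly.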
However, when I mount the sensor on the front face of the cube, the Euler-angle offset in the resting position is no longer 0°, 0°, 0° but of course 90°, 0°, 90° applied successively. When I now rotate the cube with the same global Y rotation as in the previous case, the change shows up in the roll angle, starting at 90°.
Given the offset orientation 90°, 0°, 90° at the cube's resting position and the absolute orientation while rotating the cube, we can easily construct the two rotation matrices. Knowing the resting orientation offset, is it possible to subtract it from the three angles that are updated every timestep, so as to recover the angles we would have gotten with the sensor mounted on top of the cube as in the first example?
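In case it helps others: you cannot subtract the angles themselves, but you can "subtract" the offset at the matrix level and then convert back to angles. Below is a sketch of that idea. It assumes the measured matrix factors as R_meas = R_cube · R_offset (mounting offset applied in the sensor/body frame); if the offset is applied in the world frame instead, the transpose multiplies on the left. The helper names and the simulated 20° test rotation are mine, purely for illustration:

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Intrinsic Z-Y-X Euler angles (radians) to rotation matrix."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def matrix_to_euler_zyx(R):
    """Recover (yaw, pitch, roll) from a Z-Y-X rotation matrix.
    Ill-conditioned near pitch = ±90° (gimbal lock)."""
    return (np.arctan2(R[1, 0], R[0, 0]),              # yaw
            np.arcsin(np.clip(-R[2, 0], -1.0, 1.0)),   # pitch
            np.arctan2(R[2, 1], R[2, 2]))              # roll

# Mounting offset measured at rest: yaw = 90°, pitch = 0°, roll = 90°
R_offset = euler_zyx_to_matrix(np.radians(90), 0.0, np.radians(90))

# Simulated reading: cube rotated 20° about Y, sensor still on the front face
R_cube_true = euler_zyx_to_matrix(0.0, np.radians(20), 0.0)
R_meas = R_cube_true @ R_offset  # what the sensor would report

# "Subtract" the offset: a rotation matrix is orthonormal,
# so its inverse is simply its transpose.
R_cube = R_meas @ R_offset.T
yaw, pitch, roll = matrix_to_euler_zyx(R_cube)
print(np.degrees([yaw, pitch, roll]))  # → roughly [0, 20, 0]
```

The key point is that the correction happens on the matrices (one matrix multiplication per timestep), and the corrected Euler angles are only extracted afterwards.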
I know that there are Euler singularity (gimbal lock) problems, but still: given two absolute rotation matrices (the first a constant offset, the second an orientation updated every timestep), can I subtract out the offset somehow?
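For completeness, the singularity only bites when converting matrices back to Euler angles, not when multiplying matrices. A small demonstration of why, assuming the same Z-Y-X convention and a hypothetical `euler_zyx_to_matrix` helper as above: at pitch = ±90° the yaw and roll rotations act about the same world axis, so only their difference is observable and the matrix-to-angles mapping becomes ambiguous.

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Intrinsic Z-Y-X Euler angles (radians) to rotation matrix."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

# Two different (yaw, roll) pairs with pitch = 90° and the same
# yaw - roll produce the exact same rotation matrix: gimbal lock.
R_a = euler_zyx_to_matrix(np.radians(30), np.radians(90), 0.0)
R_b = euler_zyx_to_matrix(0.0, np.radians(90), np.radians(-30))
print(np.allclose(R_a, R_b))  # → True
```

So the matrix (or quaternion) pipeline stays well-defined throughout; only the final angle extraction needs care near pitch = ±90°.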
Sorry if this explanation sounds a bit dull, and thanks in advance!