1. Finding an orthogonal matrix?

I would like to solve the following problem the 'correct' way. Currently I find the answer by numerical optimization, but I would like to get rid of that.

Given a real-valued symmetric matrix M_1, whose nonzero elements lie only along its borders (first and last row/column) and its diagonal.
Also given a symmetric prototype matrix P containing ones and zeroes; the ones and zeroes indicate where nonzero values are allowed in the result matrix.

I need to find an orthogonal matrix Q such that, after applying
M_2 = Q M_1 Q^-1,
the matrix M_2 contains no nonzero values at the zero positions of P.
The converse is allowed: M_2 may contain a zero where P is one. Formally:
M_2(i,j) * (1 - P(i,j)) = 0 for all i, j

P was chosen in such a way that the transformation is possible.
Q does not have to be unique; multiple solutions might exist.
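As an aside, the zero-pattern constraint above is easy to check numerically; a minimal sketch in NumPy (the function name is my own):

```python
import numpy as np

def satisfies_pattern(M2, P, tol=1e-10):
    """True if M2 is (numerically) zero wherever P is zero,
    i.e. M2(i,j) * (1 - P(i,j)) = 0 for all i, j."""
    return bool(np.all(np.abs(M2 * (1 - P)) < tol))
```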

For some forms of P, it is possible to derive Q using a series of Givens rotations, annihilating elements one by one. But I could not find a solution for the general case, nor a strategy for the order in which to apply the Givens rotations.
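For concreteness, here is a sketch of one such Givens step (helper names are mine): a rotation in the (i, j) plane, applied as a similarity transform, that annihilates a single entry M[row, j], assuming the target row lies outside the rotation plane:

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n rotation by theta in the (i, j) coordinate plane (0-based)."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

def zero_entry(M, i, j, row):
    """Similarity transform G M G^T that zeroes M[row, j] (and, by
    symmetry, M[j, row]) using a rotation in the (i, j) plane.
    Requires row not in {i, j}, so the rotation does not mix the
    target row itself."""
    theta = np.arctan2(-M[row, j], M[row, i])
    G = givens(len(M), i, j, theta)
    return G @ M @ G.T, G
```

Since G is orthogonal, G M G^T stays symmetric and keeps the spectrum of M; only the zero pattern changes.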

How to derive the Q matrix?

We could start simple:
assume P is chosen in such a way that Q is unique (I think),
so that M_2 cannot be zero where P is one.

2. Since nobody has even left a hint, maybe the problem is not clear enough.
To clarify, here is an example.

$\displaystyle M_1 = \begin{bmatrix} 0 & 0.364 & -0.654 & 0.668 & -0.343 & 0.015 \\ 0.364 & 1.314 & 0 & 0 & 0 & 0.364 \\ -0.654 & 0 & 0.783 & 0 & 0 &0.654 \\ 0.668 & 0 & 0 & -0.804 & 0 & 0.668 \\ -0.343 & 0 & 0 & 0 & -1.297 & 0.343 \\ 0.015 & 0.364 & 0.654 & 0.668 & 0.343 & 0 \end{bmatrix}$

$\displaystyle P = \begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 & 1 & 0 \\ 0 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 0 & 0 & 1 & 0 \end{bmatrix}$

$\displaystyle M_2 = \begin{bmatrix} 0 & 1.060 & 0 & 0 & 0 & 0.015 \\ 1.060 & -0.002 & 0.874 & 0 & -0.326 & 0.032 \\ 0 & 0.874 & 0.048 & 0.836 & 0.034 & 0 \\ 0 & 0 & 0.836 & -0.067 & 0.872 & 0 \\ 0 & -0.326 & 0.034 & 0.872 & 0.017 & 1.060 \\ 0.015 & 0.032 & 0 & 0 & 1.060 & 0 \end{bmatrix}$

For this specific P-matrix, Q can be built using a series of rotations:
$\displaystyle \begin{matrix} \textrm{Pivot} & \textrm{Zeroed element} & \textrm{rotation} \\ \hline 4,5 & 1,5 & \theta=\tan^{-1}(-M_{1,5}/M_{1,4}) \\ 3,4 & 1,4 & \theta=\tan^{-1}(-M_{1,4}/M_{1,3}) \\ 2,3 & 1,3 & \theta=\tan^{-1}(-M_{1,3}/M_{1,2}) \\ 3,4 & 3,6 & \theta=\tan^{-1}(M_{3,6}/M_{4,6}) \\ 4,5 & 4,6 & \theta=\tan^{-1}(M_{4,6}/M_{5,6}) \\ 3,4 & 2,4 & \theta=\tan^{-1}(-M_{2,4}/M_{2,3}) \\ \end{matrix}$
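A sketch that replays this rotation sequence, assuming each angle formula refers to the current (partially transformed) matrix and using the table's 1-based indices (helper names are mine):

```python
import numpy as np

def givens(n, i, j, theta):
    """n x n rotation by theta in the (i, j) coordinate plane (0-based)."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

# (pivot plane, zeroed element) per table row, 1-based as in the post
steps = [((4, 5), (1, 5)), ((3, 4), (1, 4)), ((2, 3), (1, 3)),
         ((3, 4), (3, 6)), ((4, 5), (4, 6)), ((3, 4), (2, 4))]

def replay(M1):
    """Apply the six rotations in sequence; returns (M2, Q) with M2 = Q M1 Q^T."""
    M, Q = M1.astype(float).copy(), np.eye(len(M1))
    for (i, j), (r, c) in steps:
        i, j, r, c = i - 1, j - 1, r - 1, c - 1  # to 0-based
        if r not in (i, j):
            # target row outside the pivot plane: zero M[r, j] via column mixing
            theta = np.arctan2(-M[r, j], M[r, i])
        else:
            # target row inside the pivot plane: zero M[i, c] via row mixing
            theta = np.arctan2(M[i, c], M[j, c])
        G = givens(len(M), i, j, theta)
        M, Q = G @ M @ G.T, G @ Q
    return M, Q
```

Each step zeroes its target entry exactly, and for this particular order the later rotations happen to leave the earlier zeros intact; that non-interference is precisely what seems hard to guarantee for a general P.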

The question is about finding a general procedure to determine the (or a) Q-matrix, valid for any P-matrix (provided the transformation is possible).

3. I don't quite understand your question, but have you considered using Householder reflections?

4. Could you state what is unclear?
Let me try again.
I want to find a matrix Q such that Q M Q^-1 has the shape described by the P matrix.

Only for certain shapes of P is it easy to build up Q with a series of transformations, one step at a time, e.g. like using Householder transformations to wipe out the lower-left triangle in a QR decomposition.
For a general prescribed solution pattern P, however, it seems to me like solving a Rubik's cube: as soon as I put the next zero somewhere, my previous zeroes start to fill up again.