I am using an evolutionary fitting method which provides the inverse Hessian up to a constant, i.e.

$$\mathbf{C} = \mathrm{const}\cdot\mathbf{H}^{-1}$$
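For concreteness, here is a small numpy sketch of that relation (the matrix and the constant below are invented purely for illustration, not taken from my actual fit):

```python
import numpy as np

# Hypothetical illustration: a symmetric positive-definite "Hessian" H and an
# unknown scale constant c, both made up here.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = A @ A.T + 4.0 * np.eye(4)        # SPD stand-in for the Hessian
c = 1e-20                            # the unknown constant

C = c * np.linalg.inv(H)             # what the evolutionary fit hands back

# eig(C) = c * eig(H^{-1}), so ||C|| is tiny but the spectrum of H^{-1}
# is preserved up to the single factor c.
ratio = np.linalg.eigvalsh(C) / np.linalg.eigvalsh(np.linalg.inv(H))
print(ratio)                         # ~c in every component
```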

The 2-norm of $\mathbf{C}$ is very small, e.g. $10^{-20}$, but the eigenvalues of $\mathbf{C}$ are still proportional to the eigenvalues of $\mathbf{H}^{-1}$. I was reading a proof that involved rescaling a matrix via its eigenvalues, using a factor of the form

$$\text{factor}=\frac{\sum_{j=1}^{p} f(x)\,\lambda_j}{\sum_{j=1}^{p} f(x)}$$
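Written out, that factor is just a weighted average of the eigenvalues $\lambda_j$. A small sketch of the mechanics (the eigenvalues and weights below are made-up numbers, and `eigen_factor` is just my own helper name):

```python
import numpy as np

def eigen_factor(eigvals, weights):
    """factor = sum_j w_j * lambda_j / sum_j w_j  (a weighted mean)."""
    eigvals = np.asarray(eigvals, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * eigvals) / np.sum(weights)

# Made-up eigenvalues of C, just to show the mechanics.
lam = np.array([2.0, 0.5, 0.1])

# If f(x) is one scalar (e.g. a single log-likelihood value), the weights are
# identical and the factor reduces to the plain mean of the eigenvalues.
print(eigen_factor(lam, np.full(lam.shape, -123.4)))   # == lam.mean()
print(eigen_factor(lam, np.array([3.0, 2.0, 1.0])))    # a genuine weighted mean
```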

I am working with a log-likelihood, so $f(x)$ would have to be $\log[L(\beta)]$ in the equation above. Is there a known way to obtain $\mathbf{H}^{-1}$ by rescaling $\mathbf{C}$ with its eigenvalues, or do I need to calculate the log-likelihood for each record, sum them, and then solve for $\mathbf{H}^{-1}$?
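For the second option, this is roughly what I have in mind: sum the per-record log-likelihood contributions into one function of $\beta$, take a finite-difference Hessian at the fitted parameters, and invert it. Everything below (the normal `loglik_record`, the data, the step size) is a placeholder, not my actual model:

```python
import numpy as np

def total_loglik(beta, data, loglik_record):
    """Sum of per-record log-likelihood contributions at parameters beta."""
    return sum(loglik_record(beta, rec) for rec in data)

def numerical_hessian(f, x, eps=1e-5):
    """Central-difference Hessian of a scalar function f at point x."""
    x = np.asarray(x, dtype=float)
    p = x.size
    H = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            e_i = np.zeros(p); e_i[i] = eps
            e_j = np.zeros(p); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps**2)
    return H

# Placeholder example: normal log-likelihood with beta = (mu, log_sigma).
def loglik_record(beta, y):
    mu, log_sigma = beta
    sigma = np.exp(log_sigma)
    return -0.5 * np.log(2 * np.pi) - log_sigma - 0.5 * ((y - mu) / sigma) ** 2

data = np.array([0.1, -0.3, 0.8, 1.2, -0.5])
beta_hat = np.array([data.mean(), np.log(data.std())])

# Observed information = negative Hessian of the summed log-likelihood at the MLE.
info = -numerical_hessian(lambda b: total_loglik(b, data, loglik_record), beta_hat)
H_inv = np.linalg.inv(info)          # candidate for the inverse Hessian / covariance
print(H_inv)
```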