Hi,

I have a minimization problem in which, by construction, I sometimes need to start from a saddle point. A possible way to escape the saddle point would be to take as a descent direction the eigenvector associated with the smallest negative eigenvalue of the Hessian matrix. Unfortunately, this is computationally very expensive.

To find a "good" direction, I thought of drawing random unit vectors $\displaystyle \mathbf x_i$ and looking at the value $\displaystyle \mathbf x_i^T H \mathbf x_i$, where $H$ is the Hessian matrix. If I find a negative value, I use the vector $\displaystyle \mathbf x_i$ as a descent direction, since I know it is a good escape direction; otherwise I take the vector with the smallest value and simply test whether I can decrease the objective further.
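Here is a short sketch of what I mean, in Python/NumPy (the function name `probe_for_descent_direction` and the sample count are just placeholders for illustration):

```python
import numpy as np

def probe_for_descent_direction(H, n_samples=100, rng=None):
    """Draw random unit vectors and return one with x^T H x < 0 if found,
    otherwise the vector with the smallest value seen."""
    rng = np.random.default_rng() if rng is None else rng
    n = H.shape[0]
    best_x, best_val = None, np.inf
    for _ in range(n_samples):
        x = rng.standard_normal(n)
        x /= np.linalg.norm(x)       # unit vector, uniform on the sphere
        val = x @ H @ x
        if val < 0:                  # negative curvature: good escape direction
            return x, val
        if val < best_val:           # otherwise keep the smallest value so far
            best_x, best_val = x, val
    return best_x, best_val
```

Each probe costs only one Hessian-vector product, which is the point of the approach compared to a full eigendecomposition.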

My intuition that this might work is based on the fact that:

$\displaystyle \mathbf x^T H \mathbf x = \sum_k \lambda_k (\mathbf u_k^T \mathbf x)^2$

where $\displaystyle \lambda_k$ and $\displaystyle \mathbf u_k$ are the eigenvalues and corresponding eigenvectors of $H$. So if $\displaystyle \mathbf x$ lies mostly in the subspace spanned by the eigenvectors with negative eigenvalues, I will get a negative, or at least a small, value, and $\displaystyle \mathbf x$ is a good direction.
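The identity above is easy to check numerically for a random symmetric matrix (this snippet is just a sanity check, not part of the minimization problem itself):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2                   # symmetrize => real eigendecomposition
lam, U = np.linalg.eigh(H)          # columns of U are the eigenvectors u_k
x = rng.standard_normal(5)

lhs = x @ H @ x
rhs = np.sum(lam * (U.T @ x) ** 2)  # sum_k lambda_k (u_k^T x)^2
assert np.isclose(lhs, rhs)
```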

Does anyone know of any useful references on this approach?

Or any useful comments?

Thanks

Alexis