I have the following optimisation problem:

where M is an (N×N) matrix, the 's are constant vectors, and the 's are convex functions.

I think that this is a convex optimisation problem.

To avoid having to deal with the constraint, I factorize the matrix as M = LL^T, where L is an (N×k) matrix. This ensures that M is PSD and of rank at most k, but the problem is no longer convex.
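Since the original objective is not reproduced here, the following is a minimal numerical sketch of the factorization idea on a hypothetical stand-in objective f(M) = ||M − A||_F², which is convex in M. All names (A, N, k, the learning rate) are illustrative assumptions, not the asker's setup. Gradient descent on L is compared against the closed-form solution of the constrained convex problem:

```python
import numpy as np

# Hypothetical stand-in objective (not the asker's): f(M) = ||M - A||_F^2,
# convex in M, minimised over {M PSD, rank(M) <= k} via M = L L^T.
rng = np.random.default_rng(0)
N, k = 6, 2
B = rng.standard_normal((N, N))
A = (B + B.T) / 2                      # arbitrary symmetric target

L = 0.1 * rng.standard_normal((N, k))  # small random initialisation
lr = 0.01
for _ in range(20000):
    grad = 4 * (L @ L.T - A) @ L       # chain rule: grad_L ||LL^T - A||_F^2
    L -= lr * grad

# Closed-form optimum of the constrained problem: keep the k largest
# eigenvalues of A, clipped at zero (nearest PSD matrix of rank <= k).
w, V = np.linalg.eigh(A)               # eigenvalues in ascending order
w = np.clip(w, 0.0, None)
M_star = (V[:, -k:] * w[-k:]) @ V[:, -k:].T

print(np.linalg.norm(L @ L.T - M_star))  # distance to the optimum; should be small
```

On this toy instance, plain gradient descent from a generic random initialisation reaches the global optimum of the constrained convex problem, which is consistent with the empirical behaviour described above.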

However, in practical applications, gradient-descent-based methods seem to lead to good solutions, and I would like to prove this.

My intuition is that the minima when optimizing over L correspond to minima of the constrained problem, and that the other stationary points correspond either to maxima or to saddle points. This would mean that an iterative optimisation process based on gradient descent has a very low probability of getting stuck at a zero-gradient point that is not a minimum.
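One concrete handle on this intuition, assuming the elided constraint is M ⪰ 0 and that f (the overall objective as a function of M) is convex and differentiable with symmetric gradient, is the standard optimality certificate for such factorizations:

```latex
% Stationarity of g(L) = f(LL^\top) reads
\nabla_L\, g(L) = 2\,\nabla f(LL^\top)\, L = 0 .
% Writing M = LL^\top, this implies
\langle \nabla f(M),\, M \rangle = \operatorname{tr}\!\bigl(L^\top \nabla f(M)\, L\bigr) = 0 ,
% so if in addition \nabla f(M) \succeq 0, then for every feasible M' \succeq 0,
% by convexity of f,
f(M') \;\ge\; f(M) + \langle \nabla f(M),\, M' - M \rangle
       \;=\; f(M) + \langle \nabla f(M),\, M' \rangle \;\ge\; f(M) .
```

So a stationary L whose gradient certificate ∇f(LL^⊤) is PSD yields a global minimizer of the convex problem. Conversely, when ∇f(M) has a negative eigenvalue and L is rank-deficient, appending the corresponding eigenvector as a new column direction strictly decreases g, so such stationary points are saddles rather than minima; this is the usual route for analysing factorizations of this kind.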

The gradient with respect to L is:
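In general, under the parametrisation M = LL^⊤ and for any differentiable f, the chain rule gives:

```latex
\nabla_L\, f(LL^\top)
  = \bigl( \nabla_M f(M) + \nabla_M f(M)^\top \bigr)\, L \,\Big|_{M = LL^\top} ,
% which reduces to 2\,\nabla_M f(M)\,L when the gradient is symmetric
% (the generic case for a function of a symmetric matrix).
```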

How can I prove this, or at least get some hints as to whether it is true or not?