Optimal Control Problem: Constrained Optimization with Fixed Endpoints

Hi. This is a problem I'm having a hard time solving.

The following "autonomous" optimization problem can also be solved using Calculus of Variations, but I prefer the Optimal Control approach:

**Determine … solving … subject to … and … and …**, where

Now, in optimal control theory, the conventional approach to solving such a *bounded* problem is to take u as the control variable and … as the state variable, and recast the problem as below:

subject to … and … and …

Then form the Lagrangian:

But instead, I decided to look at the unbounded version of the same problem. In other words, I first tried to solve it as if there were no limits on the size of u. Hence, instead of the Lagrangian, I formed the Hamiltonian as below:

and then, applying the optimality conditions, we have:

Since conditions I and II yield two different expressions, one independent of time and the other a linear function of t, I concluded that the unbounded version of the initial problem has no solution.
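To make the kind of clash I mean concrete, here is a hypothetical toy problem (not my actual one, just an illustration of the same mechanism): maximize $\int_0^1 x\,dt$ subject to $\dot x = u$, $x(0) = x(1) = 0$, with u unbounded. The Hamiltonian is

$$H = x + \lambda u,$$

so condition I gives $\frac{\partial H}{\partial u} = \lambda = 0$, a constant in time, while condition II gives $\dot\lambda = -\frac{\partial H}{\partial x} = -1$, i.e. $\lambda(t) = c - t$, linear in t. The two are incompatible, and indeed this toy problem has no unbounded solution (one can push x arbitrarily high on the interior of the interval).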

Now, the question is: can I also conclude that, since there is no solution for an unbounded u, a fortiori there is no bounded control u that can satisfy the given conditions?

A classmate of mine suggested that there is a solution to the original problem and that I should be looking for a discontinuous "bang-bang" control, but I'm not convinced that my argument is incorrect. (And if there is a bang-bang control satisfying the given constraints, how am I supposed to find it?)
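For what it's worth, here is a sketch of what a bang-bang control looks like in a standard textbook example, the minimum-time double integrator ($\dot x_1 = x_2$, $\dot x_2 = u$, $|u| \le 1$, drive the state to the origin). This is an assumed stand-in, not my actual problem: the control only ever takes the boundary values $\pm 1$ and switches on the curve $x_1 = -\tfrac{1}{2} x_2 |x_2|$:

```python
def bang_bang_double_integrator(x1, x2, dt=1e-4, t_max=5.0):
    """Steer (x1, x2) to the origin for x1' = x2, x2' = u, |u| <= 1,
    using the classical minimum-time switching-curve feedback law."""
    t = 0.0
    while t < t_max and (abs(x1) > 1e-3 or abs(x2) > 1e-3):
        s = x1 + 0.5 * x2 * abs(x2)   # switching function: s = 0 on the switch curve
        u = -1.0 if s > 0 else 1.0    # bang-bang: u always sits on a bound
        x1 += x2 * dt                 # forward-Euler integration of the state
        x2 += u * dt
        t += dt
    return t, x1, x2

# Start at x1 = 1 with zero velocity; the analytic optimum switches at t = 1.
t, x1, x2 = bang_bang_double_integrator(1.0, 0.0)
```

Starting from $(x_1, x_2) = (1, 0)$, the simulated control is $u = -1$ until the trajectory hits the switching curve at $t = 1$ and $u = +1$ afterwards, reaching the origin at the known optimal time $t = 2$.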

Any help will be appreciated. :)