1. ## Find the extremals

How do I find the extremals of

$\displaystyle J[x,y]=\int_a^{b}(x^2+y^2-2x'y)\,dt$?

I can do this with 1 dependent variable using the Euler–Lagrange equations, but I don't know how to do it with two.

Thanks for any help.

2. Originally Posted by hmmmm
How do I find the extremals of

$\displaystyle J[x,y]=\int_a^{b}(x^2+y^2-2x'y)\,dt$?

I can do this with 1 dependent variable using the Euler–Lagrange equations, but I don't know how to do it with two.
The Euler–Lagrange equations for several functions of one variable are given on the Wikipedia page for the Euler–Lagrange equation. The notation there is different from that used in this question, so here's an attempt at a translation. On the Wikipedia page, the independent variable is $\displaystyle x$; in the problem here it is $\displaystyle t$. On the Wikipedia page, the dependent variables are $\displaystyle f_1,\ldots,f_n$, with derivatives $\displaystyle f'_1,\ldots,f'_n$. In this problem, there are two dependent variables, $\displaystyle x$ and $\displaystyle y$, with $\displaystyle x$ having derivative $\displaystyle x'$ (but I'll call it $\displaystyle \dot{x}$, since the independent variable is $\displaystyle t$), and the derivative of $\displaystyle y$ does not appear. Finally, the Wikipedia page calls the integrand $\displaystyle \mathcal{L}(x, f_1,\ldots,f_n,f'_1,\ldots,f'_n)$, but here it is $\displaystyle F(t,x,y,\dot{x},\dot{y}) = x^2+y^2-2\dot{x}y$.

According to Wikipedia, the E–L equations are

$\displaystyle \frac{\partial\mathcal{L}}{\partial f_k} - \frac{\partial}{\partial x} \left(\frac{\partial\mathcal{L}}{\partial f'_k}\right) - \sum_{i=1}^n \frac{\partial}{\partial f_i} \left(\frac{\partial\mathcal{L}}{\partial f'_k}\right)f'_i - \sum_{i=1}^n \frac{\partial}{\partial f'_i} \left(\frac{\partial\mathcal{L}}{\partial f'_k}\right)f''_i = 0$, for $\displaystyle 1\leqslant k\leqslant n$.

Translating those into the notation of this problem, I get the two E–L equations to be

$\displaystyle 2x+2\dot{y}+2\dot{y} = 0,$

$\displaystyle 2y-2\dot{x} = 0.$ (The second one is particularly simple because all the derivatives with respect to $\displaystyle \dot{y}$ are 0.)

If those are correct, then you have two simultaneous linear differential equations, which you can represent as

$\displaystyle \begin{bmatrix}\dot{x}\\ \dot{y}\end{bmatrix} = \begin{bmatrix}0&1\\ -\frac12&0\end{bmatrix} \begin{bmatrix}x\\y\end{bmatrix},$

and you solve them by diagonalising the matrix.
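As a sanity check (my own aside, plain Python with no libraries): the matrix $\displaystyle \begin{bmatrix}0&1\\ -\frac12&0\end{bmatrix}$ has eigenvalues $\displaystyle \pm i/\sqrt{2}$, so $\displaystyle x(t)=\cos(t/\sqrt{2})$ with $\displaystyle y=\dot{x}$ should solve the system, and a centered-difference check confirms it:

```python
import math

OMEGA = 1 / math.sqrt(2)  # the eigenvalues of the matrix are ±i/√2

def x(t):
    # one candidate extremal for this system: x(t) = cos(ωt)
    return math.cos(OMEGA * t)

def y(t):
    # y = ẋ, from the first row of the system
    return -OMEGA * math.sin(OMEGA * t)

def deriv(f, t, h=1e-6):
    # centered finite difference, accurate enough for this check
    return (f(t + h) - f(t - h)) / (2 * h)

for t in (0.0, 0.5, 1.3, 2.7):
    assert abs(deriv(x, t) - y(t)) < 1e-8        # ẋ = y
    assert abs(deriv(y, t) + 0.5 * x(t)) < 1e-8  # ẏ = -x/2
print("system checks out")
```

Of course, this only confirms that the proposed solution solves the system, not that the system itself is the right one.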

3. $\displaystyle J[x,y]=\int_{a}^{b}(x^{2}+y^{2}-2\dot{x}y)\,dt.$

Let $\displaystyle L=x^{2}+y^{2}-2\dot{x}y.$ The EL equations are

$\displaystyle \dfrac{d}{dt}\dfrac{\partial L}{\partial\dot{x}}-\dfrac{\partial L}{\partial x}=0,$ and

$\displaystyle \dfrac{d}{dt}\dfrac{\partial L}{\partial\dot{y}}-\dfrac{\partial L}{\partial y}=0.$

Assembling the ingredients, we have

$\displaystyle \dfrac{\partial L}{\partial\dot{x}}=-2y,$

$\displaystyle \dfrac{\partial L}{\partial x}=2x,$

$\displaystyle \dfrac{\partial L}{\partial\dot{y}}=0,$

$\displaystyle \dfrac{\partial L}{\partial y}=2y-2\dot{x},$

$\displaystyle \dfrac{d}{dt}\dfrac{\partial L}{\partial\dot{x}}=-2\dot{y},$ and

$\displaystyle \dfrac{d}{dt}\dfrac{\partial L}{\partial\dot{y}}=0.$
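These partials are easy to double-check numerically with centered differences (a throwaway sketch in plain Python; the `partial` helper is mine, not a library routine):

```python
def L(x, y, xdot):
    # the integrand: L = x^2 + y^2 - 2*xdot*y
    return x**2 + y**2 - 2 * xdot * y

def partial(f, args, i, h=1e-6):
    # centered-difference partial derivative in slot i
    a = list(args)
    a[i] += h
    up = f(*a)
    a[i] -= 2 * h
    down = f(*a)
    return (up - down) / (2 * h)

pt = (1.3, -0.7, 2.1)  # an arbitrary test point (x, y, xdot)
assert abs(partial(L, pt, 0) - 2 * pt[0]) < 1e-6                # dL/dx    = 2x
assert abs(partial(L, pt, 1) - (2 * pt[1] - 2 * pt[2])) < 1e-6  # dL/dy    = 2y - 2xdot
assert abs(partial(L, pt, 2) - (-2 * pt[1])) < 1e-6             # dL/dxdot = -2y
print("partials check out")
```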

Therefore, the EL equations yield

$\displaystyle -2\dot{y}-2x=0$ and

$\displaystyle 0-(2y-2\dot{x})=0,$ or

$\displaystyle \dot{y}+x=0$ and

$\displaystyle y-\dot{x}=0.$ Thus, my system of ODEs reads

$\displaystyle \begin{bmatrix}\dot{x}\\ \dot{y}\end{bmatrix}=\begin{bmatrix}0 &1\\-1&0\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} .$
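Whichever EL equations turn out to be the right starting point, this particular system is easy to verify in isolation: it implies $\displaystyle \ddot{x}=-x$, so $\displaystyle x(t)=A\cos t+B\sin t$ with $\displaystyle y=\dot{x}$. A quick spot check in plain Python (my own aside, nothing library-specific):

```python
import math

def x(t):
    # general solution of x'' = -x; pick A = 1, B = 2 for the check
    return math.cos(t) + 2 * math.sin(t)

def y(t):
    # y = ẋ, from the first row of the system
    return -math.sin(t) + 2 * math.cos(t)

def deriv(f, t, h=1e-6):
    # centered finite difference
    return (f(t + h) - f(t - h)) / (2 * h)

for t in (0.0, 0.9, 2.2):
    assert abs(deriv(x, t) - y(t)) < 1e-8  # ẋ = y
    assert abs(deriv(y, t) + x(t)) < 1e-8  # ẏ = -x
print("system checks out")
```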

Did I make a mistake somewhere?

4. I'm well beyond my comfort zone when looking at Euler–Lagrange equations for several functions. I relied entirely on the Wikipedia formula, which is surprisingly more elaborate than for the single function case. In your solution, you started by writing down the E–L equations for the $\displaystyle x$ and $\displaystyle y$ functions separately. I suspect that this may not be justified if the two functions do not operate independently, as happens in this problem because of the term $\displaystyle -2\dot{x}y$ in the integrand. It seems plausible that where the two functions interact in this sort of way, it might have the effect of introducing an additional complication into the E–L equations. But that is just a guess. I have no idea what the true solution should be.

5. I don't think you did the computations correctly. I think all the terms in the sums vanish, at which point you actually end up with the EL equations that I started with.

The question of the correct starting EL equations is an interesting one. I have Landau's Mechanics, which definitely says the EL equations are what I used. I'm guessing all my physics books will say the same. I have a number of CoV books that I can look through. When I get the time, I'll definitely look it up. Perhaps the physicists just assume that all the Lagrangians they ever meet up with will work with the version I used.

6. If I translate the Wikipedia E–L equation for $\displaystyle x$ into the notation for this question, I get

$\displaystyle \dfrac{\partial L}{\partial x} - \dfrac d{dt}\left(\dfrac{\partial L}{\partial \dot{x}}\right) - \dfrac{\partial}{\partial x}\left(\dfrac{\partial L}{\partial \dot{x}}\right)\dot{x} - \dfrac{\partial}{\partial y}\left(\dfrac{\partial L}{\partial \dot{x}}\right)\dot{y} - \dfrac{\partial}{\partial \dot{x}}\left(\dfrac{\partial L}{\partial \dot{x}}\right)\ddot{x} - \dfrac{\partial}{\partial \dot{y}}\left(\dfrac{\partial L}{\partial \dot{x}}\right)\ddot{y} = 0.$

Putting in the values for the partial derivatives, that becomes

$\displaystyle 2x - \frac d{dt}(-2y) - 0 -(-2)\dot{y} - 0 - 0 = 0$.

That's where I got $\displaystyle x+2\dot{y} = 0$ from.

7. I think you're right, not me. I didn't do the sum correctly.

So that still leaves open the question of which EL equations are correct. I'll do a little research.

8. Ok, here's what I've found. On page 98 of George Ewing's Calculus of Variations with Applications, he does the double-pendulum problem with Lagrangian

$\displaystyle L=\dfrac{m_{1}}{2}\,r_{1}^{2}\dot{\theta}_{1}^{2}+\dfrac{m_{2}}{2}\left[r_{1}^{2}\dot{\theta}_{1}^{2}+r_{2}^{2}\dot{\theta}_{2}^{2}+2r_{1}r_{2}\dot{\theta}_{1}\dot{\theta}_{2}\cos(\theta_{2}-\theta_{1})\right]-V.$

The generalized coordinates being used are $\displaystyle \theta_{1},\theta_{2}.$ You will notice that the product $\displaystyle \dot{\theta}_{1}\dot{\theta}_{2}$ is present in this Lagrangian, similar to the OP's product of $\displaystyle \dot{x}y.$ However, the author uses the independent EL equations

$\displaystyle L_{\theta_{i}}=\dfrac{d}{dt}L_{\dot{\theta}_{i}},\quad i=1,2$

to find the minimizer.

So when are you supposed to use the wiki version? Answer: when the generalized coordinates are dependent. Here's a quote from Goldstein's Classical Mechanics, 3rd Ed., p. 45: after quoting the equivalent equations I just wrote down (the generalized coordinate-by-coordinate, separated EL equations), the author writes,

In deriving Eqs. (2.18), we assumed that the $\displaystyle y_{i}$ variables are independent. The corresponding condition in connection with Hamilton's principle is that the generalized coordinates $\displaystyle q_{i}$ be independent, which requires that the constraints be holonomic.

Goldstein goes on to derive what appear to be the wiki equations, for the dependent case. How would you recognize dependence versus independence? Goldstein gives an example. If, in addition to the Lagrangian, you also have a constraint that looks, say, like this:

$\displaystyle \dot{x}\dot{y}+ky=0,$

then you've got yourself dependent generalized coordinates. That's when you'd need the wiki version. Otherwise, I suppose you'd assume that the generalized coordinates are independent, so you could use the de-coupled version.

In this problem, there doesn't appear to be any equation of constraint. Hence, it looks as though the de-coupled version of the EL equations is the correct one.

All of this goes to show that it might, perhaps, be the case that the wiki is incorrect on this matter, or misleading at best. Maybe they should distinguish between the dependent and independent case.

But there's still this nagging question in my mind: shouldn't the dependent case collapse to the independent case when the variables are, in fact, independent? Why doesn't that happen with the OP?