# Eigenvalue problem

• Feb 23rd 2011, 08:06 PM
dwsmith
Eigenvalue problem
$\displaystyle \varphi''+\lambda\varphi=0$
$\displaystyle \varphi'(0)=0$
$\displaystyle \varphi(L)=0$

$\displaystyle m^2+\lambda=0\Rightarrow m=\pm i\sqrt{\lambda}\Rightarrow \exp(\pm ix\sqrt{\lambda})$

$\displaystyle \varphi=C_1\cos(x\sqrt{\lambda})+C_2\sin(x\sqrt{\lambda})$

$\displaystyle \varphi_1(0): \ C_1=1$
$\displaystyle \varphi_1'(0): \ C_2=0$

$\displaystyle \varphi_1=\cos(x\sqrt{\lambda})$

$\displaystyle \varphi_2(0): \ C_1=0$
$\displaystyle \displaystyle\varphi_2'(0): \ C_2=\frac{1}{\sqrt{\lambda}}$

$\displaystyle \displaystyle\varphi_2=\frac{\sin(x\sqrt{\lambda})}{\sqrt{\lambda}}$

$\displaystyle \displaystyle\varphi(x)=A\cos(x\sqrt{\lambda})+B\frac{\sin(x\sqrt{\lambda})}{\sqrt{\lambda}}$

$\displaystyle \varphi'(0): \ B=0$

$\displaystyle \varphi(L): \ A\cos(L\sqrt{\lambda})=0$

$\displaystyle \displaystyle\lambda_n=\left(\frac{(2n+1)\pi}{2L}\right)^2, \ \ n=0,1,2,\ldots$

$\displaystyle \displaystyle\varphi_n(x)=A_n\cos\left(\frac{(2n+1)\pi x}{2L}\right)$

$\displaystyle \displaystyle f(x)=\sum_{n=0}^{\infty}A_n\cos\left(\frac{(2n+1)\pi x}{2L}\right)$
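As a quick numerical sanity check of the eigenpairs above (an illustrative script, not part of the derivation; $L=1$ is an assumed value, any $L>0$ behaves the same):

```python
import math

L = 1.0          # illustrative length (assumption; any L > 0 works)
h = 1e-5         # finite-difference step

def phi(n, x):
    """Candidate eigenfunction phi_n(x) = cos((2n+1)*pi*x/(2L))."""
    return math.cos((2 * n + 1) * math.pi * x / (2 * L))

for n in range(4):
    lam = ((2 * n + 1) * math.pi / (2 * L)) ** 2   # candidate eigenvalue

    # ODE check at an interior point: phi'' + lam*phi ~ 0 (central difference)
    x0 = 0.3 * L
    d2 = (phi(n, x0 + h) - 2 * phi(n, x0) + phi(n, x0 - h)) / h ** 2
    assert abs(d2 + lam * phi(n, x0)) < 1e-3

    # Boundary conditions: phi'(0) = 0 and phi(L) = 0
    d1 = (phi(n, h) - phi(n, -h)) / (2 * h)
    assert abs(d1) < 1e-9
    assert abs(phi(n, L)) < 1e-12

print("all checks passed")
```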

Is this correct so far?
• Feb 23rd 2011, 08:21 PM
dwsmith
Assuming the above is correct, I am now going to show that all the eigenvalues are real.

$\displaystyle \cos(L\sqrt{\lambda})=0$

Let $\displaystyle L\sqrt{\lambda}=a+bi, \ \ a,b\in\mathbb{R}$

$\displaystyle \cos(a+bi)=\cos(a)\cos(bi)-\sin(a)\sin(bi)=\cos(a)\cosh(b)-i\sin(a)\sinh(b)=0$

$\displaystyle \displaystyle\cos(a)\cosh(b)=0, \ \ \cosh(b)>0\Rightarrow\cos(a)=0\Rightarrow a=\frac{\pi}{2}+\pi k, \ \ k\in\mathbb{Z}$
$\displaystyle -\sin(a)\sinh(b)=0\Rightarrow\sinh(b)=0\Rightarrow b=0, \ \text{ or } \sin(a)=0\Rightarrow a=\pi k$

Since $\displaystyle a$ cannot be both an odd multiple of $\displaystyle \pi/2$ and a multiple of $\displaystyle \pi$, we must have $\displaystyle b=0$; that is, $\displaystyle \cos(a+bi)\neq 0$ whenever $\displaystyle b\neq 0$.

Thus, all eigenvalues are real.
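To make the contradiction concrete, here is a small numerical illustration (my own check, not in the original post). The expansion above gives $\displaystyle |\cos(a+bi)|^2=\cos^2(a)+\sinh^2(b)$, so the modulus is bounded below by $\displaystyle |\sinh(b)|$:

```python
import cmath
import math

# |cos(a + bi)|^2 = cos(a)^2 + sinh(b)^2, so for b != 0 the modulus is
# at least |sinh(b)| > 0: cos(L*sqrt(lambda)) = 0 therefore forces b = 0.
for a in [0.0, 0.7, math.pi / 2, 2.5]:
    for b in [0.1, 0.5, 1.0, 2.0]:
        z = complex(a, b)
        assert abs(cmath.cos(z)) >= abs(math.sinh(b)) - 1e-12
        assert abs(cmath.cos(z)) > 0       # never a zero off the real axis

# On the real axis (b = 0) the zeros are exactly a = (2k+1)*pi/2:
for k in range(4):
    assert abs(cmath.cos((2 * k + 1) * math.pi / 2)) < 1e-12

print("no zeros with b != 0; real zeros at odd multiples of pi/2")
```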
• Feb 24th 2011, 01:27 AM
Ackbeet
Just commenting on the proof of the realness of the eigenvalues: you could save yourself a bit of effort if you could show that your original differential operator is self-adjoint. In this context, that amounts to showing that the operator is of the Sturm-Liouville type.
• Feb 24th 2011, 09:24 AM
dwsmith
Quote:

Originally Posted by Ackbeet
Just commenting on the proof of the realness of the eigenvalues: you could save yourself a bit of effort if you could show that your original differential operator is self-adjoint. In this context, that amounts to showing that the operator is of the Sturm-Liouville type.

I haven't arrived at that in the book yet. Is post 1 correct, though?
• Feb 24th 2011, 09:41 AM
Ackbeet
Post 1 looks correct so far as it goes. You should probably, if you haven't yet, go ahead and show that there are no eigenvalues for either the $\displaystyle \lambda<0$ or $\displaystyle \lambda=0$ case.
• Feb 24th 2011, 05:25 PM
dwsmith
Quote:

Originally Posted by Ackbeet
Post 1 looks correct so far as it goes. You should probably, if you haven't yet, go ahead and show that there are no eigenvalues for either the $\displaystyle \lambda<0$ or $\displaystyle \lambda=0$ case.

Doesn't the fact that $\displaystyle \cos(a+bi)$ can't be zero unless $\displaystyle b=0$ show that lambda can't be less than 0?
• Feb 25th 2011, 01:52 AM
Ackbeet
I've pored over your post # 2 now, and I think I can finally discern the logic you're employing there. Basically, it comes down to this: $\displaystyle b = 0$, or you get $\displaystyle a$ having to be both an odd-integer multiple of $\displaystyle \pi/2$, and a multiple of $\displaystyle \pi$, which can't be. Therefore, $\displaystyle b=0$. I think your overall logic works, provided that the form of your solution hasn't already assumed that $\displaystyle \lambda>0$. You have not shown that lambda can't be zero. In order to do that, you have to re-solve the DE with that assumption in mind (the solutions you get for that case are not obtainable with any selection of the integration constants for the lambda not zero case), and show that there are no eigenvectors.

I would probably solve the problem this way: break it up into three cases, according to the dichotomy law: $\displaystyle \lambda<0,\;\lambda=0,\;\lambda>0.$ For $\displaystyle \lambda<0,$ let $\displaystyle \lambda=-\alpha^{2}.$ For $\displaystyle \lambda=0,$ do the obvious. And for $\displaystyle \lambda>0,$ let $\displaystyle \lambda=\alpha^{2}.$ In each case, you get a different form of the solution, with which you work to see if it's an allowed case. For $\displaystyle \lambda<0$, you get exponentials. For $\displaystyle \lambda=0,$ you get straight lines. For $\displaystyle \lambda>0,$ you get sinusoids. Remember that, by definition, eigenvectors cannot be identically zero. That fact, in this case, rules out the $\displaystyle \lambda<0$ and $\displaystyle \lambda=0$ cases.

Make sense?
• Feb 25th 2011, 12:27 PM
dwsmith
Quote:

Originally Posted by Ackbeet
I've pored over your post # 2 now, and I think I can finally discern the logic you're employing there. Basically, it comes down to this: $\displaystyle b = 0$, or you get $\displaystyle a$ having to be both an odd-integer multiple of $\displaystyle \pi/2$, and a multiple of $\displaystyle \pi$, which can't be. Therefore, $\displaystyle b=0$. I think your overall logic works, provided that the form of your solution hasn't already assumed that $\displaystyle \lambda>0$. You have not shown that lambda can't be zero. In order to do that, you have to re-solve the DE with that assumption in mind (the solutions you get for that case are not obtainable with any selection of the integration constants for the lambda not zero case), and show that there are no eigenvectors.

I would probably solve the problem this way: break it up into three cases, according to the dichotomy law: $\displaystyle \lambda<0,\;\lambda=0,\;\lambda>0.$ For $\displaystyle \lambda<0,$ let $\displaystyle \lambda=-\alpha^{2}.$ For $\displaystyle \lambda=0,$ do the obvious. And for $\displaystyle \lambda>0,$ let $\displaystyle \lambda=\alpha^{2}.$ In each case, you get a different form of the solution, with which you work to see if it's an allowed case. For $\displaystyle \lambda<0$, you get exponentials. For $\displaystyle \lambda=0,$ you get straight lines. For $\displaystyle \lambda>0,$ you get sinusoids. Remember that, by definition, eigenvectors cannot be identically zero. That fact, in this case, rules out the $\displaystyle \lambda<0$ and $\displaystyle \lambda=0$ cases.

Make sense?

For lambda = 0, wouldn't it be easier to just show:

$\displaystyle \cos(L\sqrt{\lambda})=0\Rightarrow 1=0$ which is never true.

When lambda < 0, the term inside the cosine is complex, and I have already shown that complex numbers aren't eigenvalues of the solution.
• Feb 25th 2011, 12:35 PM
Ackbeet
Quote:

Originally Posted by dwsmith
For lambda = 0, wouldn't it be easier to just show:

$\displaystyle \cos(L\sqrt{\lambda})=0\Rightarrow 1=0$ which is never true.

It is not only not easier to show it this way, it is impossible! In even writing down the $\displaystyle \cos$ function at all, you've already assumed that that is the form of the solution when $\displaystyle \lambda=0,$ which simply isn't true. Instead, you must re-solve the DE from scratch (it's quite straightforward, really), and then apply the boundary conditions. There is no other way that I know of to show that $\displaystyle \lambda=0$ is not an eigenvalue.

Quote:

When lambda < 0, the term inside the cosine is complex, and I have already shown that complex numbers aren't eigenvalues of the solution.
Like I said in my previous post, as long as you haven't already assumed a form of the solution that is only applicable when $\displaystyle \lambda>0,$ then your proof works out fine.

I would, incidentally, put more English in your proof of post # 2. It's a bit hard to follow what you're doing. Don't write so that you can be understood! Write so that you can't be misunderstood.
• Feb 25th 2011, 12:46 PM
dwsmith
Quote:

Originally Posted by Ackbeet
It is not only not easier to show it this way, it is impossible! In even writing down the $\displaystyle \cos$ function at all, you've already assumed that that is the form of the solution when $\displaystyle \lambda=0,$ which simply isn't true. Instead, you must re-solve the DE from scratch (it's quite straightforward, really), and then apply the boundary conditions. There is no other way that I know of to show that $\displaystyle \lambda=0$ is not an eigenvalue.

Like I said in my previous post, as long as you haven't already assumed a form of the solution that is only applicable when $\displaystyle \lambda>0,$ then your proof works out fine.

I would, incidentally, put more English in your proof of post # 2. It's a bit hard to follow what you're doing. Don't write so that you can be understood! Write so that you can't be misunderstood.

From my understanding, plugging in lambda = 0 is fine.

The example in my book has:

$\displaystyle \displaystyle \varphi(L)=\frac{\sin(L\sqrt{\lambda})}{\sqrt{\lambda}}$

$\displaystyle \displaystyle \frac{\sin(L\sqrt{\lambda})}{\sqrt{\lambda}}=0$

Then states: We return now to the problem of finding all the eigenvalues, that is, all the solutions of the equation. If lambda = 0 the left member is to be interpreted as

$\displaystyle \displaystyle\lim_{\lambda\to 0}\frac{\sin(L\sqrt{\lambda})}{\sqrt{\lambda}}=L\neq 0$

I used the $\displaystyle \varphi(L)$ that was obtained as well, so I don't see what the difference is other than the example being different.
• Feb 25th 2011, 01:01 PM
Ackbeet
Well, that method could well be valid: I don't know. To me it seems a bit strange to write down the sin or cosine, which isn't the solution to the $\displaystyle \lambda=0$ case, and then turn around and use that form of the solution to show that $\displaystyle \lambda=0$ is not an eigenvalue.

Here's what I would do: $\displaystyle \lambda=0$ implies $\displaystyle \varphi''=0,$ and so $\displaystyle \varphi(x)=mx+b.$ The $\displaystyle \varphi'(0)=0$ condition implies $\displaystyle m=0,$ and thus $\displaystyle \varphi(x)=b.$ But $\displaystyle \varphi(L)=0$ implies $\displaystyle b=0,$ and hence $\displaystyle \varphi(x)=0,$ which is not allowed, because eigenfunctions can't be identically zero by definition. Done. Is that not fairly intuitive?
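For completeness, the $\displaystyle \lambda<0$ case runs the same way (a sketch along these lines, not worked in the thread, using the problem's conditions $\displaystyle \varphi'(0)=0$ and $\displaystyle \varphi(L)=0$): let $\displaystyle \lambda=-\alpha^2$ with $\displaystyle \alpha>0.$ Then

$\displaystyle \varphi(x)=C_1\cosh(\alpha x)+C_2\sinh(\alpha x)$

$\displaystyle \varphi'(0)=C_2\alpha=0\Rightarrow C_2=0$

$\displaystyle \varphi(L)=C_1\cosh(\alpha L)=0\Rightarrow C_1=0, \ \text{ since } \cosh(\alpha L)>0$

so $\displaystyle \varphi\equiv 0$ and there are no negative eigenvalues.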
• Feb 25th 2011, 02:10 PM
dwsmith
Now, I am asked to show that eigenfunctions are orthogonal.

$\displaystyle \displaystyle\int_0^L\varphi_n(x)\varphi_m(x) \ dx=0, \ \ m\neq n$

$\displaystyle \displaystyle\int_0^L(\varphi_n(x))^2 \ dx>0$

$\displaystyle \displaystyle\int_0^L\cos\left(\frac{(2n+1)\pi x}{2L}\right)\cdot\cos\left(\frac{(2m+1)\pi x}{2L}\right) \ dx=0, \ \forall m\neq n$

$\displaystyle \displaystyle \int_0^L\left[\cos\left(\frac{(2n+1)\pi x}{2L}\right)\right]^2 \ dx=\frac{L}{2}$

To save space, I haven't shown the steps for orthogonality but it does hold.

Now, I am supposed to use everything in this thread to solve:

$\displaystyle \text{D.E.:} \ u_t=ku_{xx}, \ \ \ t>0, \ \ \ 0<x<L$
$\displaystyle \displaystyle\text{B.C.:} \ \begin{cases} u_x(0,t)=0\\u(L,t)=0\end{cases}, \ \ \ t>0$
$\displaystyle u(x,0)=L-x, \ \ \ 0<x<L$

Not sure where to begin.
• Feb 25th 2011, 02:47 PM
TheEmptySet
Quote:

Originally Posted by dwsmith
Now, I am asked to show that eigenfunctions are orthogonal.

$\displaystyle \displaystyle\int_0^L\varphi_n(x)\varphi_m(x) \ dx=0, \ \ m\neq n$

$\displaystyle \displaystyle\int_0^L(\varphi_n(x))^2 \ dx>0$

$\displaystyle \displaystyle\int_0^L\cos\left(\frac{(2n+1)\pi x}{2L}\right)\cdot\cos\left(\frac{(2m+1)\pi x}{2L}\right) \ dx=0, \ \forall m\neq n$

$\displaystyle \displaystyle \int_0^L\left[\cos\left(\frac{(2n+1)\pi x}{2L}\right)\right]^2 \ dx=\frac{L}{2}$

To save space, I haven't shown the steps for orthogonality but it does hold.

Now, I am supposed to use everything in this thread to solve:

$\displaystyle \text{D.E.:} \ u_t=ku_{xx}, \ \ \ t>0, \ \ \ 0<x<L$
$\displaystyle \displaystyle\text{B.C.:} \ \begin{cases} u_x(0,t)=0\\u(L,t)=0\end{cases}, \ \ \ t>0$
$\displaystyle u(x,0)=L-x, \ \ \ 0<x<L$

Not sure where to begin.

Now you need to separate the PDE

Assume $\displaystyle u(x,t)=T(t)X(x)$ this gives

$\displaystyle u_{t}=\dot{T}X \text{ and } u_{xx}=TX''$

This gives

$\displaystyle \displaystyle \dot{T}X=TX'' \iff \frac{\dot{T}}{T}=\frac{X''}{X}=-\lambda$

Now the $\displaystyle X$ equation is what you have already solved

$\displaystyle X''+\lambda X=0$

Now solve for $\displaystyle T(t)=e^{-\lambda t}$ and you will have the general form of the solution to your equation.

Using your initial condition gives this

$\displaystyle \displaystyle u(x,0)=\sum_{n=0}^{\infty}a_ne^{-\lambda_n(0)}\varphi_n(x)=L-x$

Now use your inner product and the orthogonality relationships to solve for the $\displaystyle a_n$

$\displaystyle \varphi_m(x)u(x,0)=\varphi_{m}(x)(L-x)=\sum_{n=0}^{\infty}a_n\varphi_m(x)\varphi_n(x)$

Now integrate both sides from 0 to L and see what happens!
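Carrying that last integration through numerically (an illustrative sketch with $L = 1$ assumed; the closed form $\displaystyle a_n=\frac{8L}{(2n+1)^2\pi^2}$ is my own evaluation of the projection integral, not stated in the thread):

```python
import math

L = 1.0  # illustrative length (assumption)

def phi(n, x):
    """Eigenfunction phi_n(x) = cos((2n+1)*pi*x/(2L))."""
    return math.cos((2 * n + 1) * math.pi * x / (2 * L))

def a_n(n, steps=20000):
    """a_n = (2/L) * integral_0^L (L - x) phi_n(x) dx, trapezoid rule."""
    h = L / steps
    s = 0.5 * (L - 0.0) * phi(n, 0.0)   # left endpoint (right one vanishes)
    s += sum((L - k * h) * phi(n, k * h) for k in range(1, steps))
    return (2 / L) * h * s

# The numerical projection matches the closed form 8L/((2n+1)^2 pi^2):
for n in range(4):
    closed = 8 * L / ((2 * n + 1) ** 2 * math.pi ** 2)
    assert abs(a_n(n) - closed) < 1e-6

# And the series reproduces f(x) = L - x inside the interval:
x0 = 0.4 * L
partial = sum(8 * L / ((2 * n + 1) ** 2 * math.pi ** 2) * phi(n, x0)
              for n in range(2000))
assert abs(partial - (L - x0)) < 1e-3
print("coefficients check out")
```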
• Feb 25th 2011, 04:00 PM
dwsmith
Quote:

Originally Posted by TheEmptySet
Now you need to separate the PDE

Assume $\displaystyle u(x,t)=T(t)X(x)$ this gives

$\displaystyle u_{t}=\dot{T}X \text{ and } u_{xx}=TX''$

This gives

$\displaystyle \displaystyle \dot{T}X=TX'' \iff \frac{\dot{T}}{T}=\frac{X''}{X}=-\lambda$

Now the $\displaystyle X$ equation is what you have already solved

$\displaystyle X''+\lambda X=0$

Now solve for $\displaystyle T(t)=e^{-\lambda t}$ and you will have the general form of the solution to your equation.

Using your initial condition gives this

$\displaystyle \displaystyle u(x,0)=\sum_{n=0}^{\infty}a_ne^{-\lambda_n(0)}\varphi_n(x)=L-x$

Now use your inner product and the orthogonality relationships to solve for the $\displaystyle a_n$

$\displaystyle \varphi_m(x)u(x,0)=\varphi_{m}(x)(L-x)=\sum_{n=0}^{\infty}a_n\varphi_m(x)\varphi_n(x)$

Now integrate both sides from 0 to L and see what happens!

$\displaystyle \displaystyle u_t=ku_{xx}\Rightarrow\frac{\dot{T}}{T}=k\frac{X''}{X}=-\lambda\text{?}$
• Feb 25th 2011, 04:27 PM
TheEmptySet
Quote:

Originally Posted by dwsmith

$\displaystyle \displaystyle u_t=ku_{xx}\Rightarrow\frac{\dot{T}}{T}=k\frac{X''}{X}=-\lambda\text{?}$

$\displaystyle \displaystyle \frac{\dot{T}}{kT}=\frac{X''}{X}=-\lambda$
$\displaystyle T(t)=e^{-k\lambda t}$ and just keep going
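Putting the corrected pieces together, the solution assembles as follows (my own wrap-up of the thread's steps; the closed-form coefficient comes from evaluating the projection integral, which is not carried out above):

$\displaystyle \displaystyle u(x,t)=\sum_{n=0}^{\infty}a_ne^{-k\lambda_n t}\cos\left(\frac{(2n+1)\pi x}{2L}\right), \ \ \lambda_n=\left(\frac{(2n+1)\pi}{2L}\right)^2$

$\displaystyle \displaystyle a_n=\frac{2}{L}\int_0^L(L-x)\cos\left(\frac{(2n+1)\pi x}{2L}\right)dx=\frac{8L}{(2n+1)^2\pi^2}$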