
Math Help - DE Tutorial - Part III: Systems of Differential Equations

  1. #1
    Rhymes with Orange Chris L T521's Avatar
    Joined
    May 2008
    From
    Santa Cruz, CA
    Posts
    2,844
    Thanks
    3

    DE Tutorial - Part III: Systems of Differential Equations

    The DE Tutorial is currently being split up into different threads to make editing these posts easier.

    It's been about 8 months since I've updated this. This post will probably be the first of two on systems of differential equations.

    Systems of Differential Equations (Part I)

    In all the previous posts, we dealt with differential equations that had one dependent variable. Now, we introduce the idea of a system of differential equations that have two or more dependent variables. For now, we consider first order systems of two (or three) differential equations.

    When we construct our system, we consider the following:

    \begin{aligned}f\!\left(t,x,y,x^{\prime},y^{\prime}\right) & = 0\\g\!\left(t,x,y,x^{\prime},y^{\prime}\right) & = 0\end{aligned}

    where t is the independent variable. A solution to this system would be a pair of functions x\!\left(t\right) and y\!\left(t\right) such that both equations were satisfied.

    Let's go through the following example to introduce us to solving techniques.

    Example 23

    Find a general solution to the following system of differential equations:
    \left\{\begin{aligned}x^{\prime} & = y\\ y^{\prime} & = 2x+y\end{aligned}\right.

    To solve this, we will use techniques in solving second order differential equations.

    Since x^{\prime}=y, differentiating this equation with respect to t gives x^{\prime\prime}=y^{\prime}. Now take notice that y^{\prime} is defined by the second equation. So it follows that x^{\prime\prime}=y^{\prime}=2x+y. Also, since x^{\prime}=y, it now follows that we have x^{\prime\prime}=2x+x^{\prime}, which becomes the second order equation x^{\prime\prime}-x^{\prime}-2x=0.

    From here, it's a walk in the park...

    The characteristic equation is r^2-r-2=0\implies \left(r+1\right)\left(r-2\right)=0. Thus, r_1=-1 and r_2=2. Therefore, \color{red}\boxed{x\!\left(t\right)=c_1e^{-t}+c_2e^{2t}}.

    Now that we have a solution for x, we can find the solution for y, since x^{\prime}=y. It now follows that \color{red}\boxed{y\!\left(t\right)=-c_1e^{-t}+2c_2e^{2t}}.

    These two functions form the solution to this system of differential equations.
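For readers who like to double-check by machine, here is a quick sketch (using the sympy library, assuming it is installed) verifying that the boxed pair from Example 23 satisfies both equations of the system:

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# The general solution found above
x = c1*sp.exp(-t) + c2*sp.exp(2*t)
y = -c1*sp.exp(-t) + 2*c2*sp.exp(2*t)

# Check the system: x' = y and y' = 2x + y
assert sp.simplify(sp.diff(x, t) - y) == 0
assert sp.simplify(sp.diff(y, t) - (2*x + y)) == 0
```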

    Let's go through another simple example:

    Example 24

    Find a particular solution to the system of differential equations

    \left\{\begin{aligned}x^{\prime}&=-y\\y^{\prime}&=13x+4y\end{aligned}\right.

    given that x(0)=0 and y(0)=3.

    Again, we note that x^{\prime}=-y\implies -x^{\prime\prime}=y^{\prime}.

    We then substitute this value into the second equation to get

    -x^{\prime\prime}=13x+4y.

    Now, substitute the first equation into the second to obtain the second order equation

    -x^{\prime\prime}=13x+4\left(-x^{\prime}\right)\implies x^{\prime\prime}-4x^{\prime}+13x=0

    The characteristic equation is r^2-4r+13=0\implies r=\frac{4\pm\sqrt{16-52}}{2}\implies r=2\pm 3i

    Thus, x(t)=e^{2t}\left[c_1\cos\!\left(3t\right)+c_2\sin\!\left(3t\right)\right]

    Since -x^{\prime}=y, it follows that

    y(t)=-2e^{2t}\left[c_1\cos\!\left(3t\right)+c_2\sin\!\left(3t\right)\right]-e^{2t}\left[-3c_1\sin\!\left(3t\right)+3c_2\cos\!\left(3t\right)\right] =e^{2t}\left[\left(-3c_2-2c_1\right)\cos\!\left(3t\right)+\left(3c_1-2c_2\right)\sin\!\left(3t\right)\right]

    We now apply the initial conditions:

    x(0)=0\implies 0=c_1

    y(0)=3\implies 3=-3c_2-2c_1\implies c_2=-1

    Therefore, our pair of solutions to the system of differential equations is

    \color{red}\boxed{x(t)=-e^{2t}\sin\!\left(3t\right)} and \color{red}\boxed{y(t)=e^{2t}\left[3\cos\!\left(3t\right)+2\sin\!\left(3t\right)\right]}
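As a sanity check, a short sympy sketch (assuming sympy is available) confirming that this pair satisfies the system and the initial conditions:

```python
import sympy as sp

t = sp.symbols('t')

# Particular solution of Example 24
x = -sp.exp(2*t)*sp.sin(3*t)
y = sp.exp(2*t)*(3*sp.cos(3*t) + 2*sp.sin(3*t))

# System: x' = -y and y' = 13x + 4y
assert sp.simplify(sp.diff(x, t) + y) == 0
assert sp.simplify(sp.diff(y, t) - (13*x + 4*y)) == 0

# Initial conditions x(0) = 0, y(0) = 3
assert x.subs(t, 0) == 0
assert y.subs(t, 0) == 3
```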

    -----------------------------------------------------------------------

    Let us now move on to a technique that is good for solving small systems of differential equations. (We will resort to matrix methods when we have 4 or more equations -- that will be the next post.)

    The Method of Elimination

    As the title suggests, we will use elimination techniques to help us reduce the system of equations into a differential equation with one unknown variable.

    Let us consider an nth order linear differential operator

    L=a_nD^n+a_{n-1}D^{n-1}+\dots+a_1D+a_0

    where D represents differentiation with respect to t.

    Let's now consider a system of differential equations defined by

    \left\{\begin{aligned}L_1x+L_2y &= f_1\!\left(t\right)\\L_3x+L_4y &= f_2\!\left(t\right)\end{aligned}\right.

    where L_1, L_2, L_3 and L_4 are (different) linear differential operators.

    Let's say we wanted to eliminate the dependent variable x. Multiplying the first equation by L_3 and the second equation by L_1, we have the system

    \left\{\begin{aligned}L_3L_1x+L_3L_2y &= L_3f_1\!\left(t\right)\\L_1L_3x+L_1L_4y &= L_1f_2\!\left(t\right)\end{aligned}\right.

    Since linear differential operators with constant coefficients multiply like ordinary polynomials, it follows that L_1L_3=L_3L_1. Now we can subtract the two equations to get

    L_3L_2y-L_1L_4y=L_3f_1\!\left(t\right)-L_1f_2\!\left(t\right)\implies\left(L_3L_2-L_1L_4\right)y=L_3f_1\!\left(t\right)-L_1f_2\!\left(t\right)

    With minor manipulations, we end up with \left(L_1L_4-L_2L_3\right)y=L_1f_2\!\left(t\right)-L_3f_1\!\left(t\right)\implies\begin{vmatrix}L_1 & L_2 \\ L_3 & L_4\end{vmatrix}y=\begin{vmatrix} L_1 & f_1\!\left(t\right)\\ L_3 & f_2\!\left(t\right)\end{vmatrix}

    Once we know what y(t) is, we can then substitute it into either equation in the original system.

    Similarly, if we eliminate y, we end up with \begin{vmatrix}L_1 & L_2 \\ L_3 & L_4\end{vmatrix}x=\begin{vmatrix} f_1\!\left(t\right) & L_2\\ f_2\!\left(t\right) & L_4\end{vmatrix}
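Because the operators have constant coefficients, the operational determinant expands exactly like an ordinary 2x2 determinant of polynomials in D. A small sympy sketch of that expansion (the operator entries here are made up for illustration):

```python
import sympy as sp

# Treat D as an ordinary polynomial variable; this is legitimate only
# because the operators have constant coefficients
D = sp.symbols('D')

# Hypothetical operator entries L1, L2, L3, L4
L1, L2, L3, L4 = D + 1, sp.Integer(2), sp.Integer(3), D - 1

# The operational determinant L1*L4 - L2*L3
op_det = sp.expand(L1*L4 - L2*L3)
assert op_det == D**2 - 7  # (D+1)(D-1) - (2)(3)
```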

    Let us go through a couple examples.

    Example 25

    Find the general solution for the system

    \left\{\begin{aligned}(D-4)x+3y &= 0\\-6x+(D+7)y&=0\end{aligned}\right.

    Let us first eliminate x.

    Then it follows that we have the equation

    \begin{vmatrix}D-4 & 3 \\ -6 & D+7\end{vmatrix}y=0\implies\left[(D-4)(D+7)-(3)(-6)\right]y=0 \implies \left(D^2+3D-10\right)y=0.

    Now the characteristic equation is r^2+3r-10=0. It follows that r=-5 or r=2.

    Thus, y=b_1e^{2t}+b_2e^{-5t}.

    If we choose to eliminate y instead, we get

    \begin{vmatrix}D-4 & 3 \\ -6 & D+7\end{vmatrix}x=0\implies\left[(D-4)(D+7)-(3)(-6)\right]x=0 \implies \left(D^2+3D-10\right)x=0.

    Thus, it follows that x=a_1e^{2t}+a_2e^{-5t}.

    However, there is a slight dilemma. It appears that our solution set contains four different arbitrary constants. However, by the Theorem for Existence and Uniqueness of Linear Systems, the general solution should contain exactly two arbitrary constants (the degree of the operational determinant, which here is two). So what now? The solution is simple: substitute both functions into one of the equations in the original system.

    If we substitute them into the first equation (D-4)x+3y=0\implies x^{\prime}-4x+3y=0, we see that

    0=\left(2a_1e^{2t}-5a_2e^{-5t}\right)-4\left(a_1e^{2t}+a_2e^{-5t}\right)+3\left(b_1e^{2t}+b_2e^{-5t}\right) =\left(-2a_1+3b_1\right)e^{2t}+\left(-9a_2+3b_2\right)e^{-5t}.

    We now use the fact that e^{2t} and e^{-5t} are linearly independent. Thus, it follows that -2a_1+3b_1=0\implies a_1=\tfrac{3}{2}b_1 and -9a_2+3b_2=0\implies a_2=\tfrac{1}{3}b_2.

    Therefore, the general solution to our system is

    \color{red}\boxed{x(t)=\tfrac{3}{2}b_1e^{2t}+\tfrac{1}{3}b_2e^{-5t}} and \color{red}\boxed{y(t)=b_1e^{2t}+b_2e^{-5t}}
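A quick sympy verification (assuming sympy is installed) that this pair satisfies both equations of the original system in Example 25:

```python
import sympy as sp

t, b1, b2 = sp.symbols('t b1 b2')

# General solution of Example 25
x = sp.Rational(3, 2)*b1*sp.exp(2*t) + sp.Rational(1, 3)*b2*sp.exp(-5*t)
y = b1*sp.exp(2*t) + b2*sp.exp(-5*t)

# (D-4)x + 3y = 0  and  -6x + (D+7)y = 0
assert sp.simplify(sp.diff(x, t) - 4*x + 3*y) == 0
assert sp.simplify(-6*x + sp.diff(y, t) + 7*y) == 0
```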

    -----------------------------------------------------------------------

    The next post in the tutorial will be on matrix methods to solving systems of differential equations. I will try to post that in the next couple days.
    Last edited by mash; March 5th 2012 at 12:26 PM. Reason: fixed latex

  2. #2
    I'm in such the mood to post Part II....so I'll do it now. XD

    Systems of Differential Equations (Part II - Matrix Methods)

    In part one, we covered basic techniques for solving first order systems of two (or three) differential equations. In this post, we will discuss techniques used to solve systems with a larger number of equations, and look at some non-linear systems.

    Matrix-Valued Functions

    A matrix-valued function is of the form

    \mathbf{x}(t)=\begin{bmatrix}x_1(t)\\ x_2(t)\\ \vdots\\ x_n(t)\end{bmatrix} or \mathbf{A}(t)=\begin{bmatrix}a_{11}(t) & a_{12}(t) & \dots & a_{1n}(t)\\ a_{21}(t) & a_{22}(t) & \dots & a_{2n}(t)\\ \vdots & \vdots & \phantom{x}& \vdots\\ a_{m1}(t) & a_{m2}(t) & \dots & a_{mn}(t)\end{bmatrix}

    where each entry is a function of t. Now, \mathbf{x}(t) or \mathbf{A}(t) is differentiable if each entry is differentiable. Thus, we define \frac{\,d\mathbf{A}}{\,dt}=\left[\frac{\,da_{ij}}{\,dt}\right]

    Let us now look into a popular method (which we will spend the rest of the post discussing) -- the Eigenvalue Method of Homogeneous Systems.

    -----------------------------------------------------------------------

    Eigenvalue Method of Homogeneous Systems

    Let us consider the following first order system of n differential equations

    \left\{\begin{aligned}x_1^{\prime} &= a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n\\x_2^{\prime} &= a_{21}x_1+a_{22}x_2+\dots+a_{2n}x_n\\ &\vdots\\ x_n^{\prime} &= a_{n1}x_1+a_{n2}x_2+\dots+a_{nn}x_n\end{aligned}\right.

    It suffices to find n linearly independent solution vectors \mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_n such that

    \mathbf{x}(t)=c_1\mathbf{x}_1+c_2\mathbf{x}_2+\dots+c_n\mathbf{x}_n

    is a solution to the general system.

    We anticipate the solution vectors to be of the form

    \mathbf{x}(t)=\begin{bmatrix}x_1\\x_2\\x_3\\\vdots\\x_n\end{bmatrix}=\begin{bmatrix}v_1e^{\lambda t}\\v_2e^{\lambda t}\\v_3e^{\lambda t}\\\vdots\\v_ne^{\lambda t}\end{bmatrix}=\begin{bmatrix}v_1\\v_2\\v_3\\\vdots\\v_n\end{bmatrix}e^{\lambda t}=\mathbf{v}e^{\lambda t}

    where \lambda,v_1,v_2,v_3,\dots,v_n are appropriate scalar constants.

    To expand on this, let us rewrite our general system in matrix form:

    \mathbf{x}^{\prime}=\mathbf{Ax}

    Now, let us substitute the anticipated solution into the differential equation to get

    \left(\mathbf{v}e^{\lambda t}\right)^{\prime}=\mathbf{A}\left(\mathbf{v}e^{\lambda t}\right)\implies \lambda\mathbf{v}e^{\lambda t}=\mathbf{Av}e^{\lambda t}

    Cancelling out e^{\lambda t}, we now have

    \lambda\mathbf{v}=\mathbf{Av}.

    From this, we see that \mathbf{x}=\mathbf{v}e^{\lambda t} will be a nontrivial solution of \mathbf{x}^{\prime}=\mathbf{Ax} provided that \mathbf{v}\neq\mathbf{0} and \mathbf{Av} is a scalar multiple of \mathbf{v}.

    So ... How do we find \mathbf{v} and \lambda??

    First, we rewrite \lambda\mathbf{v}=\mathbf{Av} as \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}=\mathbf{0}.

    Now, we recall from linear algebra that this equation has a nontrivial solution iff

    \det\left(\mathbf{A}-\lambda\mathbf{I}\right)=0.

    Thus, \lambda is referred to as an eigenvalue of \mathbf{A}, and \mathbf{v} is an associated eigenvector.

    We also define \det\left(\mathbf{A}-\lambda\mathbf{I}\right)=0 to be the characteristic equation of \mathbf{A}.

    Now, we lay out the steps of the eigenvalue method:

    1. First solve the characteristic equation for the eigenvalues \lambda_1,\lambda_2,\dots,\lambda_n of the matrix \mathbf{A}.

    2. Attempt to find n linearly independent eigenvectors \mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_n associated with the eigenvalues.

    3. If step 2 is possible (it may not always be!), we have n linearly independent solutions \mathbf{x}_1=\mathbf{v}_1e^{\lambda_1t}, \mathbf{x}_2=\mathbf{v}_2e^{\lambda_2t},\dots,\mathbf{x}_n=\mathbf{v}_ne^{\lambda_nt}. Thus, \mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)+\dots+c_n\mathbf{x}_n(t) is the general solution of \mathbf{x}^{\prime}=\mathbf{Ax}
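Numerically, steps 1 and 2 amount to a single call to an eigenvalue routine. A minimal numpy sketch (the matrix here is a made-up sample, not one from the examples):

```python
import numpy as np

# Sample coefficient matrix (chosen arbitrarily for illustration)
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Steps 1 and 2: column i of V is an eigenvector for lams[i]
lams, V = np.linalg.eig(A)

# Step 3 rests on A v_i = lambda_i v_i, which we can check directly
for i in range(len(lams)):
    assert np.allclose(A @ V[:, i], lams[i] * V[:, i])

# This sample matrix has eigenvalues -1 and 3
assert np.isclose(min(lams), -1.0) and np.isclose(max(lams), 3.0)
```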

    -----------------------------------------------------------------------

    Let us now go through two special cases (each illustrated by an example):

    Case I: \lambda_1,\lambda_2,\dots,\lambda_n are real and distinct.

    Let us start with an example.

    Example 26

    Find a general solution for the system

    \left\{\begin{aligned}x_1^{\prime} & = 4x_1 + 2x_2\\ x_2^{\prime} &= 3x_1-x_2\end{aligned}\right.

    To solve this, let us rewrite the system in matrix form:

    \mathbf{x}^{\prime}=\begin{bmatrix}4 & 2\\3 & -1\end{bmatrix}\mathbf{x}

    It follows that the characteristic equation is

    \begin{vmatrix}4-\lambda & 2 \\ 3 & -1-\lambda\end{vmatrix}=-\left(4-\lambda\right)\left(1+\lambda\right)-6=\lambda^2-3\lambda-10=0

    Thus, \lambda^2-3\lambda-10=0\implies\left(\lambda-5\right)\left(\lambda+2\right)=0\implies \lambda_1=-2 and \lambda_2=5.

    Now that we have the eigenvalues, let us try to find the eigenvectors.

    Note that the eigenvector equation in this case is

    \begin{bmatrix}4-\lambda & 2 \\ 3 & -1-\lambda\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}.

    Case I: \lambda=-2.

    Here, the eigenvector equation becomes

    \begin{bmatrix}6& 2 \\ 3 & 1\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}.

    This gives us the linear system

    \left\{\begin{aligned}6v_1+2v_2 & =0\\ 3v_1 + v_2 &= 0\end{aligned}\right..

    It is evident that there are infinitely many solutions. So what now? What we usually do is pick a simple value. So for example, if v_1=1, we have v_2=-3.

    Therefore, \mathbf{v}_1=\begin{bmatrix}1\\-3\end{bmatrix} is the eigenvector associated to \lambda_1=-2. Thus, \mathbf{x}_1(t)=\begin{bmatrix}1\\-3\end{bmatrix}e^{-2t} is a solution to the general equation.

    Case II: \lambda=5.

    Here, the eigenvector equation becomes

    \begin{bmatrix}-1& 2 \\ 3 & -6\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}.

    This gives us the linear system

    \left\{\begin{aligned}-v_1+2v_2 & =0\\ 3v_1 - 6v_2 &= 0\end{aligned}\right..

    It is evident that there are infinitely many solutions. So what now? What we usually do is pick a simple value. So for example, if v_2=1, we have v_1=2.

    Therefore, \mathbf{v}_2=\begin{bmatrix}2\\1\end{bmatrix} is the eigenvector associated to \lambda_2=5. Thus, \mathbf{x}_2(t)=\begin{bmatrix}2\\1\end{bmatrix}e^{5t} is a solution to the general equation.

    It is easy to show (via the Wronskian) that the solutions \mathbf{x}_1 and \mathbf{x}_2 are linearly independent.

    Now, by the principle of superposition, it follows that

    \color{red}\boxed{\mathbf{x}(t)=c_1\begin{bmatrix}1\\-3\end{bmatrix}e^{-2t}+c_2\begin{bmatrix}2\\1\end{bmatrix}e^{5t}}

    satisfies \mathbf{x}^{\prime}=\begin{bmatrix}4&2\\3&-1\end{bmatrix}\mathbf{x}

    (Written in scalar form, the solutions would be \color{red}\boxed{x_1(t)=c_1e^{-2t}+2c_2e^{5t}} and \color{red}\boxed{x_2(t)=-3c_1e^{-2t}+c_2e^{5t}})
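We can confirm the eigenvalue computation of Example 26 numerically (a numpy sketch, assuming numpy is available):

```python
import numpy as np

# Coefficient matrix of Example 26
A = np.array([[4.0, 2.0],
              [3.0, -1.0]])

lams, V = np.linalg.eig(A)
assert np.allclose(sorted(lams), [-2.0, 5.0])

# The eigenvector for lambda = -2 is proportional to (1, -3)
i = int(np.argmin(lams))  # index of lambda = -2
v = V[:, i]
assert np.isclose(v[1] / v[0], -3.0)
```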

    -----------------------------------------------------------------------

    Case II: \lambda_1,\lambda_2,\dots,\lambda_n are complex.

    Prelim Theory

    We are after real-valued solutions (they will turn out to be the real and imaginary parts of a complex-valued solution). When complex eigenvalues pop up, they always appear in conjugate pairs (i.e. \lambda=p+qi and \bar{\lambda}=p-qi).

    Now, if \mathbf{v} is an eigenvector associated with \lambda, such that

    \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}=\mathbf{0},

    then taking complex conjugates in the equation gives us

    \left(\mathbf{A}-\bar{\lambda}\mathbf{I}\right)\overline{\mathbf{v}}=\mathbf{0}

    If we take

    \mathbf{v}=\begin{bmatrix}a_1+b_1i\\a_2+b_2i\\\vdots\\a_n+b_ni\end{bmatrix}=\begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix}+\begin{bmatrix}b_1\\b_2\\\vdots\\b_n\end{bmatrix}i=\mathbf{a}+\mathbf{b}i,

    then \overline{\mathbf{v}}=\mathbf{a}-\mathbf{b}i

    Therefore, the complex-valued solution associated with \lambda and \mathbf{v} is

    \mathbf{x}(t)=\mathbf{v}e^{\lambda t}=\mathbf{v}e^{\left(p+qi\right)t}=\left(\mathbf{a}+\mathbf{b}i\right)e^{pt}\left[\cos\!\left(qt\right)+i\sin\!\left(qt\right)\right]

    Rearranging, we have

    \mathbf{x}(t)=e^{pt}\left[\mathbf{a}\cos\!\left(qt\right)-\mathbf{b}\sin\!\left(qt\right)\right]+ie^{pt}\left[\mathbf{b}\cos\!\left(qt\right)+\mathbf{a}\sin\!\left(qt\right)\right].

    Therefore,

    \begin{aligned}\mathbf{x}_1(t)&=\Re\left(\mathbf{x}(t)\right)=e^{pt}\left[\mathbf{a}\cos\!\left(qt\right)-\mathbf{b}\sin\!\left(qt\right)\right]\\\mathbf{x}_2(t)&=\Im\left(\mathbf{x}(t)\right)=e^{pt}\left[\mathbf{b}\cos\!\left(qt\right)+\mathbf{a}\sin\!\left(qt\right)\right]\end{aligned}

    I leave it for you to verify we get the same set of solutions when we check the real and imaginary parts of \overline{\mathbf{v}}e^{\bar{\lambda}t}.

    Example 27

    Find the general solution of the system

    \begin{aligned}x_1^{\prime} &= 4x_1-3x_2\\ x_2^{\prime}&= 3x_1+4x_2\end{aligned}

    Our coefficient matrix \mathbf{A}=\begin{bmatrix}4&-3\\3&4\end{bmatrix} has the characteristic equation

    \begin{vmatrix}4-\lambda & -3 \\ 3 & 4-\lambda\end{vmatrix}=\left(4-\lambda\right)^2+9=0\implies \lambda=4-3i and \bar{\lambda}=4+3i.

    Substituting \lambda=4-3i into the eigenvector equation, we have

    \begin{bmatrix}3i & -3\\ 3 & 3i\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix  }=\begin{bmatrix}0\\0\end{bmatrix}.

    Thus, we have the linear system

    \left\{\begin{aligned}iv_1-v_2 & = 0\\ v_1 + iv_2 & = 0\end{aligned}\right.

    If we take v_1=1, then v_2=i. Thus, \mathbf{v}=\begin{bmatrix}1\\i\end{bmatrix} is a complex eigenvector associated with \lambda=4-3i.

    Now, the corresponding complex solution is

    \mathbf{x}(t)=\begin{bmatrix}1\\i\end{bmatrix}e^{\left(4-3i\right)t}=\begin{bmatrix}1\\i\end{bmatrix}e^{4t}\left(\cos\!\left(3t\right)-i\sin\!\left(3t\right)\right)=e^{4t}\begin{bmatrix}\cos\!\left(3t\right)-i\sin\!\left(3t\right)\\i\cos\!\left(3t\right)+\sin\!\left(3t\right)\end{bmatrix}

    Thus,

    \mathbf{x}_1(t)=\Re\left(\mathbf{x}(t)\right)=e^{4t}\begin{bmatrix}\cos\!\left(3t\right)\\\sin\!\left(3t\right)\end{bmatrix} and \mathbf{x}_2(t)=\Im\left(\mathbf{x}(t)\right)=e^{4t}\begin{bmatrix}-\sin\!\left(3t\right)\\\cos\!\left(3t\right)\end{bmatrix}

    Therefore, a real-valued general solution to \mathbf{x}^{\prime}=\mathbf{Ax} is

    \color{red}\boxed{\mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)=e^{4t}\begin{bmatrix}c_1\cos\!\left(3t\right)-c_2\sin\!\left(3t\right)\\c_1\sin\!\left(3t\right)+c_2\cos\!\left(3t\right)\end{bmatrix}}.
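A numpy check (assuming numpy) that the coefficient matrix of Example 27 really has the conjugate pair of eigenvalues 4 ± 3i:

```python
import numpy as np

# Coefficient matrix of Example 27
A = np.array([[4.0, -3.0],
              [3.0, 4.0]])

lams, V = np.linalg.eig(A)

# Complex eigenvalues of a real matrix come in conjugate pairs
assert np.allclose(lams.real, [4.0, 4.0])
assert np.allclose(sorted(lams.imag), [-3.0, 3.0])
```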

    -----------------------------------------------------------------------

    I will have to post a Part III for Case III: \lambda_1,\lambda_2,\dots,\lambda_n are real, but not distinct.

    I will have that posted sometime tomorrow or the next day.

  3. #3
    Systems of Differential Equations (Part III - Matrix Methods (cont.))

    In Part II, we ended with two special cases for the eigenvalues of an n x n matrix system. We now devote an entire post
    to the last special case.

    -----------------------------------------------------------------------

    Case III: \lambda_1,\lambda_2,\dots,\lambda_n are real but not distinct.

    When \lambda_1,\lambda_2,\dots,\lambda_n were distinct (real or complex), then the general solution of
    \mathbf{x}^{\prime}=\mathbf{Ax} took on the form

    \mathbf{x}(t)=c_1\mathbf{v}_1e^{\lambda_1t}+c_2\mathbf{v}_2e^{\lambda_2t}+\dots+c_n\mathbf{v}_ne^{\lambda_nt}.

    We now consider when the characteristic equation \left|\mathbf{A}-\lambda\mathbf{I}\right|=0 doesn't have n
    distinct roots; that is, when it has at least one repeated root.

    In that case, we say the eigenvalue has multiplicity: an eigenvalue is of multiplicity k if it is a k-fold
    root of the characteristic equation. If \lambda is of multiplicity k, then there is at least one
    eigenvector \mathbf{v} associated with it. However, we may not always be able to find k linearly
    independent eigenvectors associated with \lambda (this shortage is referred to as a defect of \lambda,
    which will be discussed later). If we can find k linearly independent eigenvectors associated with \lambda,
    we say that \lambda is complete.

    Example 28

    Find a general solution of the system

    \mathbf{x}^{\prime}=\begin{bmatrix}9 & 4 & 0\\-6 & -1 & 0\\6 & 4 & 3\end{bmatrix}\mathbf{x}

    The characteristic equation of \mathbf{A}=\begin{bmatrix}9 & 4 & 0\\-6 & -1 & 0\\6 & 4 & 3\end{bmatrix} is

    \begin{vmatrix}9-\lambda & 4 & 0\\-6 & -1-\lambda & 0\\6 & 4 & 3-\lambda\end{vmatrix}=(3-\lambda)\begin{vmatrix}9-\lambda & 4\\ -6 & -1-\lambda\end{vmatrix}=(3-\lambda)(\lambda^2-8\lambda+15)=(5-\lambda)(3-\lambda)^2=0

    Here, we see that \lambda_1=5 and \lambda_2=3 with multiplicity 2.

    Case I: \lambda=5

    The eigenvector equation is

    \begin{bmatrix}4 & 4 & 0\\-6 & -6 & 0\\6 & 4 & -2\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}

    Thus, we have the following system of equations:

    \left\{\begin{aligned}4v_1+4v_2 & = 0\\-6v_1-6v_2&=0\\6v_1+4v_2-2v_3&=0\end{aligned}\right.

    The first two equations reduce to v_2=-v_1.

    Now, it follows that the third equation can be written as 2v_1-2v_3=0\implies v_3=v_1. Thus, if we pick v_1=1, we have the eigenvector \mathbf{v}_1=\begin{bmatrix}1\\-1\\1\end{bmatrix} associated with \lambda=5.

    Case II: \lambda=3

    The eigenvector equation is

    \begin{bmatrix}6 & 4 & 0\\-6 & -4 & 0\\6 & 4 & 0\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}

    Thus, a nonzero vector \mathbf{v} is an eigenvector iff 6v_1+4v_2=0\implies v_2=-\tfrac{3}{2}v_1, while v_3 is arbitrary. So if we pick v_3=1, we can let v_1=v_2=0. Thus, \mathbf{v}_2=\begin{bmatrix}0\\0\\1\end{bmatrix} is an associated eigenvector to \lambda=3. However, there is one more eigenvector!

    If we pick v_3=0, we can pick v_1 and v_2 such that we don't have the zero vector. So if we take v_1=2, we see that v_2=-3. Thus, \mathbf{v}_3=\begin{bmatrix}2\\-3\\0\end{bmatrix} is the eigenvector associated with \lambda=3.

    Therefore, the general solution is

    \color{red}\boxed{\mathbf{x}(t)=c_1\begin{bmatrix}1\\-1\\1\end{bmatrix}e^{5t}+c_2\begin{bmatrix}0\\0\\1\end{bmatrix}e^{3t}+c_3\begin{bmatrix}2\\-3\\0\end{bmatrix}e^{3t}}

    Remark: With regards to the two eigenvectors for \lambda=3, the fact that v_2=-\tfrac{3}{2}v_1 is worth taking note of. The eigenvector can be rewritten as

    \mathbf{v}=\begin{bmatrix}v_1\\-\frac{3}{2}v_1\\v_3\end{bmatrix}=v_3\begin{bmatrix}0\\0\\1\end{bmatrix}+\tfrac{1}{2}v_1\begin{bmatrix}2\\-3\\0\end{bmatrix}=v_3\mathbf{v}_2+\tfrac{1}{2}v_1\mathbf{v}_3

    Thus, we could use \mathbf{v} in place of either eigenvector and still get the same general solution we found when considering both eigenvectors. This tells us that we don't have to worry about making the "right" choice -- it's just advisable that we pick the simplest one.
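Numerically, the completeness of \lambda=3 shows up as \mathbf{A}-3\mathbf{I} having a two-dimensional null space. A numpy sketch for Example 28:

```python
import numpy as np

# Coefficient matrix of Example 28
A = np.array([[9.0, 4.0, 0.0],
              [-6.0, -1.0, 0.0],
              [6.0, 4.0, 3.0]])

lams, V = np.linalg.eig(A)
assert np.allclose(sorted(lams.real), [3.0, 3.0, 5.0])

# lambda = 3 is complete: A - 3I has rank 1, so its null space
# (the eigenspace) has dimension 3 - 1 = 2
assert np.linalg.matrix_rank(A - 3*np.eye(3)) == 1
```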

    -----------------------------------------------------------------------

    Defective Eigenvalues

    We start this section with an example.

    Example 29

    Consider the coefficient matrix \mathbf{A}=\begin{bmatrix}1 &-3\\3 & 7\end{bmatrix}.

    The characteristic equation is \begin{vmatrix}1-\lambda & -3\\ 3 & 7-\lambda\end{vmatrix}=\lambda^2-8\lambda+16=\left(\lambda-4\right)^2=0.

    Thus, \lambda=4 is an eigenvalue of multiplicity two.

    Now, the eigenvector equation is

    \begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}.

    Thus, it follows that our system of equations is

    \left\{\begin{aligned}-3v_1-3v_2 & = 0\\3v_1 + 3v_2 & = 0\end{aligned}\right.

    Thus, v_2=-v_1.

    Thus the eigenvector is of the form \mathbf{v}=\begin{bmatrix}v_1\\-v_1\end{bmatrix}=v_1\begin{bmatrix}1\\-1\end{bmatrix}.

    This implies that all eigenvectors associated with \lambda=4 will be a constant multiple of \begin{bmatrix}1\\-1\end{bmatrix}. Therefore, there is only one linearly independent eigenvector associated with \lambda=4, making \lambda=4 incomplete.

    The eigenvalue in the above example is incomplete, or defective.

    Now, if an eigenvalue \lambda of multiplicity k has only p<k linearly independent eigenvectors, then d=k-p is the number of missing eigenvectors - the defect of the defective eigenvalue \lambda.

    In Example 29, the defect would be d=2-1=1.
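The defect can be computed mechanically as the multiplicity minus the dimension of the eigenspace (the nullity of \mathbf{A}-\lambda\mathbf{I}). A numpy sketch for Example 29:

```python
import numpy as np

# Coefficient matrix of Example 29
A = np.array([[1.0, -3.0],
              [3.0, 7.0]])

# lambda = 4 has multiplicity k = 2
k = 2
M = A - 4*np.eye(2)

# dimension of the eigenspace = nullity of A - 4I
p = 2 - np.linalg.matrix_rank(M)
assert p == 1

# defect d = k - p
assert k - p == 1
```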

    What we do now is consider a way to solve a system of differential equations given the defect d=1.

    -----------------------------------------------------------------------

    Case IV: \lambda has multiplicity two and is defective.

    Suppose that \lambda has one linearly independent eigenvector, implying that \mathbf{x}_1(t)=\mathbf{v}_1e^{\lambda t} is the only solution (that we know of) to \mathbf{x}^{\prime}=\mathbf{Ax}.

    However, we hope to find a second solution of the form \mathbf{x}_2(t)=\mathbf{v}_2te^{\lambda t}. Substituting it into the system, we have

    \mathbf{v}_2e^{\lambda t}+\lambda\mathbf{v}_2te^{\lambda t}=\mathbf{Av}_2te^{\lambda t}.

    Since the coefficients of e^{\lambda t} and te^{\lambda t} need to balance, it follows from the above equation that \mathbf{v}_2=\mathbf{0} and consequently, \mathbf{x}_2(t)\equiv\mathbf{0}.

    Since that didn't work, let us extend our original idea and replace \mathbf{v}_2t with \mathbf{v}_1t+\mathbf{v}_2. So we suppose now that the second solution will take on the form

    \mathbf{x}_2(t)=\mathbf{v}_1te^{\lambda t}+\mathbf{v_2}e^{\lambda t}.

    Substituting this into \mathbf{x}^{\prime}=\mathbf{Ax}, we get

    \left(\mathbf{v}_1+\lambda\mathbf{v}_2\right)e^{\lambda t}+\lambda\mathbf{v}_1te^{\lambda t}=\mathbf{Av}_1te^{\lambda t}+\mathbf{Av}_2e^{\lambda t}

    Comparing coefficients of e^{\lambda t} and te^{\lambda t}, we see that

    \mathbf{v}_1+\lambda\mathbf{v}_2=\mathbf{Av}_2\implies \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_2=\mathbf{v}_1

    and

    \lambda\mathbf{v}_1=\mathbf{Av}_1\implies \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_1=\mathbf{0}

    The second equation confirms that \mathbf{v}_1 is an eigenvector for \lambda. Now, it follows that \mathbf{v}_2 satisfies the equation

    \left(\mathbf{A}-\lambda\mathbf{I}\right)^2\mathbf{v}_2=\left(\mathbf{A}-\lambda\mathbf{I}\right)\left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_2=\left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_1=\mathbf{0}

    This tells us that it suffices to find a single solution \mathbf{v}_2 to the equation \left(\mathbf{A}-\lambda\mathbf{I}\right)^2\mathbf{v}_2=\mathbf{0} such that \mathbf{v}_1=\left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_2\neq\mathbf{0}.

    It is always possible to find a solution when the defective eigenvalue \lambda has multiplicity two.

    Let us go through an example that illustrates this process.

    ----------------------------------------------------------------------

    Example 30

    Find the general solution to the system

    \mathbf{x}^{\prime}=\begin{bmatrix}1 & -3\\ 3 & 7\end{bmatrix}\mathbf{x}

    In example 29, we showed that the characteristic equation produced a defective eigenvalue \lambda=4 of multiplicity two.

    We now start by calculating \left(\mathbf{A}-4\mathbf{I}\right)^2:

    \left(\mathbf{A}-4\mathbf{I}\right)^2=\begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}\begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}=\begin{bmatrix}0 & 0\\0 & 0\end{bmatrix}

    Thus, \left(\mathbf{A}-4\mathbf{I}\right)^2\mathbf{v}_2=\mathbf{0} becomes \begin{bmatrix}0 & 0\\0 & 0\end{bmatrix}\mathbf{v}_2=\mathbf{0}, so \mathbf{v}_2 can be any (nonzero) vector.

    So if we take \mathbf{v}_2=\begin{bmatrix}1\\0\end{bmatrix}, then we see that

    \left(\mathbf{A}-4\mathbf{I}\right)\mathbf{v}_2=\begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}-3\\3\end{bmatrix}=\mathbf{v}_1.

    This vector is nonzero, and is thus an eigenvector associated with the eigenvalue \lambda=4 (note that it is -3 times the eigenvector we found in Example 29).

    Therefore, the two solutions to the system are

    \mathbf{x}_1(t)=\mathbf{v}_1e^{4t}=\begin{bmatrix}-3\\3\end{bmatrix}e^{4t}

    and

    \mathbf{x}_2(t)=\left(\mathbf{v}_1t+\mathbf{v}_2\right)e^{4t}=\begin{bmatrix}-3t+1\\3t\end{bmatrix}e^{4t}

    Therefore, the general solution to the system is

    \color{red}\boxed{\mathbf{x}(t)=c_1\begin{bmatrix}-3\\3\end{bmatrix}e^{4t}+c_2\begin{bmatrix}-3t+1\\3t\end{bmatrix}e^{4t}=\begin{bmatrix}-3c_1-3c_2t+c_2\\3c_1+3c_2t\end{bmatrix}e^{4t}}
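Finally, a sympy sketch (assuming sympy is installed) verifying that both solutions of Example 30 satisfy \mathbf{x}^{\prime}=\mathbf{Ax}:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -3], [3, 7]])

v1 = sp.Matrix([-3, 3])   # eigenvector, with (A - 4I) v2 = v1
v2 = sp.Matrix([1, 0])    # generalized eigenvector

x1 = v1 * sp.exp(4*t)
x2 = (v1*t + v2) * sp.exp(4*t)

# Both must satisfy x' = A x (differentiation acts entrywise)
assert sp.simplify(sp.diff(x1, t) - A*x1) == sp.zeros(2, 1)
assert sp.simplify(sp.diff(x2, t) - A*x2) == sp.zeros(2, 1)
```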

    -----------------------------------------------------------------------

    This will conclude the systems of differential equations section of the tutorial.

    I will start working on the first of three (or maybe four) posts on Laplace Transforms and their use in IVPs.
