DE Tutorial - Part III: Systems of Differential Equations


Chris L T521

MHF Hall of Fame
May 2008
Chicago, IL
The DE Tutorial is currently being split up into different threads to make editing these posts easier.

It's been about 8 months since I've updated this. This post will probably be the first of two on systems of differential equations.

Systems of Differential Equations (Part I)

In all the previous posts, we dealt with differential equations that had one dependent variable. Now, we introduce the idea of a system of differential equations that have two or more dependent variables. For now, we consider first order systems of two (or three) differential equations.

When we construct our system, we consider the following:

\(\displaystyle \begin{aligned}f\!\left(t,x,y,x^{\prime},y^{\prime}\right) & = 0\\g\!\left(t,x,y,x^{\prime},y^{\prime}\right) & = 0\end{aligned}\)

where \(\displaystyle t\) is the independent variable. A solution to this system is a pair of functions \(\displaystyle x\!\left(t\right)\) and \(\displaystyle y\!\left(t\right)\) satisfying both equations.

Let's go through the following example to introduce us to solving techniques.

Example 23

Find a general solution to the following system of differential equations:
\(\displaystyle \left\{\begin{aligned}x^{\prime} & = y\\ y^{\prime} & = 2x+y\end{aligned}\right.\)

To solve this, we will use techniques for solving second order differential equations.

Since \(\displaystyle x^{\prime}=y\), we see that when we differentiate the first equation with respect to \(\displaystyle t\), we have \(\displaystyle x^{\prime\prime}=y^{\prime}\). Now take notice that \(\displaystyle y^{\prime}\) was defined in the second equation. So it follows that \(\displaystyle x^{\prime\prime}=y^{\prime}=2x+y\). Also, since \(\displaystyle x^{\prime}=y\), it now follows that we have \(\displaystyle x^{\prime\prime}=2x+x^{\prime}\), which becomes the second order equation \(\displaystyle x^{\prime\prime}-x^{\prime}-2x=0\).

From here, it's a walk in the park...

The characteristic equation is \(\displaystyle r^2-r-2=0\implies \left(r+1\right)\left(r-2\right)=0\). Thus, \(\displaystyle r_1=-1\) and \(\displaystyle r_2=2\). Therefore, \(\displaystyle \color{red}\boxed{x\!\left(t\right)=c_1e^{-t}+c_2e^{2t}}\).

Now that we have a solution for x, we can find the solution for y, since \(\displaystyle x^{\prime}=y\). It now follows that \(\displaystyle \color{red}\boxed{y\!\left(t\right)=-c_1e^{-t}+2c_2e^{2t}}\).

These two functions form the solution to this system of differential equations.
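If you'd like to sanity-check the boxed answer with a CAS, here's a quick sympy sketch (my own check, not part of the solution method itself):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')

# The boxed general solution from Example 23
x = c1*sp.exp(-t) + c2*sp.exp(2*t)
y = -c1*sp.exp(-t) + 2*c2*sp.exp(2*t)

# Both equations of the system: x' = y and y' = 2x + y
check1 = sp.simplify(sp.diff(x, t) - y)
check2 = sp.simplify(sp.diff(y, t) - (2*x + y))
print(check1, check2)  # 0 0
```

Both residuals simplify to zero for arbitrary \(\displaystyle c_1, c_2\), so the pair really does solve the system.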

Let's go through another simple example:

Example 24

Find a particular solution to the system of differential equations

\(\displaystyle \left\{\begin{aligned}x^{\prime}&=-y\\y^{\prime}&=13x+4y\end{aligned}\right.\)

given that \(\displaystyle x(0)=0\) and \(\displaystyle y(0)=3\).

Again, we note that \(\displaystyle x^{\prime}=-y\implies -x^{\prime\prime}=y^{\prime}\).

We then substitute this value into the second equation to get

\(\displaystyle -x^{\prime\prime}=13x+4y\).

Now, substitute the first equation into the second to obtain the second order equation

\(\displaystyle -x^{\prime\prime}=13x+4\left(-x^{\prime}\right)\implies x^{\prime\prime}-4x^{\prime}+13x=0\)

The characteristic equation is \(\displaystyle r^2-4r+13=0\implies r=\frac{4\pm\sqrt{16-52}}{2}\implies r=2\pm 3i\)

Thus, \(\displaystyle x(t)=e^{2t}\left[c_1\cos\!\left(3t\right)+c_2\sin\!\left(3t\right)\right]\)

Since \(\displaystyle -x^{\prime}=y\), it follows that

\(\displaystyle y(t)=-2e^{2t}\left[c_1\cos\!\left(3t\right)+c_2\sin\!\left(3t\right)\right]-e^{2t}\left[-3c_1\sin\!\left(3t\right)+3c_2\cos\!\left(3t\right)\right]\) \(\displaystyle =e^{2t}\left[\left(-3c_2-2c_1\right)\cos\!\left(3t\right)+\left(3c_1-2c_2\right)\sin\!\left(3t\right)\right]\)

We now apply the initial conditions:

\(\displaystyle x(0)=0\implies 0=c_1\)

\(\displaystyle y(0)=3\implies 3=-3c_2-2c_1\implies c_2=-1\)

Therefore, our pair of solutions to the system of differential equations is

\(\displaystyle \color{red}\boxed{x(t)=-e^{2t}\sin\!\left(3t\right)}\) and \(\displaystyle \color{red}\boxed{y(t)=e^{2t}\left[3\cos\!\left(3t\right)+2\sin\!\left(3t\right)\right]}\)
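Again, a quick sympy check (mine, not part of the method) confirms both the system and the initial conditions:

```python
import sympy as sp

t = sp.symbols('t')

# The boxed particular solution from Example 24
x = -sp.exp(2*t)*sp.sin(3*t)
y = sp.exp(2*t)*(3*sp.cos(3*t) + 2*sp.sin(3*t))

# System: x' = -y and y' = 13x + 4y, with x(0) = 0 and y(0) = 3
check1 = sp.simplify(sp.diff(x, t) + y)
check2 = sp.simplify(sp.diff(y, t) - (13*x + 4*y))
print(check1, check2, x.subs(t, 0), y.subs(t, 0))  # 0 0 0 3
```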


Let us now move on to a technique that is good for solving small systems of differential equations. (We will resort to matrix methods when we have 4 or more equations -- that will be the next post.)

The Method of Elimination

As the title suggests, we will use elimination techniques to help us reduce the system of equations into a differential equation with one unknown variable.

Let us consider an \(\displaystyle n\)th order linear differential operator

\(\displaystyle L=a_nD^n+a_{n-1}D^{n-1}+\dots+a_1D+a_0\)

where \(\displaystyle D\) represents differentiation with respect to \(\displaystyle t\) and the coefficients \(\displaystyle a_i\) are constants.

Let's now consider a system of differential equations defined by

\(\displaystyle \left\{\begin{aligned}L_1x+L_2y &= f_1\!\left(t\right)\\L_3x+L_4y &= f_2\!\left(t\right)\end{aligned}\right.\)

where \(\displaystyle L_1\), \(\displaystyle L_2\), \(\displaystyle L_3\) and \(\displaystyle L_4\) are (different) linear differential operators.

Let's say we wanted to eliminate the dependent variable \(\displaystyle x\). Applying \(\displaystyle L_3\) to the first equation and \(\displaystyle L_1\) to the second, we have the system

\(\displaystyle \left\{\begin{aligned}L_3L_1x+L_3L_2y &= L_3f_1\!\left(t\right)\\L_1L_3x+L_1L_4y &= L_1f_2\!\left(t\right)\end{aligned}\right.\)

Since linear differential operators with constant coefficients multiply like ordinary polynomials, it follows that \(\displaystyle L_3L_1=L_1L_3\). Now we can subtract the two equations to get

\(\displaystyle L_3L_2y-L_1L_4y=L_3f_1\!\left(t\right)-L_1f_2\!\left(t\right)\implies\left(L_3L_2-L_1L_4\right)y=L_3f_1\!\left(t\right)-L_1f_2\!\left(t\right)\)

With minor manipulations, we end up with \(\displaystyle \left(L_1L_4-L_2L_3\right)y=L_1f_2\!\left(t\right)-L_3f_1\!\left(t\right)\implies\begin{vmatrix}L_1 & L_2 \\ L_3 & L_4\end{vmatrix}y=\begin{vmatrix} L_1 & f_1\!\left(t\right)\\ L_3 & f_2\!\left(t\right)\end{vmatrix}\)

Once we know what \(\displaystyle y(t)\) is, we can then substitute it into either equation in the original system.

Similarly, if we eliminate \(\displaystyle y\), we end up with \(\displaystyle \begin{vmatrix}L_1 & L_2 \\ L_3 & L_4\end{vmatrix}x=\begin{vmatrix} f_1\!\left(t\right) & L_2\\ f_2\!\left(t\right) & L_4\end{vmatrix}\)

Let us go through a couple examples.

Example 25

Find the general solution for the system

\(\displaystyle \left\{\begin{aligned}(D-4)x+3y &= 0\\-6x+(D+7)y&=0\end{aligned}\right.\)

Let us first eliminate \(\displaystyle x\).

Then it follows that we have the equation

\(\displaystyle \begin{vmatrix}D-4 & 3 \\ -6 & D+7\end{vmatrix}y=0\implies\left[(D-4)(D+7)-(3)(-6)\right]y=0\) \(\displaystyle \implies \left(D^2+3D-10\right)y=0\).

Now the characteristic equation is \(\displaystyle r^2+3r-10=0\). It follows that \(\displaystyle r=-5\) or \(\displaystyle r=2\).

Thus, \(\displaystyle y=b_1e^{2t}+b_2e^{-5t}\).

If we choose to eliminate \(\displaystyle y\) instead, we get

\(\displaystyle \begin{vmatrix}D-4 & 3 \\ -6 & D+7\end{vmatrix}x=0\implies\left[(D-4)(D+7)-(3)(-6)\right]x=0\) \(\displaystyle \implies \left(D^2+3D-10\right)x=0\).

Thus, it follows that \(\displaystyle x=a_1e^{2t}+a_2e^{-5t}\).

However, there is a slight dilemma. It appears that our solution set contains four different arbitrary constants. However, by the Theorem for Existence and Uniqueness of Linear Systems, since we have two equations in our system, we should only have exactly two different arbitrary constants. So what now? The solution is simple: Substitute both functions into one of the equations in the original system.

If we substitute them into the first equation \(\displaystyle (D-4)x+3y=0\implies x^{\prime}-4x+3y=0\), we see that

\(\displaystyle 0=\left(2a_1e^{2t}-5a_2e^{-5t}\right)-4\left(a_1e^{2t}+a_2e^{-5t}\right)+3\left(b_1e^{2t}+b_2e^{-5t}\right)\) \(\displaystyle =\left(-2a_1+3b_1\right)e^{2t}+\left(-9a_2+3b_2\right)e^{-5t}\).

We now use the fact that \(\displaystyle e^{2t}\) and \(\displaystyle e^{-5t}\) are linearly independent. Thus, it follows that \(\displaystyle -2a_1+3b_1=0\implies a_1=\tfrac{3}{2}b_1\) and \(\displaystyle -9a_2+3b_2=0\implies a_2=\tfrac{1}{3}b_2\).

Therefore, the general solution to our system is

\(\displaystyle \color{red}\boxed{x(t)=\tfrac{3}{2}b_1e^{2t}+\tfrac{1}{3}b_2e^{-5t}}\) and \(\displaystyle \color{red}\boxed{y(t)=b_1e^{2t}+b_2e^{-5t}}\)
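Here's a sympy sanity check of both the operator determinant and the final answer. Treating \(\displaystyle D\) as an ordinary symbol is valid here because the coefficients are constants:

```python
import sympy as sp

t, b1, b2 = sp.symbols('t b1 b2')

# Operator determinant: for constant coefficients we may treat D as an
# ordinary symbol, so the determinant is just a polynomial in D
D = sp.symbols('D')
op_det = sp.expand((D - 4)*(D + 7) - 3*(-6))

# The general solution from Example 25 (with a1 = 3/2 b1 and a2 = 1/3 b2)
x = sp.Rational(3, 2)*b1*sp.exp(2*t) + sp.Rational(1, 3)*b2*sp.exp(-5*t)
y = b1*sp.exp(2*t) + b2*sp.exp(-5*t)

# Check both original equations: x' - 4x + 3y = 0 and -6x + y' + 7y = 0
check1 = sp.simplify(sp.diff(x, t) - 4*x + 3*y)
check2 = sp.simplify(-6*x + sp.diff(y, t) + 7*y)
print(op_det)          # D**2 + 3*D - 10
print(check1, check2)  # 0 0
```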


The next post in the tutorial will be on matrix methods to solving systems of differential equations. I will try to post that in the next couple days.

I'm in such a mood to post Part II that I'll do it now. XD

Systems of Differential Equations (Part II - Matrix Methods)

In Part I, we covered basic techniques for solving first order systems of two (or three) differential equations. In this post, we discuss techniques used in solving systems with a larger number of equations.

Matrix-Valued Functions

A matrix-valued function is of the form

\(\displaystyle \mathbf{x}(t)=\begin{bmatrix}x_1(t)\\ x_2(t)\\ \vdots\\ x_n(t)\end{bmatrix}\) or \(\displaystyle \mathbf{A}(t)=\begin{bmatrix}a_{11}(t) & a_{12}(t) & \dots & a_{1n}(t)\\ a_{21}(t) & a_{22}(t) & \dots & a_{2n}(t)\\ \vdots & \vdots & \phantom{x}& \vdots\\ a_{m1}(t) & a_{m2}(t) & \dots & a_{mn}(t)\end{bmatrix}\)

where each entry is a function of \(\displaystyle t\). Now, \(\displaystyle \mathbf{x}(t)\) or \(\displaystyle \mathbf{A}(t)\) is differentiable if each entry is differentiable. Thus, we define \(\displaystyle \frac{\,d\mathbf{A}}{\,dt}=\left[\frac{\,da_{ij}}{\,dt}\right]\)

Let us now look into a popular method (which we will spend the rest of the post discussing) -- the Eigenvalue Method of Homogeneous Systems.


Eigenvalue Method of Homogeneous Systems

Let us consider the following first order system of \(\displaystyle n\) differential equations

\(\displaystyle \left\{\begin{aligned}x_1^{\prime} &= a_{11}x_1+a_{12}x_2+\dots+a_{1n}x_n\\x_2^{\prime} &= a_{21}x_1+a_{22}x_2+\dots+a_{2n}x_n\\ &\vdots\\ x_n^{\prime} &= a_{n1}x_1+a_{n2}x_2+\dots+a_{nn}x_n\\\end{aligned}\right.\)

It suffices to find \(\displaystyle n\) linearly independent solution vectors \(\displaystyle \mathbf{x}_1,\mathbf{x}_2,\dots,\mathbf{x}_n\) such that

\(\displaystyle \mathbf{x}(t)=c_1\mathbf{x}_1+c_2\mathbf{x}_2+\dots+c_n\mathbf{x}_n\)

is a solution to the general system.

We anticipate the solution vectors to be of the form

\(\displaystyle \mathbf{x}(t)=\begin{bmatrix}x_1\\x_2\\x_3\\\vdots\\x_n\end{bmatrix}=\begin{bmatrix}v_1e^{\lambda t}\\v_2e^{\lambda t}\\v_3e^{\lambda t}\\\vdots\\v_ne^{\lambda t}\end{bmatrix}=\begin{bmatrix}v_1\\v_2\\v_3\\\vdots\\v_n\end{bmatrix}e^{\lambda t}=\mathbf{v}e^{\lambda t}\)

where \(\displaystyle \lambda,v_1,v_2,v_3,\dots,v_n\) are appropriate scalar constants.

To expand on this, let us rewrite our general system in matrix form:

\(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\)

Now, let us substitute the anticipated solution into the differential equation to get

\(\displaystyle \left(\mathbf{v}e^{\lambda t}\right)^{\prime}=\mathbf{A}\left(\mathbf{v}e^{\lambda t}\right)\implies \lambda\mathbf{v}e^{\lambda t}=\mathbf{Av}e^{\lambda t}\)

Cancelling out \(\displaystyle e^{\lambda t}\), we now have

\(\displaystyle \lambda\mathbf{v}=\mathbf{Av}\).

From this, we see that \(\displaystyle \mathbf{x}=\mathbf{v}e^{\lambda t}\) will be a nontrivial solution of \(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\) provided that \(\displaystyle \mathbf{v}\neq\mathbf{0}\) and \(\displaystyle \mathbf{Av}\) is a scalar multiple of \(\displaystyle \mathbf{v}\).

So ... How do we find \(\displaystyle \mathbf{v}\) and \(\displaystyle \lambda\)??

First, we rewrite \(\displaystyle \lambda\mathbf{v}=\mathbf{Av}\) as \(\displaystyle \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}=\mathbf{0}\).

Now, recall from linear algebra that this equation has a nontrivial solution iff

\(\displaystyle \det\left(\mathbf{A}-\lambda\mathbf{I}\right)=0\).

Thus, \(\displaystyle \lambda\) is referred to as an eigenvalue of \(\displaystyle \mathbf{A}\), and \(\displaystyle \mathbf{v}\) is an associated eigenvector.

We also define \(\displaystyle \det\left(\mathbf{A}-\lambda\mathbf{I}\right)=0\) to be the characteristic equation of \(\displaystyle \mathbf{A}\).

Now, we lay out the steps of the eigenvalue method:

1. First solve the characteristic equation for the eigenvalues \(\displaystyle \lambda_1,\lambda_2,\dots,\lambda_n\) of the matrix \(\displaystyle \mathbf{A}\).

2. Attempt to find \(\displaystyle n\) linearly independent eigenvectors \(\displaystyle \mathbf{v}_1,\mathbf{v}_2,\dots,\mathbf{v}_n\) associated with the eigenvalues.

3. If step 2 is possible (it may not always be!), we have \(\displaystyle n\) linearly independent solutions \(\displaystyle \mathbf{x}_1=\mathbf{v}_1e^{\lambda_1t}, \mathbf{x}_2=\mathbf{v}_2e^{\lambda_2t},\dots,\mathbf{x}_n=\mathbf{v}_ne^{\lambda_nt}\). Thus, \(\displaystyle \mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)+\dots+c_n\mathbf{x}_n(t)\) is the general solution of \(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\)
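These three steps can also be carried out numerically. Here's a quick numpy sketch, using an illustrative matrix of my own choosing (not one of the tutorial's examples):

```python
import numpy as np

# An illustrative coefficient matrix (chosen for this demo only)
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])

# Steps 1 and 2: eigenvalues, and eigenvectors as the columns of V
lams, V = np.linalg.eig(A)

# Step 3: each pair (lambda_i, v_i) gives a solution x_i(t) = v_i e^{lambda_i t};
# the defining relation A v = lambda v should hold for every pair
for lam, v in zip(lams, V.T):
    assert np.allclose(A @ v, lam * v)

# n linearly independent eigenvectors <=> V is nonsingular
print(sorted(np.round(lams, 6).tolist()))  # [-1.0, 3.0]
assert abs(np.linalg.det(V)) > 1e-12
```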


Let us now go through two special cases (each illustrated by an example):

Case I: \(\displaystyle \lambda_1,\lambda_2,\dots,\lambda_n\) are real and distinct.

Let us start with an example.

Example 26

Find a general solution for the system

\(\displaystyle \left\{\begin{aligned}x_1^{\prime} & = 4x_1 + 2x_2\\ x_2^{\prime} &= 3x_1-x_2\end{aligned}\right.\)

To solve this, let us rewrite the system in matrix form:

\(\displaystyle \mathbf{x}^{\prime}=\begin{bmatrix}4 & 2\\3 & -1\end{bmatrix}\mathbf{x}\)

It follows that the characteristic equation is

\(\displaystyle \begin{vmatrix}4-\lambda & 2 \\ 3 & -1-\lambda\end{vmatrix}=-\left(4-\lambda\right)\left(1+\lambda\right)-6=\lambda^2-3\lambda-10=0\)

Thus, \(\displaystyle \lambda^2-3\lambda-10=0\implies\left(\lambda-5\right)\left(\lambda+2\right)=0\implies \lambda_1=-2\) and \(\displaystyle \lambda_2=5\).

Now that we have the eigenvalues, let us try to find the eigenvectors.

Note that the eigenvector equation in this case is

\(\displaystyle \begin{bmatrix}4-\lambda & 2 \\ 3 & -1-\lambda\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}\).

Case I: \(\displaystyle \lambda=-2\).

Here, the eigenvector equation becomes

\(\displaystyle \begin{bmatrix}6& 2 \\ 3 & 1\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}\).

This gives us the linear system

\(\displaystyle \left\{\begin{aligned}6v_1+2v_2 & =0\\ 3v_1 + v_2 &= 0\end{aligned}\right.\).

It is evident that there are infinitely many solutions. So what now? What we usually do is pick a simple value. So for example, if \(\displaystyle v_1=1\), we have \(\displaystyle v_2=-3\).

Therefore, \(\displaystyle \mathbf{v}_1=\begin{bmatrix}1\\-3\end{bmatrix}\) is the eigenvector associated to \(\displaystyle \lambda_1=-2\). Thus, \(\displaystyle \mathbf{x}_1(t)=\begin{bmatrix}1\\-3\end{bmatrix}e^{-2t}\) is a solution to the general equation.

Case II: \(\displaystyle \lambda=5\).

Here, the eigenvector equation becomes

\(\displaystyle \begin{bmatrix}-1& 2 \\ 3 & -6\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}\).

This gives us the linear system

\(\displaystyle \left\{\begin{aligned}-v_1+2v_2 & =0\\ 3v_1 - 6v_2 &= 0\end{aligned}\right.\).

It is evident that there are infinitely many solutions. So what now? What we usually do is pick a simple value. So for example, if \(\displaystyle v_2=1\), we have \(\displaystyle v_1=2\).

Therefore, \(\displaystyle \mathbf{v}_2=\begin{bmatrix}2\\1\end{bmatrix}\) is the eigenvector associated to \(\displaystyle \lambda_2=5\). Thus, \(\displaystyle \mathbf{x}_2(t)=\begin{bmatrix}2\\1\end{bmatrix}e^{5t}\) is a solution to the general equation.

It is easy to show that \(\displaystyle \mathbf{x}_1(t)\) and \(\displaystyle \mathbf{x}_2(t)\) are linearly independent (via the Wronskian).

Now, by the principle of superposition, it follows that

\(\displaystyle \color{red}\boxed{\mathbf{x}(t)=c_1\begin{bmatrix}1\\-3\end{bmatrix}e^{-2t}+c_2\begin{bmatrix}2\\1\end{bmatrix}e^{5t}}\)

satisfies \(\displaystyle \mathbf{x}^{\prime}=\begin{bmatrix}4&2\\3&-1\end{bmatrix}\mathbf{x}\)

(Written in scalar form, the solutions would be \(\displaystyle \color{red}\boxed{x_1(t)=c_1e^{-2t}+2c_2e^{5t}}\) and \(\displaystyle \color{red}\boxed{x_2(t)=-3c_1e^{-2t}+c_2e^{5t}}\))
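You can cross-check the eigenvalues and eigenvectors of Example 26 with numpy. Note that numpy returns unit-length eigenvectors, so we rescale them to match the hand-picked forms:

```python
import numpy as np

# Coefficient matrix from Example 26
A = np.array([[4.0, 2.0],
              [3.0, -1.0]])

lams, V = np.linalg.eig(A)
order = np.argsort(lams)
lams, V = lams[order], V[:, order]

# numpy normalizes eigenvectors; rescale each so it matches the
# hand-picked forms [1, -3] and [2, 1]
v1 = V[:, 0] / V[0, 0]
v2 = V[:, 1] / V[1, 1]
print(np.round(lams, 6).tolist())  # [-2.0, 5.0]
print(np.round(v1, 6).tolist(), np.round(v2, 6).tolist())
```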


Case II: \(\displaystyle \lambda_1,\lambda_2,\dots,\lambda_n\) are complex.

Prelim Theory

We are after real-valued solutions (they will turn out to be the real and imaginary parts of a complex-valued solution). When complex eigenvalues pop up, they always appear in conjugate pairs (i.e. \(\displaystyle \lambda=p+qi\) and \(\displaystyle \bar{\lambda}=p-qi\)).

Now, if \(\displaystyle \mathbf{v}\) is an eigenvector associated with \(\displaystyle \lambda\), such that

\(\displaystyle \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}=\mathbf{0}\),

then taking complex conjugates in the equation gives us

\(\displaystyle \left(\mathbf{A}-\bar{\lambda}\mathbf{I}\right)\overline{\mathbf{v}}=\mathbf{0}\)

If we take

\(\displaystyle \mathbf{v}=\begin{bmatrix}a_1+b_1i\\a_2+b_2i\\\vdots\\a_n+b_ni\end{bmatrix}=\begin{bmatrix}a_1\\a_2\\\vdots\\a_n\end{bmatrix}+\begin{bmatrix}b_1\\b_2\\\vdots\\b_n\end{bmatrix}i=\mathbf{a}+\mathbf{b}i\),

then \(\displaystyle \overline{\mathbf{v}}=\mathbf{a}-\mathbf{b}i\)

Therefore, the complex-valued solution associated with \(\displaystyle \lambda\) and \(\displaystyle \mathbf{v}\) is

\(\displaystyle \mathbf{x}(t)=\mathbf{v}e^{\lambda t}=\mathbf{v}e^{\left(p+qi\right)t}=\left(\mathbf{a}+\mathbf{b}i\right)e^{pt}\left[\cos\!\left(qt\right)+i\sin\!\left(qt\right)\right]\)

Rearranging, we have

\(\displaystyle \mathbf{x}(t)=e^{pt}\left[\mathbf{a}\cos\!\left(qt\right)-\mathbf{b}\sin\!\left(qt\right)\right]+ie^{pt}\left[\mathbf{b}\cos\!\left(qt\right)+\mathbf{a}\sin\!\left(qt\right)\right]\).


Since the real and imaginary parts of a complex-valued solution each satisfy \(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\), we obtain the two real-valued solutions

\(\displaystyle \begin{aligned}\mathbf{x}_1(t)&=\Re\left(\mathbf{x}(t)\right)=e^{pt}\left[\mathbf{a}\cos\!\left(qt\right)-\mathbf{b}\sin\!\left(qt\right)\right]\\\mathbf{x}_2(t)&=\Im\left(\mathbf{x}(t)\right)=e^{pt}\left[\mathbf{b}\cos\!\left(qt\right)+\mathbf{a}\sin\!\left(qt\right)\right]\end{aligned}\)

I leave it for you to verify we get the same set of solutions when we check the real and imaginary parts of \(\displaystyle \overline{\mathbf{v}}e^{\bar{\lambda}t}\).

Example 27

Find the general solution of the system

\(\displaystyle \begin{aligned}x_1^{\prime} &= 4x_1-3x_2\\ x_2^{\prime}&= 3x_1+4x_2\end{aligned}\)

Our coefficient matrix \(\displaystyle \mathbf{A}=\begin{bmatrix}4&-3\\3&4\end{bmatrix}\) has the characteristic equation

\(\displaystyle \begin{vmatrix}4-\lambda & -3 \\ 3 & 4-\lambda\end{vmatrix}=\left(4-\lambda\right)^2+9=0\implies \lambda=4-3i\) and \(\displaystyle \bar{\lambda}=4+3i\).

Substituting \(\displaystyle \lambda=4-3i\) into the eigenvector equation, we have

\(\displaystyle \begin{bmatrix}3i & -3\\ 3 & 3i\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}\).

Thus, we have the linear system

\(\displaystyle \left\{\begin{aligned}iv_1-v_2 & = 0\\ v_1 + iv_2 & = 0\end{aligned}\right.\)

If we take \(\displaystyle v_1=1\), then \(\displaystyle v_2=i\). Thus, \(\displaystyle \mathbf{v}=\begin{bmatrix}1\\i\end{bmatrix}\) is a complex eigenvector associated with \(\displaystyle \lambda=4-3i\).

Now, the corresponding complex solution is

\(\displaystyle \mathbf{x}(t)=\begin{bmatrix}1\\i\end{bmatrix}e^{\left(4-3i\right)t}=\begin{bmatrix}1\\i\end{bmatrix}e^{4t}\left(\cos\!\left(3t\right)-i\sin\!\left(3t\right)\right)=e^{4t}\begin{bmatrix}\cos\!\left(3t\right)-i\sin\!\left(3t\right)\\i\cos\!\left(3t\right)+\sin\!\left(3t\right)\end{bmatrix}\)


Taking real and imaginary parts, we have

\(\displaystyle \mathbf{x}_1(t)=\Re\left(\mathbf{x}(t)\right)=e^{4t}\begin{bmatrix}\cos\!\left(3t\right)\\\sin\!\left(3t\right)\end{bmatrix}\) and \(\displaystyle \mathbf{x}_2(t)=\Im\left(\mathbf{x}(t)\right)=e^{4t}\begin{bmatrix}-\sin\!\left(3t\right)\\\cos\!\left(3t\right)\end{bmatrix}\)

Therefore, a real-valued general solution to \(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\) is

\(\displaystyle \color{red}\boxed{\mathbf{x}(t)=c_1\mathbf{x}_1(t)+c_2\mathbf{x}_2(t)=e^{4t}\begin{bmatrix}c_1\cos\!\left(3t\right)-c_2\sin\!\left(3t\right)\\c_1\sin\!\left(3t\right)+c_2\cos\!\left(3t\right)\end{bmatrix}}\).
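A sympy check that both real-valued solutions satisfy \(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\) (again, my own verification, not part of the method):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[4, -3], [3, 4]])

# Real and imaginary parts from Example 27
x1 = sp.exp(4*t)*sp.Matrix([sp.cos(3*t), sp.sin(3*t)])
x2 = sp.exp(4*t)*sp.Matrix([-sp.sin(3*t), sp.cos(3*t)])

# Each should satisfy x' = A x, so these residuals should vanish
r1 = sp.simplify(x1.diff(t) - A*x1)
r2 = sp.simplify(x2.diff(t) - A*x2)
print(r1.T, r2.T)  # Matrix([[0, 0]]) Matrix([[0, 0]])
```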


I will have to post a Part III for Case III: \(\displaystyle \lambda_1,\lambda_2,\dots,\lambda_n\) are real, but not distinct.

I will have that posted sometime tomorrow or the next day.

System of Differential Equations (Part III - Matrix Methods (cont.))

In Part II, we ended with two special cases for the eigenvalues of an n x n matrix system. We now devote an entire post to the last special case.


Case III: \(\displaystyle \lambda_1,\lambda_2,\dots,\lambda_n\) are real but not distinct.

When \(\displaystyle \lambda_1,\lambda_2,\dots,\lambda_n\) were distinct (real or complex), then the general solution of
\(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\) took on the form

\(\displaystyle \mathbf{x}(t)=c_1\mathbf{v}_1e^{\lambda_1t}+c_2\mathbf{v}_2e^{\lambda_2t}+\dots+c_n\mathbf{v}_ne^{\lambda_nt}\)

We now consider the case when the characteristic equation \(\displaystyle \left|\mathbf{A}-\lambda\mathbf{I}\right|=0\) doesn't have \(\displaystyle n\) distinct roots; that is, the characteristic equation has at least one repeated root.

In that case, we refer to the repeated eigenvalue as having multiplicity greater than one. An eigenvalue has multiplicity k if it is a k-fold root of the characteristic equation. If \(\displaystyle \lambda\) is of multiplicity k, then there is at least one eigenvector \(\displaystyle \mathbf{v}\) associated with it. However, we may not always be able to find k linearly independent eigenvectors associated with \(\displaystyle \lambda\) (in that case \(\displaystyle \lambda\) is said to have a defect, which will be discussed later). If we can find k linearly independent eigenvectors associated with \(\displaystyle \lambda\), we say that \(\displaystyle \lambda\) is complete.

Example 28

Find a general solution of the system

\(\displaystyle \mathbf{x}^{\prime}=\begin{bmatrix}9 & 4 & 0\\-6 & -1 & 0\\6 & 4 & 3\end{bmatrix}\mathbf{x}\)

The characteristic equation of \(\displaystyle \mathbf{A}=\begin{bmatrix}9 & 4 & 0\\-6 & -1 & 0\\6 & 4 & 3\end{bmatrix}\) is

\(\displaystyle \begin{vmatrix}9-\lambda & 4 & 0\\-6 & -1-\lambda & 0\\6 & 4 & 3-\lambda\end{vmatrix}=(3-\lambda)\begin{vmatrix}9-\lambda & 4\\ -6 & -1-\lambda\end{vmatrix}\) \(\displaystyle =(3-\lambda)(\lambda^2-8\lambda+15)=(5-\lambda)(3-\lambda)^2=0\)
Here, we see that \(\displaystyle \lambda_1=5\) and \(\displaystyle \lambda_2=3\) with multiplicity 2.

Case I: \(\displaystyle \lambda=5\)

The eigenvector equation is

\(\displaystyle \begin{bmatrix}4 & 4 & 0\\-6 & -6 & 0\\6 & 4 & -2\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}\)

Thus, we have the following system of equations:

\(\displaystyle \left\{\begin{aligned}4v_1+4v_2 & = 0\\-6v_1-6v_2&=0\\6v_1+4v_2-2v_3&=0\end{aligned}\right.\)

The first two reduce to \(\displaystyle v_2=-v_1\).

Now, it follows that the third equation can be written as \(\displaystyle 2v_1-2v_3=0\implies v_3=v_1\). Thus, if we pick \(\displaystyle v_1=1\), we have the eigenvector \(\displaystyle \mathbf{v}_1=\begin{bmatrix}1\\-1\\1\end{bmatrix}\) associated with \(\displaystyle \lambda=5\).

Case II: \(\displaystyle \lambda=3\)

The eigenvector equation is

\(\displaystyle \begin{bmatrix}6 & 4 & 0\\-6 & -4 & 0\\6 & 4 & 0\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}\)

Thus, we have a nonzero eigenvector iff \(\displaystyle 6v_1+4v_2=0\implies v_2=-\tfrac{3}{2}v_1\), while \(\displaystyle v_3\) is arbitrary (the third column is zero). So if we pick \(\displaystyle v_3=1\), we can let \(\displaystyle v_1=v_2=0\). Thus, \(\displaystyle \mathbf{v}_2=\begin{bmatrix}0\\0\\1\end{bmatrix}\) is an eigenvector associated with \(\displaystyle \lambda=3\). However, there is one more eigenvector!

If we pick \(\displaystyle v_3=0\), we can pick \(\displaystyle v_1\) and \(\displaystyle v_2\) such that we don't have the zero vector. So if we take \(\displaystyle v_1=2\), we see that \(\displaystyle v_2=-3\). Thus, \(\displaystyle \mathbf{v}_3=\begin{bmatrix}2\\-3\\0\end{bmatrix}\) is another eigenvector associated with \(\displaystyle \lambda=3\).

Therefore, the general solution is

\(\displaystyle \color{red}\boxed{\mathbf{x}(t)=c_1\begin{bmatrix}1\\-1\\1\end{bmatrix}e^{5t}+c_2\begin{bmatrix}0\\0\\1\end{bmatrix}e^{3t}+c_3\begin{bmatrix}2\\-3\\0\end{bmatrix}e^{3t}}\)

Remark: With regards to the two eigenvectors for \(\displaystyle \lambda=3\), the fact that \(\displaystyle v_2=-\tfrac{3}{2}v_1\) is worth taking note of. The eigenvector can be rewritten as

\(\displaystyle \mathbf{v}=\begin{bmatrix}v_1\\-\frac{3}{2}v_1\\v_3\end{bmatrix}=v_3\begin{bmatrix}0\\0\\1\end{bmatrix}+\tfrac{1}{2}v_1\begin{bmatrix}2\\-3\\0\end{bmatrix}=v_3\mathbf{v}_2+\tfrac{1}{2}v_1\mathbf{v}_3\)

Thus, we could use \(\displaystyle \mathbf{v}\) in place of either eigenvector and still get the same general solution we did when considering both eigenvectors. This tells us that we don't have to worry about making the right choice -- it's just advisable that we pick the simplest one.
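A quick numpy cross-check of Example 28: the eigenvalues should come out as 5, 3, 3, and the eigenspace for \(\displaystyle \lambda=3\) should be two-dimensional (i.e. \(\displaystyle \lambda=3\) is complete):

```python
import numpy as np

# Coefficient matrix from Example 28
A = np.array([[9.0, 4.0, 0.0],
              [-6.0, -1.0, 0.0],
              [6.0, 4.0, 3.0]])

lams = np.sort(np.linalg.eigvals(A).real)
print(np.round(lams, 6).tolist())  # [3.0, 3.0, 5.0]

# lambda = 3 is complete: A - 3I has rank 1, so its null space
# (the eigenspace) is 3 - 1 = 2 dimensional
rank = np.linalg.matrix_rank(A - 3*np.eye(3))
print(rank)  # 1
```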


Defective Eigenvalues

We start this section with an example.

Example 29

Consider the coefficient matrix \(\displaystyle \mathbf{A}=\begin{bmatrix}1 &-3\\3 & 7\end{bmatrix}\).

The characteristic equation is \(\displaystyle \begin{vmatrix}1-\lambda & -3\\ 3 & 7-\lambda\end{vmatrix}=\lambda^2-8\lambda+16=\left(\lambda-4\right)^2=0\).

Thus, \(\displaystyle \lambda=4\) is an eigenvalue of multiplicity two.

Now, the eigenvector equation is

\(\displaystyle \begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}\begin{bmatrix}v_1\\v_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}\).

Thus, it follows that our system of equations is

\(\displaystyle \left\{\begin{aligned}-3v_1-3v_2 & = 0\\3v_1 + 3v_2 & = 0\end{aligned}\right.\)

Thus, \(\displaystyle v_2=-v_1\).

Thus the eigenvector is of the form \(\displaystyle \mathbf{v}=\begin{bmatrix}v_1\\-v_1\end{bmatrix}=v_1\begin{bmatrix}1\\-1\end{bmatrix}\).

This implies that all eigenvectors associated with \(\displaystyle \lambda=4\) will be a constant multiple of \(\displaystyle \begin{bmatrix}1\\-1\end{bmatrix}\). Therefore, there is only one linearly independent eigenvector associated with \(\displaystyle \lambda=4\), making \(\displaystyle \lambda=4\) incomplete.

The eigenvalue in the above example is incomplete, or defective.

Now, if an eigenvalue \(\displaystyle \lambda\) of multiplicity \(\displaystyle k\) has only \(\displaystyle p<k\) linearly independent eigenvectors, then \(\displaystyle d=k-p\) is the number of missing eigenvectors - the defect of the defective eigenvalue \(\displaystyle \lambda\).

In Example 29, the defect would be \(\displaystyle d=2-1=1\).
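The defect computation amounts to a rank calculation, which we can sketch with numpy:

```python
import numpy as np

# Matrix from Example 29
A = np.array([[1.0, -3.0],
              [3.0, 7.0]])
n, k = 2, 2  # matrix size, and multiplicity of lambda = 4

# p = dimension of the eigenspace = n - rank(A - 4I)
p = n - np.linalg.matrix_rank(A - 4*np.eye(2))
defect = k - p
print(p, defect)  # 1 1
```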

What we do now is consider a way to solve a system of differential equations given the defect \(\displaystyle d=1\).


Case IV: \(\displaystyle \lambda\) has multiplicity two and is defective.

Suppose that \(\displaystyle \lambda\) has one linearly independent eigenvector, implying that \(\displaystyle \mathbf{x}_1(t)=\mathbf{v}_1e^{\lambda t}\) is the only solution (that we know of) to \(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\).

However, we hope to find a second solution of the form \(\displaystyle \mathbf{x}_2(t)=\mathbf{v}_2te^{\lambda t}\). Substituting it into the system, we have

\(\displaystyle \mathbf{v}_2e^{\lambda t}+\lambda\mathbf{v}_2te^{\lambda t}=\mathbf{Av}_2te^{\lambda t}\).

Since the coefficients of \(\displaystyle e^{\lambda t}\) and \(\displaystyle te^{\lambda t}\) need to balance, it follows from the above equation that \(\displaystyle \mathbf{v}_2=\mathbf{0}\) and consequently, \(\displaystyle \mathbf{x}_2(t)\equiv\mathbf{0}\).

Since that didn't work, let us extend our original idea and replace \(\displaystyle \mathbf{v}_2t\) with \(\displaystyle \mathbf{v}_1t+\mathbf{v}_2\). So we suppose now that the second solution will take on the form

\(\displaystyle \mathbf{x}_2(t)=\mathbf{v}_1te^{\lambda t}+\mathbf{v_2}e^{\lambda t}\).

Substituting this into \(\displaystyle \mathbf{x}^{\prime}=\mathbf{Ax}\), we get

\(\displaystyle \left(\mathbf{v}_1+\lambda\mathbf{v}_2\right)e^{\lambda t}+\lambda\mathbf{v}_1te^{\lambda t}=\mathbf{Av}_1te^{\lambda t}+\mathbf{Av}_2e^{\lambda t}\)

Comparing coefficents of \(\displaystyle e^{\lambda t}\) and \(\displaystyle te^{\lambda t}\), we see that

\(\displaystyle \mathbf{v}_1+\lambda\mathbf{v}_2=A\mathbf{v}_2\implies \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v_2}=\mathbf{v_1}\)


\(\displaystyle \lambda\mathbf{v}_1=\mathbf{Av}_1\implies \left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_1=\mathbf{0}\)

The second equation confirms that \(\displaystyle \mathbf{v}_1\) is an eigenvector for \(\displaystyle \lambda\). Now, it follows that \(\displaystyle \mathbf{v}_2\) satisfies the equation

\(\displaystyle \left(\mathbf{A}-\lambda\mathbf{I}\right)^2\mathbf{v}_2=\left(\mathbf{A}-\lambda\mathbf{I}\right)\left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_2=\left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_1=\mathbf{0}\)

This tells us that it suffices to find a single solution \(\displaystyle \mathbf{v}_2\) to the equation \(\displaystyle \left(\mathbf{A}-\lambda\mathbf{I}\right)^2\mathbf{v}_2=\mathbf{0}\) such that \(\displaystyle \mathbf{v}_1=\left(\mathbf{A}-\lambda\mathbf{I}\right)\mathbf{v}_2\neq\mathbf{0}\).

It is always possible to find a solution when the defective eigenvalue \(\displaystyle \lambda\) has multiplicity two.

Let us go through an example that illustrates this process.


Example 30

Find the general solution to the system

\(\displaystyle \mathbf{x}^{\prime}=\begin{bmatrix}1 & -3\\ 3 & 7\end{bmatrix}\mathbf{x}\)

In example 29, we showed that the characteristic equation produced a defective eigenvalue \(\displaystyle \lambda=4\) of multiplicity two.

We now start by calculating \(\displaystyle \left(\mathbf{A}-4\mathbf{I}\right)^2\):

\(\displaystyle \left(\mathbf{A}-4\mathbf{I}\right)^2=\begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}\begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}=\begin{bmatrix}0 & 0\\0 & 0\end{bmatrix}\)

Thus, \(\displaystyle \left(\mathbf{A}-4\mathbf{I}\right)^2\mathbf{v}_2=\mathbf{0}\implies \begin{bmatrix}0 & 0\\0 & 0\end{bmatrix}\mathbf{v}_2=\mathbf{0}\) implies that \(\displaystyle \mathbf{v}_2\) can be of any (nonzero) form.

So if we take \(\displaystyle \mathbf{v}_2=\begin{bmatrix}1\\0\end{bmatrix}\), then we see that

\(\displaystyle \left(\mathbf{A}-4\mathbf{I}\right)\mathbf{v}_2=\begin{bmatrix}-3 & -3\\3 & 3\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}-3\\3\end{bmatrix}=\mathbf{v}_1\).

This vector is nonzero, so it is an eigenvector associated with the eigenvalue \(\displaystyle \lambda=4\). (Note that it is -3 times the eigenvector we found in Example 29.)

Therefore, the two solutions to the system are

\(\displaystyle \mathbf{x}_1(t)=\mathbf{v}_1e^{4t}=\begin{bmatrix}-3\\3\end{bmatrix}e^{4t}\)


\(\displaystyle \mathbf{x}_2(t)=\left(\mathbf{v}_1t+\mathbf{v}_2\right)e^{4t}=\begin{bmatrix}-3t+1\\3t\end{bmatrix}e^{4t}\)

Therefore, the general solution to the system is

\(\displaystyle \color{red}\boxed{\mathbf{x}(t)=c_1\begin{bmatrix}-3\\3\end{bmatrix}e^{4t}+c_2\begin{bmatrix}-3t+1\\3t\end{bmatrix}e^{4t}=\begin{bmatrix}-3c_1-3c_2t+c_2\\3c_1+3c_2t\end{bmatrix}e^{4t}}\)
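Finally, here's a sympy check of the generalized eigenvector chain and both solutions in Example 30:

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, -3], [3, 7]])
B = A - 4*sp.eye(2)

# Generalized eigenvector chain from Example 30
v2 = sp.Matrix([1, 0])
v1 = B*v2                                 # should be [-3, 3]
print(v1.T)                               # Matrix([[-3, 3]])

# (A - 4I)^2 annihilates everything, and v1 is a genuine eigenvector
print(B*B == sp.zeros(2, 2), B*v1 == sp.zeros(2, 1))  # True True

# The two chain solutions satisfy x' = A x
x1 = v1*sp.exp(4*t)
x2 = (v1*t + v2)*sp.exp(4*t)
r1 = sp.simplify(x1.diff(t) - A*x1)
r2 = sp.simplify(x2.diff(t) - A*x2)
print(r1 == sp.zeros(2, 1), r2 == sp.zeros(2, 1))  # True True
```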


This will conclude the systems of differential equations section of the tutorial.

I will start working on the first of three (or maybe four) posts on Laplace Transforms and their use in IVPs.