
Difference equation tutorial: draft of part I

  1. #1 chisigma (MHF Contributor)

    Difference equation tutorial: draft of part I

    Dear friends of MHF,
    in the two and a half years of my presence on MHF I have often noticed some difficulties in the treatment of difference equations, an important chapter of Discrete Maths that [maybe...] is a little unfairly undervalued. After a discussion about that with the staff, I decided to write a tutorial on difference equations, and in the next posts you can read the draft of Part I: general concepts about sequences and first order difference equations, which for size reasons has been divided into three parts...

    I will very much appreciate all the error reports, omissions, modifications, suggestions and requests for clarification that you may wish to express about my work...

    Kind regards

    \chi \sigma

  2. #2 chisigma (MHF Contributor)

    Re: Difference equation tutorial: draft of part I

    Difference equation tutorial- Part I: general concepts about sequences and first order difference equations

    By: Carlo Sabatini

    1. Definitions

    In mathematics there are various definitions of 'sequence', so we adopt, among them, the one most suitable for the purposes of this tutorial. We define a sequence as a set of numbers, each associated with a natural number n, supposing that the set of natural numbers includes 0. Each term of a sequence a is written as a_{n}, n=0,1,2,…. In general we will treat the terms 'sequence' and 'discrete function' as synonyms. Each term a_{n} can be real or complex, so that there are real and complex sequences. If nothing is specified, a sequence is to be considered a real sequence.

    Like a 'conventional' function of a real variable, a discrete function can be defined in explicit or implicit form. Defining a sequence in explicit form means having a computational formula that permits, for each n, the computation of the corresponding a_{n}. A simple example of a sequence defined in explicit form is…

    a_{n}=2 n (1.1)

    Defining a sequence in implicit form means to have an expression of the type…

    f(n,a_{n},a_{n+1},a_{n+2},…,a_{n+k})=0 (1.2)

    ... combined with the ‘initial conditions’…

    a_{0}=\alpha_{0},a_{1}=\alpha_{1},…,a_{k-1}=\alpha_{k-1} (1.3)

    An expression like (1.2), combined with initial conditions like (1.3), is called a recursive relation or, alternatively, a difference equation. The index k in (1.2) is the order of the difference equation. 'Solving' a difference equation means finding, given (1.2) and (1.3), an expression for the a_{n} like (1.1). As we will see later, that is not always feasible or 'comfortable'. The reason for the 'dual name' will be explained in detail in a later section.
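
    To make the distinction concrete, here is a minimal sketch in Python [the language and the example recursion are conveniences of this draft, not part of the theory]: the same sequence a_{n}=2 n obtained from its explicit form (1.1) and from an implicit form…

[code]
# A minimal sketch: the same sequence given explicitly, a_n = 2n, and
# implicitly by the assumed recursion a_{n+1} - a_n - 2 = 0 with a_0 = 0.

def a_explicit(n):
    # explicit form, as in (1.1): a computational formula for a_n
    return 2 * n

def a_recursive(n_max):
    # implicit form: unwind the recursion step by step
    a = [0]                    # initial condition a_0 = 0
    for n in range(n_max):
        a.append(a[n] + 2)     # a_{n+1} = a_n + 2
    return a

print([a_explicit(n) for n in range(6)])   # [0, 2, 4, 6, 8, 10]
print(a_recursive(5))                      # [0, 2, 4, 6, 8, 10]
[/code]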

    2. Some basic sequences

    Now we define some 'fundamental sequences' that are commonly used in practically all branches of Mathematics. The first of them is the unit sample sequence, defined as…

    \delta_{n}=\begin{cases}1&\text{if}\ n=0\\ 0 &\text{if}\ n>0\end{cases} (2.1)

    The unit sample sequence \delta_{n} plays the same role as the 'impulsive function' \delta(t) among continuous functions but, and that is remarkable, it doesn't suffer from any 'complication' in its definition at n=0.

    The second 'fundamental sequence' [or, more exactly, 'fundamental set of sequences'…] is the exponential sequence with base \alpha, the general term of which is usually written as \alpha^{n} and is defined by the recursive relation…

    a_{n+1}= \alpha\ a_{n}\ ,\ a_{0}=1\ ,\ \alpha \in \mathbb{R} (2.2)

    It is important to specify that \alpha can be any real number, even negative and even 0, so that no 'ambiguity' in the definition of 0^{0} exists in discrete mathematics. In particular the sequence 0^{n} is different from the 'all zero sequence': in fact 0^{n}=\delta_{n}. A significant difference between continuous and discrete mathematics lies in the fact that in the former the 'exponential function' has as base the number e, defined as…

    e=\lim_{n\rightarrow \infty} (1+\frac{1}{n})^{n}

    That is in continuous mathematics. In discrete mathematics any real number \alpha can be the base of an 'exponential sequence', and no particular 'privilege' exists.

    The last 'fundamental sequence' we are going to define is the factorial sequence, the general term of which is written as n! and is defined by the recursive relation…

    a_{n+1}= (n+1)\ a_{n}\ ,\ a_{0}=1 (2.3)

    Also for factorials the definition (2.3) eliminates any 'complication' regarding 0!, so that we can say that this is an important 'common property' of the three 'fundamental sequences' examined in this section.
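
    All three recursive definitions translate directly into code; the following Python sketch [illustrative only] generates them and shows that 0^{n}=\delta_{n} and 0!=1 require no special treatment…

[code]
# The three 'fundamental sequences', generated from their recursions only.

def unit_sample(n):
    # unit sample sequence (2.1)
    return 1 if n == 0 else 0

def exponential(alpha, n_max):
    # (2.2): a_{n+1} = alpha * a_n, a_0 = 1, so that a_n = alpha^n
    a = [1]
    for n in range(n_max):
        a.append(alpha * a[n])
    return a

def factorial_seq(n_max):
    # (2.3): a_{n+1} = (n+1) * a_n, a_0 = 1, so that a_n = n!
    a = [1]
    for n in range(n_max):
        a.append((n + 1) * a[n])
    return a

print([unit_sample(n) for n in range(5)])   # [1, 0, 0, 0, 0]
print(exponential(0, 4))                    # [1, 0, 0, 0, 0]: 0^n = delta_n
print(exponential(2, 4))                    # [1, 2, 4, 8, 16]
print(factorial_seq(5))                     # [1, 1, 2, 6, 24, 120]: 0! = 1
[/code]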

    3. Transforming a recursive relation into a difference equation and vice-versa

    The reason for the 'duality of the name' can now be explained, starting from the definitions of first difference, second difference and 'difference of order k'. Given a sequence a_{n}, we define as first difference the quantity…

    \Delta_{n}=a_{n+1}-a_{n} (3.1)

    … and second difference the quantity…

    \Delta_{n}^{2}= \Delta_{n+1}-\Delta_{n}=a_{n+2}-2\ a_{n+1}+a_{n} (3.2)

    In general the difference of order k of a sequence is the quantity…

    \Delta_{n}^{k}=\Delta_{n+1}^{k-1}-\Delta_{n}^{k-1} (3.3)

    A useful property of the recursive relation written in the form (1.2) is the fact that it can be rewritten in terms of differences as…

    f(n,a_{n},\Delta_{n},\Delta_{n}^{2},…,\Delta_{n}^{k})=0 (3.4)

    This possibility, as we will see in a later section, may be very useful, and an illustrative example is appropriate. The following second order recursive relation…

    a_{n+2}=a_{n+1}+a_{n} (3.5)

    … is well known and, with the initial conditions a_{0}=0 and a_{1}=1, it generates the 'Fibonacci sequence'. Now with simple steps (3.5) is transformed into the form (3.4) as…

    \Delta_{n}^{2}+\Delta_{n}-a_{n}=0 (3.6)

    In the following sections we will use both the expressions (1.2) and (3.4), simply taking into account which is more 'comfortable', and in both cases we will use the term 'difference equation'…
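
    The equivalence of (3.5) and (3.6) can also be checked numerically, as in the following Python sketch [illustrative only]…

[code]
# Numerical check that the Fibonacci recursion (3.5) is equivalent to the
# difference form (3.6).

def diff(seq):
    # first difference (3.1); iterating it gives higher orders, as in (3.3)
    return [seq[n + 1] - seq[n] for n in range(len(seq) - 1)]

a = [0, 1]                      # initial conditions a_0 = 0, a_1 = 1
for n in range(10):
    a.append(a[n + 1] + a[n])   # (3.5)

d1 = diff(a)                    # Delta_n
d2 = diff(d1)                   # Delta_n^2

# (3.6): Delta_n^2 + Delta_n - a_n = 0 for every n
print(all(d2[n] + d1[n] - a[n] == 0 for n in range(len(d2))))   # True
[/code]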

  3. #3 chisigma (MHF Contributor)

    Re: Difference equation tutorial: draft of part I

    4. First order difference equations: some particular cases

    As an illustrative example we start by considering the first order recursive relation…

    \Delta_{n}= a_{n+1}-a_{n}= n\ a_{n}+n^{2}\ ; a_{0}=1 (4.1)
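
    A direct iteration of (4.1) can be sketched in a few lines of Python [the language is a convenience of this draft]…

[code]
# Direct iteration of (4.1): a_{n+1} = a_n + n*a_n + n^2, a_0 = 1.
a = [1]
for n in range(4):
    a.append(a[n] + n * a[n] + n ** 2)
print(a)   # [1, 1, 3, 13, 61]
[/code]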

    The values of the a_{n} are computable simply by performing the elementary operations illustrated in (4.1), and we obtain a_{0}=1\ ,\ a_{1}=1\ ,\ a_{2}=3\ ,\ a_{3}=13\ ,\ a_{4}=61\ ,\ .... Such a solution is of course always possible, but it is rarely useful. Finding the explicit form of the sequence is 'the best', but, especially for nonlinear difference equations, it is often a hard task. In many practical situations, however, one may only be interested in the convergence or divergence of the solution, i.e., given a first order difference equation written in standard form…

    \Delta_{n}= f(n,a_{n})\ ;\ a_{0}=\alpha (4.2)

    … to find [if it exists…] the limit…

    \lim_{n \rightarrow \infty} a_{n}=a_{0}+\lim_{n \rightarrow \infty} \sum_{k=0}^{n-1} \Delta_{k} (4.3)

    Looking at (4.3) it is clear that the problem of convergence of the solution is the problem of convergence of the series on the right-hand side. The necessary condition for the convergence of a series is well known: the general term must tend to 0, so that the necessary condition here is…

    \lim_{n \rightarrow \infty} \Delta_{n}=0 (4.4)

    In order to obtain sufficient conditions as well, let us suppose that we are in the particular case...

    \Delta_{n}=f(a_{n}) (4.5)

    … so that f(*) is a function of a_{n} only. In this case the condition (4.4) implies that, if the solution converges to a finite limit x_{0}, then it must be f(x_{0})=0. Now we suppose that the function f(*), for which f(x_{0})=0, and its derivative are continuous in an interval a<x<b containing x_{0}, and that in this interval f(*) has no other zeros. We have three distinct cases…

    a) at x=x_{0} we have f^{\prime}(x_{0})<0, in which case x_{0} is called an attractive fixed point

    b) at x=x_{0} we have f^{\prime}(x_{0})>0, in which case x_{0} is called a repulsive fixed point

    c) at x=x_{0} we have f^{\prime}(x_{0})=0, and this case requires a particular treatment that will not be examined here…

    From (4.5) we can easily verify that, if f(*) has a zero at x=x_{0}, then the initial condition a_{0}=x_{0} will generate the 'all x_{0} sequence', no matter whether x_{0} is an attractive or a repulsive fixed point. If that is not the case, then x_{0} can be the finite limit of the solution only if x_{0} is an attractive fixed point. The next step is answering the question: well!... but when does a solution effectively converge to x_{0}?... A first sufficient condition is given by the…

    Theorem 4.1 - If x_{0} is an attractive fixed point and there exists an interval a<x<b where for any x \ne x_{0} we have…

    |f(x)|<|x_{0}-x| (4.6)

    … and the initial value a_{0} \in [a,b], then the solution converges monotonically to x_{0}.

    Proof - In this case the series (4.3), as a consequence of (4.6), has all its terms of the same sign, so that we can invoke the ratio test for convergence. Condition (4.6) also guarantees that…

    \frac{\Delta_{n}}{\Delta_{n-1}}\le\lambda < 1 (4.7)

    … so that the convergence of the solution to x_{0} is demonstrated.

    A second condition is given by the…

    Theorem 4.2 - If x_{0} is an attractive fixed point and there exists an interval a<x<b where for any x \ne x_{0} we have…

    |x_{0}-x|<|f(x)|<2\ |x_{0}-x| (4.8)

    … and the initial value a_{0} \in [a,b], then the solution converges to x_{0} and the convergence is 'oscillating'.

    Proof - In this case the series (4.3), as a consequence of (4.8), has alternating terms and, because the |\Delta_{n}| are decreasing and their limit as n tends to infinity is 0, the convergence of the solution to x_{0} is proved.

    At this point the best way to proceed is to give some illustrative examples, and we will do that by applying theorems 4.1 and 4.2 to a very popular procedure for solving nonlinear equations: Newton's method. In the seventeenth century the British philosopher, physicist and mathematician [Sir] Isaac Newton proposed an iterative way to find the solution of an equation written in the form…

    \varphi(x)=0 (4.9)

    According to Newton, if a sequence defined by the difference equation…

    \Delta_{n}=a_{n+1}-a_{n}= f(a_{n})\ ;\ f(x)=-\frac{\varphi(x)}{\varphi^{\prime}(x)} (4.10)

    … 'started up' with a certain initial value a_{0} has limit x_{0}, then x_{0} is a solution of (4.9). Since we are in a 'historical context', we will now apply Newton's method going back more than four centuries before him. In the year 1225 the Italian mathematician Leonardo Fibonacci studied the third order equation…

    \varphi(x)=x^{3}+2\ x^{2}+10\ x-20=0 (4.11)

    … and obtained the solution x_{0}= 1.368808107, exact to the last digit. Only three centuries later would the Italian mathematicians Scipione del Ferro and Nicolò Tartaglia, independently of each other, discover the solving procedure for the cubic equation, so how Fibonacci obtained this remarkable result is still unknown. Now we try to apply Newton's method to verify Fibonacci's solution, and first we compute…

    f(x)= - \frac{x^{3}+2\ x^{2}+10\ x-20}{3\ x^{2}+4\ x +10} (4.12)

    … and then we start the Newton iterations (4.10) with the [not well meditated…] value a_{0}=1…

    a_{0} = 1

    a_{1} = 1.41176470588…

    a_{2} = 1.36933647059…

    a_{3} = 1.36880818862…

    a_{4} = 1.36880810782…

    … and for greater values of n the result remains the same to 12 digits. The convergence is very fast: in only four iterations we have obtained Fibonacci's result. Regarding the a_{n} we observe that from n=0 to n=1 the sequence increases, and for greater n it is decreasing. That is not a surprise if we observe the diagram of f(x) in the figure…



    [fig. 4.1]

    In the figure the 'black line' is f(x), the 'blue line' is g_{1}(x)=x_{0}-x and the 'red line' is g_{2}(x)= 2\ (x_{0}-x). If we observe the figure with care, we note that for a_{0} in the region x<x_{0} we are in the conditions of theorem 4.2 and the convergence is oscillating, while for a_{0} in the region x>x_{0} we are in the conditions of theorem 4.1 and the convergence is monotonic, and that is true for all possible values of a_{0}. In other words Newton's method, applied to Fibonacci's equation, always converges, no matter what a_{0} is. Unfortunately that is not always true, as we will see in the next example.
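
    The iterations above are easy to reproduce, as in this Python sketch [illustrative only; f(x) is taken from (4.12)]…

[code]
# Newton iterations (4.10) for Fibonacci's cubic (4.11), started at a_0 = 1.

def f(x):
    # f(x) = -phi(x)/phi'(x), as in (4.12)
    return -(x ** 3 + 2 * x ** 2 + 10 * x - 20) / (3 * x ** 2 + 4 * x + 10)

a = 1.0
for n in range(5):
    print(n, a)        # prints a_0 = 1, a_1 = 1.41176..., a_4 = 1.36880810...
    a = a + f(a)       # Delta_n = f(a_n)
[/code]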

    We want to apply Newton's method to the equation…

    \varphi(x)= \sin x=0 (4.13)

    Of course we know that x=0 is a solution of (4.13), but what we are interested in is verifying the convergence of the method. First we compute f(x) with (4.10)…

    f(x)= -\tan x (4.14)

    ...and then we observe the diagram of f(x)…



    [fig. 4.2]

    Also in this case the 'black line' is f(x), the 'blue line' is g_{1}(x)= x_{0}-x and the 'red line' is g_{2}(x)=2\ (x_{0}-x). For -a<a_{0}<a, where a is the root of \tan a = 2\ a, i.e. a=1.165561…, we are in the conditions of theorem 4.2 and the Newton sequence will converge to the attractive fixed point x_{0}=0 with oscillations. For a_{0}<-a or a_{0}>a the convergence criteria of theorems 4.1 and 4.2 aren't satisfied and the Newton sequence will diverge. Before proceeding to the next section it is useful to apply the procedure just described to a very popular biological mathematical model: the inhibited growth model. According to this model the population of a bacterial colony grows according to the solution of the difference equation…

    \Delta_{n}=a_{n+1}-a_{n}= \beta\ a_{n}\ (1-\frac{a_{n}}{M})\ ;\ a_{0}=a (4.15)

    … where \beta is the 'natural' bacterial growth rate, i.e. the growth rate without any constraint, and M is the maximum number of bacteria that the 'environmental resources' can support. If we 'normalize' the population to M, the f(x) in this case is…

    f(x)=\beta\ M\ x\ (1-x) (4.16)

    Using the described procedure we can identify two 'critical values' of the constant \beta M. For \beta M=1, f(x) is illustrated in fig. 4.3…



    [fig. 4.3]

    As in the previous cases the 'black line' is f(x), the 'blue line' is g_{1}(x)= x_{0}-x and the 'red line' is g_{2}(x)=2\ (x_{0}-x). For 0<a_{0}<1 we are in the conditions of theorem 4.1 and the sequence will converge to the attractive fixed point x_{0}=1 [i.e. the population tends to M] 'monotonically increasing'. The same holds for any 0< \beta M< 1. For \beta M=2, f(x) is illustrated in fig. 4.4…



    [fig. 4.4]

    Again the 'black line' is f(x), the 'blue line' is g_{1}(x)= x_{0}-x and the 'red line' is g_{2}(x)=2\ (x_{0}-x). For 0<a_{0}<1 we are in the conditions of theorem 4.2 and the sequence will converge to the attractive fixed point x_{0}=1 'oscillating'. The same holds for any 1< \beta M< 2. For \beta M> 2 the solution of the difference equation (4.15) will no longer converge, no matter what a_{0} is.
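
    The three regimes can be verified numerically, as in this Python sketch [the starting value and the sampled values of \beta M are assumed choices of this draft]…

[code]
# The three regimes of (4.15), with beta*M treated as a single dimensionless
# knob as in the text: x_{n+1} = x_n + bM * x_n * (1 - x_n).

def simulate(bM, x=0.2, steps=40):   # starting value 0.2 is an assumed choice
    for _ in range(steps):
        x = x + bM * x * (1 - x)
    return x

print(simulate(0.8))   # ~1.0: monotonic convergence   (0 < beta*M < 1)
print(simulate(1.5))   # ~1.0: oscillating convergence (1 < beta*M < 2)
print(simulate(2.5))   # never settles at 1            (beta*M > 2)
[/code]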

  4. #4 chisigma (MHF Contributor)

    Re: Difference equation tutorial: draft of part I

    5. Linear first order difference equations

    Now we will have a look at the difference equation…

    a_{n+1}=\alpha_{n}\ a_{n}+\beta_{n}\ ;\ a_{0}=a (5.1)

    Here a_{n+1} is a linear function of a_{n}, so that (5.1) is a linear difference equation. Of course the \alpha_{n} and the \beta_{n} are in general discrete functions of n. Proceeding in a 'direct' way from the initial value a_{0}=a we find…

    a_{1}=\alpha_{0}\ a + \beta_{0}

    a_{2}=\alpha_{0}\ \alpha_{1}\ a + \alpha_{1}\ \beta_{0}+ \beta_{1}

    a_{3}=\alpha_{0}\ \alpha_{1}\ \alpha_{2}\ a + \alpha_{1}\ \alpha_{2}\ \beta_{0} + \alpha_{2}\ \beta_{1} + \beta_{2}
    ...
    Now if we set p_{n}=\alpha_{0}\ \alpha_{1}\ …\ \alpha_{n-1}, the solution can be written as…

    a_{n}= p_{n}\ (a+\frac{\beta_{0}}{p_{1}} + \frac{\beta_{1}}{p_{2}}+ … + \frac{\beta_{n-1}}{p_{n}} ) (5.2)
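
    The closed form (5.2) can be checked against direct iteration of (5.1), as in this Python sketch [the coefficients \alpha_{n} and \beta_{n} are assumed sample values]…

[code]
# Check of the closed form (5.2) against direct iteration of (5.1).
import math

alpha = [2.0, 0.5, 3.0, 1.5, 0.25]   # assumed sample coefficients
beta = [1.0, -1.0, 2.0, 0.5, 4.0]
a0 = 0.7

a = a0                               # direct iteration of (5.1)
for n in range(5):
    a = alpha[n] * a + beta[n]

p = [math.prod(alpha[:k]) for k in range(6)]   # p_0 = 1 (empty product)
a_formula = p[5] * (a0 + sum(beta[k - 1] / p[k] for k in range(1, 6)))

print(a, a_formula)                  # the two values agree
[/code]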

    As we said before, such a result is not always fully satisfactory and, when possible, an explicit solution as a function of n is preferable. That is feasible in a large number of cases, and we can start by describing the simplest one: the case \alpha_{n}= \alpha= \text{const}, \beta_{n}=0, so that the difference equation is of the form…

    a_{n+1}= \alpha\ a_{n}\ ;\ a_{0}=a (5.3)

    As for linear differential equations, a linear difference equation written as (5.3) is called homogeneous, and in this case the coefficients are constant. The solution is immediately derived from (5.2)…

    a_{n}= a\ \alpha^{n} (5.4)

    This case is very simple but is of fundamental importance for the solution of more complex difference equations. The next step is the case \alpha_{n}=\alpha=\text{const} and \beta_{n}=\beta=\text{const}, so that the difference equation is…

    a_{n+1}=\alpha\ a_{n}+\beta\ ;\ a_{0}=a (5.5)

    As for linear differential equations, a linear difference equation written as (5.5) is called nonhomogeneous. The solution is derived from (5.2) in a slightly more complex way than in the previous case…

    a_{n}=a\ \alpha^{n}+\beta\ \frac{1-\alpha^{n}}{1-\alpha} (5.6)

    (5.6) allows us to arrive at another important analogy between difference equations and differential equations: the solution of a nonhomogeneous difference equation is the sum of the solution of the homogeneous equation and 'another term' which doesn't depend on the 'initial value' a… and the same is true for higher order linear difference equations as well.

    Now we perform a further step, setting \alpha_{n}=x, \beta_{n}=c_{n+1} and a=c_{0}, so that the difference equation is…

    a_{n+1}= x\ a_{n} + c_{n+1}\ ;\ a_{0}=c_{0} (5.7)

    The solution, as derived from (5.2), is…

    a_{n}= \sum_{k=0}^{n}c_{n-k}\ x^{k} (5.8)

    … and immediately we observe that it is a polynomial in x. The Horner method for computing a polynomial using only additions and multiplications is based on the difference equation (5.7).
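
    A sketch of the Horner scheme based on (5.7) [Python; the polynomial used as example is an assumed choice]…

[code]
# Horner scheme from (5.7): evaluates c_0*x^n + c_1*x^(n-1) + ... + c_n.

def horner(c, x):
    a = c[0]                  # a_0 = c_0
    for k in range(1, len(c)):
        a = x * a + c[k]      # a_{n+1} = x*a_n + c_{n+1}
    return a

c = [1, -6, 11, -6]           # assumed example: x^3 - 6x^2 + 11x - 6
print([horner(c, x) for x in (1, 2, 3)])   # [0, 0, 0]: the three roots
print(horner(c, 4))                        # 6
[/code]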

    Proceeding with examples more and more complex we now examine the difference equation…

    a_{n+1}= \frac{n+1}{x}\ a_{n}+1\ ;\ a_{0}=1 (5.9)

    In (5.2) we have…

    p_{n}= \frac{n!}{x^{n}}

    ... and \beta_{n}=1 for all n, so that the solution is…

    a_{n}= \frac{n!}{x^{n}}\ \sum_{k=0}^{n} \frac{x^{k}}{k!} (5.10)

    … so that, recalling that p_{n}=\frac{n!}{x^{n}}, for n 'large enough' we have…

    \frac{x^{n}}{n!}\ a_{n} \sim e^{x}
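
    A numerical check [Python sketch; the value of x and the number of iterations are assumed choices]…

[code]
# Iterating (5.9) and recovering e^x through the factor x^n/n! of (5.10).
import math

x = 1.5                              # assumed sample value
steps = 25
a = 1.0                              # a_0 = 1
for n in range(steps):
    a = (n + 1) / x * a + 1          # one step of (5.9)

print(x ** steps / math.factorial(steps) * a)   # ~4.481689...
print(math.exp(x))                              # e^1.5 = 4.481689...
[/code]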

    As in the previous case, the difference equation (5.9) permits the practical computation of e^{x} using only elementary operations. With proper modifications of (5.9) a similar procedure is available for sin x, cos x, sinh x, cosh x, etc…, and in most cases it is a more convenient way to compute these functions than the 'standard' series expansion. Also for functions that can be represented as an 'infinite product' this type of approach can be very useful, as in the following example: let us consider the linear homogeneous difference equation…

    a_{n+1}= \{1-\frac{x^{2}}{(n+1)^{2}}\}\ a_{n}\ ;\ a_{0}=1 (5.11)

    Here the \beta_{n} are all 0, so that from (5.2) we derive…

    a_{n}= p_{n} = \prod_{k=1}^{n} (1-\frac{x^{2}}{k^{2}}) (5.12)

    … and we have…

    \lim_{n \rightarrow \infty} a_{n}= \prod_{k=1}^{\infty}(1-\frac{x^{2}}{k^{2}})= \frac{\sin \pi x }{\pi x} (5.13)
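
    A numerical check of (5.13) [Python sketch; the value of x and the number of factors are assumed choices]…

[code]
# Iterating the homogeneous equation (5.11) approximates the product (5.13).
import math

x = 0.3                                    # assumed sample value
a = 1.0                                    # a_0 = 1
for n in range(100000):                    # number of factors: assumed choice
    a = (1 - x ** 2 / (n + 1) ** 2) * a    # one step of (5.11)

print(a)                                        # partial product
print(math.sin(math.pi * x) / (math.pi * x))    # limit value (5.13)
[/code]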

    In general the effective computation of a numerical 'finite or infinite' sum can be reduced to the solution of a first order difference equation of the form…

    \Delta_{n}= a_{n+1}-a_{n}= \beta_{n}\ ;\ a_{0}=a (5.14)

    … that is the general expression (5.1) with \alpha_{n}=1 for all n. A useful preliminary is the definition of the following function…

    \phi(x)=x\ \sum_{k=1}^{\infty} \frac{1}{k\ (k+x)} + c (5.15)

    … where c is a constant that for the moment is left undefined. From (5.15) we immediately derive the 'fundamental identity'…

    \phi(1+x)-\phi(x)= \sum_{k=1}^{\infty}\{\frac{1+x}{k\ (k+1+x)}-\frac{x}{k\ (k+x)}\}=

    = \sum_{k=1}^{\infty}\{\frac{1}{k+x}-\frac{1}{k+1+x}\}= \frac{1}{1+x}

    … that is…

    \phi(1+x)=\phi(x)+\frac{1}{1+x} (5.16)

    The 'fundamental identity' (5.16) recalls a property of the so-called factorial function, defined as…

    x!=\int_{0}^{\infty} t^{x}\ e^{-t}\ d t (5.17)

    Integrating (5.17) by parts we arrive at…

    (1+x)!=x!\ (1+x) (5.18)

    Now, differentiating (5.18), we obtain…

    \frac{d}{dx} (1+x)!= x!+(1+x)\ \frac{d}{dx}\ x!

    … so that…

    \frac{\{(1+x)!\}^{\prime}}{(1+x)!}= \frac{\{x!\}^{\prime}}{x!}+ \frac{1}{1+x} (5.19)

    It is easy to verify that the expressions (5.16) and (5.19) are 'formally identical', and that permits us to conclude that…

    \phi(x)= \frac{d}{dx} \ln x! (5.20)

    (5.20) permits us to find the value of the constant c in (5.15). Starting from the 'infinite product expansion' of the factorial function, with little effort we can obtain the following Maclaurin expansion, valid for |x|<1…

    \ln x!= -\gamma x+\sum_{k=2}^{\infty}(-1)^{k}\ \zeta(k)\ \frac{x^{k}}{k} (5.21)

    … where \zeta(*) is the so-called 'Riemann zeta function'…

    \zeta(x)= \sum_{k=1}^{\infty}\frac{1}{k^{x}} (5.22)

    … and \gamma is the so-called 'Euler constant'…

    \gamma= \lim_{n \rightarrow \infty} (\sum_{k=1}^{n}\frac{1}{k}-\ln n)= 0.577215664901… (5.23)

    Differentiating (5.21) we obtain…

    \frac{d}{dx} \ln x!= - \gamma + \sum_{k=2}^{\infty}(-1)^{k}\ \zeta(k)\ x^{k-1} (5.24)

    … and comparing (5.15), (5.20) and (5.24) we conclude that the constant terms in (5.15) and (5.24) must be the same, so that in (5.15) c=-\gamma.

    Now, setting x=n in (5.16), we discover that \phi(n) satisfies the difference equation…

    \Delta_{n}= a_{n+1}-a_{n}= \frac{1}{n+1}\ ;\ a_{0}=- \gamma (5.25)

    … the solution of which is…

    a_{n}= \phi(n)= \sum_{k=0}^{n-1}\frac{1}{k+1} - \gamma (5.26)

    Recalling the definition of the Euler constant, it is easy to see that for n 'large enough'…

    \phi(n)\sim \ln n (5.27)

    … and a 'visible proof' of that is illustrated in the figure…



    [fig. 5.1]

    Although the term is used for other functions as well, we will call the \phi(*) defined in the last lines the digamma function, and now we illustrate with two examples how to use its properties for the computation of finite or infinite sums. The first is the computation of…

    \sigma(n,t)= \sum_{k=1}^{n}\frac{1}{k+t} (5.28)

    … for any values of n and t. If in (5.16) we set x=k+t-1 we obtain…

    \frac{1}{k+t}=\phi(k+t)-\phi(k+t-1)

    … so that (5.28) can be treated as a telescoping sum, obtaining…

    \sigma(n,t)= \sum_{k=1}^{n}\{\phi(k+t)-\phi(k+t-1)\}= \phi(n+t)-\phi(t) (5.29)

    The second is the computation of the ‘infinite sum’…

    \sigma(a,b)= \sum_{k=1}^{\infty}\frac{1}{(k+a)\ (k+b)} (5.30)

    … for any values of a and b. Writing the n-th partial sum of (5.30) and taking into account (5.29) we have…

    \sigma_{n}= \sum_{k=1}^{n}\frac{1}{(k+a)\ (k+b)}= \frac{1}{b-a}\ \sum_{k=1}^{n}(\frac{1}{k+a}-\frac{1}{k+b})=

    =\frac{1}{b-a}\ \{\phi(n+a)-\phi(a)-\phi(n+b)+\phi(b)\} (5.31)

    A consequence of (5.27) is…

    \lim_{n \rightarrow \infty} \{\phi(n+a)-\phi(n+b)\}=0

    ... so that we can finally write…

    \sum_{k=1}^{\infty}\frac{1}{(k+a)\ (k+b)}= \frac{\phi(b)-\phi(a)}{b-a} (5.32)
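
    A crude numerical check of (5.32) [Python sketch; \phi(*) is approximated by truncating (5.15) with c=-\gamma, and the truncation lengths and the values of a and b are assumed choices]…

[code]
# A crude phi(x) obtained by truncating (5.15) with c = -gamma, used to
# check the closed form (5.32) against a brute-force partial sum of (5.30).
GAMMA = 0.577215664901533      # Euler constant (5.23)

def phi(x, terms=200000):      # truncation length is an assumed choice
    return x * sum(1.0 / (k * (k + x)) for k in range(1, terms)) - GAMMA

a, b = 0.5, 2.0                # assumed sample values
lhs = sum(1.0 / ((k + a) * (k + b)) for k in range(1, 200000))   # (5.30)
rhs = (phi(b) - phi(a)) / (b - a)                                # (5.32)
print(lhs, rhs)                # the two values agree to several digits
[/code]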

    Of course the field of linear first order difference equations is much wider, but our scope here is only to supply some solving methods. For more material the reader can consult the specialized literature.

    6. Nonlinear first order difference equations

    If a nonlinear first order difference equation is written in the form (4.5), then 'almost always' the procedure illustrated in section 4 shows the general character of the solution. If not, then the solving procedure, if it exists, must be found 'case by case'. One possibility is a change of variable, as in the following example…

    a_{n+1}= \alpha_{n}\ \frac{a_{n}}{1+a_{n}}\ ;\ a_{0}=a\ne 0 (6.1)

    … where the \alpha_{n} are functions of n. In this case the change of variable b_{n}=\frac{1}{a_{n}} leads to the difference equation…

    b_{n+1}= \frac{1+b_{n}}{\alpha_{n}}\ ;\ b_{0}=\frac{1}{a_{0}} (6.2)

    … which is linear, so that the procedures of section 5 can be applied.
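
    The agreement of the two routes can be verified numerically, as in this final Python sketch [the \alpha_{n} are assumed sample values]…

[code]
# Solving (6.1) directly and through the substitution b_n = 1/a_n of (6.2).

alpha = [2.0, 3.0, 1.5, 4.0, 2.5]    # assumed sample coefficients
a0 = 0.4

a = a0                               # nonlinear route (6.1)
for n in range(5):
    a = alpha[n] * a / (1 + a)

b = 1 / a0                           # linear route (6.2)
for n in range(5):
    b = (1 + b) / alpha[n]

print(a, 1 / b)                      # the two values coincide
[/code]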

  5. #5

    Re: Difference equation tutorial: draft of part I

    Excellent, many thanks. I'll reward myself with a period of convalescence after a strenuous academic year.
