Search for notes by fellow students, in your own course and all over the country.

Browse our notes for titles which look like what you need, you can preview any of the notes via a sample of the contents. After you're happy these are the notes you're after simply pop them into your shopping cart.

My Basket

You have nothing in your shopping cart yet.

Title: Ordinary Differential equations
Description: Subject contents are for 2nd and 3rd year student.Full notes concise and precise material.Good for Last minute preparation

Document Preview

Extracts from the notes are below, to see the PDF you'll receive please use the links above


Preface
In the present venture we present a few important aspects of Ordinary Differential equations
in the form of lectures
...
It is only a modest attempt to
gather appropriate material to cover about 39 odd lectures
...

In all there are 39 lectures
...
A few
problems are posed at the end of each lecture either for illustration or to cover a missed
elements of the theme
...
Module 1 dealswith existence
and uniqueness of solutions for Initial Value Problems(IVP) while Module 2 dealswith the
structure of solutions of Linear Equations Of Higher Orders
...
Module 4 is an elementary introduction
to Theory Of Oscillations and Two Point Boundary Value Problems
...
Elementary Real Analysis, Linear Algebra is a prerequisite
...
I am thankful to Mr
...
Rasmita Kar (PDF
at TIFR, Bangalore) for the timely help to edit the manuscript
...
I welcome useful suggestions from reader /users which perhaps may improve
the lecture notes
...


i

Web Course On: Ordinary Deferential Equations
By
Raghavendra V
Department of mathematics and Statistics
Indian Institute of Technology
Kanpur,India

1

Contents Topicwise
Module 1 Existence and Uniqueness of Solutions
1
...
1 Preliminaries

7

2
...
2 Picard’s Successive Approximations

12

3
...
3 Picard’s Theorem

15

4
...
4 Continuation and Dependence on Initial conditions

21

5
...
5 Existence of Solutions in the Large

25

6
...
6 Existence and Uniqueness of Solutions of Systems

29

7
...
7 Cauchy-Peano theorem

32

Module 2 Linear Differential Equations of Higher Order
1
...
1 Introduction

35

2
...
2 Linear Dependence and Wronskian

35

3
...
4 Basic Theory for Linear Equations

40

4
...
5 Method of Variation of Parameters

46

5
...
6 Homogeneous Linear Equations with Constant Coefficients

51

Module 3 Oscillations and Boundary value Problems
1
...
1 Introduction

57

2
...
2 Systems of First Order Equations

57

3
...
3 Fundamental Matrix

64

4
...
4 Non-homogeneous linear Systems

68

5
...
5 Linear Systems with Constant Coefficients

71

6
...
6 Phase Portraits-Introduction

78

7
...
7 Phase Portraits in R2 (continued)

82

Module 4 Oscillations and Boundary value Problems
4
...
Introduction

89

4
...
Sturm’s Comparison Theorem

93

4
...
Elementary Linear Oscillations

97
2

4
...
Boundary Value Problems

100

1
...
Sturm-Liouville Problem

104

4
...
Green’s Functions

110

Module 5 Asymptotic behavior and Stability Theory
5
...
Introduction

115

5
...
Linear Systems with Constant Coefficients

115

5
...
Linear Systems with Variable Coefficients

120

5
...
Second Order Linear Differential Equations

126

5
...
Stability of Quasi-linear Systems

133

5
...
Stability of Autonomous Systems

140

5
...
Stability of Non-Autonomous Systems

146

5
...
A Particular Lyapunov Function

150

3

Contents Topicwise
Lecture 1

7

Lecture 2

12

Lecture 3

15

Lecture 4

21

Lecture 5

25

Lecture 6

29

Lecture 7

32

Lecture 8

35

Lecture 9

40

Lecture 10

45

Lecture 11

48

Lecture 12

51

Lecture 13

54

Lecture 14

57

Lecture 15

62

Lecture 16

66

Lecture 17

71

Lecture 18

76

Lecture 19

78

Lecture 20

82

Lecture 21

86

Lecture 22

89

Lecture 23

93

Lecture 24

97

Lecture 25

100

Lecture 26

104

Lecture 27

107
4

Lecture 28

110

Lecture 29

115

Lecture 30

119

Lecture 31

123

Lecture 32

127

Lecture 33

131

Lecture 34

135

Lecture 35

140

Lecture 36

143

Lecture 37

146

Lecture 38

150

Lecture 39

153

5

6

Module 1

Existence and Uniqueness of
Solutions
Lecture 1
1
...
The existence of solutions for mathematical
models is vital as otherwise it may not be relevant to the physical problem
...
The Module 1 describes a few methods
for establishing the existence of solutions, naturally under certain premises
...
Let us now consider a class of functions
satisfying Lipschitz condition, which plays an important role in the qualitative theory of
differential equations
...

Definition 1
...
1
...
1)

holds whenever (t, x1 ), (t, x2 ) are in D
...

As a consequence of Definition 1
...
1, a function f satisfies Lipschitz condition if and only
if there exists a constant K > 0 such that
|f (t, x1 ) − f (t, x2 )|
≤ K,
|x1 − x2 |

x1 = x2 ,

whenever (t, x1 ), (t, x2 ) belong to D
...
The following is a result in this direction
...

7

Theorem 1
...
2
...
Let f : R → R be a real valued continuous
function
...
Then, f satisfies the Lipschitz condition
∂x
on R
...
Since

∂f
∂x

is continuous on R, we have a positive constant A such that
∂f
(t, x) ≤ A,
∂x

(1
...
Let (t, x1 ), (t, x2 ) be any two points in R
...

∂x

Since the point (t, x) ∈ R and by the inequality (1
...
The proof is complete
...

Example 1
...
3
...

Then, the partial derivative of f at (t, 0) fails to exist but f satisfies Lipschitz condition in
x on R with Lipschitz constant K = 1
...

Example 1
...
4
...
Then, f does not satisfy the inequality (1
...
This is because
f (t, x) − f (t, 0)
= x−1/2 , x = 0,
x−0
is unbounded in R
...
1
...
e
...
,
take R = {(t, x) : |t| ≤ 2, 2 ≤ |x| ≤ 4} in Example 1
...
4
...
In particular, we propose to employ it to establish the
uniqueness of solutions
...
1
...
(Gronwall inequality) Assume that f, g : [t0 , ∞] → R+ are non-negative
continuous functions
...
Then, the inequality
t

f (t) ≤ k +

g(s)f (s)ds,

t ≥ t0 ,

t0

implies the inequality
t

f (t) ≤ k exp

g(s)ds ,

t ≥ t0
...


t0

Proof
...
3)

Since,
d
k+
dt

t

g(s)f (s)ds = f (t)g(t),
t0

by integrating (1
...


t0

t0

In other words,
t

t

k+

g(s)ds
...
4) together with the hypotheses leads to the desired conclusion
...
1
...
Let f and k be as in Theorem 1
...
5 If the inequality
t

f (t) ≤ k
t0

f (s)ds, t ≥ t0 ,

holds then,
f (t) ≡ 0, for t ≥ t0
...
4)

Proof
...


By Theorem 1
...
5, we have
f (t) < exp k(t − t0 ),
Since

t ≥ t0
...

EXERCISES

1
...
1
...

2
...

3
...

(i) f (t, x) = et sin x,
(ii) f (t, x) = (x +

|x| ≤ 2π , |t| ≤ 1 ;

x2 ) cos t ,
t2

(iii) f (t, x) = sin(xt),

|x| ≤ 1 , |t − 1| ≤

|x| ≤ 1 ,

1
2

;

|t| ≤ 1
...
Show that the following functions do not satisfy the Lipschitz condition in the region
indicated
...

|x| < ∞, |t| ≤ 1
...

2

f (t, 0) = 0,

5
...
Let d, h : I → R be continuous functions Show that
the IVP
x + d(t)x = h(t), x(t0 ) = x0 ; t, t0 ∈ I,
has a unique solution
...
Let I = [a, b] ⊂ R be an interval and let f, g, h : I → R+ be non-negative continuous
functions
...


g(s)f (s)ds
...


Hence,
z (t) − g(t)z(t) ≤ g(t)h(t)
...


11

Lecture 2
1
...
Let D ⊂ R2
is an open connected set and f : D → R is continuous in (t, x) on D
...

(1
...
Geometrically speaking, solving (1
...
Such a class of problems is called a local existence problem
for an initial value problem
...
5)
...
5)
...
The iterative procedure for solving (1
...
The key to the general theory is an equivalent representation of (1
...


(1
...
6) is called an integral equation since the unknown function x also occurs under
the integral sign
...
5) and (1
...

Lemma 1
...
1
...
A function x : I → R is a solution of (1
...
6) on I
...
If x is a solution of (1
...
6)
...
6)
...
Differentiating both sides of (1
...

We do recall that f is a continuous function on D
...
5)
...
It is expected that the sequence of iterations converge to a solution
of (1
...
The importance of equation (1
...
In this connection,
we exploit the fact that the estimates can be easily handled with integrals rather than with
derivatives
...
5) is just the constant function
x0 (t) ≡ x0
...
6),
thus obtaining a new approximation x1 given by
t

x1 (t) = x0 +

t0

f (s, x0 (s))ds,

as long as (s, x0 (s)) ∈ D
...
In general, we define xn inductively by
t

xn (t) = x0 +

t0

f (s, xn−1 (s))ds,

n = 1, 2,
...
7)

as long as (s, xn−1 (s)) ∈ D, xn is known as the n-th successive approximation
...
In
the next section we show that the sequence {xn } does converge to a unique solution of (1
...
Befpre we conclude this section let us have a few
examples
...
2
...
For the illustration of the method of successive approximations consider
an IVP
x = −x, x(0) = 1, t ≥ 0
...

0

Let us note t0 = 0 and x0 = 1
...
The first
approximation is
t

x1 (t) = 1 −

0

x0 (s)ds = 1 − t
...

2

In general, the n-th approximation is ( use induction)
xn (t) = 1 − t +

t2
tn
+ · · · + (−1)n
...
It is easy to
directly verify that e−t is the solution of the IVP
...
2
...
Consider the IVP
x =

2x
, t > 0, x (0) = 0, x(0) = 0
...
The first approximation
is x1 ≡ 0
...

Thus, the sequence of functions {xn } converges to the identically zero function
...
On the other hand, it is not hard to check that
x(t) = t2
is also a solution of the IVP which shows that if at all the successive approximations converges, they converge to one of the solutions of the IVP
...
Calculate the successive approximations for the IVP
x = g(t), x(0) = 0
...
Solve the IVP
x = x, x(0) = 1,
by using the method of successive approximations
...
Compute the first three successive approximations for the solutions of the following
equations
(i) x = x2 ,

x(0) = 1;

ex ,

x(0) = 0;

(ii) x =
(iii) x =

x
,
1+x2

x(0) = 1
...
3

Picard’s Theorem

With all the remarks and examples, the reader may have a number of doubts about the
effectiveness and utility of Picard’s method in practice
...

However, we mention that Picard’s method has made a landmark in the theory of differential
equations
...

In all of what follows we assume that the function f : R → R is bounded by L and
satisfies the Lipschitz condition with the Lipschitz constant K on the closed rectangle
R = {(t, x) ∈ R2 : |t − t0 | ≤ a, |x − x0 | ≤ b, a > 0, b > 0}
...
7) are well defined on an interval I
...

b

...
7) are
L
defined on I = |t − t0 | ≤ h
...
3
...
Let h = min a,

|xj (t) − x0 | ≤ L |t − t0 | ≤ b,

j = 1, 2,
...


(1
...
The method of induction is used to prove the lemma
...
8)
...
8)
...
So, xn+1 is
defined on I
...


Using the induction hypothesis, it now follows that
t

|xn+1 (t) − x0 | =

t0

t

f (s, xn (s))ds ≤

t0

|f (s, xn (s))|ds ≤ L |t − t0 | ≤ Lh ≤ b
...
8)
...

We now state and prove the Picard’s theorem, a fundamental result dealing with the
problem of existence of a unique solution for a class of initial value problems ,as given by
(1
...
Recall that the closed rectangle is defined in Lemma 1
...
1
...
3
...
(Picard’s Theorem) Let f : R → R be continuous and be bounded by L
and satisfy Lipschitz condition with Lipschitz constant K on the closed rectangle R
...
, given by (1
...
5)
...

I : |t − t0 | ≤ h, h = min a,

15

Proof
...
5) is equivalent to the integral equation (1
...
6) and hence, to the unique solution of the IVP (1
...
First, note that
n

xn (t) = x0 (t) +

xi (t) − xi−1 (t)
i=1

is the n-th partial sum of the series


x0 (t) +

xi (t) − xi−1 (t)

(1
...
9)
...
9) converges uniformly to a continuous function x(t);
(b) x satisfies the integral equation (1
...
5)
...
By Lemma 1
...
1 the successive
approximations xn , n = 1, 2,
...
7) are well defined on I : |t − t0 | ≤ h
...
The proof on the interval I − = [t0 − h, t0 ] is similar
except for minor modifications
...
Let us denote

mj (t) = |xj+1 (t) − xj (t)|; j = 0, 1, 2,
...

Since f satisfies Lipschitz condition and by (1
...


(1
...


(1
...
12)

for j = 0, 1, 2,
...
The proof of the claim is by induction
...
12) is, in fact, (1
...
Assume that for an integer 1 ≤ p ≤ j the assertion (1
...
That is,
t

mp+1 (t) ≤ K

t0

t

mp (s)ds ≤ K

(s − t0 )p+1
ds
(p + 1)!

LK p

t0

≤ L K p+1

(t − t0 )p+2
,
(p + 2)!

t0 ≤ t ≤ t0 + h,

which shows that (1
...
12) holds for all j ≥ 0
...
9) converges uniformly and absolutely
on the I + = [t0 , t0 + h]
...


(1
...

Also, the points (t, x(t)) ∈ R for all t ∈ I and thereby completing the proof of (a)
...


(1
...
15)

from which, we have
t

x(t) − x0 −

t0

t

f (s, x(s))ds = x(t) − xn (t) +

t

f (s, xn−1 (s))ds −

t0

f (s, x(s))ds
t0

t

≤ |x(t) − xn (t)| +
17

t0

f (s, xn−1 (s)) − f (s, x(s)) ds
...
16)

Since xn → x uniformly on I, and |xn (t) − x0 | ≤ b for all n and for t ∈ I + , it follows that
|x(t)| ≤ b for all t ∈ I +
...


(1
...
17)
tends to zero as n → ∞
...
17) is independent of n
...
6) on I + which proves (b)
...
5),
¯
then they coincide on [t0 , t0 + h]
...
6) which yields
¯
t

|¯(t) − x(t)| ≤
x

|f (s, x(s)) − f (s, x(s))|ds
...
18)

t0

Both x(s)) and x(s) lie in R for all s in [t0 , t0 + h] and hence, it follows that
¯
t

|¯(t) − x(t)| ≤ K
x

|¯(s)) − x(s)|ds
...
This proves (c), completing the proof of the theorem
...
Indeed, we have a
result dealing with such a bound on the error
...
3
...

|x(t) − xn (t)| ≤

L (Kh)n+1 Kh
e ;
K (n + 1)!

t ∈ [t0 , t0 + h]
...
Since


x(t) = x0 (t) +

xj+1 (t) − xj (t)
j=0

we have



x(t) − xn (t) =

xj+1 (t) − xj (t)
...
19)

Consequently, by (1
...
(n + j + 1)

(Kh)n+1

L
eKh
...

Example 1
...
4
...
2
...
Note that all the conditions of the
Picard’s theorem are satisfied
...

Let us first note that K = 1
...
e
...

Then, L = 2
...
The question is to find a number n such
that |x − xn | ≤
...

K (n + 1)!
1

1

1
We have to find an n such that (n+1)!2n < e− 2 or, in other words, (n + 1)!2n > −1 e 2 which
holds since −1 e is finite and (n + 1)! → ∞
...


A doubt may arise whether the Lipschitz condition can be dropped from the hypotheses
in Picard’s theorem
...

Example 1
...
5
...

Obviously x0 (t) ≡ 0
...
In fact, in this case xn (t) ≡ 0 for all n ≥ 0
...
But x(t) = t4 is yet another solution of the IVP which
contradicts the conclusion of Picard’s theorem and so the Picard’s theorem may not hold
in case the Lipschitz condition on f is altogether dropped
...

EXERCISES
1
...

19

2
...
Guess the unique local solution
if f (0) = 0
...
) ≡ 0 along with the Lipschitz property of g(t, x) in x?
3
...

(i) x = x2 , x(0) = 1,

R = {(t, x) : |t| ≤ 2, |x − 1| ≤ 2},

(ii) x = sin x, x( π ) = 1, R = {(t, x) : |t − π | ≤ π , |x − 1| ≤ 1},
2
2
2
(iii) x = ex , x(0) = 0,

R = {(t, x) : |t| ≤ 3, |x| ≤ 4}
...
5 respectively for the three problems
...
4

Continuation And Dependence On Initial Conditions

As usual we assume that the function f in (1
...
By Picard’s theorem, we have an interval
I : t0 − h ≤ t ≤ t0 + h,
where h > 0 such that the closed rectangle R ⊂ D
...
By applying Theorem
1
...
2, we have the existence of a unique solution x passing through the point (t0 +h, x(t0 +h))
ˆ
ˆ ˆ
and whose graph lies in D ( for t ∈ [t0 + h, t0 + h + h], h > 0)
...
5) on the interval [t0 + h, t0 + h + h] ⊃ I
...
Naturally such a procedure is known as the continuation of solutions of the IVP (1
...

The continuation method just described can also be extended to the left of t0
...
Let us suppose that a unique solution x of (1
...

Consider the sequence

1
n
By (1
...


x h2 −

|x(h2 −

1
1
) − x(h2 − )| ≤
m
n

h2 −(1/m)

|f (s, x(s))|ds, (m > n)
h2 −(1/n)

≤L

1
1

...
Suppose h2 , x(h2 − 0) is in D
...

ˆ
By noting

t

x(t) = x0 +
ˆ

f (s, x(s))ds,
ˆ
t0

h1 < t ≤ h2 ,

it is easy to show that x is a solution of (1
...

ˆ
21

Exercise : Prove that x is a solution of (1
...

ˆ
Now consider a rectangle around P : (h2 , x(h2 − 0)) lying inside D
...
5) through P
...

Now define z by
z(t) = x(t),
ˆ

h1 < t ≤ h2

z(t) = y(t),

h2 ≤ t ≤ h2 + α
...
5) on h1 < t ≤ h2 + α
...
5) on
h2 − α ≤ t ≤ h2 + α, we have
x(t) = y(t),
ˆ

h2 − α ≤ t ≤ h2
...
5) on h2 ≤ t ≤ h2 + α and so it only remains to verify that
z is continuous at the point t = h2
...


(1
...


(1
...
20) and (1
...


Obviously, the derivatives at the end points h1 and h2 + α are one-sided
...
4
...
Let
(i) D ⊂ Rn+1 be an open connected set and let f : D → R be continuous and satisfy the
Lipschitz condition in x on D;
(ii) f be bounded on D and
(iii) x be a unique solution of the IVP (1
...

Then,
lim x(t)

t→h2 −0

exists
...

22

We now study the continuous dependence of solutions on initial conditions
...


(1
...
22)
...
The dependence on initial conditions means to know about
the behavior of x(t; t0 , x0 ) as a function of t0 and x0
...
This amounts to saying that the solution x(t; t0 , x0 ) of
(1
...

0
0

(1
...
Formally,we have the following theorem:
0
0
Theorem 1
...
2
...
22) and (1
...
Suppose that (t, x(t)), (t, x∗ (t)) ∈ D
for t ∈ I
...
Then, for any > 0, there exist
a δ = δ( ) > 0 such that
|x(t) − x∗ (t)| < , t ∈ I,
(1
...

0
0
Proof
...
Without loss of generality let t∗ ≥ t0
...
2
...
25)

f (s, x∗ (s))ds
...
26)

t0
t
t∗
0

From (1
...
26) we obtain
x(t) − x∗ (t) = x0 − x∗ +
0

t
t∗
0

f (s, x(s)) − f (s, x∗ (s)) ds +

t∗
0

f (s, x(s))ds
...
27)

t0

With absolute values on both sides of (1
...

0

Now by the Gronwall inequality, it follows that
|x(t) − x∗ (t)| ≤ |x0 − x∗ | + L|t0 − t∗ | exp[K(b − a)]
0
0
for all t ∈ I
...


(1
...
28), we obtain
|x(t) − x∗ (t)| ≤ δ(1 + L) exp K(b − a) =
if |t0 − t∗ | < δ( ) and |x0 − x∗ | < δ( ), which completes the proof
...
4
...
4
...
Indeed
the Gronwall inequality has many more applications in the qualitative theory of differential
equations which we shall see later
...
Consider a linear equation x = a(t)x with initial condition x(t0 ) = x0 , where a(t) is a
continuous function on an interval I containing t0
...

2
...
In addition, let
0

f ∈ Lip(R, K) and |f (t, x) − g(t, x)| ≤

for all (t, x) ∈ R,


for some positive number
...
If |x∗ − y0 | ≤ δ, then show that
0

|x(t) − y(t)| ≤ δ exp K|t − t0 | + ( /K) exp(K|t − t0 |) − 1 , t ∈ I
...
Let the conditions (i) to (iii) of Theorem 1
...
1 hold
...
Further, if the point (h1 , x(h1 + 0)) is in D, then show that x can be continued
to the left of h1
...
5

Existence of Solutions in the Large

We have seen earlier that the Theorem 1
...
2 is about the existence of solutions in a local
sense
...
Existence
of solutions in the large is also known as non-local existence
...

Example : By Picard’s theorem the IVP
x = x2 , x(0) = 1, − 2 ≤ t, x ≤ 2,
has a solution existing on
1
1
− ≤t≤ ,
2
2
where as its solution is
x(t) =

1
, −∞ < t < 1
...
In other words, we need to strengthen the
Picard’s theorem in order to recover the larger interval of existence
...
Under certain restrictions on f ,
we prove the existence of solutions of IVP
x = f (t, x), x(t0 ) = x0 ,

(1
...
We say
that x exists “non-locally” on I if x a solution of (1
...
The importance of such
problems needs little emphasis due to its necessity in the study of oscillations, stability and
boundedness of solutions of IVPs
...
29) is dealt
in the ensuing result
...
5
...
We define a strip S by
S = {(t, x) : |t − t0 | ≤ T and |x| < ∞},
where T is some finite positive real number
...
Then, the successive approximations defined by (1
...
29) exist on
|t − t0 | ≤ T and converge to a solution x of (1
...

Proof
...
7) is


x0 (t) ≡ x0 ,

t

xn (t) = x0 +

t0

f (s, xn−1 (s))ds, |t − t0 | ≤ T
...
30)

We prove the theorem for the interval [t0 , t0 +T ]
...
First note that (1
...
Also,
t

|x1 (t) − x0 (t)| =

t0

f (s, x0 (s))ds
...
31)

Since f is continuous, f (t, x0 ) is continuous on [t0 , t0 + T ] which implies that there exists a
real constant L > 0 such that
|f (t, x0 )| ≤ L, for all t ∈ [t0 , t0 + T ]
...
31), we get
|x1 (t) − x0 (t)| ≤ L(t − t0 ) ≤ LT, t ∈ [t0 , t0 + T ]
...
32)

The estimate (1
...

n!

(1
...
33), as in the proof of Theorem 1
...
2, yields the uniform convergence of the series


xn+1 (t) − xn (t) ,

x0 (t) +
n=0

and hence, the uniform convergence of the sequence {xn } on [t0 , t0 + T ] easily follows
...

n=0

In fact, (1
...

n!
K

Since xn converges to x on t0 ≤ t ≤ t0 + T , we have
|x(t) − x0 | ≤

L KT
(e
− 1)
...
34)

Since the function f is continuous on the rectangle
L KT
(e
− 1) ,
K

R = (t, x) : |t − t0 | ≤ T, |x − x0 | ≤
there exists a real number L1 > 0 such that

|f (t, x)| ≤ L1 , (t, x) ∈ R
...
From the corollary (1
...

K (n + 1)!

|x(t) − xn (t)| ≤

Finally, we show that x is a solution of the integral equation
t

x(t) = x0 +

t0

f (s, x(s))ds, t0 ≤ t ≤ t0 + T
...
35)

Also
t

|x(t) − x0 −

t0

t

f (s, x(s))ds| = x(t) − xn (t) +

t0

f (s, xn (s)) − f (s, x(s)) ds

t

≤ |x(t) − xn (t)| +

t0

f (s, x(t)) − f (s, xn (s))ds

(1
...
36) tends to zero as n → ∞
...
36) we indeed have
t

x(t) − x0 −

f (s, x(s))ds ≤ 0,
t0

or else

t ∈ [t0 , t0 + T ]
...


The uniqueness of x follows similarly as shown in the proof of Theorem 1
...
2
...
5
...

A consequence of the Theorem 1
...
1 is :
Theorem 1
...
2
...
Further,
let f satisfies Lipschitz condition on the the strip Sa for all a > 0, where
Sa = {(t, x) : |t| ≤ a, |x| < ∞}
...

27

x(t0 ) = x0 ,

(1
...
The proof is very much based on the fact that for any real number t there exists T
such that |t − t0 | ≤ T
...
5
...
Thus, by Theorem 1
...
1, the successive
approximations {xn } converge to a function x which is a unique solution of (1
...

EXERCISES
1
...
5
...

2
...
Prove the uniform convergence
of the series for x defined by (1
...

3
...
By solving the linear equation
x = a(t)x, x(t0 ) = x0 ,
show that it has a unique solution x on the whole of I
...
5
...

4
...
Does this example contradict Theorem
1
...
1, when T ≥ 1 ?

28

Lecture 6
1
...
In the sequel, we glance at these extensions
...
Let f1 , f2 ,
...

Consider a system of nonlinear equations
x1 = f1 (t, x1 , x2 ,
...
, xn ),
·····················
·····················
·····················
xn = fn (t, x1 , x2 ,
...
38)

Denoting (column) vector x with components x1 , x2 ,
...
, fn , the system of equations (1
...


(1
...
38) which means that the study
of n-th order nonlinear equation is naturally embedded in the study of (1
...
It speaks of
the importance of the study of systems of nonlinear equations, leaving apart numerous
difficulties that one has to face
...


(1
...
The detailed proofs are to be
supplied by readers with suitable modifications to handle the presence of vectors and their
norms
...
| is used to denote both the norms of a vector and the absolute
value
...

In all of what follows we are concerned with the region D, a rectangle in Rn+1 space,
defined by
D = {(t, x) : |t − t0 | ≤ a, |x − x0 | ≤ b},
where x, x0 ∈ Rn and t, t0 ∈ R
...
6
...
A function f : D → Rn is said to satisfy the Lipschitz condition in the
variable x, with Lipschitz constant K on D if
|f (t, x1 ) − f (t, x2 )| ≤ K|x1 − x2 |
uniformly in t for all (t, x1 ), (t, x2 ) in D
...
41)

The continuity of f in x for each fixed t is a consequence, when f is Lipschitzian in x
...

In addition, there exists a constant L > 0 such that L(t) ≤ L, when L is continuous on
|t − t0 | ≤ a
...

Lemma 1
...
2
...
x(t; t0 , x0 ) (denoted by x) is a
solution of (1
...


(1
...
First of all, we prove that the components xi of x satisfy
t

xi (t) = x0i +

t0

fi (s, x(s))ds,

t ∈ I, i = 1, 2,
...
, n,

holds
...
2
...

As expected, the integral equation (1
...
43)

for n = 1, 2,
...

Lemma 1
...
3
...

b
Define h = min a,

...
43) on
L
the interval I = |t − t0 | ≤ h
...


The proof is very similar to the proof of Lemma 1
...
1
...
6
...
(Picard’s theorem for system of equations)
...
6
...

Then, the successive approximations defined by (1
...
40)
...
6
...
A bound error left due to the truncation at the n-th approximation for x
is
L (Kh)n+1 Kh
|x(t) − xn (t)| ≤
e , t ∈ [t0 , t0 + h]
...
44)
K (n + 1)!
Corollary 1
...
6
...
Let I ⊂ R be an
interval
...
Then, the IVP
x = A(t)x,
x(a) = x0 , a ∈ I,
has a unique solution x existing on I
...

The proofs of Theorem 1
...
4 and Corollary 1
...
6 are exercises
...
6
...

Example 1
...
7
...
Obviously, x(t) ≡ 0 is
a solution
...


31

Lecture 7
1
...
6
...
It is not difficult to
verify, in this case, that f is continuous in (t, x) in the neighborhood of (0, 0)
...
The proofs in this section
are based on Ascoli-Arzela theorem which in turn needs the concept of equicontinuity of a
family of functions
...
Let I = [a, b] ⊂ R be an interval
...

Definition 1
...
1
...

Definition 1
...
2
...

Theorem 1
...
3
...
Then, every sequence of functions {fn } in B contains a subsequence
{fnk }, k = 1, 2
...

Theorem 1
...
4
...
Let S ⊂ R2 be a strip
defined by
S = {(t, x) : |t − t0 | ≤ a, |x| ≤ ∞}
...
Let f : S → R be a bounded continuous function
...
45)

has at least one solution existing on [x0 − a, x0 + a]
...
The proof of the theorem is first dealt on [t0 , t0 + a] and the proof on [t0 − a, t0 ] is
similar with suitable modifications
...
, n (1
...
, n
...
Let t1 , t2 be two
points in [t0 , t0 + a]
...

n

(k+1)a
n
t2 −(a/n)

|xn (t1 ) − xn (t2 )| =

f (s, xn (s))ds

t1 −(a/n)

≤ M |t2 − t1 |,
or else
|xn (t1 ) − xn (t2 )| ≤ M |t2 − t1 |,
Let

∀ t1 , t2 ∈ I
...
47)

be given with the choice of δ = /M
...
47), we have
|xn (t1 ) − xn (t2 )| ≤

if |t1 − t2 | < δ,

which is same as saying that {xn } is uniformly continuous on I
...
47), for all
t∈I
a
|xn (t)| ≤ |x0 | + M |x − − t0 | ≤ |x0 | + M a,
n
or else {xn } is uniformly bounded on I
...
7
...
The limit of {xnk } is continuous on I
since the convergence on I is uniform
...

t0

t
t−a/nk

f (s, xnk (s))ds → 0 as k → ∞,

(1
...
45), finishing the proof
...
The
same proof has a modification when S is replaced by a rectangle R (of finite area) except
that we have to ensure that (t, xn (t)) ∈ R
...

With these comments, we have
¯
Theorem 1
...
5
...
Let |f (t, x)| ≤ M for all (t, x) ∈ R , h = min(a, M )
and let Ih = |t − t0 | ≤ h, then the IVP (1
...


33

Proof
...
7
...
We note that, for all n,
¯
(t, xn (t)) ∈ R if t ∈ Ih
...

Theorem 1
...
4 has an alternative proof, details are given beow
...
7
...
Define a sequence {xn } on Ih by, for n ≥ 1,
xn (t) =

x0 ,
x0 +

t
t0

if t ≤ t0 ;
if t0 ≤ s ≤ t0 + h
...
It is not very
difficult to show that {xn } is uniformly continuous and uniformly bounded on Ih
...
Uniform convergence implies that x is continuous on Ih
...
49), we get
t

x(t) = x0 +

t0

f (s, x(s))ds, t ∈ Ih

that is, x is a solution of the IVP (1
...

EXERCISES
1
...
, x(n−1) (t0 ) = xn−1 ,
as a system
...

2
...
7
...

3
...
7
...

4
...
28 on [t0 − h, t0 ]
...
49)

Module 2

Linear Differential Equations of
Higher Order
Lecture 8
2
...
They occur in many branches of sciences and engineering
and so a systematic study of them is indeed desirable
...
On the other hand linear differential equations with variable coefficients pose a formidable task while obtaining closed form
solutions
...
In this chapter, we show that a general nth order linear equation admits precisely
n linearly independent solutions
...
We recall the following
Theorem 2
...
1
...
Then the IVP
a0 (t)x(n) + a1 (t)x(n−1) + · · · + an (t)x = b(t), t ∈ I
x(t0 ) = α1 , x (t0 ) = α2 , · · · , x(n−1) (t0 ) = αn , t0 ∈ I

(2
...


2
...
It naturally leads us to the concept of the general solution
of a linear differential equation
...

Consider real or complex valued functions defined on an interval I contained in R
...
We recall the following definition
...
2
...
(Linear dependence and independence) Two functions x1 and x2 defined
on an interval I are said to be linearly dependent on I, if and only if there exist two constants
c1 and c2 , at least one of them is non-zero, such that c1 x1 + c2 x2 = 0 on I
...

Remark : Definition 2
...
1 implies that in case two functions x1 (t) and x2 (t) are linearly
independent and, in addition,
c1 x1 (t) + c2 x2 (t) ≡ 0, ∀ t ∈ I,
then c1 and c2 are necessarily both zero
...
The scalars c1 and c2 may
be real numbers
...
2
...
Consider the functions
x1 (t) = eαt and x2 (t) = eα(t+1) , t ∈ R,
where α is a constant
...

Example 2
...
3
...

The above discussion of linear dependence of two functions defined on I is readily extended for a set of n functions where n ≥ 2
...
In the ensuing definition, we allow the functions
which are complex valued
...
2
...
A set of n real(complex) valued functions x1 , x2 , · · · , xn , (n ≥ 2) defined on I are said to be linearly dependent on I, if there exist n real (complex) constants
c1 , c2 , · · · , cn , not all of them are simultaneously zero, such that
c1 x1 + c2 x2 + · · · + cn xn = 0, t ∈ R
...

Example 2
...
5
...
Consider the functions
x1 (t) = eiαt , x2 (t) = sin αt, x3 (t) = cos αt, t ∈ R,
where α is a constant
...

It is a good question to enquire about the sufficient conditions for the linear independence
of a given set of functions
...

Definition 2
...
6
...

x1 (t) x2 (t)

W [x1 (t), x2 (t)] =

Theorem 2
...
7
...

Proof
...
Let us assume on the contrary that the
functions x1 and x2 are linearly dependent on I
...


(2
...
2) we have
c1 x1 (t) + c2 x2 (t) = 0 for all t ∈ I
...
3)

By assumption there exists a point, say t0 ∈ I, such that
x1 (t0 ) x2 (t0 )
x1 (t0 ) x2 (t0 )

= x1 (t0 )x2 (t0 ) − x2 (t0 )x1 (t0 ) = 0
...
4)

From (2
...


(2
...
5) as a system of linear equations with c1 and c2 as unknown quantities,
from the theory of algebraic equations we know that if (2
...
5)
admits only zero solution i
...
, c1 = 0 and c2 = 0
...

As an immediate consequence, we have :
Theorem 2
...
8
...
If two differentiable functions x1 and x2 (defined
on I) are linearly dependent on I then, their Wronskian
W [x1 (t), x2 (t)] ≡ 0 on I
...
It is easy to extend Definition 2
...
4 for a set of n functions
and derive the similar results of Theorems 2
...
7 and 2
...
8 for these sets of n functions
...
2
...

Definition 2
...
9
...


...

(n−1)

x1

x2 (t)
x2 (t)

...


...


...
, t ∈ I
...


...
2
...
If the Wronskian of n functions x1 , x2 , · · · , xn defined on I is non-zero
for at least one point of I, then the set of n functions x1 , x2 , · · · , xn is linearly independent
on I
...
2
...
If a set of n functions x1 , x2 , · · · , xn whose derivatives exist up to and
including that of order (n − 1) are linearly dependent on an interval I, then their Wronskian
W [x1 (t), x2 (t), · · · , xn (t)] ≡ 0 on I
...
2
...
11 may not be true in general
...
For example, let x1 (t) = t2 and x2 (t) = t|t|, −∞ < t < ∞
...

The situation is very different when the given functions are solutions of certain linear
homogeneous differential equation
...

Example 2
...
12
...
We note
cos βt
sin βt
, t ∈ I,
2αcosβt − βsinβt 2αsinβt + βcosβt

W [x1 (t), x2 (t)] = e2αt

= βe2αt = 0,

t ∈ I
...

EXERCISES
1
...

2
...

3
...


Then, prove that f and g are linearly independent on [−1, 1]
...

4
...

5
...
If two functions defined on I
are linearly independent on I1 then, show that they are linearly independent on I2
...
3

Basic Theory for Linear Equations

In this section the meaning that is attached to a general solution of the differential equation
and some of its properties are studied
...
The extension is not
hard at all
...
Consider
a0 (t)x (t) + a1 (t)x (t) + a2 (t)x(t) = 0,

a0 (t) = 0, t ∈ I
...
6)

Later we shall study structure of solutions of a non-homogeneous equation of second order
...


(2
...
6) is
L(x) = 0 on I
...
3
...
The operator L is linear on the space of twice differential functions on I
...
Let y1 and y2 be any two twice differentiable functions on I
...
For the linearity of L We need to show
L(c1 y1 + c2 y2 ) = c1 L(y1 ) + c2 L(y2 ) on I
which is a simple consequence of the linearity of the differential operator
...
14), we have the superposition principle:
Theorem 2
...
2
...
6)
for t ∈ I
...
6), where c1 and c2 are any constants
...
The first of the following examples illustrates
Theorem 2
...
2 while the second one shows that the linearity cannot be dropped
...
3
...
(i) Consider the differential equation for the linear harmonic oscillator,
namely
x + λ2 x = 0, λ ∈ R
...


40

(ii) The differential equation
x = −x 2 ,
admits two solutions
x1 (t) = log(t + a1 ) + a2 and x2 (t) = log(t + a1 ),
where a1 and a2 are constants
...
We note that the given equation is nonlinear
...
14) and Theorem 2
...
2 which prove the principle of superposition for the linear
equations of second order have a natural extension to linear equations of order n(n > 2)
...
8)
where a0 (t) = 0 on I
...
9)

where L is the operator defined by the relation (2
...
As a consequence of the definition, we
have :
Lemma 2
...
4
...
8), is a linear operator on the space of all n
times differentiable functions defined on I
...
3
...
Suppose x1 , x2 , · · · , xn satisfy the equation (2
...
Then,
c1 x1 + c2 x2 + · · · + cn xn ,
also satisfies (2
...

The proofs of the Lemma 2
...
4 and Theorem 2
...
5 are easy and hence omitted
...
3
...
9) given an additional hypothesis
that the set of solutions x1 , x2 , · · · , xn is linearly independent
...
9) is indeed a linear combination of x1 , x2 , · · · , xn
...
3
...
Let x1 , x2 , · · · , xn be n linearly independent solutions of (2
...
Then,
c1 x1 + c2 x2 + · · · + cn xn ,
is called the general solution of (2
...

Example 2
...
7
...

t2
1
We note that x1 (t) = t2 and x2 (t) = are 2 linearly independent solutions on 0 < t < ∞
...

t
x −

41

Example 2
...
8
...

The general solution x is
x(t) = c1 t + c2 t2 + c3 t3 , t > 0
...
3
...
3
...
The question now is whether this
property can be used to generate the general solution for a given linear equation
...
Here we make use of the interplay between linear independence
of solutions and the Wronskian
...
We recall the equation (2
...

Lemma 2
...
9
...

Proof
...
Then, the
system of linear algebraic equations for c1 and c2
c1 x1 (t0 ) + c2 (t)x2 (t0 ) = 0
c1 x1 (t0 ) + c2 (t)x2 (t0 ) = 0

,

(2
...
For such a nontrivial solution (c1 , c2 ) of (2
...


By Theorem 2
...
2, x is a solution of the equation (2
...

Since an initial value problem for L(x) = 0 admits only one solution, we therefore have
x(t) ≡ 0, t ∈ I, which means that
c1 x1 (t) + c2 x2 (t) ≡ 0,

t ∈ I,

with at least one of c1 and c2 is non-zero or else, x1 , x2 are linearly dependent on I, which is a
contradiction
...

As a consequence of the above lemma an interesting corollary is :
Corollary 2
...
10
...

Lemma 2
...
9 has an immediate generalization of to the equations of order n(n > 2)
...

Lemma 2
...
11
...
9) which exist on I, then the Wronskian
42

W [x1 (t), x2 (t), · · · , xn (t)],
is never zero on I
...

Example 2
...
12
...
3
...
20
...
3
...
The Wronskian of
these solutions is
W [x1 (t), x2 (t)] = −3 = 0 for t ∈ (−∞, ∞)
...
3
...

The conclusion of the Lemma 2
...
11 holds if the equation (2
...
A doubt may occur whether such a set of solutions exist or not
...
3
...

Example 2
...
13
...

Now, let x1 (t), t ∈ I be the unique solution of the IVP
L(x) = 0, x(a) = 1, x (a) = 0, x (a) = 0;
x1 (t), t ∈ I be the unique solution of the IVP
L(x) = 0, x(a) = 0, x (a) = 1, x (a) = 0;
and x3 (t), t ∈ I be the unique solution of the IVP
L(x) = 0, x(a) = 0, x (a) = 0, x (a) = 1
where a ∈ I
...
For
W [x1 (a), x2 (a), x3 (a)] =

1 0 0
0 1 0
0 0 1

= 1 = 0
...
3
...
Thus, a set of three linearly
independent solution exists for a homogeneous linear equation of the third order
...

Theorem 2
...
14
...
9) existing on
an interval I ⊆ R
...
9) existing on I is of the form
x(t) = c1 x1 (t) + c2 x2 (t) + · · · + cn xn (t), t ∈ I
43

where c1 , c2 , · · · , cn are some constants
...
Let x be any solution of L(x) = 0 on I, and a ∈ I
...

Consider the following system of equation:
c1 x1 (a) + c2 x2 (a) + · · · + cn xn (a) = a1
c1 x1 (a) + c2 x2 (a) + · · · + cn xn (a) = a2
·······································
(n−1)
(n−1)
(n−1)
c1 x1
(a) + c2 x2
(a) + · · · + cn xn
(a) = an










...
11)

We can solve system of equations (2
...
The determinant of the coefficients
of c1 , c2 , · · · , cn in the above system is not zero and since the Wronskian of x1 , x2 , · · · , xn at
the point a is different from zero by Lemma 2
...
11
...
11)
...

From the uniqueness theorem, there is one and only one solution with these initial conditions
...
This completes the proof
...
9) represents a n parameter family of
curves
...
Such
a notion motivates us define a general solution of a non-homogeneous linear equation
L(x(t)) = a0 (t)x (t) + a1 (t)x (t) + a2 (t)x(t) = d(t), t ∈ I

(2
...
Formally a n parameter solution x of (2
...
12)
...
12) ”contains” n arbitrary constants
...
3
...
Suppose xp is any particular solution of (2
...
Then x = xp + xh is a
general solution of (2
...

Proof
...
12), since
L(x) = L(xp + xh ) = L(xp ) + L(xh ) = d(t) + 0 = d(t),

t∈I

Or else x is a solution of (2
...
12)
...
12) is known, then the general solution of (2
...

The Theorem 2
...
15 has a natural extension to a n-th order non-homogeneous differential
equation of the form
L(x(t)) = a0 (t)xn (t) + a1 (t)xn−1 (t) + · · · + an (t)x(t) = d(t), t ∈ I
...
Then, the general solution of L(x) = d is of
the form
x(t) = xp (t) + c1 x1 (t) + c2 x2 (t) + · · · + cn xn (t),

t∈I

where {x1 , x2 , · · · , xn } is a linearly independent set of n solutions of (2
...

Example 2
...
16
...


The two solutions x1 (t) = t2 and x2 (t) = 1/t are linearly independent on 0 < t < ∞
...


− t and so the general solution x is
x(t) = ( 1 − t) + c1 t2 + c2 1 ,
2
t

where c1 and c2 are arbitrary constants
...
Suppose that z1 is a solution of L(y) = d1 and that z2 is a solution of L(y) = d2
...

2
...

3
...


where a0 , a1 and a2 are continuous functions defined on I
...
Show that x2 defined by
t

x2 (t) = x1 (t)

t0

1
2 (s) exp −
x1

s
t0

a1 (u)
du ds,
a0 (u)

t0 ∈ I,

is also a solution
...


2
...
3
...
13)

where L(x) is given by (2
...
9), is determined the moment we know xh and xp
...
13) as well as the general
solution xh of the homogeneous equation L(x) = 0
...
Variation of parameter
is a general method gives us a particular solution
...
To make the matter
simple let us consider a second order equation
L(x(t)) = a0 (t)x (t) + a1 (t)x (t) + a2 (t)x(t) = d(t), a0 (t) = 0,

t ∈ I,

(2
...
Let x1 and x2 be two linearly
independent solutions of the homogeneous equation
a0 (t)x (t) + a1 (t)x (t) + a2 (t)x(t) = 0, a0 (t) = 0,

t ∈ I
...
15)

Then, c1 x1 + c2 x2 is the general solution of (2
...

The general solution of (2
...
14)
...
In
other words, we would like to find u1 and u2 on I such that
xp (t) = u1 (t)x1 (t) + u2 (t)x2 (t), t ∈ I
satisfies (2
...

In order to substitute xp in (2
...
Now
46

(2
...

We do not wish to end up with second order equations for u1 , u2 and naturally we choose
u1 and u2 to satisfy
x1 (t)u1 (t) + x2 (t)u2 (t) = 0
(2
...
With (2
...

(2
...
18) leads to
xp = u1 x1 + u1 x1 + u2 x2 + u2 x2
...
19)

Now we substitute (2
...
18) and (2
...
14) to get
[a0 (t)x1 (t) + a1 (t)x1 (t) + a2 (t)x1 (t)]u1 + [a0 (t)x2 (t) + a1 (t)x2 (t) + a2 (t)x2 (t)]u2 +
u1 a0 (t)x1 + u2 a0 (t)x2 = d(t),
and since x1 and x2 are solutions of (2
...

a0 (t)

(2
...
17) and (2
...
It is easy to see
u1 (t) =

−x2 (t)d(t)
a0 (t)W [x1 (t),x2 (t)]

u2 (t) =

x1 (t)d(t)
a0 (t)W [x1 (t),x2 (t)]

where W [x1 (t), x2 (t)] is the Wronskian of the solutions x1 and x2
...
21)

x1 (t)d(t)

u2 (t) = a0 (t)W [x1 (t),x2 (t)] dt
Now substituting the values of u1 and u2 in (2
...
14)
...
To conclude, we have :
Theorem 2
...
1
...
14) be continuous functions on I
...
15)
...
14) is given by (2
...

Theorem 2
...
2
...
14) on I is
x(t) = xp (t) + xh (t),
where xp is a particular solution given by (2
...

Also, we note that we have an explicit expression for xp which was not so while proving
Theorem 2
...
15
...

47

Lecture 11
Example 2
...
3
...


Note that x1 = t and x2 = t2 are two linearly independent solutions of the homogeneous
equation on [1, ∞)
...

Substituting the values of x1 , x2 , W [x1 (t), x2 (t)], d(t) = t sin t and a0 (t) ≡ 1 in (2
...
Thus, the general solution is
x(t) = −t sin t + c1 t + c2 t2 ,
where c1 and c2 are arbitrary constants
...
Let us
consider an equation of the n-th order
L(x(t)) = a0 (t)xn (t) + a1 (t)xn−1 (t) + · · · + an (t)x(t) = d(t), t ∈ I
...
22)

Theorem 2
...
4
...
Let
c1 x1 + c2 x2 + · · · + cn xn
be the general solution of L(x) = 0
...
22) is given by
xp (t) = u1 (t)x1 (t) + u2 (t)x2 (t) + · · · + un (t)xn (t),
where u1 , u, · · · , un satisfy the equations
u1 (t)x1 (t) + u2 (t)x2 (t) + · · · + un (t)xn (t) = 0
u1 (t)x1 (t) + u2 (t)x2 (t) + · · · + un (t)xn (t) = 0
·············································
(n−2)

u1 (t)x1

(n−1)

a0 (t) u1 (t)x1

(n−2)

(t) + u2 (t)x2

(n−1)

(t) + u2 (t)x2

(n−2)

(t) + · · · + un (t)xn

(t) = 0

(n−1)

(t) + · · · + un (t)xn

(t) = d(t)
...
4
...

EXERCISES
48

1
...
Also find
the solution when x(0) = 0, x (0) = 1, x (0) = 0
...
Use the method of variation of parameter to find the general solution of x − x = d(t)
where
(i) d(t) = t, (ii) d(t) = et , (iii) d(t) = cos t, and (iv) d(t) = e−t
...

3
...
Show that particular solution of (2
...
(Hint : If xp is a particular solution of (2
...
15) then
show that xp + c x is also a particular solution of (2
...
)
Two Useful Formulae
Two formulae proved below are interesting in themselves
...
Consider an equation
L(y) = a0 (t)y + a1 (t)y + a2 (t)y = 0,

t ∈ I,

where a0 , a1 , a2 : I → R are continuous functions in addition a0 (t) = 0 for t ∈ I
...
Consider
uL(v) − vL(u) = a0 (uv − vu ) + a1 (uv − vu )
...
23)

The Wronskian of u and v is given by W (u, v) = uv − vu which shows that
d
W (u, v) = uv − vu
...
23) are W (u, v) and W (u, v)
respectively
...
4
...
If u and v are twice differential functions on I, then
uL(v) − vL(u) = a0 (t)

d
W [u, v] + a1 (t)W [u, v],
dt

(2
...
7)
...

dt
49

(2
...
4
...
( Able’s Formula) If u and v are solutions of L(x) = 0 given by (2
...


Proof
...
25) and Solving we get
a1 (t)
dt
a0 (t)

W [u, v] = k exp −

(2
...

The above two results are employed to obtain a particular solution of a non-homogeneous
second order equation
...
4
...
Consider the general non-homogeneous initial value problem given by
L(y(t)) = d(t), y(t0 ) = y (t0 ) = 0, t, t0 ∈ I,

(2
...
14)
...
Let x denote a solution of L(y) = d
...
24) by x1 and x to
get
a1 (t)
d(t)
d
W [x1 , x] +
W [x1 , x] = x1
(2
...
Hence
t

W [x1 , x] = exp −

t0

t

a1 (s)
ds
a0 (s)

t0

exp

s a1 (u)
t0 a0 (u) du

x1 (s)ds

a0 (s)

ds

(2
...
29) we have used the initial conditions x(t0 ) = x (t0 ) = 0 in view of which
W [x1 (t0 ), x(t0 )] = 0
...

a0 (s)W [x1 (s), x2 (s)]

(2
...
30) as well could have been derived with x2 in place of x1 in order to get
t

x2 x − xx2 = W [x1 , x2 ]

t0

x2 (s)d(s)
ds
...
31)

From (2
...
31) one easily obtains
t

x(t) =
t0

x2 (t)x1 (s) − x2 (s)x1 (t) d(s)
ds
...
32)

It is time for us to recall that a particular solution in the form of (2
...

50

Lecture 12
2
...
Now we attempt to obtain a general solution of a linear equation with
constant coefficients
...


(2
...
34)

where a0 , a1 , · · · , an are real constants and a0 = 0
...
33) or (2
...
33) or (2
...
Elementary calculus tell us that one such function
is the exponential, namely ept , where p is a constant
...

ept is a solution of (2
...

which means that ept is a solution of (2
...


(2
...
5
...
λ is a root of the quadratic equation (2
...
33)
...

Theorem 2
...
2
...
36)

iff eλt is a solution of the equation (2
...

Definition 2
...
3
...
35) or (2
...
33) or (2
...
The corresponding polynomials
are called characteristic polynomials
...
35) has two roots, say λ1 and λ2
...
5
...
33) provided λ1 = λ2
...
33)
...
35)
...
33) and the general solution x of
(2
...

Case 2 : When λ1 and λ2 are complex roots, from the theory of equations, it is well
known that they are complex conjugates of each other i
...
, they are of the form λ1 = a + ib
and λ2 = a − ib
...

Now, if h is a complex valued solution of the equation (2
...
This means that the real part and the imaginary part of a
solution are also solutions of the equation (2
...
Thus
eat cos bt, eat sin bt
are two linearly independent solutions of (2
...
The general solution is given by
eat [c1 cos bt + c2 sin bt],

t ∈ I
...
35) are equal, then the root is
λ1 = −a1 /2a0
...
5
...
33) namely eλ1 t
...

Method 1 : x1 (t) = eλ1 t is a solution and so is ceλ1 t where c is a constant
...
33) and then determine u
...

Differentiating x2 twice and substitution in (2
...

1
Since λ1 = −a1 /2a0 the coefficients of u and u are zero
...
33)
...
33) and x1 , x2 are linearly independent
...
37)

where p(λ) denotes the characteristic polynomial of (2
...
From the theory of equations we
know that if λ1 is a repeated root of p(λ) = 0 then
p(λ1 ) = 0 and


p(λ)
∂λ

λ=λ1

= 0
...
38)

Differentiating (2
...

∂λ
∂λ
∂λ
But,


∂ λt
L(eλt ) = L
e
= L(teλt )
...

∂λ
Substituting λ = λ1 and using the relation in (2
...
34)
...
33) is given by
L(teλt ) =

c1 eλ1 t + c2 teλ1 t ,
where λ1 is the repeated root of characteristic equation (2
...

Example 2
...
4
...
by case 1, e−3t , e2t are two linearly independent solutions
and the general solution x is given by
x(t) = c1 e−3t + c2 e2t , t ∈ I
...
5
...
For
x − 6x + 9x = 0, t ∈ I,
the characteristic equation is
p2 − 6p + 9 = 0,
which has a repeated root p = 3
...


Lecture 13
The results which have been discussed above for a second order have an immediate
generalization to a n-th order equation (2
...
The characteristic equation of (2
...

(2
...
39) then, ep1 t is a solution of (2
...
If p1 happens to be a complex
root, the complex conjugate of p1 i
...
, p1 is also a root of (2
...
In this case
¯
eat cos bt and eat sin bt
are two linearly independent solutions of (2
...

We now consider when roots of (2
...
There are two
cases:
(i) when a real root has a multiplicity m1 ,
(ii) when a complex root has a multiplicity m1
...
39) with the multiplicity m1
...
34), namely
eqt , teqt , t2 eqt , · · · , tm1 −1 eqt
...
39) with the multiplicity m1
...

Then, as in Case 1, we note that
est , test , · · · , tm1 −1 est ,

(2
...
34)
...
34), the real and
imaginary parts of each solution given in (2
...
34)
...
34) are given by

es1 t cos s2 t, es1 t sin s2 t




tes1 t cos s2 t, tes1 t sin s2 t

2 es1 t cos s t, t2 es1 t sin s t
t
(2
...
39) are known, no matter whether
they are simple or multiple roots, there are n linearly independent solutions and the general
solution of (2
...

To summarize :
54

Theorem 2
...
6
...
39) and suppose the root ri has multiplicity mi , i = 1, 2, · · · , s, with
m1 + m2 + · · · + ms = n
...
42)





are the solutions of L(x) = 0 for t ∈ I
...
Find the general solution of
(i) x(4) − 16 = 0,
(ii) x + 3x + 3x + x = 0,
(iii) x + ax + bx = 0, for some real constants a and b,
(iv) x + 9x + 27x + 27x = 0
...
Find the general solution of
(i) x + 3x + 3x + x = e−t ,
(ii) x − 9x + 20x = t + e−t ,
(iii) x + 4x = A sin t + B cos t, where A and B are constants
...
(Method of undetermined coefficients) To find the general solution of a non-homogeneous
equation it is necessary to know many times a particular solution of the given equation
...
Consider an equation with constant coefficients
a0 x + a1 x + a2 x = d(t),

a0 = 0,

(2
...

Let xp (t) = Beat , be a particular solution, where B is undetermined
...
In case P (a) = 0, assume that the particular solution is of the form Bteat
...


It is also possible that P (a) = P (a) = 0
...
Show that B = A/2a0 = A/P (a)
...
Using the method described in Example 2
...
5, find the general solution of
(i) x − 2x + x = 3e2t ,
(ii) 4x − 8x + 5x = et
...
When d(t) = A sin Bt or A cos Bt or their linear combination in equation (2
...
Determine the
constants C and D which yield the required particular solution
...

6
...

7
...

(i) Prove that every solution of the above equation approaches zero if and only if the
roots of the characteristic equation have strictly negative real parts
...


56

Module 3

System of Linear Differential
equations
Lecture 14
3
...
Their importance needs very little emphasis
...
Linear Algebra is a prerequisite
...
We try our best to keep the description as
self contained as possible
...


3
...
1)

where I is an interval and where x : I → R and f : I × R → R
...
2)
is a spacial case of (3
...
In fact, we can think of a more general set-up, where (3
...
2) are spacial cases
...
Let
f1 , f2 , · · · , fn : I × D → R,
be n real valued functions defined on an open connected set D ⊂ Rn
...
3)
57

where x1 , x2 , · · · , xn are real valued functions to be determined
...
3) is to find an interval I and n functions φ1 , φ2 , · · · , φn
defined on I such that:
(i) φ1 (t), φ2 (t), · · · , φn (t) exists for each t ∈ I,
(ii) (t, φ1 (t), φ2 (t), · · · , φn (t)) ∈ I × D for each t in I, and
(iii) φi (t) = fi (t, φ1 (t), φ2 (t), · · · , φn (t)), t ∈ I, i = 1, 2, · · · , n
...
3)
...
2
...
Suppose (t0 , α1 , α2 , · · · , αn ) is a point in I × D
...
3) is to find a solution (φ1 , φ2 , · · · , φn ) of (3
...

The system of n equations has a concise form if we use vector notation
...

Define
fi (t, x) = fi (t, x1 , x2 , · · · , xn ), i = 1, 2, · · · , n
...
3) can be written as
xi = fi (t, x),

i = 1, 2, · · · , n
...
4)

Now define a column vector f by
f (t, x) = (f1 (t, x), f2 (t, x), · · · , fn (t, x))
...
4) assumes the form
x = f (t, x)
...
5)

We note that the equation (3
...
5) looks alike (but for notations)
...
5)
is (3
...

Example 3
...
2
...
Let ϕ = (φ1 , φ2 ) be the solution of the system with initial conditions
ϕ(t0 ) = (φ1 (t0 ), φ2 (t0 )) = (α, β),
58

α > 0
...


In the above example we have seen a concise way of writing a system of two equations
in a vector form
...
2)
...
Depending on the context we should be able to
decipher whether x or f is a row or a column vector
...
Now we concentrate on a linear system of n equations
in this chapter
...
Let the functions aij , bj : I → R, i, j = 1, 2, · · · , n
be given
...


(3
...
6) is called a (general) non-homogeneous system of n equations
...
6) is a special case of the system (3
...
Define
a matrix A(t), for t ∈ I by the relation


a11 (t) a12 (t) · · · a1n (t)
 a21 (t) a22 (t) · · · a2n (t) 


A(t) = 
...


...


...


...


...


...
 and x(t) = 
...


...


...
With these notations (3
...


(3
...
6) is linear in x1 , x2 , · · · , xn when b ≡ 0 on I
...
7) is a vector representation of a linear non-homogeneous system of equations
(3
...
If b ≡ 0 on I, then (3
...
8)

is called Linear homogeneous system of n equations or just a system of equations
...
The map A : I → Mn (R) is called a variable
matrix,and if this map is continuous we say that A(t), t ∈ I is continuous
...
We use
these notions throughout the rest of the modules
...
2
...
Consider a system of equations
x1 = 5x1 − 2x2
x2 = 2x1 + x2
which has the form
x1
x2

=

5 −2
x
× 1
2 1
x2

Verify that a solution is given by
x1 (t) = (c1 + c2 t)e3t ,

1
x2 (t) = (c1 − 2 c2 + c2 t)e3t
...
9)

x(t0 ) = α0 , x (t0 ) = α1 , · · · , x(n−1) (t0 ) = αn−1 ,

t0 ∈ I,

(3
...
The n-th order equation is represented by
a system of n equations by defining x1 , x2 , · · · , xn by
x1 = x,

x = x2 , · · · , x(n−1) = xn
...
11)

Let ϕ = (φ1 , φ2 , · · · , φn ) be a solution of (3
...
Then
φ2 = φ1 ,

(n−1)

φ3 = φ2 = φ1 , · · · , φn = φ1

,
(n−1)

g(t, φ1 (t), φ2 (t), · · · , φn (t)) = g(t, φ1 (t), φ1 (t), · · · , φ1

(t))

(n)

= φ1 (t)
...
9)
...
9)
on I then, the vector ϕ = (φ1 , φ2 , · · · , φn ) is a solution of (3
...
Thus, the system (3
...
9)
...
at this time let
us observe that (3
...

In particular, an equation of n-th order of the form
60

a0 (t)x(n) + a1 (t)x(n−1) + · · · + an (t)x = b(t),

t∈I

is called a linear non-homogeneous n-th order equation which is equivalent to (in case a0 (t) =
0 for any t ∈ I)
a1 (t) (n−1)
an (t)
b(t)
x(n) +
x
+ ··· +
x=

...
12)
a0 (t)
a0 (t)
a0 (t)
By letting
x1 = x,

xn (t) = −
and with

x1 = x2 , · · · , xn−1 = xn

an (t)
an−1 (t)
a1 (t)
b(t)
x1 −
x2 − · · · −
xn +
a0 (t)
a0 (t)
a0 (t)
a0 (t)


x1

 x2 

 
x = 
...


...


...


1
0

...


...


...







b(t)
a0 (t)

0
1

...


...


...


...


0

···
···



−a1
a0

−an−2
a0






1 

the system (3
...


(3
...
12) and (3
...
The representations (3
...
13) gives
us a considerable simplicity in handling the systems of n equations
...
2
...
For illustration we consider a linear equation
x − 6x + 11x − 6x = 0
...


Then, the given equation is equivalent to the system x = A(t)x, where


 
0
1
0
x1
0
1
...

EXERCISES
1
...


is a solution
...
Represent the IVP
x1 = x2 + 3, x2 = x2 , x1 (0) = 0, x2 (0) = 0
2
1
as a system of 2 equations
x = f (t, x), x(0) = x0
...
Find a value of M such that
|f (t, x)| ≤ M on R = {(t, x) : |t| ≤ 1, |x| ≤ 1}
...
The system of three equations is given by
(x1 , x2 , x3 ) = (4x1 − x2 , 3x1 + x2 − x3 , x1 + x3 )
...

4
...

1
2
Find an upper bound for f ( on the rectangle R
...
Represent the linear system of equations
x1 = e−t x1 + sin tx2 + tx3 + t21 ,
+1
x2 = − cos tx3 + e−2t ,
x3 = cos tx1 + e−t sin tx2 + t
...

6
...

7
...

8
...

(i) Prove that x1 satisfies the second order equation
x1 − (a + d)x1 + (ad − bc)x1 = 0
...

9
...


10
...

dt
dt
63

3
...
14)

where x is (column) n-vector and A(t), t ∈ R is a n × n matrix In other words, consider a
set of n solutions of the system (3
...
Such a matrix is called a “solution matrix” and it satisfies the matrix differential
equation
Φ = A(t)Φ, t ∈ I
...
15)
The matrix Φ is called a fundamental matrix for the system (3
...
We associate with system (3
...


(3
...
16)
...
The answer is indeed in the affirmative
...

Theorem 3
...
1
...
Suppose a matrix Φ
satisfies (3
...
Then, det Φ satisfies the equation
(det Φ) = (trA)(det Φ),

(3
...


(3
...
By definition the columns ϕ1 , ϕ2 , · · · , ϕn of Φ are solutions of (3
...
Denote
ϕi = {φi1 , φi2 , · · · , φin },

i = 1, 2, · · · , n
...
Then,
n

φij (t) =

aik (t)φkj (t);

i, j = 1, 2, · · · , n

(3
...


...


φ12
φ22

...


...


...


...

φnn

and so it is seen that
φ11
φ21
(det Φ) =
...


...


...


···
···

...


φn1 φn2 · · ·

φ11 φ12 · · ·
φ1n
φ21 φ22 · · ·
φ2n

...


...


...


...


...

φn1 φn2 · · ·
φnn
64

φ11 φ12 · · ·
φ1n
φ21 φ22 · · ·
φ2n

...


...


...


...


...

φn1 φn2 · · ·
φnn

φ1n
φ2n

...


...
19), the first term on the right side reduces
to
n

n

a1k φk1
k=1

φ21

...


...


...

φn2

···

...

···

a1k φkn
k=1

φ2n

...


...
Carrying this out for the remaining terms it is seen that
(det Φ) = (a11 + a22 + · · · + ann )detΦ = (trA)detΦ
...
The proof of the theorem is
complete since we know that the required solution of this is given by (3
...

Theorem 3
...
2
...
16) on I is a fundamental matrix of (3
...

Proof
...
Then, the columns of Φ are
linearly independent on I
...
The proof of he converse is
still easier and hence omitted
...

Theorem 3
...
3
...
14) and let C be a constant
non-singular matrix
...
14)
...
14) is ΦC for some non-singular matrix C
...
The first part of the theorem is a single consequence of Theorem 3
...
2 and the fact
that the product of non-singular matrices is non-singular
...
14) and let Φ2 = Φ1 Ψ
...
Equation (3
...
Thus,we have Φ1 Ψ = 0 which shows that Ψ = 0
...
Since Φ1 and Φ2 are non-singular so is C
...
14)namely when the
matrix A(t) is independent of T or that A is a constant matrix
...
3
...
Let Φ(t), t ∈ I, be a fundamental matrix of the system
x = Ax,

(3
...
Here E denotes the identity matrix
...
21)
for all values of t and s ∈ R
...
By the uniqueness theorem there exists a unique fundamental matrix Φ(t) for the
given system such t hat Φ(0) = E
...
22)
Define for any real number s,
Y (t) = Φ(t + s)
Then,
Y (t) = Φ (t + s) = AΦ(t + s) = AY (t)
...
22) such that Y (0) = Φ(s)
...

Let us note that Z is solution of (3
...
Clearly
Z(0) = Φ(0)Φ(s) = EΦ(s) = Φ(s)
...
22) such that Y (0) = Z(0) = Φ(s)
...
21) holds, completing
the proof
...
3
...
Consider the linear system (3
...

66

EXERCISES
1
...
14) and C is any constant non-singular matrix
then, in general, show that CΦ need not be a fundamental matrix
...
Let Φ(t) be a fundamental matrix for the system (3
...

Then, show that the matrix (Φ−1 (t))T satisfies the equation
d −1 T
(Φ ) = −AT (Φ−1 )T ,
dt
and hence show that (Φ−1 )T is a fundamental matrix for the system
x = −AT (t)x, t ∈ I
...
23)

System (3
...
14) and vice versa
...
Let Φ be a fundamental matrix for Eq
...
14), with A(t) being a real matrix
...
23) if and only if ΨT Φ = C,
where C is a constant non-singular matrix
...
Consider a matrix P defined by
P (t) =

f1 (t) f2 (t)
, t ∈ I,
0
0

where f1 and f2 are any two linearly independent functions on I
...
Can the columns
P be solutions of linear homogeneous systems of equations of the form (3
...
4?)
5
...
20) where
(a)

(b)



−1 3 4
A =  0 2 0 ;
1 5 −1

1 3 8
A = −2 2 1
−3 0 5


6
...
4

 t

e
1 0
Φ(t) =  1 e−t 0
0
0 1


1 t
t2
1 
...
The system and let b : I −→ Rn be a
continuous function
...
24)
is called a non-homogeneous linear system of n equations
...
24) reduces to (3
...
The term b in (3
...
14)
...
24) is regarded as a perturbed state
of (3
...
The solution of (3
...
14) and to
some extent the connection is brought out in this section
...
24)
...
24) in term of (3
...
Let Φ be a fundamental matrix for the system (3
...
Let Ψ be a solution of (3
...
Let u : I → Rn be
differentiable and u(0) = 0 Now let us assume that ψ(t) is given by
ψ(t) = Φ(t)u(t),

t ∈ I,

(3
...
Assuming ψ a solution of (3
...
Substituting (3
...
24) we get, for t ∈ I,
ψ (t) = Φ (t)u(t) + Φ(t)u (t) = A(t)Φ(t)u(t) + Φ(t)u (t)
or else
ψ (t) = A(t)ψ(t) + b(t) = A(t)Φ(t)u(t) + b(t)
...

Since Φ(t) for t ∈ I is non-singular we have
u (t) = Φ−1 (t)
...
26)

Substituting the value of u in (3
...
27)

t0

To sum up :
Theorem 3
...
1
...
14) for t ∈ I
...
27), is a solution of the IVP
x = A(t)x + b(t), x(t0 ) = 0
...
28)

Now let xh (t) be the solution of the IVP
x = A(t)x, x(t0 ) = c, t, t0 ∈ I
...
29)

Then, a consequence of Theorem 3
...
1 is
t

ψ(t) = xh (t) + Φ(t)

Φ−1 (s)b(s)ds,

t∈I

(3
...

Thus, with a prior knowledge of the solution of (3
...
28) is given by
(3
...

EXERCISES
1
...
Prove that the equation (3
...

2
...

e−t

e3t 2te3t
0
e3t

is a fundamental matrix of x = Ax
...

1
69

3
...
Let b(t) =
0 e2t
of the non-homogeneous equation

fundamental matrix is Φ(t) =

x = Ax + b(t), f or which ψ(0) =

70

1 0

...
Find the solution ψ
cos bt
0
1


...
5

Linear Systems with Constant Coefficients

In previous sections, we have studied the existence and uniqueness of solutions of linear
systems of
x = A(t)x, x(t0 ) = x0 , t, t0 ∈ I
...
31)

However, there are certain difficulties in finding the explicit general solution of such systems
in an
...
31)
when A is a constant matrix
...
If the characteristic values of the matrix A are known then, the general solution
can be obtained in an explicit form
...

Before proceeding further, let us recall the definition of the exponential of a given-matrix
A
...
expB
For the present we assume the proof for the convergence of the series which defines the exp A
...

Now consider a linear homogeneous system with a constant matrix, namely,
x = Ax,

t ∈ I,

(3
...
From Module 1 recall that the solution of (3
...
A similar situation prevails when we
deal with (3
...
5
...

Theorem 3
...
1
...
32) is x(t) = etA c, where c is an
arbitrary constant column matrix
...
32) with the initial condition
x(t0 ) = x0 , t0 ∈ I, is
x(t) = e(t−t0 )A x0 , t ∈ I
(3
...
Let x be any solution of (3
...
Define a vector u by,
u(t) = e−tA x(t), t ∈ I
...

Since x is a solution of (3
...
Substituting the value c in place of u, we have
x(t) = etA c
...

Since A commutes with itself, etA e−t0 A = e(t−t0 )A , and thus, (3
...

In particular, let us choose t0 = 0 and n linearly independent vectors ej , j = 1, 2, · · · , n,
the vector ej being the vector with 1 at the jth component and zero elsewhere
...

Thus a fundamental matrix for (3
...
34)

since the matrix with columns represented by e1 , e2 , · · · , en is the identity matrix E
...
5
...
For illustration let us find a
where

α1
A=0
0

etA

(3
...

A fundamental matrix is etA
...


verify that

0
0
k
α3



exp(α1 t)
0
0

...
5
...
Consider a similar example to determine a fundamental matrix for x = Ax,
3 −2
where A =

...

−2
0

By the remark which followed Theorem 3
...
1, we have
exp(tA) = exp
since

3 0
0 3

and

0 −2
−2
0
exp

3 0
0 3

t
...
But

3 0
0 3

t = exp

3t 0
0 3t

=

e3t
0
3t
0 e

It is left as an exercise to the readers to verify that
exp
Thus, etA =

1
2

0 −2
−2
0

e5t + et et − e5t
et − e5t e5t + et

t=

1
2

e2t + e−2t e−2t − e2t
e−2t − e2t e2t + e−2t


...


Again we recall from Theorem 3
...
1 we know that the general solution of the system
(3
...
Once etA determined, the solution of (3
...
In
order to be able to do this the procedure given below is followed
...
32)
in the form
x(t) = eλt c,
(3
...
x is determined if λ and c are known
...
36) in (3
...


(3
...
The system
(3
...
Let
P (λ) = det(λE − A)
...
38)
is called the “characteristic equation” for A
...
38) is an algebraic equation, it admits
n roots which may be distinct, repeated or complex
...
38) are called the
“eigenvalues” or the “characteristic values” of A
...
37)
...
Note that any nonzero constant multiple of c1 is also an eigenvector corresponding to λ1
...
32)
...
Then, it is clear that
xk (t) = eλk t ck (k = 1, 2, · · · , n),
are n linearly independent solutions of the system (3
...
Here we stress that the eigenvectors
corresponding to the eigenvalues are linearly independent
...
32)
...


x(t) =

(3
...
32) and
hence, Φ is a fundamental matrix
...
4, we therefore have
etA = Φ(t)D,
where D is some non-singular constant matrix
...

Example 3
...
4
...

x = 0
6 −11 6

The characteristic equation is given by
λ3 − 6λ2 + 11λ − 6 = 0
...

Also the corresponding eigenvectors are
 
  
1
2
1
 1  ,  4  and  3 ,
9
8
1


respectively
...
Also a fundamental matrix is


α1 et 2α2 e2t α3 e3t
α1 et 4α2 e2t 3α3 e3t 
...
The next step is to find the nature of the fundamental matrix in the
case of repeated eigenvalues of A
...
Consider
the system of equations, for an eigenvalue λi (which has multiplicity ni ),
(λi E − A)ni x = 0,

i = 1, 2, · · · , m
...
40)

Let Xi be the subspace of Rn generated by the solutions of the system (3
...
From linear algebra we know that for any x ∈ Rn , there exist unique
vectors y1 , y2 , · · · , ym , where yi ∈ Xi , (i = 1, 2, · · · , m), such that
x = y1 + y2 + · · · + ym
...
41)

It is common in linear algebra to speak of Rn as a “direct sum” of the subspaces X1 , X2 , · · · , Xm
...
Let x be a solution of (3
...
Now there exist unique vectors α1 , α2 , · · · , αm such that
α = α1 + α2 + · · · + αm
...
5
...
32)) with x(0) = α is
m

etA αi

x(t) = etA α =
i=1

But,
etA αi = exp(λi t) exp[t(A − λi E)]αi
By the definition of the exponential function, we get
etA αi = exp(λi t)[E + t(A − λi E) + · · · +

tni −1
(A − λi E)ni −1 + · · · ]αi
...
Thus,
m

x(t) = etA

m

αi =
i=1

exp(λi t)

ni −1 j
t

i=1

j=0

j!

(A − λj E)j αj ,

t ∈ I
...
42)

Indeed one might wonder whether (3
...
To start with we were
aiming at exp(tA) but all we have in (3
...
α, where α is an arbitrary vector
...
42) is the deduction of exp(tA) which is done as follows
...

exp(tA)ei can be obtained from (3
...
It
is important to note that (3
...

Example 3
...
5
...

0 1 0
The characteristic equation is given by
λ3 = 0
...

Since the rank of the co-efficient matrix A is 2, there is only one eigenvector namely



0
 0 
...

The other two generalized eigenvectors are
 
 
0
1
 1  and  0 
0
0
Since
A3 = 0,
A2 t2
2

0 0
1 0
...

0
1 −1

77

3
...
In order to make life easy, we first go through a bit of elementary linear algebra
...
We may skip Parts A and B in case we are familiar
with curves and elementary canonical forms for real matrices
...

Let us recall: R denotes the real line
...
A n × n matrix A is denoted by (aij )n×n , aij ∈ R
...
A ∈ Mn (R) also induces a linear operator on
Rn (now understood as column vectors) defined by
A

x → A(x) or A : Rn → Rn
more explicitly defined by A(x) = Ax(matrix multiplication)
...
For a n × n real matrix A, we some times use
A ∈ Mn (R) or A ∈ L(Rn )
...
Then, Ker(T ) or N (T ) (read as kernel of T or
Null space of T ejectively) is defined by
Ker(T ) = N (T ) : = {x ∈ Rn : T x = o}
The dimension of Ker(T ) is called the nullity of T and is denoted by ν(T )
...
For any T ∈ L(Rn ) the Rank
Nullity Theorem asserts
ν + ρ = n
...
e
...
) T is one-one iff T is onto
...

1
...

Proof : We let T = a
...

2
...
The exponential eT of T is defined by


eT =
k=0

Tk
k!

Note
(a) It is clear that eT : Rn → Rn is a linear operator and

eT ≤ e

T


...

k!

3
...
Then
eS = P eT P −1
(b) For A ∈ Mn (R), if P −1 AP = diag(λ1 ,
...
, eλn t )P −1
...
e
...

(d) (c) ⇒ (eT )−1 = e−T
...
Then
b a

4
...
Let λ = a ± ib
...
Exercise : Supply the details for the proof of Lemma 4
...
In Lemma 4, eA is a rotation through b when a = 0
...
Lemma: Let A =

a b
; a, b ∈ R
...

0 1

Proof : Exercise
...
Then B = P −1 AP where
B=

λ 0
0 µ

λ 1
0 λ

or B =

or B =

a −b
b a

and a consequence is
eBt =

eλt 0
0 eµt

or eBt = eλt

1 t
0 1

or eBt = eat

cos bt − sin bt
sin bt cos bt

and eAt = P eBt P −1
...
Lemma : For A ∈ Mn (R)
d At
e = AeAt = eAt A, t ∈ R
dt

(3
...
)
h→0 k→0
2!
k!
= eAt A = AeAt

(3
...


Part B : Linear Systems of ODE
We recall the following for clarity :
Let A ∈ Mn (R)
...
45)

x(0) = x0

(3
...

80

9
...

Let A ∈ Mn (R) and x0 ∈ Rn (column vector)
...
45)
and (3
...
47)
Proof : Let y(t) = eAt x0
...
Thus, eAt x0 is a solution of the IVP (3
...
46) and by the
Picard’s Theorem
x(t) = eAt x0
is the unique solution of (3
...
46)
...
Example : Let us solve the IVP
x1 = −2x1 − x2 , x1 (0) = 1
˙
x2 = x1 − 2x2 , x2 (0) = 0
...

1 −2

It is easy to show that 2 ± i are the eigenvalues of A and so by
x(t) = eAt x0
= e−2t

cos t − sin t
sin t cos t

1
0

= e−2t

cos t

...
48)

Consequences :
(a) |x(t)| = e−2t → 0 as t → ∞
...

x1 (t)
(c) Parametrically (x1 (t), x2 (t))T describes a curve in R2 which spirals into (0, 0) as
shown in figure 1
...


81

Figure 3
...
7

Phase Portraits in R2 (continued)

Lecture 20
In this part, we undertake an elementary study of the Phase Portraits in R2 for a
system of two linear ordinary differential equations, viz,
x = Ax
˙

(3
...
e
...

The tuple (x1 (t), x2 (t)) for t ∈ R2 represents a curve C in R2 in a parametric form;
the curve C is called the phase portrait of (3
...
It is easier to draw the curve when
A is in its canonical form
...
e
...
The following
example clarifies the same ideas
...
The canonical form B is
, i
...
,
1 −2
0 −2
A = P −1 BP
...
49) is
y = By

(3
...
50) is sometimes is referred to (3
...
The
phase Portrait for (3
...
2:
while the phase portrait of (3
...
3
...
3:
Supply the details for drawing Figures 3
...
3
...
49) when A in its
canonical form
...
49), let P be an invertible 2 × 2 matrix such that
B = P −1 AP , where B is a canonical form of A
...
51)

By this time it is clear that phase portrait for (3
...
51)
under the transformation x = P y
...

(a) B =

λ 0
0 µ

(b) B =
83

λ 1
0 λ

(c) B =

a −b
b a

Let y0 be an initial condition for (??), i
...
,
y(0) = y0

(3
...
51) and (3
...
53)

and for the 3 different choices of B, we have

(a) y(t) =

eλt 0
1 t
cos bt − sin bt
y (b) y(t) = eλt
y (c) B = eat
y0
0 1 0
sin bt cos bt
0 eµt 0

With the above representation of y, we are now ready to draw the phase Portrait
...

λ 0
λ 1
or with B =

...
51) looks like the following (figure 4):

Case 1: Let λ ≤ µ < 0 with B =

Figure 3
...


(b) lim
(c) lim
(d) lim

84

and hence an arrow is indicated to note that
y1 (t) → 0, and y2 (t) → 0 as t → ∞,
in all the diagram
...

In case λ ≥ µ > 0 or µ ≥ λ > 0, the phase portrait essentially remains the same as
shown in Figure 3
...

The solutions are repelled away from origin
...

Case 1 essentially deals with real non-zero eigenvalues of B which are either both
positive or negative
...

λ 0
Case 2 : Let B =
with λ < 0 < µ
...
5 (below) depicts the phase
0 µ
portrait
...
5:

When µ < 0 < λ,we have a similar diagram but with arrows in opposite directions
...
The four non-zero trajectories OA,OB,OC
and OD are called the separatrices, two of them (OA and OB) approaches to the
origin as t → ∞ and the remaining two (namely OC and OD) approaches the origin
as t → −∞
...


85

Lecture 21
Now we move to the case when A has complex eigenvalues a ± ib, b = 0
...

Case 3 : B =
b a
Since the root is not real, we have b = 0 and so with b > 0 or b < 0
...


Figure 3
...
We also note that it spirals around the
origin and it tends to origin as t → ∞
...
When a > 0, the origin is called an
unstable focus
...
e
...

0 −b
The canonical form of A is B =
, b = 0
...
The phase portraits are as shown in
Figure 7
...
7:
Also, we note that the phase portraits for (??) is a family of ellipses as shown in
Figure 8
...
8:
In this case the origin is called the center for the system (??)
...

Example : Consider the linear system
x1 = −4x2 ; x2 = x1
˙
˙
or

x1
˙
x2
˙

=

0 −4
1 0

x1
0 −4
; A=
x2
1 0

It is easy to verify that A has two non-zero (complex) eigenvalues ±2i
...
It is left as an exercise to show
x2 + 4x2 = c2 + c2
1
2
1
2
or the phase portrait is a family of ellipses
...
1

Introduction

Qualitative properties of solutions of differential equations assume importance in the absence
of closed form solutions
...

One such qualitative property, which has wide applications, is the oscillation of solutions
...
A rewarding alternative is to resort to qualitative study
which justifies the inclusion of a chapter on qualitative theory which otherwise is not out of
place
...
Consider a second
order equation
x = f (t, x, x ), t ≥ 0,
(4
...
1) existing on [0, ∞)
...

Definition 4
...
1
...
1) if
x(t∗ ) = 0
...
1
...
(a) Equation (4
...
1) is called “oscillatory” if (a) is false
...
1
...
Consider the linear equation
x − x = 0, t ≥ 0
...

89

Example 4
...
4
...
The general solution in this case is
x(t) = A cos t + B sin t, t ≥ 0
and without loss of generality we assume that both A and B are non-zero constants; otherwise
x is trivially oscillatory
...

In this chapter we restrict our attention to only second order linear homogeneous equations
...
Let a, b : [0, ∞) → R be continuous functions
...
2)

Theorem 4
...
5
...
Equation (4
...
3)

is oscillatory, where
1
a (t)
c(t) = b(t) − a2 (t) −

...
3) is called the “normal” form of equation (4
...

Proof
...
2)
...
The substitution of x , x and their in (4
...

Thus, equating the coefficients of y to zero, we have

v(t) = exp(−

1
2

t

a(s)ds)
0


...

4
2

...
2), then

y(t) = x(t) exp(

t

1
2

a(s)ds)
0

is a solution of (4
...
Similarly if y is a solution of (4
...
2)
...

Remark
We note that (4
...
3) is oscillatory
...
1
...
The following two theorems are of interest in themselves
...
1
...
Let x1 and x2 be two linearly independent solutions of (4
...
Then, x1
and x2 do not admit common zeros
...
Suppose t = a is a common zero of x1 and x2
...
Thus, it follows that x1 and x2 are linearly dependent which is a
contradiction to the hypothesis or else x1 and x2 cannot have common zeros
...
1
...
The zeros of a solution of (4
...

Proof
...
2)
...
There are two cases
...

Since the derivative of x is continuous and positive at t = a it follows that x is strictly
increasing in some neighborhood of t = a which means that t = a is the only zero of x in
that neighborhood
...

Case 2:
x (a) < 0
...

EXERCISES
1
...
2) is non-oscillatory if and only if the equation (4
...

2
...
2) in (0, ∞), then, show that
lim tn = ∞
...
Prove that any solution x of (4
...

4
...


State and prove a result similar to Theorem 4
...
5 for equation (*) and (**)
...


92

Lecture 23
4
...

Sturm’s comparison theorem is a result in this direction concerning zeros of solutions of a
pair of linear homogeneous differential equations
...
We remind that a solution means a nonzero solution
...
2
...
(Sturm’s Comparison Theorem)
Let r1 , r2 and p be continuous functions on (a, b) and p > 0
...
4)
(py ) + r2 y = 0

(4
...
If r2 (t) ≥ r1 (t) for t ∈ (a, b) then between any two consecutive zeros
t1 , t2 of x in (a, b) there exists at least one zero of y (unless r1 ≡ r2 ) in [t1 , t2 ]
...

Proof
...
Suppose y does not vanish in (0, 1)
...
Without loss of generality, let us
assume that x(t) > 0 on (t1 , t2 )
...
4) and (4
...


If r2 = r1 on (t1 , t2 ), then, r2 (t) > r1 (t) in a small interval of (t1 , t2 )
...

t1

Using the identity
d
[p(x y − xy )] = (px ) y − (py ) x,
dt
93

(4
...
6) implies
p(t2 )x (t2 )y(t2 ) − p(t1 )x (t1 )y(t1 ) > 0,

(4
...
However, x (t1 ) > 0 and x (t2 ) < 0 as x is a non-trivial solution which is positive in (t1 , t2 )
...
7) leads to a
contradiction
...
7), we have
p(t2 )y(t2 )x (t2 ) − p(t1 )y(t1 )x (t1 ) ≥ 0
...
This completes the
proof
...
Many times y may vanish more than
once between t1 and t2
...
2
...
2
...
Let r1 and r2 be two continuous functions such that r2 ≥ r1 on (a, b)
...
8)
and
y + r2 (t)y = 0

(4
...
Then y has at least a zero between any two successive zeros t1 and t2
of x in (a, b) unless r1 ≡ r2 on [t1 , t2 ]
...

Proof
...
2
...
Notice that the hypotheses
of Theorem 4
...
1 are satisfied
...

Theorem 4
...
3
...
10)
where a, b are real valued continuous functions on (0, ∞)
...
e
...

(Note that the roles of x and y are interchangeable
...
First we note that all the hypotheses of Theorem 4
...
1 are satisfied by letting
t

r1 (t) ≡ r2 (t) = b(t) exp

a(s)ds
0
t

p(t) = exp

a(s)ds
0

So between any two consecutive zeros of x, there is at least one zero of y
...

94

By setting a ≡ 0 in Theorem 4
...
3 gives us the following result
...
2
...
Let r be a continuous function on (0, ∞) and let x and y be two linearly
independent solutions of
x + r(t)x = 0
...

A few comments are warranted on the hypotheses of Theorem 4
...
1
...
2
...

Example 4
...
5
...

All the conditions of Theorem 4
...
1 are satisfied except that r2 is not greater than r1
...
Thus, Theorem 4
...
1 may not hold true if the condition r2 ≥ r1 is dropped
...
2
...
5) there is a zero of a solution x of equation
(4
...
2
...

Example 4
...
6
...

Note that r2 ≥ r1 and also that the remaining conditions of Theorem 4
...
1 are satisfied
...
It is obvious that x(t) = sin t does not
vanish at any point in (0, π/2)
...
2
...

EXERCISES
1
...
Show that the
equation
x + (m2 + r(t))x = 0, t ≥ 0
is oscillatory
...
Assume that the equation
x + r(t)x = 0, t ≥ 0
is oscillatory
...

95

3
...
For a solution y of
y + r(t)y = 0, t ≥ 0
prove that y vanishes in any interval of length π/m
...
Show that the normal form of Bessel’s equation
t2 x + tx + (t2 − p2 )x = 0

(∗)

is given by
y + (1 +

1−4p2
)y
4t2

=0

(∗∗)

(a) Show that the solution Jp of (*) and Yp of (**) have common zeros for t > 0
...


(c) Suppose t1 and t2 are two consecutive zeros of Jp (t), 0 ≤ p < 1
...
What is your
comment when p = 1 in this case ?
2

96

Lecture 24
4
...
11)

where a is a real valued continuous function defined for t ≥ 0
...
3
...
(a) The equation (4
...

(b) Equation (4
...

Proof
...

Sufficiency Let z be the given solution which does not vanish on (t∗ , ∞) where t∗ ≥ 0
...
11) can vanish atmost once in (t∗ , ∞), i
...

The proof of (b) is obvious
...

Theorem 4
...
2
...
11) existing on (0, ∞)
...

Proof
...
It is clear that x (t0 ) = 0 for x(t) ≡ 0
...
Now
a < 0 implies that x is positive on the same interval which in turn implies that x is an
increasing function, and so, x does not vanish to the right of t0
...
Thus, x has utmost one zero
...
For the equay =0

any non-zero constant function y ≡ k is a solution
...
11) (observe that all the hypotheses of Theorem are satisfied) then, x
vanishes utmost once, for otherwise if x vanishes twice then y necessarily vanishes at least
once by Theorem 4
...
1,which is not true
...

From Theorem the question arises: If a is continuous and a(t) > 0 on (0, ∞), is the
equation (4
...

Theorem 4
...
3
...


(4
...
11) existing for t ≥ 0
...

97

Proof
...
Then,
there exists a point t0 > 1 such that x does not vanish on [t0 , ∞)
...
Thus

v(t) =

x (t)
, t ≥ t0
x(t)

is well defined
...

Integration on the above leads to
v(t) − v(t0 ) = −

t
t0

a(s)ds −

t
t0

v 2 (s)ds
...
12) now implies that there exist two constants A and T such that v(t) <
A(< 0) if t ≥ T since v 2 (t) is always non-negative and
t

v(t) ≤ v(t0 ) −

a(s)ds
...
Let T (≥ t0 ) be so large that x (T ) < 0
...
But
t

x (s)ds = x (t) − x (T ) ≤ 0
T

Now integrating once again we have
x(t) − x(T ) ≤ x (T )(t − T ), t ≥ T ≥ t0
...
13)

Since x (T ) is negative, the right hand side of (4
...
13) either tends to a finite limit (because x(T ) is finite) or tends to +∞ (in
case x(t) → ∞ as t → ∞)
...
So the assumption
that x has a finite number of zeros in (0, ∞) is false
...

It is not possible to do away with the condition (4
...

Example 4
...
4
...

9t2

which does not vanish anywhere in (0, ∞) and so the equation is non-oscillatory
...

98

2
2
dt = < ∞
9t2
9

Thus, all the conditions of Theorem are satisfied except the condition (4
...


EXERCISES
1
...

2
...
Show that
x + a(t)x = 0
is non-oscillatory
...
Check for the oscillations or non-oscillations of:
(i) x − (t − sin t)x = 0,
(ii) x + et x = 0,
(iii) x −
(iv) x
(v) x

et x

t≥0

t≥0

= 0,

t
− log t x = 0,
+ (t + e−2t )x

t≥0
t≥1
= 0,

4
...
The normal form of Bessel’s equation
x + (1 +

=0

1
4

1−4p2
)x
4t2

t2 x + tx + (t2 − p2 )x = 0, t ≥ 0, is

= 0, t ≥ 0
...

(ii) If p > 1 show that t2 − t1 > π and approaches π as t1 → ∞, where t1 , t2 (with
2
t1 < t2 ) are two successive zeros of Bessel’s function Jp
...
Then compare
(*) with x + x = 0, successive zeros of which are at a distance of π
...
2 and Exercise 5 above justify the assumption of the existence of
zeros of Bessel’s functions
...
Decide whether the following equations are oscillatory or non-oscillatory:
(i) (tx ) + x/t = 0,
(ii) x + x /t + x = 0,
(iii) tx + (1 − t)x + nx = 0, n is a constant(Laguerre’s equation),
(iv) x − 2tx + 2nx = 0, n is a constant(Hermite’s equation),
(v) tx + (2n + 1)x + tx = 0, n is a constant,
(vi) t2 x + ktx + nx = 0, k, n are constants
...
4

Boundary Value Problems

Boundary value problems (BVPs) appear in various branches of sciences and engineering
...
Solutions to the problems of vibrating
strings and membranes are the outcome of solutions of certain class of BVPs
...

Speaking in general, BVPs pose many difficulties in comparison with IVPs
...
Needless to say the nonlinear BVPs are far tougher to
solve than linear BVPs
...
Picard’s theorem on the existence of a unique solution to a nonlinear BVP is also
dealt with in the last section
...


(4
...
L is a differential operator defined on the set of twice continuously
differentiable functions on [A, B]
...
Let x1 , x2 , x3 , x4 be four
variables
...
V (x1 , x2 , x3 , x4 ) is denoted in short
by V
...
V1 and V2 are called linearly independent if V1
and V2 are not linearly dependent
...
4
...
(Linear Homogeneous BVP) Consider an equation of type (4
...
Let
V1 and V2 be two linearly independent linear forms in the variables x(A), x(B), x (A) and
x (B)
...
B) and

Vi (x(A), x(B), x (A), x (B)) = 0,

i = 1, 2

(4
...
The condition 4
...

Definition 4
...
2
...
A linear non-homogeneous BVP is the problem of finding a function x
defined on [A, B] satisfying
100

L(x) = d(t),

t ∈ (A
...
16)

where Vi are two given linear forms and the operator L is defined by equation (4
...

Example 4
...
3
...


Then, any solution x of
L(x) = 0, A < t < B
which satisfies x(A) = x(B) = 0 is a solution of the given BVP
...

(ii) An example of a linear homogeneous BVP is
L(x) = x + et x + 2x = 0, 0 < t < 1,
with boundary conditions x(0) = x(1) and x (0) = x (1)
...

Also
L(x) = sin 2πt, 0 < t < 1,
along with boundary conditions x(0) = x(1) and x (0) = x (1) is another example of
linear non-homogeneous BVP
...
4
...
(Periodic Boundary Conditions)

The boundary conditions

x(A) = x(B) and x (A) = x (B)
are usually known as periodic boundary conditions stated at t = A and t = B
...
4
...
(Regular Linear BVP) A linear BVP, homogeneous or non-homogeneous,
is called a regular BVP if A and B are finite and in addition to that a(t) = 0 for all t in
(A, B)
...
4
...
(Singular Linear BVP)
singular linear BVP
...
4
...
A linear BVP (4
...
15) (or (4
...
15)) is singular if and
only if one of the following conditions holds:
101

(a) Either A = −∞ or B = ∞
...

(c) a(t) = 0 for at least one point t in (A, B)
...

In this chapter, the discussions are confined to only regular BVPs
...

Definition 4
...
8
...

A careful analysis of the above definition shows that the nonlinearity in a BVP may be
introduced because
(i) the differential equation may be nonlinear;
(ii) the given differential equation may be linear but the boundary conditions may not be
linear homogeneous
...

Example 4
...
9
...

(ii) The BVP
x − 4x = et , 0 < t < 1
with boundary conditions
x(0)
...

EXERCISES
1
...

(i) x + sin x = 0,
(ii) x + x = 0,

x(0) = x(2π) = 0
...


(iii) x + x = sin 2t,

x(0) = x(π) = 0
...


2
...


x(−∞) = 0,

x(0) = 1,

x(0) = 1
...

102

3
...


103

Lecture 26
4
...
The importance of these problems lies in the fact that they generate sets of orthogonal
functions (sometimes complete sets of orthogonal functions)
...
Few examples of such sets
of functions are the Legendre and Bessel functions
...
17)

where p , q and r are real valued continuous functions on [A, B] and λ is a real parameter
...

Let us consider two sets of boundary conditions, namely
m1 x(A) + m2 x (A) = 0,

(4
...
19)

x(A) = x(B),

x (A) = x (B),

p(A) = p(B),

(4
...
A glance
at the boundary conditions (4
...
19) shows that the two conditions are separately
stated at x = A and x = B
...
20) is the periodic boundary condition at x = A
and x = B
...
17) with (4
...
19) or equation (4
...
20)
is called a Sturm-Liouville boundary value problem
...
It is of interest to
examine the existence of a non-trivial solution and its properties
...
17) with (4
...
19)
or (4
...
20)
...
17) with (4
...
19) or
with (4
...
The following theorem is of fundamental importance whose proof
is beyond the scope of this book
...
5
...
Assume that
(i) A, B are finite real numbers;
(ii) the functions p , q and r are real valued continuous functions on [A, B]; and
(iii) m1 , m2 , m3 and m4 are real numbers
...
17) with (4
...
19) or (4
...
20) has
countably many eigenvalues with no finite limit point
...
)
104

Theorem 4
...
1 just guarantees the existence of solutions
...
These expansions are a consequence
of the orthogonal property of the eigenfunctions
...
5
...
Two functions x and y ( smooth enough ), defined and continuous on
[A, B] are said to be orthogonal with respect to a continuous weight function r if
B

r(s)x(s)y(s)ds = 0
...
21)

A

By smoothness of x and y we mean the integral in Definition 4
...
2 exists
...

Theorem 4
...
3
...
5
...
For the parameters λ,
µ(λ = µ) let x and y be the corresponding solutions of (4
...
Then,

B

r(s)x(s)y(s)ds = 0
A


...
From the hypotheses we have
(px ) + qx + λrx = 0,
(py ) + qy + µry = 0
...

dt
Now integration of Equation (4
...

Since λ = µ ( by assumptions ) it readily follows that
B

r(s)x(s)y(s)ds = 0
A

which completes the proof
...
22)

= pW (x, y)

B
A

From Theorem 4
...
3 it is clear that if we have conditions which imply
pW (x, y)

B
A

= 0,

then, the desired orthogonal property follows
...
18) and
(4
...
20) play a central ole in the desired orthogonality of the eigenfunctions
...
18) and (4
...
20) imply pW (x, y)

B
A

= 0
...
5
...
Let the hypotheses of Theorem 4
...
1 be satisfied
...
17) and (4
...
19) corresponding to two distinct
eigenvalues λm and λn
...


(4
...
23) holds without the use of (4
...
If p(B) = 0, then (4
...
19) deleted
...
Let p(A) = 0, p(B) = 0
...
18) we note
m1 xn (A) + m2 xn (A) = 0,

m1 xm (A) + m2 xm (A) = 0
...
Elimination of m2 from the above
two equation leads to
m1 [xn (A)xm (A) − xm (A)xm (A)] = 0
...


(4
...
19) , it is seen that
xn (B)xm (B) − xn (B)xm (B) = 0
...
25)

From the relations (4
...
25) it is obvious that (4
...

If p(A) = 0, then the relation (4
...
25)
...
This completes the
proof
...
20)
...
5
...
Let the assumptions of theorem ?? be true
...
17) and (4
...
Then, xm and xn are orthogonal with respect to the weight function r(t)
...
In this case
pW (xn , xm )

B
A

= p(B)[xn (B)xm (B) − xn (B)xm (B) − xn (A)xm (A) + xn (A)xm (A)]
...
20)

...
17), (4
...
17), (4
...

Theorem 4
...
6
...
Suppose that r is positive on
(A, B) or r is negative on (A, B) and r is continuous on [a, B]
...
17), (4
...
17), (4
...

Proof
...
From (4
...

Equating the real and imaginary parts, we have
(pm ) + (q + ar)m − brn = 0
and
(pn ) + (q + ar)n + brm = 0
...


Thus, by integrating, we get
B

−b

(m2 (s) + n2 (s))r(s)ds = (pn )m − (pm )n

A

B
A


...
26)

Since m and n satisfy one of the boundary conditions (4
...
19) or (4
...


(4
...
Hence, from (4
...
27) it follows that b = 0, which means that
λ is real which completes the proof
...
5
...

Theorem 4
...
7
...
18) and (4
...
20)
...
17) and
(4
...
19) or (4
...
20)
...
28)

where cn ’s are given by
B

cn

A

r(s)x2 (s)ds =
n

B
A

r(s)g(s)xn (s)ds,

n = 1, 2, · · ·

(4
...
29) are well defined
...
5
...


(i) Consider the BVP
x + λx = 0, x(0) = 0, x (1) = 0
...

Hence, by Theorem 4
...
3 the eigenfunctions are pairwise orthogonal
...

(4
...
31)

where cn ’s are determined by the relation (4
...

(ii) Let the Legendre polynomials Pn (t) be the solutions of the Legendre equation
d
dt [(1

− t2 )x ] + λx = 0, λ = n(n + 1), −1 ≤ t ≤ 1
...
In this case p(t) =
(1 − t2 ), q ≡ 0, r ≡ 1
...
Hence, if g is any piece-wise continuous function, then the eigenfunction expansion
of g is
108

g(t) = c0 p0 (t) + c1 p1 (t) + · · · + cn pn (t) + · · · ,
where
cn =

2n+1
2

1
−1 g(s)Pn (s)ds, n

= 0, 1, 2, · · ·

since
1
2
−1 Pn (s)ds

=

2
2n+1 , n

= 0, 1, 2, · · ·

EXERCISES
1
...
17), (4
...
17), (4
...

2
...


Prove that the corresponding eigenfunctions are
sin(t λn )
where λn is an eigenvalue
...
Consider the equation
x + λx = 0, 0 < t ≤ π
...


109

Lecture 28
4
...
We start with
L(x) + f (t) = 0,

a≤t≤b

(4
...
Here p, p and q are given real
valued continuous functions defined on [a, b] such that p(t) is non-zero on [a, b]
...
32) is considered with separated boundary conditions
m1 x(a) + m2 x (a) = 0

(4
...
34)

with the usual assumptions that at least one of m1 and m2 and one of m3 and m4 are
non-zero
...
6
...
A function G(t, s) defined on [a, b] × [a, b] is called Green’s function for
L(x) = 0 if, for a given s, G(t, s) = G1 (t, s) if t < s and G(t, s) = G2 (t, s) for t > s where
G1 and G2 are such that:
(i) G1 satisfies the boundary condition (4
...
34) at t = b and L(G2 ) = 0 for t > s;
(iii) The function G(t, s) is continuous at t = s;
(iv) The derivative of G with respect to t has a jump discontinuity at t = s and
∂G2
∂t



∂G1
∂t t=s

1
= − p(s)
...
32) with conditions (4
...
34) is
constructed
...
33)
...
34)
...
For
some constants c1 and c2 define G1 = c1 y(t) and G2 = c2 z(t)
...


(4
...


(4
...
35) has all the properties of
the Green’s function
...

dt

(4
...
In particular it is seen that
y(s)z (s) − y (s)z(s)] = A/p(s), A = 0

(4
...
36) and (4
...

Hence the Green’s function is
G(t, s) =

−y(t)z(s)/A
−y(s)z(t)/A

if t ≤ s,
if t ≥ s
...
39)

The main result of this article is Theorem
...
6
...
Let G(t, s) be given by the relation (4
...
32)
,
(4
...
34) if and only if
b

x(t) =

G(t, s)f (s)ds
...
40)

a

Proof
...
40) hold
...


(4
...
41) with respect to t yields
t

x (t) = −

b

z (t)y(s)f (s)ds +
a

y (t)z(s)f (s)ds

A
...
42)

t

Next on computing (px ) from (4
...
43)
Further, from the relations (4
...
42), it is seen that
Ax(a) = −y(a)
Ax (a) = −y (a)
111

b
a z(s)f (s)ds,
b
a z(s)f (s)ds
...
44)

Since y(t) satisfies the boundary condition given in (4
...
44) that x(t)
also satisfies the boundary condition (4
...
Similarly x(t) satisfies the boundary condition
(4
...
This proves that x(t) satisfies (4
...
33) and (4
...

Conversely, let x(t) satisfy (4
...
33) and (4
...
Then from (4
...
45)

G2 (t, s)L(x(s))ds
...
46)

a

The left side of (4
...
47)

Applying the identity (4
...
46) and using the properties of G1 (t, s) and G2 (t, s) the
left side of (4
...
48)

The first and third term in (4
...
The condition (iv) in the definition of Green’s function now shows that the value
of the expression (4
...
But (4
...
45) which means x(t) =
b
a G(t, s)f (s)ds
...

Example 4
...
3
...


(4
...
49) is given by x(t) = −

if t ≤ s,
if t ≥ s
...
50)

1
0 G(t, s)f (s)ds
...
In theorem establish the relations (4
...
45) and (4
...
Also show that if x satisfies
(4
...
33) and (4
...

2
...
39) is symmetric, that is G(t, s) = G(s, t)
...
Show that the Green’s function for L(x) = x = 0, x(1) = 0; x (0) + x (1) = 0 is
G(t, s) =

1−s
1−t

if t ≤ s,
if t ≥ s
...

4
...
Show that x(t) is a solution
of the above BVP if and only if
x(t) =

b
a G(t, s)f (s, x(s), x

(s))ds,

where G(t, s) is the Green’s function given by
(b − a)G(t, s) =

(b − t)(s − a)
(b − s)(t − a)

if a ≤ s ≤ t ≤ b,
if a ≤ t ≤ s ≤ b
...

5
...
Show that x is a solution
of this BVP if, and only if, x satisfies
x(s) =

b
a H(t, s)f (s, x(s), x

(s))ds,

a≤t≤b

where H(t, s) is the Green’s function defined by
H(t, s) =

s−a
t−a

113

if a ≤ s ≤ t ≤ b,
if a ≤ t ≤ s ≤ b
...
1

Introduction

Once the existence of a solution for a differential equation is established, the next question
is :
How does a solution grow with time?
It is all the more necessary to investigate such a behavior of solutions in the absence of
an explicit solution
...
A few such criteria are studied below
...

In this chapter the asymptotic behavior of n-th order equations, autonomous systems
of order two, linear homogeneous and non-homogeneous systems with constant and variable
coefficients are dealt
...
These results may be viewed as some
kind of stability properties for the concerned equations
...
2

Linear Systems with Constant Coefficients

Consider a linear system
x = Ax,

0 ≤ t < ∞,

(5
...
The priori knowledge of eigenvalues of the matrix
A completely determines the solutions of (5
...
So much so, the eigenvalues determine the
behavior of solutions as t → ∞
...
1) is very
useful and we have one such result in the ensuing theorem
...
2
...
Let λ1 , λ1 , · · · , λm (m ≤ n) be the distinct eigenvalues of the matrix A
and λj be repeated nj times (n1 + n2 + · · · + nm = n)
...
2)
115

and η ∈ R be a number such that
αj < η,

(j = 1, 2, · · · , m)
...
3)

Then, there exists a real constant M > 0 such that
|eAt | ≤ M eηt ,

0 ≤ t < ∞
...
4)

Proof
...
Then,
ϕj (t) = eAt ej ,

(5
...
From the previous module on systems of equations, we know that
m

eAt ej =

(cr1 + cr2 t + · · · + crnr tnr −1 )eλr t ,

(5
...
From (5
...
6) we have
m

|ϕj (t)| ≤

m

(|cr1 | + |cr2 |t + · · · + |crnr |t

nr −1

Pr (t)eαr t

)| exp(αr + iβr )t| =

r=1

(5
...
By (5
...
8)

for sufficiently large values of t
...
7) and (5
...

Now
n

|ϕj (t)| ≤ (M1 + M2 + · · · + Mn )eηt = M eηt

|eAt | ≤

(0 ≤ t < ∞),

j=1

where M = M1 + M2 + · · · + Mn which proves the inequality (5
...

Actually we have estimated an upper bound for the fundamental matrix eAt for the
equation (5
...
4)
...
2
...
3
...
It tells us about a
necessary and sufficient conditions for the solutions of (5
...
In
other words, it characterizes a certain asymptotic behavior of solutions of (5
...

Theorem 5
...
2
...
1) tends to zero as t → +∞ if and only
if the real parts of all the eigenvalues of A are negative
...

We shift our attention to the system
x = Ax + b(t),

(5
...
Since a fundamental matrix for the
system (5
...
9) is ( by the method of variation of parameters) is
t

x(t) = e(t−t0 )A x0 +

t0

e(t−s)A b(s)ds, t ≥ t0 ≥ 0,

satisfies the equation (5
...
Here x0 is an n- (column)vector such that x(t0 ) = x0
...


t0

Suppose |x0 | ≤ K and η is a number such that
η > R exp(real part ofλi ), i = 1, 2, · · · , m,
where λi are the eigenvalues of the matrix A
...
4) we have
|x(t)| ≤ KM eη(t−t0 ) + M

t

eη(t−t0 ) |b(s)|ds
...
10)

t0

The inequality (5
...
3
...
We note that the right side is
independent of x
...
The inequality
(5
...
The behavior of x for large values of t depends on the sign
of the constant ϕ and the function b
...

Theorem 5
...
3
...
11)

where p and a are constants with p ≥ 0
...
9) satisfies
|x(t)| ≤ Leqt

(5
...

Proof
...
9)) exists on 0 ≤ t < ∞
...
13)

where c is a suitable constant vector
...
3
...


(5
...
13) we have ,
t

x(t) = eAt c +

t

e(t−s)A b(s)ds +

0

e(t−s)A b(s)ds
...
15)

T

Define
M1 = sup{|b(s)| : 0 ≤ s ≤ T }
...
16)

Now from the relation (5
...
14) and (5
...


T

Let us assume a = η
...
Thus, the behavior of the solution
for the large values of t depends on the the q and on L
...
2
...
Consider
x1 = − 3x1 − 4x2 ,
x2 =4x1 − 9x2
...

whose roots are
λ1 = −6 + 7i, λ2 = −6 − 7i
...
Hence, all solutions tend to zero at t → +∞
...
2
...
Consider





2
3 1
x1
x1
 x2  =  −3 0 1   x2 
x3
1
−1 0
x3


The characteristic equation is
λ3 − 2λ2 + 9λ − 8 = 0
whose roots are



31i
1 − 31i
λ1 = 1, λ2 =
, λ3 =

...
All non-trivial solutions of the system are unbounded
...
Give a proof of Theorem 5
...
2
...
Determine the limit of the solutions as t → +∞ for the solutions of the system x = Ax
where


−9 19 4
(i) A =  −3 7 1  ;
−7 17 2


1
1 2
2 2 ;
(ii) A =  0
1 −1 0


0
1 −1
0 −1 
...
Determine the behavior of solutions and their first two derivatives as t → +∞ for the
following equations:
(i) x + 4x + x − 6x = 0;
(ii) x + 5x + 7x = 0;
(iii) x + 4x + x + 6x = 0
...
Find all solutions of the following nonhomogeneous system and discuss their behavior
as t → +∞
...

119

x1
x2

+

b1 (t)
b2 (t)

5
...
17)

where for each t ∈ (0, ∞) , A(t) is a real valued, continuous n × n matrix
...
17) as t → +∞
...
Obviously, these eigenvalues are real and functions of t
...
3
...
For each t ∈ (0 ≤ t < ∞) let A(t) be a real valued, continuous n × n
matrix
...
If
t

lim

t→+∞ t
0

M (s)ds = −∞

(t0 > 0 is fixed);

(5
...
17) tends to zero as t → +∞
...
Let ϕ be a solution of (5
...
Then,differentiation of
|ϕ(t)|2 = ϕT (t)ϕ(t)
...
Since M (t) is the largest eigenvalue,
we have
|ϕT (t)[A(t) + AT (t)]ϕ(t)| ≤ M (t)|ϕ(t)|2
From the above relations we get
0 ≤ |ϕ(t)|2 ≤ |ϕ(t0 )|2 exp

t

M (s)ds


...
19)

t0

Now by the condition (5
...
19) tends to zero
...

Theorem 5
...
2
...
If
t

lim sup
t→+∞

m(s)ds = +∞
t0

(t0 > 0 is fixed);

then, every nonzero solution of (5
...

120

(5
...
As in the proof of Theorem 5
...
1 we have
d
|ϕ(t)|2 ≥ m(t)|ϕ(t)|2
...

t0

By (5
...


Example 5
...
3
...
17), we get
2/t2
0
0
−2

A(t) + AT (t) =
So
M (t) =

t

2
, m(t) = −2,
t2

lim

t→∞ t
0

2
2
> −∞
...
19)
...

Example 5
...
4
...
Now

−2
ds = lim (−2 log t + 2 log t0 ) = −∞
...
18) holds and so the solutions tends to zero as t → +∞
...
Theorem 5
...
5 stated below deals with a criterion for the boundedness of
the inverse of a fundamental matrix
...
3
...
Let Φ be a bounded fundamental matrix of (5
...
Suppose
t

lim inf

trA(s)ds > −∞ as t → ∞
...
21)

t0

Then, |Φ−1 | is bounded on [0, ∞)
...
Let Φ be a fundamental matrix of (5
...
By Abel’s formula
t

det Φ(t) = det Φ(0) exp

tr A(s)ds
...
22)

t0

Now the relations (5
...
22) imply det Φ(t) = 0, t ∈ [0, ∞)
...
Now we know that
Φ−1 (t) =

adj [Φ(t)]
det Φ(t)

or else we have a bound k > 0 such that
adj [Φ(t)] ≤ k, t ∈ [0, ∞)
...

Remark : Also let us note that (since det Φ(t) = 0) for all values of t and so none of the
solutions φ (which could be a column of of a fundamental matrix,) tends to zero as t → +∞
...
17) tends to zero as t → +∞
...
3
...
23)

where B is a continuous n × n matrix defined on [0, ∞)
...
23)
...


(5
...
23) as a consequence of the
variation of parameters formula
...
3
...
Let the hypotheses of the Theorem 5
...
5 and the condition (5
...

Then, any solution ψ of (5
...

Proof
...
17)
...

By using the variation of parameters formula, we obtain
t

ψ(t) = ϕ(t) + Φ(t)

Φ−1 (s)(B(s) − A(s))ψ(s)ds

0

from which we get
t

|ψ(t)| ≤ |ϕ(t)| + |Φ(t)|

|Φ−1 (s)||B(s) − A(s)||ψ(s)|ds

0

Using a generalized version of the Gronwall’s inequality,we have
t

|ψ(t)| ≤ |ϕ(t)|+

t

|Φ(t)||φ(s)||Φ−1 (s)||B(s) − A(s)| exp

0

|Φ(u)||Φ−1 (u)||B(u) − A(u)|du ds
...

s

By (5
...

An interesting consequence is :
Theorem 5
...
7
...
3
...
3
...
Then, corresponding
to any solution ϕ of (5
...
23), such that
|ψ(t) − ϕ(t)| → 0, as t → ∞
...
Let ϕ be a given solution of (5
...
Any solution ψ of (5
...

ψ(t) = ϕ(t) −
t

The above relation determines uniquely the solution ψ of (5
...
Clearly under the given
conditions
lim |ψ(t) − ϕ(t)| = 0 as t → ∞
...
17) and
(5
...
This relationship between the two systems many times is known as asymptotic
equivalence
...

Perturbed Systems: The equation
x = A(t)x + b(t),

0 ≤ t < ∞,

(5
...
17), where b is a continuous n-column vector function
defined on 0 ≤ t < ∞
...
25) is closely
related to the behavior of solution of the system (5
...

Theorem 5
...
8
...
17) tends to zero as t → +∞
...
25) is bounded then, all of its solutions are bounded
...
Let ψ1 and ψ2 be any two solutions of (5
...
Then ϕ = ψ1 −ψ2 is a solution of (5
...

By Noting ψ1 = ψ2 + ϕ then, clearly ψ1 (t) is bounded, if ψ2 is bounded, since ϕ(t) → 0 as
t → +∞
...

From the Theorem 5
...
8 it is clear that if ψ2 (t) → ∞ as t → ∞ then ψ1 (t) → ∞ as
t → ∞
...
The next result asserts the
boundedness of solutions of (5
...

Theorem 5
...
9
...
17) be such that
t

lim inf
t→∞

tr A(s)ds > −∞

(5
...
If every solution of (5
...
25) is bounded
...
Let ϕ(t) be any solution of (5
...
Then
t

ϕ(t) = Φ(t)C + Φ(t)

Φ−1 (s)b(s)ds
...
17) and C is a constant vector
...
17) is bounded on [0, ∞), there is a constant K such that
|Φ(t)| ≤ K for t ∈ [0, ∞)
...
The condition in (5
...
3
...
Taking the norm on either side we have
t

|ϕ(t)| ≤ |Φ(t)||C| + |Φ(t)|

|Φ−1 (s)||b(s)|ds

0

Now each term on the right side is bounded which shows that ϕ(t) is also bounded
...
Show that any solution of x = A(t)x tends to zero as t → 0 where,


−t
0 0
−t2 0  ;
(i) A(t) =  0
0
0 −t2


−et
−1 − cos t
;
−e2t
t2
(ii) A(t) =  1
2
3t
cos t −t
−e
(iii) A(t) =

−t sin t

...
Let x be any solution of a system x = A(t)x
...

t0

Show that x is bounded
...
Prove that all the solutions of x = A(t)x are bounded, where A(t) is given by





et −1 −2
(1 + t)−2
sin t
0
3 , (ii)  − sin t
0 cos t  and
(i) 1 e−2t
−3t
2
−3 e
0
− cos t
0

(iii)

e−t
0

...
What can you say about the boundedness of solutions of the system
x = A(t)x + f (t) on (0, ∞)
when a particular solution xp , the matrix A(t) and the function f are given below:
e−t sin t
−1
0
e−t
cos t
, A(t) =
, f (t) =
−t cos t
−t sin t ,
e
0
−1
−e





 1
sin t
−1
0 0
2 (sin t − cos t)
, A(t) =  0
−t2 0 , f (t) =  0 
...
Show that the solutions of
x = A(t)x + f (t)
are bounded on [0, ∞) for the following cases:
125

e−t
0
sin t
, f (t) =
;
0
e−2t
sin t2




(1 + t)−2 sin t 0
0
0 t , f (t) =  (1 + t)−2 
...
4

Second Order Linear Differential Equations

Hitherto, we have considered the asymptotic behavior and boundedness of solutions of a
linear system
...
Now we glance at
a few results on asymptotic behavior of solutions of second order linear differential equations
...
27)

where a : [0, ∞] → R is a continuous function
...
27)
...
4
...
Let a be a non-decreasing continuous function such that a90) = 0 and such
that
a(t) → ∞ as t → ∞
...
27) are bounded
...
Multiply (5
...

Integration leads to
t

t

x (s)x (s)ds +
0

0

a(s)x(s)x (s)ds = c1

(c1 is a constant)

which is the same as
1
1 2
x (t) + a(t)x2 (t) −
2
2

t
0

x2 (s)
da(s) = c1
2

(c2 is another constant)
...
Consequently
a(t)

t

x2 (t)
1
≤ c2 +
2
2

x2 (s)da(s)
...

126

Lecture 32
Theorem 5
...
2
...
27) and let


t|a(t)|dt < ∞
...
27) is asymptotic to a0 + a1 t, where
t→∞

a0 and a1 are constants simultaneously not equal to zero
...
We integrate (5
...
28)

1

from which we have, for t ≥ 1,
t

|x(t)| ≤ (|c1 | + |c2 |)t + t
That is,

|a(s)||x(s)|ds
...

s

Gronwall’s inequality now implies
|x(t)|
≤ (|c1 | + |c2 |) exp
t

t
1

s|a(s)|ds ≤ c3 ,

(5
...
Differentiation of (5
...

1

Now the estimate (5
...

1

Thus, lim sup |x (t)| as t → ∞, exists
...
Then, from (5
...

The second solution of (5
...

a1
1

Hence, the general solution of (5
...

127

(5
...
Such a choice is
t→∞

always possible
...
Let

1 − c3 t0 s|a(s)|ds > 0
...

t→∞

EXERCISES
1
...
If


0 |a(t)|dt

c1
exp
a(t)

t
0

a (t)
dt , t ≥ 0
...


3
...

4
...


5
...


Stability of Nonlinear Systems
Introduction
Till now We have seen a few results on the asymptotic behavior of solutions of linear systems
when t → ∞
...
We devote the rest of the module to introduce
the concept of stability of solutions
...

many of the physical phenomenon is governed by a differential equation ,consider one
such system
...
Let an external force act on the system which results in perturbing the stationary
128

state
...
In other words, what is the order of the magnitude of the change
from the stationary state ? Usually this change is estimated by a norm which also is used
to measure the size of the perturbation
...
If the perturbed system moves
away from the stationary state in spite of the size of the perturbation being small at the
initial time, then it is customary to label such a system as unstable
...

Let us consider the oscillation of a pendulum of a clock
...
If the pendulum is given a small deflection then after some time it returns to its vertical position
...
The clock then works for a long time with this amplitude
...
Such a system has two
equilibrium states (stationary solutions), one being the position of rest and the other the
normal periodic motion
...
The solution of the perturbed state approaches to
either of these two stationary solutions and after some time they almost coincide with one
of them
...

As said earlier this chapter is devoted to the study of the stability of stationary solutions
of systems described by ordinary differential equations
...
Among the methods known today, to study the stability properties, the direct or the second method due to Lyapunov is important and useful
...
Further it does not depend on the knowledge of solutions in a closed form
...
Analysis plays an important role for obtaining
proper estimates on energy functions
...

Such a study turns out to be difficult due to the lack of closed form for their solutions
...
The following notations are used:
I = [t0 , ∞), t0 ≥ 0 for ρ > 0, Sρ = {x ∈ Rn : |x| < ρ}
...
31)

Let f : I × Sρ −→ Rn be a given continuous function
...
32)

where x : Sρ −→ Rn
...
32) posses a unique solution x(t; t0 , x0 ) in Sρ passing
through a point (t0 , x0 ) ∈ I × Sρ and x continuously depend on (t0 , x0 )
...
We are basically interested in studying the
stability of x
...
32)
...
32)
...

129

Definition 5
...
3
...
32) existing on I satisfies
|y(t) − x(t)| <

, t ≥ t0 whenever |y(t0 ) − x(t0 ) | < δ
...
32) existing on I is such that
|y(t) − x(t)| → 0 as t → ∞ whenever |y(t0 ) − x(t0 )| < δ0
...

We emphasize that in the above definitions, the existence of a solution x of (5
...
In general, there is no loss of generality, if we let x to be the zero solution
...
33)

where y is any solution of (5
...
Since y satisfies (5
...

By setting
we have

˜
f (t, z(t)) = f (t, z(t) + x(t)) − x (t)
˜
z (t) = f (t, z(t))
...
34)

Clearly, (5
...
34) possesses a trivial solution or a zero solution
...
33) does not change the character of the stability of
a solution of (5
...
In subsequent discussions we assume that (5
...

The stability definitions can also be viewed geometrically
...
Figure 5
...
Time axis is the line perpendicular to the plane at the origin
...
Consider a disc with origin at the center and radius where < ρ
...
Further, y never reaches the boundary point of S
...
5
...


130

Let us assume that the zero solution ( sometimes referred to as origin) be stable
...
Let y approach the origin as t → ∞ ( in other
words time increases indefinitely)
...

Further consider an S region and any arbitrary number δ(δ < ) however small
...
If the system is unstable, y reaches the boundary of
S for some t in I
...
We have listed a few of the
stability definitions of solutions of (5
...
There are several other stability definitions which
have been investigated in detail and voluminous literature is now available on this topic
...

Example 5
...
4
...
Let the
solution x ≡ 0 be the unperturbed state
...
By choosing δ < , then, the criterion
for stability is trivially satisfied
...

Example 5
...
5
...
Let
stability of the origin x(t) ≡ 0 we need to verify
|y(t) − 0)| = |ce−(t−t0 ) | <

> 0 be given
...


whenever |y(t0 ) − 0| = |c| < δ
...
Further,
for any δ0 > 0, and |c| < δ0 implies
|ce−(t−t0 ) | → 0 as t → ∞
or in other words x ≡ 0 is asymptotically stable
...
4
...
Any solution of IVP
x = x, x(t0 ) = η
or a solution through (t0 , η) is
y(t) = η exp(t − t0 )
...
Clearly as t → ∞ (ie increases indefinitely) y escapes out of any
neighborhood of the origin or else the origin, in this case, is unstable
...

EXERCISES
1
...

2
...

132

3
...

4
...

x3
0
0 0
x3


Show that no non-trivial solution of this system tends to zero as t → ∞
...
Prove that for 1 < α < 2, x = (sin log t + cos log t − α)x is asymptotically stable
...
Consider the equation
x = a(t)x
...

0

Under what condition the zero solution is stable ?

5
...
Needless to stress the importance of these topics as these have
wide applications
...
32), which may
be written in a more useful form
x = A(t)x + f (t, x)
...
35)

The equation (5
...


(5
...
35) is perturbed form of (5
...
Many properties of (5
...
Under some restrictions on A and f , stability properties of (5
...
36)
...

(ii) the matrix A(t) is an n × n matrix which is continuous on I;
(iii) f : I × Sα → Rn is a continuous function with f (t, 0) ≡ 0, t ∈ I
...
35) on some interval
...
However, for stability we assume that solutions of (5
...
Let Φ(t) denote a fundamental matrix of (5
...
As a first step, we obtain necessary and sufficient
conditions for the stability of the linear system (5
...
Note that x ≡ 0, on I satisfies (5
...
36)
...
5
...
The zero solution of equation (5
...

(5
...
The solution y of (5
...

Suppose that the inequality (5
...
Then, for t ∈ I
|y(t)| = |Φ(t)c| ≤ k|c| < ,
if |c| < /k
...

Conversely, let
|y(t)| = |Φ(t)c| < , t ≥ t0 , for all c such that |c| < δ
...
By Choosing k = /δ the inequality (5
...


134

Lecture 34
The result stated below concerns about the asymptotic stability of the zero (or null) solution
of the system (5
...

Theorem 5
...
2
...
36) is asymptotically stable if and only if
|Φ(t)| → 0 as t → ∞
...
38)

Proof
...
37) is a consequence of (5
...
Since
|Φ(t)| → 0as t → ∞
in view of (5
...

The stability of (5
...

We have seen earlier that if the characteristic roots of the matrix A have negative real parts
then every solution of (5
...
In fact, this is asymptotic stability
...


(5
...


(5
...
41)

uniformly in t for t ∈ I
...
The proof of the following result depends on
origin,
|x|
the Gronwall’s inequality
...
5
...
In equation (5
...
Assume further that f satisfies the condition
(5
...
Then, the origin for the system (5
...

Proof
...
35) passing
through (t0 , y0 ) satisfies the integral equation
y(t) = e(t−t0 )A y0 +

t

e(t−s)A f (s, y(s))ds
...
42)

t0

The inequality (5
...
42) yields
|y(t)| ≤ M |y0 |e−ρ(t−t0 ) + M
135

t
t0

e−ρ(t−s) |f (s, y(s))|ds
...
43)

which takes the form
|y(t)|eρt ≤ M |y0 |e ρt0 + M

t

e ρs |f (s, y(s))|ds
...
Then, the relation (5
...
In
view of the condition (5
...


(5
...
Then, there exists a number T such that |y(t)| < δ for t ∈ [t0 , T ]
...
44) in (5
...
45)

t0

for t0 ≤ t < T
...
45), yields
e ρt |y(t)| ≤ M |y0 |e ρt0
...
46)

or for t0 ≤ t < T , we obtain
|y(t)| ≤ M |y0 |e(M

−ρ)(t−t0 )


...
47)

Choose M < ρ and y(t0 ) = y0
...
47) yields
|y(t)| < δ, t0 ≤ t < T
...
35) exists locally at each point (t, y), t ≥ t0 , |y| < α
...
So given any solution y(t) = y(t; t0 , y0 ) with |y0 | < δ/M , y exists
on t0 ≤ t < ∞ and satisfies |y(t)| < δ
...
Hence, y ≡ 0 is asymptotically stable when M < ρ
...
35) and (5
...
Let r : I → R+ be a non-negative continuous function
such that


r(s)ds < +∞
...
48)

The condition (5
...
35)
...
35)
...
5
...
Let the fundamental matrix Φ(t) satisfy the condition
|Φ(t)Φ−1 (s)| ≤ K,

(5
...
Let f satisfy the hypotheses given by
(5
...
Then, there exists a positive constant M such that if t1 ≥ t0 , any solution y of (5
...

Moreover, if |Φ(t)| → 0 as t → ∞, then
|y(t)| → 0 as t → ∞
...
Let t1 ≥ t0 and y be any solution of (5
...
We know thar y
satisfies the integral equation
y(t) = Φ(t)Φ−1 (t1 )y(t1 ) +

t

Φ(t)Φ−1 (s)f (s, y(s))ds
...
50)

t1

for t1 ≤ t < T , where |y(t)| < α for t1 ≤ t < T
...
48) and (5
...


(5
...
48) the integral on the right side is bounded
...


(5
...
Following the lines of proof of in Theorem
5
...
3, we extend the solution for all t ≥ t1
...
52) holds for t ≥ t1
...
35) also satisfies the integral equation
t

y(t) = Φ(t)Φ−1 (t0 )y(t0 ) +
t1

= Φ(t)y(t0 ) +

Φ(t)Φ−1 (s)f (s, y(s))ds

t0

Φ(t)Φ−1 (s)f (s, y(s))ds +

t0

t

Φ(t)Φ−1 (s)f (s, y(s))ds
...
By using the conditions (5
...
49) and (5
...


(5
...
53) can be made less than (arbitrary) /2
by choosing t1 sufficiently large
...
The first two terms on
the right side contain the term |Φ(t)|
...
Thus, |y(t)| < for large t
...

The inequality (5
...
But note that t1 ≥ t0
is any arbitrary number
...
52) holds for any t1 ≥ t0
...
In literature such a property is called
uniform stability
...

EXERCISES
1
...
36) are stable if and only if they are bounded
...
Let b : I → Rn be a continuous function
...
36)
...
Prove that if the characteristic polynomial of the matrix A is stable, the matrix C(t)

is continuous on 0 ≤ t < ∞ and 0 |C(t)|dt < ∞, then all solutions of
x = (A + C(t))x
are asymptotically stable
...
Prove that the system (5
...

t0

|E + hA(t)| − 1
, where E is the
h→0
h

5
...

(i) Prove that µ is a continuous function of t
...
36) prove that
t

|y(t0 )| exp −

t0

t

µ(−A(s))ds ≤ |y(t)| ≤ |y(t0 )| exp
138

µ(A(s))ds
...
Then
r+ (t) = lim

h→0+

|y(t) + hy (t)| − |y(t)|

...

(iii) When A(t) = A a constant matrix, show that | exp(tA)| ≤ exp[tµ(A)]
...

t0

(v) Show that the trivial solution is asymptotically stable if
t

µ(A(s))ds → −∞ as t → ∞
...

t0

Lecture 35
5
...
For example, the equation
x = kx
(where k is a constant) represents a simple model for the growth of population where t does
not appear explicitly
...
54)

where g : Rn → Rn
...
A system described by (5
...
Let g(0) = 0 so that (5
...

Presently,our aim is to study the stability of the zero of solution (5
...
Let us recall
that very few methods are known to solve nonlinear differential equations for getting a closed
form solution
...
54)
...
In fact it is a very useful to
determine stability properties of linear and nonlinear equations
...
The rest of the
module is devoted to the description of the Lyapunov’s direct method
...
In fact, this method
is the generalization of the energy method in classical mechanics
...
The energy is always positive quantity and is zero when the system is completely at
rest
...
This function is generally denoted by V
...


(ii) V (0) = 0
...

V is called negative definite if −V is positive definite
...
Further the origin is the only point in Sρ at which the minimum value is
attained
...
g(x)
...
54)
...
54)
...
54) is now known to us, although we
do not have the explicit form of a solution
...

For instance
V (x) = x2 , (x ∈ R) or V (x1 , x2 ) = x4 + x4 , (x1 , x2 ) ∈ R2
1
2
are some simple examples of positive definite functions
...
In general, let A be a n × n positive
definite real matrix then V defined by
V (x) = xT A x, where x ∈ Rn
is a positive definite function
...
Geometrically,when n = 3, we may visualize V in three dimensional
space
...
Let
z = x2 + x2
...

Further z = 0 when x1 = x2 = 0
...
Such a
surface is like a parabolic mirror pointing upwards
...

This section is a curve
x2 + x2 = k, z = k
...

1
2
Clearly these are circles with radius k, and the center at the origin
...
The geometrical picture for any Lyapunov
function in three dimensional, in a small neighborhood of the origin, is more or less is of this
character
...

We state below 3 results concerning the stability of the zero solution of the system (5
...

The geometrical explanation given below for these results shows a line of the proof
...
The detailed mathematical proofs are given
in the next section
...

141

˙
Theorem 5
...
1
...
54) is stable
...
Consider the hypersphere S
...
(Such a K always exists for each ; since
V > 0 is continuous on the compact set
¯
Sρ, = {x ∈ Rn : ≤ |x| ≤ ρ}
¯
V actually attains the minimum value K on the set Sρ,
...
In other
words, there exists a number δ > 0 such that the hypersphere Sδ lies inside the oval-shaped
surface, V (x) = K
...
Let x(t; t0 , x0 ) be a solution of (5
...
Obviously V (x0 ) < K
...
e
...
which shows that the solution x(t; t0 , x0 )
remains in S
...
54)
...

Proof
...
6
...
Let

> 0 be given and let 0 < < ρ
...
}

We note that A (closed annulus region) is compact ( being closed and bounded in Rn ) and
since V is continuous α = miny∈A V (y) is finite
...
Also V ≤ 0 along the solution x(t; t0 , x0 ),
implies V (x(t)) ≤ V (x(t0 )) < α which tells us that |x(t)| < by the definition of α
...
6
...
If in Sρ there exists a positive definite function V such that (−V ) is also
positive definite then, the origin of the equation (5
...

˙
By Theorem5
...
1 the zero solution origin (5
...
Since −V is positive definite,
V decreases along the solution
...
Let us show that this is impossible
...
But this cannot be true since −V is positive definite
...


t→∞

This implies that lim |x(t; t0 , x0 )| = 0
...

t→∞

142

Lecture 36
Theorem 5
...
3
...

Then, the origin of (5
...

Example 5
...
4
...

The system is autonomous and possesses a trivial solution
...

1
2
˙
is positive definite
...

So the hypotheses of Theorem 5
...
1 holds and hence the zero solution or origin is stable
...

Note that none of the nonzero solutions tend to zero
...
For the given system we also note that z = x1 satisfies
z +z =0
and z = x2
...
Example 5.6.5. Consider the system
x1′ = (x1 − bx2)(αx1² + βx2² − 1),
x2′ = (ax1 + x2)(αx1² + βx2² − 1),
and let V(x1, x2) = ax1² + bx2². When a > 0, b > 0, V(x1, x2) is positive definite, and along solutions
V̇(x1, x2) = 2(ax1² + bx2²)(αx1² + βx2² − 1).
Let α > 0, β > 0. Then −V̇ is positive definite in the region αx1² + βx2² < 1, and so, by Theorem 5.6.2, the origin is asymptotically stable.
Example 5.6.6. Consider the system
x1′ = −x2 − x1 f(x1, x2), x2′ = x1 − x2 f(x1, x2),
where f is a smooth scalar function. By letting
V = (1/2)(x1² + x2²)
we have, along solutions,
V̇(x1, x2) = −(x1² + x2²) f(x1, x2).
If f is positive definite in some neighborhood of the origin, the origin is asymptotically stable.
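As a numerical illustration of Example 5.6.6 (with our own concrete choice f(x1, x2) = x1² + x2², which is positive definite), the trajectories can be integrated with scipy and seen to decay to the origin:

# Numerical illustration of Example 5.6.6 with f(x1, x2) = x1^2 + x2^2.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    f = x[0]**2 + x[1]**2                      # positive definite f
    return [-x[1] - x[0] * f, x[0] - x[1] * f]

sol = solve_ivp(rhs, (0.0, 50.0), [0.5, -0.5], rtol=1e-8)
print(np.linalg.norm(sol.y[:, 0]), np.linalg.norm(sol.y[:, -1]))
# The norm decays toward 0, consistent with asymptotic stability of the origin.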

Some more examples:
1. The zero solution of the scalar equation
x′ = −x(1 − x)
is asymptotically stable. For
V(x) = x², |x| < 1,
V is positive definite, and its derivative along solutions, V̇(x) = −2x²(1 − x), is negative definite.
2. Again, we claim that the zero solution of the scalar equation
x′ = x(1 − x)
is unstable. With the same V, V̇(x) = 2x²(1 − x) is positive definite for |x| < 1, and Theorem 5.6.3 applies.
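Both sign claims can be checked mechanically; here is a small sympy sketch (assuming the two scalar equations as stated above):

# V' = 2x * x' for V(x) = x^2 along the two scalar equations above.
import sympy as sp

x = sp.symbols('x')
V = x**2
print(sp.expand(sp.diff(V, x) * (-x * (1 - x))))  # -2x^2 + 2x^3 < 0 for 0 < |x| < 1
print(sp.expand(sp.diff(V, x) * ( x * (1 - x))))  #  2x^2 - 2x^3 > 0 for 0 < |x| < 1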

EXERCISES
1. …
2. …

3. Let Q(x) = xᵀRx, where R = [rij] is a real symmetric n × n matrix. Prove that Q is positive definite if and only if
r11 > 0, r11r22 − r21r12 > 0, and det[rij], i, j = 1, 2, ..., m, is positive for m = 3, 4, ..., n.
4. Find a condition on a, b, c under which the following matrices are positive definite:

(i) (1/(ab − c)) ×
[ ac   c        0 ]
[ c    a² + b   a ]
[ 0    a        1 ]

(ii) (1/(6a + 27)) ×
[ a² + 2a   9 − a   a  ]
[ 9 − a     a       1  ]
[ a         1       3a ]
5. Let
V(x1, x2) = (1/2)x2² + ∫₀^{x1} f(s) ds,
where f is such that f(0) = 0 and xf(x) > 0 for x ≠ 0. Show that V is positive definite in a neighborhood of the origin, and use it to study the stability of the zero solution of the system x1′ = x2, x2′ = −f(x1).
6. …
7. Determine the stability of the zero solution of the following systems:
(i) …
(ii) x1′ = −x1³ − x1x2³, x2′ = x1⁴ − x2³.
8. …


Lecture 37
5.7 Stability of Non-Autonomous Systems

Now we study the stability of the equilibrium of non-autonomous systems; systems of this kind are given by (5.32). For this purpose a Lyapunov function V(t, x) is needed which depends on both t and x. Let f in (5.32) be such that f(t, 0) ≡ 0, t ∈ I, and let f satisfy conditions which guarantee the existence and the uniqueness of solutions. We also assume that all solutions of (5.32) exist on the entire time interval I and that the trivial solution is the equilibrium or the steady state.

Definition 5.7.1. A real valued function φ is said to belong to the class K if
(i) φ is defined and continuous on 0 ≤ r < ∞,
(ii) φ is strictly increasing on 0 ≤ r < ∞,
(iii) φ(0) = 0 and φ(r) → ∞ as r → ∞.

Definition 5.7.2. A real valued function V(t, x) defined on I × Sρ, with V(t, 0) ≡ 0, is said to be positive definite if there exists a function φ ∈ K such that
V(t, x) ≥ φ(|x|), (t, x) ∈ I × Sρ.
It is negative definite if
V(t, x) ≤ −φ(|x|), (t, x) ∈ I × Sρ.

Example: The function
V(t, x) := (t² + 1)x⁴
is positive definite, since V(t, 0) ≡ 0 and there exists a φ ∈ K, for instance φ(r) = r⁴, such that V(t, x) ≥ φ(|x|).

Definition 5.7.3. A real valued function V defined on I × Sρ is said to be decrescent if there exists a function ψ ∈ K such that, in a neighborhood of the origin and for all t ≥ t0,
V(t, x) ≤ ψ(|x|).
For instance, a function V satisfying V(t, x) ≤ |x|² near the origin is decrescent; in this case, we may choose ψ(r) = r².

We are now set to prove the fundamental theorems on the stability of the equilibrium of the system (5.32). We need such an energy-like function V in these results; throughout, (H*) denotes the hypotheses that V is continuous on I × Sρ, V(t, 0) ≡ 0, and the partial derivatives ∂V/∂t and ∂V/∂xi (i = 1, 2, ..., n) exist and are continuous on I × Sρ. By using the chain rule, the derivative of V along a solution x(t) of (5.32) is

V̇(t, x) = dV(t, x)/dt = ∂V(t, x)/∂t + Σ_{i=1}^{n} (∂V/∂xi)(dxi/dt) = ∂V/∂t + Σ_{i=1}^{n} (∂V/∂xi) fi(t, x),

where dxi/dt is given by (5.32).
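As an illustration of this chain rule (a sketch with an equation of our own choosing — the scalar problem x′ = −x together with the positive definite function from the example above):

# Chain-rule computation of V'(t, x) along the scalar equation x' = -x.
import sympy as sp

t, x = sp.symbols('t x')
V = (t**2 + 1) * x**4
f = -x                                     # right-hand side of x' = f(t, x)

Vdot = sp.diff(V, t) + sp.diff(V, x) * f   # dV/dt + (dV/dx) * x'
print(sp.factor(Vdot))   # -2*x**4*(2*t**2 - t + 2), negative for x != 0
                         # (the quadratic 2t^2 - t + 2 has negative discriminant)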
Theorem 5.7.4. Let V(t, x) be a positive definite function satisfying the hypotheses (H*) and such that V̇(t, x) ≤ 0 on I × Sρ. Then, the origin of the system (5.32) is stable.

Proof.
The positive definiteness of V tells us that there exists a function φ ∈ K such that
0 ≤ φ(|x|) ≤ V(t, x), |x| < ρ, t ∈ I.   (5.55)

Let x(t) = x(t; t0, x0) be a solution of (5.32) existing for t ≥ t0. Since V̇(t, x) ≤ 0, we have

V(t, x(t; t0, x0)) ≤ V(t0, x0), t ≥ t0.   (5.56)

Since V is continuous and V(t0, 0) = 0, given ε > 0 there exists a δ = δ(ε) > 0 so that

V(t0, x0) < φ(ε) whenever |x0| < δ.   (5.57)

Now the inequalities (5.55) and (5.56) yield, for |x0| < δ and t ≥ t0,

0 ≤ φ(|x(t; t0, x0)|) ≤ V(t, x(t; t0, x0)) ≤ V(t0, x0) < φ(ε).

Since φ is strictly increasing, |x(t; t0, x0)| < ε for all t ≥ t0, which proves the stability of the origin.

The ensuing result provides us sufficient conditions for the asymptotic stability of the origin.

Theorem 5.7.5. Let V be a positive definite decrescent function satisfying the hypotheses (H*), and let V̇ be negative definite. Then, the origin of the system (5.32) is asymptotically stable.

Proof.
Let x(t; t0, x0) be a solution of (5.32). Since the hypotheses of Theorem 5.7.4 are satisfied, the null or the zero solution of (5.32) is stable. In other words, given ε > 0 there exists δ > 0 such that
0 ≤ |x(t; t0, x0)| < ε, t ≥ t0, whenever |x0| < δ.
It remains to show that |x(t; t0, x0)| → 0 as t → ∞. Since V̇ is negative definite, V(t, x(t; t0, x0)) decreases in t and is bounded below by zero. Suppose that for some λ > 0,
V(t, x(t; t0, x0)) ≥ λ > 0, for t ≥ t0.   (5.58)
Since V is decrescent, (5.58) implies that |x(t; t0, x0)| stays away from zero, and the negative definiteness of V̇ together with (5.58) gives a number γ > 0 such that
V̇(t, x(t; t0, x0)) ≤ −γ < 0, t ≥ t0.   (5.59)
Integrating (5.59) from t0 to t we get
V(t, x(t; t0, x0)) ≤ V(t0, x0) − γ(t − t0),
and for large t the right-hand side becomes negative, which contradicts the fact that V is positive definite. Hence the supposition (5.58) is false, and
lim_{t→∞} V(t, x(t; t0, x0)) = 0.
Since V is a positive definite function, φ(|x(t; t0, x0)|) ≤ V(t, x(t; t0, x0)), and therefore it follows that
|x(t; t0, x0)| → 0 as t → ∞,
which proves the asymptotic stability of the origin.

In some cases ρ may be infinite. If, in addition, |x(t; t0, x0)| → 0 as t → ∞ for any choice of x0, then the origin is called asymptotically stable in the large.

Theorem 5.7.6. The origin of the system (5.32) is asymptotically stable in the large if there exists a positive definite function V(t, x) which is decrescent everywhere, such that V(t, x) → ∞ as |x| → ∞ for each t ∈ I, and such that V̇ is negative definite.

Example 5.7.7.
Consider the system x′ = A(t)x, where A(t) = (aij(t)) with aij = −aji for i ≠ j, and aii ≤ 0, for all values of t ∈ I and i, j = 1, 2, ..., n. Let
V(x) = x1² + x2² + ··· + xn².
Obviously V(x) > 0 for x ≠ 0 and V(0) = 0. Along any solution x(t),
V̇(x(t)) = 2 Σ_{i=1}^{n} xi(t)xi′(t) = 2 Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi(t)xj(t) = 2 Σ_{i=1}^{n} aii xi²(t) ≤ 0.
The last step is obtained by using the assumption on the matrix A(t): the off-diagonal terms cancel in pairs. Hence the origin is stable. If aii < 0 for all values of t, then it is seen that V̇(x(t)) < 0 for x(t) ≠ 0, which implies asymptotic stability of the origin of the given system.
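A quick numerical experiment supports Example 5.7.7; the matrix A(t) below is our own illustrative choice satisfying the stated assumptions (skew-symmetric off-diagonal entries, negative diagonal):

# Example 5.7.7 numerically: a_ij = -a_ji for i != j and a_ii < 0.
import numpy as np
from scipy.integrate import solve_ivp

def A(t):
    return np.array([[-1.0,              2.0 + np.sin(t)],
                     [-(2.0 + np.sin(t)), -0.5          ]])

sol = solve_ivp(lambda t, x: A(t) @ x, (0.0, 10.0), [1.0, 1.0], rtol=1e-8)
V = np.sum(sol.y**2, axis=0)               # V(x) = x1^2 + x2^2 along the trajectory
print(bool(np.all(np.diff(V) <= 1e-12)))   # True: V never increases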


EXERCISES
1. (i) Show that
V(t, x1, x2) = t(x1² + x2²) − 2x1x2 cos t
is positive definite for n = 2 and t > 2.
(ii) …
2. Show that
(i) …
(ii) x1² + (1 + t)x2² is positive definite but not decrescent;
(iii) …
(iv) x1² + e^(−2t) x2² is decrescent;
(v) (1 + e^(−2t))(x1² + x2²) is positive definite and decrescent.
3. … ∂V/∂xi (i = 1, 2, ..., n) on …
4. … For y = tx it becomes y′ = y(y² − 1). …
5. …
6. …

Lecture 38
5.8 A Particular Lyapunov Function

Let us study a method of construction of a Lyapunov function for a linear equation; we also exploit it for studying the stability of the zero solution of nonlinear systems close enough to the corresponding linear system. Consider the linear system
x′ = Ax,   (5.60)
where A = (aij) is an n × n constant matrix. We examine the stability of the zero solution of (5.60) by Lyapunov's direct method.
Let V represent a quadratic form
V(x) = xᵀRx,   (5.61)
where R is a real symmetric matrix. The time derivative of V along the solutions of (5.60) is
V̇(x) = x′ᵀRx + xᵀRx′ = xᵀ(AᵀR + RA)x =: −xᵀQx,
where
AᵀR + RA = −Q.   (5.62)
For the asymptotic stability of (5.60) we need R and Q to be positive definite. On the other hand, if we start with an arbitrary matrix R, then the matrix Q may not be positive definite. We therefore proceed the other way about: we pick an arbitrary positive definite matrix Q and solve the equation (5.62) for R. If the solution R is positive definite then, by Theorem 5.6.2, the zero solution of (5.60) is asymptotically stable. Does the equation (5.62) give rise to a unique solution R for a given Q? The answer lies in the following result, whose proof is given here.
Proposition: Let A be a real matrix. The equation (5.62), namely
AᵀR + RA = −Q,
has a positive definite solution R for every positive definite matrix Q if and only if A is a stable matrix, i.e. all the characteristic roots of A have negative real parts.

We first observe that the positive definiteness of a solution R of (5.62) is unaffected if the system (5.60) is transformed by a nonsingular linear change of variables x = Py; under such a transformation A is replaced by P⁻¹AP.
Now choose the matrix P such that
P⁻¹AP
is a triangular matrix; linear algebra tells us that such a P always exists. So there is no loss of generality by assuming in (5.62) that the matrix A is triangular. In other words, the matrix A is of the following form:

A =
[ λ1    0     0    ···  0  ]
[ a21   λ2    0    ···  0  ]
[ a31   a32   λ3   ···  0  ]
[  ⋮     ⋮     ⋮     ⋱    ⋮  ]
[ an1   an2   an3  ···  λn ]

where λ1, λ2, ..., λn are the characteristic roots of A. The equation (5.62), written out entrywise with R = (rjk) and Q = (qjk), becomes AᵀR + RA = −Q.
Equating the corresponding terms on both sides results in the following system of equations:
(λj + λk)rjk = −qjk + δjk(..., rhk, ...),
where δjk is a linear form in the rhk with h + k > j + k, whose coefficients are among the ars. This is a linear system for the unknowns rjk. The solution of the linear system is unique if the determinant of the coefficients is non-zero; ordering the unknowns suitably, this determinant is a product of factors of the form (λj + λk). In such a case the matrix R is uniquely determined if none of the characteristic roots λi is zero and, further, the sum of any two different roots is not zero.
Example 5.8.1. Consider the system x′ = Ax. In this case
A =
[ −3    k ]
[ −2   −4 ]
and we choose Q = 2I. Now Eq. (5.62) is

[ −3  −2 ] [ r11  r12 ]   [ r11  r12 ] [ −3   k  ]   [ −2   0  ]
[  k  −4 ] [ r21  r22 ] + [ r21  r22 ] [ −2  −4  ] = [  0  −2  ].

Solving this linear system for the symmetric matrix R (so that r21 = r12) yields

R = (1/(14(k + 6))) ×
[ 32 + 2k     4k − 6       ]
[ 4k − 6      21 + 2k + k² ].

Now R is positive definite if
(i) (32 + 2k)/(14(k + 6)) > 0, and
(ii) (32 + 2k)(21 + 2k + k²) − (4k − 6)² > 0.
Since (32 + 2k)(21 + 2k + k²) − (4k − 6)² = 2(k + 6)(k² + 4k + 53), and k² + 4k + 53 > 0 for all k, both conditions hold precisely when k > −6; this is also exactly the condition for A to be a stable matrix (tr A = −7 < 0, det A = 12 + 2k > 0). Thus for any k > −6 the matrix R is positive definite and therefore, the zero solution of the system is asymptotically stable.
Let us consider the following system of equations (written in vector form):
x′ = g(x),   (5.63)
where g is smooth enough, with g(0) = 0. Let us denote ∂gi/∂xj, evaluated at the origin, by aij. Then (5.63) may be written as
x′ = Ax + f(x),   (5.64)
where A = (aij) and f(x) = g(x) − Ax consists of the higher order terms. Now we study the stability of the zero solution of the system (5.64). The system x′ = Ax (which sometimes is also called the linearized part of the system (5.64)) is of the form (5.60), and we know when its zero solution is asymptotically stable. We now make use of the Lyapunov function given by (5.61), with R determined by the equation (5.62). We expect that if f is small then the zero solution of the system (5.64) is asymptotically stable as well. With this short introduction let us employ the same Lyapunov function (5.61) for the system (5.64). The time derivative of V along the solutions of (5.64) is

V̇(x) = x′ᵀRx + xᵀRx′ = (xᵀAᵀ + fᵀ)Rx + xᵀR(Ax + f)
      = xᵀ(AᵀR + RA)x + fᵀRx + xᵀRf = −xᵀQx + 2xᵀRf,   (5.65)

in view of (5.62) and (5.64). The first term on the right side of (5.65) contains terms of degree two in x, while the second is of higher order. Whatever the second term is, at least a small region containing the origin can definitely be found such that the first term predominates over the second; thus, in this small region, the sign of V̇ remains negative. Hence in this region V is positive definite while V̇ is negative definite, and the zero solution of (5.64) is asymptotically stable.
Definition 5.8.2. The stability region (or the region of asymptotic stability) of the system (5.64) is the set of all initial points x0 such that
lim_{t→∞} x(t, t0, x0) = 0.
We give below a method of determining (a part of) the stability region for the system (5.64).

Consider a surface {x : V(x) = k} (where k is a constant to be determined) lying entirely inside the region {x : V̇(x) ≤ 0}. Then the region bounded by the surface V(x) = k belongs to the stability region of the system (5.64). Example 5.8.3 given below illustrates a procedure for finding the region of stability.
Example 5.8.3. Consider the nonlinear system
x1′ = −x1 + 3x2,
x2′ = −3x1 − x2 + x2²,
that is, x′ = Ax + f(x) with
A =
[ −1   3 ]
[ −3  −1 ]
and f(x) = (0, x2²)ᵀ. Choosing
Q =
[ 4  0 ]
[ 0  4 ]
in (5.62) gives
R =
[ 2  0 ]
[ 0  2 ].
Thus
V(x1, x2) = 2(x1² + x2²),
and, with respect to the given system,
V̇(x1, x2) = 4(x1x1′ + x2x2′) = 4[−x1² − x2²(1 − x2)].
When x2 < 1, V̇(x1, x2) < 0 for all x1 (except at the origin). The largest level curve V(x1, x2) = k lying entirely in the half-plane x2 < 1 corresponds to k = 2; hence the disc x1² + x2² < 1 belongs to the stability region. The size of the stability region thus obtained depends on the choice of a matrix Q.
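The estimate of the stability region in Example 5.8.3 can be probed by brute force (an illustrative sketch of ours): scan the level sets V = k and record the largest k for which V̇ < 0 on the whole level curve:

# Brute-force estimate of the stability region in Example 5.8.3.
import numpy as np

def Vdot(x1, x2):
    # V'(x1, x2) = 4[-x1^2 - x2^2 (1 - x2)] for the system of Example 5.8.3
    return 4.0 * (-x1**2 - x2**2 * (1.0 - x2))

theta = np.linspace(0.0, 2.0 * np.pi, 2000)
best_k = 0.0
for k in np.linspace(0.05, 4.0, 80):          # level sets V = 2 r^2 = k
    r = np.sqrt(k / 2.0)
    x1, x2 = r * np.cos(theta), r * np.sin(theta)
    if np.all(Vdot(x1, x2) < 0.0):
        best_k = k
print(best_k)   # close to 2, i.e. the disc x1^2 + x2^2 < 1 found above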
EXERCISES
1. Prove that the stability of solutions of the equation (5.60) …
2. If R is a solution of the equation (5.62), then prove that so is Rᵀ; hence Rᵀ = R.
3. The matrices A and Q are given below. Determine R by solving the equation (5.62) for each of the following cases: …
4. … x3′ = −x1 − x2 − x3. Choose
Q =
[ 2  0  0 ]
[ 0  0  0 ]
[ 0  0  0 ].
…
5. …
6. …

Bibliography

References for Analysis and Linear Algebra
1. T. M. Apostol, Mathematical Analysis, Addison-Wesley, Massachusetts (1971).
2. W. Rudin, Principles of Mathematical Analysis, McGraw-Hill, Int. ed. (1976).
3. F. E. Hohn, Elementary Matrix Algebra, Amerind, New Delhi (1971).
4. K. Hoffman and R. Kunze, Linear Algebra, 2nd edn., EEE.

References for Ordinary Differential Equations
5. P. B. Bailey, L. F. Shampine and P. E. Waltman, Two Point Boundary Value Problems, Academic Press, New York (1968).
6. R. E. Bellman and R. E. Kalaba, Quasilinearization and Nonlinear Boundary Value Problems, American Elsevier, New York (1965).
7. …
8. E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, TMH, New Delhi.
9. S. G. Deo, V. Lakshmikantham and V. Raghavendra, Textbook of Ordinary Differential Equations, 2nd ed., TMH, New Delhi.
10. …
11. S. …