
Ordinary Differential Equations

Simon Brendle

  • Preface
  • Chapter 1. Introduction
    • §1.1. Linear ordinary differential equations and the method of integrating factors
    • §1.2. The method of separation of variables
    • §1.3. Problems
  • Chapter 2. Systems of linear differential equations
    • §2.1. The exponential of a matrix
    • §2.2. Calculating the matrix exponential of a diagonalizable matrix
    • §2.3. Generalized eigenspaces and the L + N decomposition
    • §2.4. Calculating the exponential of a general n × n matrix
    • §2.5. Solving systems of linear differential equations using matrix exponentials
    • §2.6. Asymptotic behavior of solutions
    • §2.7. Problems
  • Chapter 3. Nonlinear systems
    • §3.1. Peano's existence theorem
    • §3.2. Existence theory via the method of Picard iterates
    • §3.3. Uniqueness and the maximal time interval of existence
    • §3.4. Continuous dependence on the initial data
    • §3.5. Differentiability of flows and the linearized equation
    • §3.6. Liouville's theorem
    • §3.7. Problems
  • Chapter 4. Analysis of equilibrium points
    • §4.1. Stability of equilibrium points
    • §4.2. The stable manifold theorem
    • §4.3. Lyapunov's theorems
    • §4.4. Gradient and Hamiltonian systems
    • §4.5. Problems
  • Chapter 5. Limit sets of dynamical systems and the Poincaré–Bendixson theorem
    • §5.1. Positively invariant sets
    • §5.2. The ω-limit set of a trajectory
    • §5.3. ω-limit sets of planar dynamical systems
    • §5.4. Stability of periodic solutions and the Poincaré map
    • §5.5. Problems
  • Chapter 6. Ordinary differential equations in geometry, physics, and biology
    • §6.1. Delaunay's surfaces in differential geometry
    • §6.2. The mathematical pendulum
    • §6.3. Kepler's problem
    • §6.4. Predator-prey models
    • §6.5. Mathematical models for the spread of infectious diseases
    • §6.6. A mathematical model of glycolysis
    • §6.7. Problems
  • Chapter 7. Sturm-Liouville theory
    • §7.1. Boundary value problems for linear differential equations of second order
    • §7.2. The Sturm comparison theorem
    • §7.3. Eigenvalues and eigenfunctions of Sturm-Liouville systems
    • §7.4. The Liouville normal form
    • §7.5. Asymptotic behavior of eigenvalues of a Sturm-Liouville system
    • §7.6. Asymptotic behavior of eigenfunctions
    • §7.7. Orthogonality and completeness of eigenfunctions
    • §7.8. Problems
  • Bibliography

Preface

These notes grew out of courses taught by the author at Stanford University during the period 2006–2009. The material is all classical. The author is grateful to Messrs. Chad Groft, Michael Eichmair, and Jesse Gell-Redman, who served as course assistants during that time.


Chapter 1

Introduction

1.1. Linear ordinary differential equations and the method of integrating factors

A differential equation is an equation which relates the derivatives of an unknown function to the unknown function itself and known quantities. We distinguish two basic types of differential equations: An ordinary differential equation is a differential equation for an unknown function which depends on a single variable (usually denoted by t and referred to as time). By contrast, if the unknown function depends on two or more variables, the equation is a partial differential equation. In this text, we will restrict ourselves to ordinary differential equations, as the theory of partial differential equations is considerably more difficult.

Perhaps the simplest example of an ordinary differential equation is the equation
$$(1) \qquad x'(t) = a\,x(t),$$
where x(t) is a real-valued function and a is a constant. This is an example of a linear differential equation of first order. Its general solution is described in the following proposition:

Proposition 1.1. A function x(t) is a solution of (1) if and only if $x(t) = c\,e^{at}$ for some constant c.

Proof. Let x(t) be an arbitrary solution of (1). Then
$$\frac{d}{dt}\big(e^{-at}\,x(t)\big) = e^{-at}\,\big(x'(t) - a\,x(t)\big) = 0.$$
Therefore, the function $e^{-at}\,x(t)$ is constant. Consequently, $x(t) = c\,e^{at}$ for some constant c.

Conversely, suppose that x(t) is a function of the form $x(t) = c\,e^{at}$ for some constant c. Then $x'(t) = ca\,e^{at} = a\,x(t)$. Therefore, any function of the form $x(t) = c\,e^{at}$ is a solution of (1). □
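For readers who like to experiment, Proposition 1.1 can also be checked symbolically; the following minimal sketch assumes Python with the sympy library.

```python
# Minimal symbolic check of Proposition 1.1, assuming sympy is available.
import sympy as sp

t, a, c = sp.symbols('t a c')
x = sp.Function('x')

# sympy recovers the general solution x(t) = C1 * exp(a*t).
print(sp.dsolve(sp.Eq(x(t).diff(t), a * x(t)), x(t)))

# Conversely, every function c*exp(a*t) satisfies the equation.
cand = c * sp.exp(a * t)
assert sp.simplify(cand.diff(t) - a * cand) == 0
```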

We now consider a more general situation. Specifically, we consider the differential equation
$$(2) \qquad x'(t) = a(t)\,x(t) + f(t).$$
Here, a(t) and f(t) are given continuous functions which are defined on some interval $J \subset \mathbb{R}$. Like (1), the equation (2) is a linear differential equation of first order. However, while the equation (1) has constant coefficients, the coefficients in the equation (2) are allowed to depend on t. In the following proposition, we describe the general solution of (2):

Proposition 1.2. Fix a time $t_0 \in J$, and let $\varphi(t) = \int_{t_0}^{t} a(s)\,ds$. Then a function x(t) is a solution of (2) if and only if
$$x(t) = e^{\varphi(t)} \left( \int_{t_0}^{t} e^{-\varphi(s)}\,f(s)\,ds + c \right)$$
for some constant c.

Proof. Let x(t) be an arbitrary solution of (2). Then
$$\frac{d}{dt}\big(e^{-\varphi(t)}\,x(t)\big) = e^{-\varphi(t)}\,\big(x'(t) - \varphi'(t)\,x(t)\big) = e^{-\varphi(t)}\,\big(x'(t) - a(t)\,x(t)\big) = e^{-\varphi(t)}\,f(t).$$
Integrating this identity, we obtain
$$e^{-\varphi(t)}\,x(t) = \int_{t_0}^{t} e^{-\varphi(s)}\,f(s)\,ds + c$$
for some constant c. This implies
$$x(t) = e^{\varphi(t)} \left( \int_{t_0}^{t} e^{-\varphi(s)}\,f(s)\,ds + c \right)$$
for some constant c.

Conversely, suppose that x(t) is of the form
$$x(t) = e^{\varphi(t)} \left( \int_{t_0}^{t} e^{-\varphi(s)}\,f(s)\,ds + c \right)$$
for some constant c. Differentiating, and using $\varphi'(t) = a(t)$, we obtain
$$x'(t) = a(t)\,e^{\varphi(t)} \left( \int_{t_0}^{t} e^{-\varphi(s)}\,f(s)\,ds + c \right) + e^{\varphi(t)}\,e^{-\varphi(t)}\,f(t) = a(t)\,x(t) + f(t).$$
Therefore, x(t) is a solution of (2). □
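To make the formula of Proposition 1.2 concrete, here is a small numerical sketch, assuming Python with numpy and scipy; the test case a(t) = 1, f(t) = 1, t₀ = 0, c = 2 (chosen here purely for illustration) has the closed-form solution x(t) = 3eᵗ − 1.

```python
# Numerical sketch of the solution formula in Proposition 1.2, assuming numpy/scipy.
import numpy as np
from scipy.integrate import quad

def solve_linear(a, f, t0, c, t):
    """x(t) = e^{phi(t)} ( int_{t0}^t e^{-phi(s)} f(s) ds + c ), phi(u) = int_{t0}^u a."""
    phi = lambda u: quad(a, t0, u)[0]
    integrand = lambda s: np.exp(-phi(s)) * f(s)
    return np.exp(phi(t)) * (quad(integrand, t0, t)[0] + c)

# Test case a(t) = 1, f(t) = 1, c = 2: the formula gives x(t) = 3 e^t - 1.
for t in [0.5, 1.0, 2.0]:
    print(t, solve_linear(lambda s: 1.0, lambda s: 1.0, 0.0, 2.0, t),
          3 * np.exp(t) - 1)
```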


1.2. The method of separation of variables

As an example, consider the differential equation $x'(t) = t\,(1 + x(t)^2)$. To solve this equation, we observe that $\int \frac{1}{1+x^2}\,dx = \arctan(x)$. Hence, if x(t) is a solution of the given differential equation, then
$$\frac{d}{dt}\arctan(x(t)) = \frac{1}{1 + x(t)^2}\,x'(t) = t.$$
Integrating this equation, we obtain
$$\arctan(x(t)) = \frac{t^2}{2} + c$$
for some constant c. Thus, we conclude that
$$x(t) = \tan\Big(\frac{t^2}{2} + c\Big).$$
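The result can be confirmed symbolically; the following short check (assuming Python with sympy) verifies that $x(t) = \tan(t^2/2 + c)$ indeed satisfies $x'(t) = t\,(1 + x(t)^2)$.

```python
# Check that tan(t**2/2 + c) solves x'(t) = t*(1 + x(t)**2), assuming sympy.
import sympy as sp

t, c = sp.symbols('t c')
x = sp.tan(t**2 / 2 + c)
assert sp.simplify(x.diff(t) - t * (1 + x**2)) == 0
```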

1.3. Problems

Problem 1.1. Find the solution of the differential equation $x'(t) = \frac{2t}{1+t^2}\,x(t) + 1$ with initial condition x(0) = 1.

Problem 1.2. Find the solution of the differential equation $x'(t) = \frac{t}{t+1}\,x(t) + 1$ with initial condition x(0) = 0.

Problem 1.3. Find the general solution of the differential equation $x'(t) = x(t)\,(1 - x(t))$. This differential equation is related to the logistic growth model.

Problem 1.4. Find the general solution of the differential equation $x'(t) = x(t)\,\log\frac{1}{x(t)}$. This equation describes the Gompertz growth model.

Problem 1.5. Let x(t) be the solution of the differential equation $x'(t) = \cos x(t)$ with initial condition x(0) = 0.

(i) Using separation of variables, show that
$$\log(1 + \sin x(t)) - \log(1 - \sin x(t)) = 2t.$$
(Hint: Write $\frac{2}{\cos x} = \frac{\cos x}{1 + \sin x} + \frac{\cos x}{1 - \sin x}$.)

(ii) Show that
$$x(t) = \arcsin\Big(\frac{e^t - e^{-t}}{e^t + e^{-t}}\Big) = \arctan(e^t) - \arctan(e^{-t}).$$
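For part (ii), the equality of the two claimed closed forms can be spot-checked numerically before attempting a proof; a sketch assuming Python with numpy:

```python
# Spot-check of the identity in Problem 1.5 (ii), assuming numpy.
import numpy as np

ts = np.linspace(-3.0, 3.0, 13)
lhs = np.arcsin((np.exp(ts) - np.exp(-ts)) / (np.exp(ts) + np.exp(-ts)))
rhs = np.arctan(np.exp(ts)) - np.arctan(np.exp(-ts))
assert np.allclose(lhs, rhs)
```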

Chapter 2

Systems of linear differential equations

2.1. The exponential of a matrix

Let $A \in \mathbb{C}^{n \times n}$ be an $n \times n$ matrix. The operator norm of A is defined by
$$\|A\|_{\mathrm{op}} = \sup_{x \in \mathbb{C}^n,\, \|x\| \le 1} \|Ax\|.$$
It is straightforward to verify that the operator norm is submultiplicative; that is,
$$\|AB\|_{\mathrm{op}} \le \|A\|_{\mathrm{op}}\,\|B\|_{\mathrm{op}}.$$
Iterating this estimate, we obtain
$$\|A^k\|_{\mathrm{op}} \le \|A\|_{\mathrm{op}}^{k}$$
for every nonnegative integer k. This implies that the sequence
$$\sum_{k=0}^{m} \frac{1}{k!}\,A^k$$
is a Cauchy sequence in $\mathbb{C}^{n \times n}$. Its limit
$$\exp(A) := \lim_{m \to \infty} \sum_{k=0}^{m} \frac{1}{k!}\,A^k$$
is referred to as the matrix exponential of A.
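The defining series converges very rapidly; the sketch below (assuming Python with numpy and scipy, whose expm serves as a reference matrix exponential) sums the series term by term and measures the error in the operator norm:

```python
# Partial sums of the matrix exponential series, assuming numpy/scipy.
import numpy as np
from scipy.linalg import expm

def exp_partial_sum(A, m):
    """Return sum_{k=0}^m A^k / k!, accumulating the terms incrementally."""
    term = np.eye(A.shape[0])
    total = term.copy()
    for k in range(1, m + 1):
        term = term @ A / k          # turns A^{k-1}/(k-1)! into A^k/k!
        total = total + term
    return total

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
for m in [2, 5, 10, 20]:
    err = np.linalg.norm(expm(A) - exp_partial_sum(A, m), 2)  # operator norm
    print(m, err)                    # decays factorially in m
```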

Proposition 2.1. Let $A, B \in \mathbb{C}^{n \times n}$ be two $n \times n$ matrices satisfying $AB = BA$. Then
$$\exp(A + B) = \exp(A)\,\exp(B).$$

Proof. We compute
$$\exp(A) - \Big(I + \frac{1}{m}\,A\Big)^m = \Big[\exp\Big(\frac{1}{m}\,A\Big)\Big]^m - \Big(I + \frac{1}{m}\,A\Big)^m = \sum_{l=0}^{m-1} \Big[\exp\Big(\frac{1}{m}\,A\Big)\Big]^{m-l-1}\,\Big[\exp\Big(\frac{1}{m}\,A\Big) - I - \frac{1}{m}\,A\Big]\,\Big(I + \frac{1}{m}\,A\Big)^l.$$

This gives
$$\begin{aligned}
\Big\|\exp(A) - \Big(I + \frac{1}{m}\,A\Big)^m\Big\|_{\mathrm{op}}
&\le \sum_{l=0}^{m-1} \Big\|\exp\Big(\frac{1}{m}\,A\Big)\Big\|_{\mathrm{op}}^{m-l-1}\,\Big\|\exp\Big(\frac{1}{m}\,A\Big) - I - \frac{1}{m}\,A\Big\|_{\mathrm{op}}\,\Big\|I + \frac{1}{m}\,A\Big\|_{\mathrm{op}}^{l} \\
&\le \sum_{l=0}^{m-1} e^{\frac{m-l-1}{m}\,\|A\|_{\mathrm{op}}}\,\Big\|\exp\Big(\frac{1}{m}\,A\Big) - I - \frac{1}{m}\,A\Big\|_{\mathrm{op}}\,\Big(1 + \frac{1}{m}\,\|A\|_{\mathrm{op}}\Big)^{l} \\
&\le \sum_{l=0}^{m-1} e^{\frac{m-l-1}{m}\,\|A\|_{\mathrm{op}}}\,\Big\|\exp\Big(\frac{1}{m}\,A\Big) - I - \frac{1}{m}\,A\Big\|_{\mathrm{op}}\,e^{\frac{l}{m}\,\|A\|_{\mathrm{op}}} \\
&= m\,e^{\frac{m-1}{m}\,\|A\|_{\mathrm{op}}}\,\Big\|\exp\Big(\frac{1}{m}\,A\Big) - I - \frac{1}{m}\,A\Big\|_{\mathrm{op}}.
\end{aligned}$$

On the other hand,
$$\exp\Big(\frac{1}{m}\,A\Big) - I - \frac{1}{m}\,A = \sum_{k=2}^{\infty} \frac{1}{k!\,m^k}\,A^k,$$
hence
$$\Big\|\exp\Big(\frac{1}{m}\,A\Big) - I - \frac{1}{m}\,A\Big\|_{\mathrm{op}} \le \sum_{k=2}^{\infty} \frac{1}{k!\,m^k}\,\|A\|_{\mathrm{op}}^{k} \le \frac{1}{2m^2}\,\|A\|_{\mathrm{op}}^{2}\,e^{\frac{1}{m}\,\|A\|_{\mathrm{op}}}.$$

Putting these facts together, we conclude that
$$\Big\|\exp(A) - \Big(I + \frac{1}{m}\,A\Big)^m\Big\|_{\mathrm{op}} \le \frac{1}{2m}\,\|A\|_{\mathrm{op}}^{2}\,e^{\|A\|_{\mathrm{op}}}.$$
In particular, $\exp(A) = \lim_{m \to \infty} \big(I + \frac{1}{m}\,A\big)^m$. From this, the assertion follows easily. □
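Both facts lend themselves to a quick numerical check. The sketch below (assuming Python with numpy and scipy, with scipy.linalg.expm as the reference matrix exponential) verifies the product formula of Proposition 2.1 for a commuting pair, shows it failing for a generic non-commuting pair, and confirms that the error of $(I + A/m)^m$ stays below the bound $\frac{1}{2m}\|A\|_{\mathrm{op}}^2 e^{\|A\|_{\mathrm{op}}}$ derived above.

```python
# Numerical spot-checks of Proposition 2.1 and the limit formula, assuming numpy/scipy.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Proposition 2.1: exp(A + B) = exp(A) exp(B) when AB = BA.
B = A @ A                                 # a power of A always commutes with A
print(np.allclose(expm(A + B), expm(A) @ expm(B)))   # True
C = rng.standard_normal((3, 3))           # a generic C will not commute with A
print(np.allclose(expm(A + C), expm(A) @ expm(C)))   # almost surely False

# The limit formula: (I + A/m)^m -> exp(A), error <= ||A||^2 e^{||A||} / (2m).
nA = np.linalg.norm(A, 2)                 # spectral norm = operator norm
for m in [10, 100, 1000]:
    approx = np.linalg.matrix_power(np.eye(3) + A / m, m)
    err = np.linalg.norm(expm(A) - approx, 2)
    bound = nA**2 * np.exp(nA) / (2 * m)
    print(m, err <= bound, err)           # error decays like 1/m
```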

2.2. Calculating the matrix exponential of a diagonalizable matrix

In this section, we consider a matrix $A \in \mathbb{C}^{n \times n}$ which is diagonalizable. In other words, there exists an invertible matrix $S \in \mathbb{C}^{n \times n}$ and a diagonal matrix
$$D = \begin{pmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix}$$
such that $A = SDS^{-1}$. Equivalently, a matrix A is diagonalizable if there exists a basis of $\mathbb{C}^n$ which consists of eigenvectors of A.

In order to compute the exponential of such a matrix we need two auxiliary results. The first one relates the matrix exponentials of two matrices that are similar to each other.

Proposition 2.5. Suppose that $A, B \in \mathbb{C}^{n \times n}$ are similar, so that $A = SBS^{-1}$ for some invertible matrix $S \in \mathbb{C}^{n \times n}$. Then $\exp(tA) = S \exp(tB)\,S^{-1}$ for all $t \in \mathbb{R}$.

Proof. Using induction on k, it is easy to show that $A^k = S B^k S^{-1}$ for all integers $k \ge 0$. Consequently,
$$\exp(tA) = \sum_{k=0}^{\infty} \frac{t^k}{k!}\,A^k = \sum_{k=0}^{\infty} \frac{t^k}{k!}\,S B^k S^{-1} = S \exp(tB)\,S^{-1}.$$
This completes the proof. □

The second result gives a formula for the exponential of a diagonal matrix:

Proposition 2.6. Suppose that
$$D = \begin{pmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix}$$
is a diagonal matrix. Then
$$\exp(tD) = \begin{pmatrix} e^{t\lambda_1} & 0 & \cdots & 0 \\ 0 & e^{t\lambda_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & \cdots & e^{t\lambda_n} \end{pmatrix}.$$

Proof. Using induction on k, we can show that
$$D^k = \begin{pmatrix} \lambda_1^k & 0 & \cdots & 0 \\ 0 & \lambda_2^k & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & \cdots & \lambda_n^k \end{pmatrix}$$
for every nonnegative integer k. The assertion follows by applying the exponential series to each diagonal entry. □
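Propositions 2.5 and 2.6 together give the recipe $\exp(tA) = S \exp(tD)\,S^{-1}$ for a diagonalizable matrix. A short numerical sketch (assuming Python with numpy and scipy; the symmetric test matrix below is chosen because it is guaranteed diagonalizable):

```python
# exp(tA) via diagonalization, assuming numpy/scipy.
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # symmetric, hence diagonalizable
lam, S = np.linalg.eig(A)                # A = S @ diag(lam) @ S^{-1}

t = 0.7
via_diag = S @ np.diag(np.exp(t * lam)) @ np.linalg.inv(S)
print(np.allclose(via_diag, expm(t * A)))   # True
```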

As an example, consider the matrix
$$A = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix},$$
which has eigenvalues $\alpha \pm i\beta$ and is diagonalized by $S = \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix}$, with $S^{-1} = \begin{pmatrix} \frac{1}{2} & -\frac{i}{2} \\ \frac{1}{2} & \frac{i}{2} \end{pmatrix}$. Therefore, the matrix exponential of A is given by
$$\exp(tA) = S \begin{pmatrix} e^{t(\alpha + i\beta)} & 0 \\ 0 & e^{t(\alpha - i\beta)} \end{pmatrix} S^{-1} = \begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix} \begin{pmatrix} e^{t(\alpha + i\beta)} & 0 \\ 0 & e^{t(\alpha - i\beta)} \end{pmatrix} \begin{pmatrix} \frac{1}{2} & -\frac{i}{2} \\ \frac{1}{2} & \frac{i}{2} \end{pmatrix}$$
$$= \begin{pmatrix} \frac{1}{2}\big(e^{t(\alpha + i\beta)} + e^{t(\alpha - i\beta)}\big) & -\frac{i}{2}\big(e^{t(\alpha + i\beta)} - e^{t(\alpha - i\beta)}\big) \\ \frac{i}{2}\big(e^{t(\alpha + i\beta)} - e^{t(\alpha - i\beta)}\big) & \frac{1}{2}\big(e^{t(\alpha + i\beta)} + e^{t(\alpha - i\beta)}\big) \end{pmatrix} = \begin{pmatrix} e^{\alpha t} \cos(\beta t) & e^{\alpha t} \sin(\beta t) \\ -e^{\alpha t} \sin(\beta t) & e^{\alpha t} \cos(\beta t) \end{pmatrix}.$$
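This closed form is easy to verify numerically; a sketch assuming Python with numpy and scipy, taking the matrix A from the computation above with sample values of α, β, and t:

```python
# Check exp(tA) = e^{alpha t} [[cos, sin], [-sin, cos]](beta t), assuming numpy/scipy.
import numpy as np
from scipy.linalg import expm

alpha, beta, t = 0.3, 2.0, 1.1           # sample values, chosen for illustration
A = np.array([[alpha, beta], [-beta, alpha]])
closed = np.exp(alpha * t) * np.array([[np.cos(beta * t), np.sin(beta * t)],
                                       [-np.sin(beta * t), np.cos(beta * t)]])
assert np.allclose(expm(t * A), closed)
```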

2.3. Generalized eigenspaces and the L + N decomposition

In order to compute the exponential of a matrix that is not diagonalizable, it will be necessary to consider decompositions of $\mathbb{C}^n$ into generalized eigenspaces. We will need the following theorem due to Cayley and Hamilton:

Theorem 2.7. Let A be an $n \times n$ matrix, and let $p_A(\lambda) = \det(\lambda I - A)$ denote the characteristic polynomial of A. Then $p_A(A) = 0$.

Proof. The proof involves several steps.

Step 1: Suppose first that A is a diagonal matrix with diagonal entries $\lambda_1, \dots, \lambda_n$, i.e.
$$A = \begin{pmatrix} \lambda_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \lambda_n \end{pmatrix}.$$
Then
$$p(A) = \begin{pmatrix} p(\lambda_1) & 0 & \cdots & 0 \\ 0 & p(\lambda_2) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & \cdots & p(\lambda_n) \end{pmatrix}$$
for every polynomial p. In particular, if $p = p_A$ is the characteristic polynomial of A, then $p_A(\lambda_j) = 0$ for all j, hence $p_A(A) = 0$.

Step 2: Suppose next that A is an upper triangular matrix whose diagonal entries are pairwise distinct. In this case, A has n distinct eigenvalues. In particular, A is diagonalizable. Hence, we can find a diagonal matrix B and an invertible matrix S such that $A = SBS^{-1}$. Clearly, A and B have the same characteristic polynomial, so $p_A(A) = p_B(A) = S\,p_B(B)\,S^{-1} = 0$ by Step 1.

Step 3: Suppose now that A is an arbitrary upper triangular matrix. We can find a sequence of matrices $A_k$ such that $\lim_{k \to \infty} A_k = A$ and each matrix $A_k$ is upper triangular with n distinct diagonal entries. This implies $p_A(A) = \lim_{k \to \infty} p_{A_k}(A_k) = 0$.

Step 4: Finally, if A is a general $n \times n$ matrix, we can find an upper triangular matrix B such that $A = SBS^{-1}$. Again, A and B have the same characteristic polynomial, so we obtain $p_A(A) = p_B(A) = S\,p_B(B)\,S^{-1} = 0$ by Step 3. □
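The theorem is easy to probe numerically: build the characteristic polynomial of a random matrix and evaluate it at the matrix itself using Horner's scheme. A sketch assuming Python with numpy (np.poly returns the characteristic-polynomial coefficients of a square matrix):

```python
# Numerical illustration of the Cayley-Hamilton theorem, assuming numpy.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
coeffs = np.poly(A)              # characteristic polynomial, leading term first

# Horner's scheme with matrix arithmetic evaluates p_A(A).
P = np.zeros_like(A)
for c in coeffs:
    P = P @ A + c * np.eye(4)
print(np.linalg.norm(P, 2))      # ~ 0, up to floating-point round-off
```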

We will also need the following tool from algebra:

Proposition 2.8. Suppose that $f(\lambda)$ and $g(\lambda)$ are two polynomials that are relatively prime. (This means that any polynomial that divides both $f(\lambda)$ and $g(\lambda)$ must be constant, i.e. of degree 0.) Then we can find polynomials $p(\lambda)$ and $q(\lambda)$ such that $p(\lambda)\,f(\lambda) + q(\lambda)\,g(\lambda) = 1$.

This is a standard result in algebra. The polynomials $p(\lambda)$ and $q(\lambda)$ can be found using the Euclidean algorithm. A proof can be found in most algebra textbooks.
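The polynomials p and q can be produced in practice by the extended Euclidean algorithm, which sympy exposes as gcdex. A sketch assuming Python with sympy, with the relatively prime pair $f(\lambda) = \lambda^2$, $g(\lambda) = (\lambda - 1)^2$ chosen for illustration:

```python
# Extended Euclidean algorithm for polynomials, assuming sympy.
import sympy as sp

lam = sp.symbols('lambda')
f = lam**2
g = (lam - 1)**2

p, q, h = sp.gcdex(f, g, lam)     # p*f + q*g = h = gcd(f, g)
print(h)                          # 1, since f and g are relatively prime
print(sp.expand(p * f + q * g))   # 1
```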

Proposition 2.9. Let A be an $n \times n$ matrix, and let $f(\lambda)$ and $g(\lambda)$ be two polynomials that are relatively prime. Moreover, let x be a vector satisfying $f(A)\,g(A)\,x = 0$. Then there exists a unique pair of vectors y, z such that $f(A)\,y = 0$, $g(A)\,z = 0$, and $y + z = x$. In other words, $\ker(f(A)\,g(A)) = \ker f(A) \oplus \ker g(A)$.

Proof. Since the polynomials $f(\lambda)$ and $g(\lambda)$ are relatively prime, we can find polynomials $p(\lambda)$ and $q(\lambda)$ such that
$$p(\lambda)\,f(\lambda) + q(\lambda)\,g(\lambda) = 1.$$
This implies
$$p(A)\,f(A) + q(A)\,g(A) = I.$$
In order to prove the existence part, we define vectors y, z by $y = q(A)\,g(A)\,x$ and $z = p(A)\,f(A)\,x$. Then
$$f(A)\,y = f(A)\,q(A)\,g(A)\,x = q(A)\,f(A)\,g(A)\,x = 0,$$
$$g(A)\,z = g(A)\,p(A)\,f(A)\,x = p(A)\,f(A)\,g(A)\,x = 0,$$
and
$$y + z = \big(p(A)\,f(A) + q(A)\,g(A)\big)\,x = x.$$
Therefore, the vectors y, z have all the required properties.

In order to prove the uniqueness part, it suffices to show that $\ker f(A) \cap \ker g(A) = \{0\}$. Assume that x lies in the intersection of $\ker f(A)$ and $\ker g(A)$, so that $f(A)\,x = 0$ and $g(A)\,x = 0$. This implies $p(A)\,f(A)\,x = 0$ and $q(A)\,g(A)\,x = 0$. Adding both equations, we obtain $x = \big(p(A)\,f(A) + q(A)\,g(A)\big)\,x = 0$. □

The main result of this section is the L + N decomposition: for every matrix $A \in \mathbb{C}^{n \times n}$ with distinct eigenvalues $\lambda_1, \dots, \lambda_m$ (where $\nu_j$ denotes the algebraic multiplicity of $\lambda_j$), there exists a unique pair of matrices $L, N \in \mathbb{C}^{n \times n}$ such that (i) $A = L + N$, (ii) L is diagonalizable, (iii) N is nilpotent, and (iv) $LN = NL$. We first prove the existence part.

Consider the linear transformation from $\mathbb{C}^n$ into itself that sends a vector $x \in \ker(A - \lambda_j I)^{\nu_j}$ to $\lambda_j x$ ($j = 1, \dots, m$). Let L be the $n \times n$ matrix associated with this linear transformation. This implies $Lx = \lambda_j x$ for all $x \in \ker(A - \lambda_j I)^{\nu_j}$. Clearly, $\ker(L - \lambda_j I) = \ker(A - \lambda_j I)^{\nu_j}$ for $j = 1, \dots, m$. Therefore, there exists a basis of $\mathbb{C}^n$ that consists of eigenvectors of L. Consequently, L is diagonalizable.

We claim that A and L commute, i.e. $LA = AL$. It suffices to show that $LAx = ALx$ for all vectors $x \in \ker(A - \lambda_j I)^{\nu_j}$ and all $j = 1, \dots, m$. Indeed, if x belongs to the generalized eigenspace $\ker(A - \lambda_j I)^{\nu_j}$, then Ax lies in the same generalized eigenspace. Therefore, $Lx = \lambda_j x$ and $LAx = \lambda_j Ax$. Putting these facts together, we obtain $LAx = \lambda_j Ax = ALx$, as claimed. Therefore, $LA = AL$.

We now put $N = A - L$. Clearly, $L + N = A$ and $LN = LA - L^2 = AL - L^2 = NL$. Hence, it remains to show that $N^n = 0$. As above, it is enough to show that $N^n x = 0$ for all vectors $x \in \ker(A - \lambda_j I)^{\nu_j}$ and all $j = 1, \dots, m$. By definition of L and N, we have $Nx = Ax - Lx = (A - \lambda_j I)\,x$ for all $x \in \ker(A - \lambda_j I)^{\nu_j}$. From this it is easy to see that $N^n x = (A - \lambda_j I)^n x$. However, $(A - \lambda_j I)^n x = 0$ since $x \in \ker(A - \lambda_j I)^{\nu_j}$ and $\nu_j \le n$. Thus, we conclude that $N^n x = 0$ for all $x \in \ker(A - \lambda_j I)^{\nu_j}$. This completes the proof of the existence part.

We next turn to the proof of the uniqueness statement. Suppose that $L, N \in \mathbb{C}^{n \times n}$ satisfy (i)–(iv). We claim that $Lx = \lambda_j x$ for all vectors $x \in \ker(A - \lambda_j I)^{\nu_j}$ and all $j = 1, \dots, m$. To this end, we use the formula $L - \lambda_j I = (A - \lambda_j I) - N$. Since N commutes with $A - \lambda_j I$, it follows that
$$(L - \lambda_j I)^{2n} = \sum_{l=0}^{2n} \binom{2n}{l}\,(-N)^l\,(A - \lambda_j I)^{2n-l}.$$
Using the identity $N^n = 0$, we obtain
$$(L - \lambda_j I)^{2n} = \sum_{l=0}^{n-1} \binom{2n}{l}\,(-N)^l\,(A - \lambda_j I)^{2n-l}.$$
Suppose that $x \in \ker(A - \lambda_j I)^{\nu_j}$. Since $\nu_j \le n$, we have $(A - \lambda_j I)^{2n-l}\,x = 0$ for all $l = 0, \dots, n-1$. This implies $(L - \lambda_j I)^{2n}\,x = 0$. Since L is diagonalizable, it follows that $(L - \lambda_j I)\,x = 0$. Thus, we conclude that $Lx = \lambda_j x$ for all vectors $x \in \ker(A - \lambda_j I)^{\nu_j}$ and all $j = 1, \dots, m$.

Since
$$\mathbb{C}^n = \ker(A - \lambda_1 I)^{\nu_1} \oplus \cdots \oplus \ker(A - \lambda_m I)^{\nu_m},$$
there is exactly one matrix L such that $Lx = \lambda_j x$ for $x \in \ker(A - \lambda_j I)^{\nu_j}$ and $j = 1, \dots, m$. This completes the proof of the uniqueness statement. □
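In practice the decomposition can also be computed from the Jordan form, a route equivalent to the generalized-eigenspace construction above: if $A = PJP^{-1}$, then $L = P\,\mathrm{diag}(J)\,P^{-1}$ and $N = A - L$. A sketch assuming Python with sympy, whose Matrix.jordan_form returns the pair (P, J):

```python
# L + N decomposition via the Jordan form, assuming sympy.
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])
P, J = A.jordan_form()            # A = P * J * P**-1
L = P * sp.diag(*[J[i, i] for i in range(J.rows)]) * P.inv()
N = A - L

print(N)                          # nilpotent part
print(N**3 == sp.zeros(3, 3))     # True: N^n = 0
print(L * N == N * L)             # True: L and N commute
```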

As an example, let us compute the L + N decomposition of a 4 × 4 matrix A. We begin by computing the eigenvalues and eigenvectors of A. The characteristic polynomial of A is given by
$$\det(\lambda I - A) = (\lambda - i)^2\,(\lambda + i)^2.$$
Thus, the eigenvalues of A are i and −i, and they both have algebraic multiplicity 2. A straightforward calculation shows that the generalized eigenspaces are given by
$$\ker(A - iI)^2 = \operatorname{span}\{v_1, v_2\}$$
and
$$\ker(A + iI)^2 = \operatorname{span}\{v_3, v_4\},$$