Applied Linear Algebra
Chapter 5: Diagonalization and Eigenvalues and Eigenvectors
Section 2: Diagonalization of a Matrix
Ivan Contreras, Sergey Dyachenko and Bob Muncaster
University of Illinois at Urbana-Champaign
April 9, 2018

The Value of Diagonal Systems

One of the most important uses of eigenvalue analysis is in the process of diagonalizing a square matrix. For example, which of these two systems would you rather attempt to solve:

$$A \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = b \qquad \text{OR} \qquad \Lambda \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = c$$

The second is diagonal and clearly the easier of the two. It turns out that the two are linked by

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = S \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}$$

It is, in fact, the case that the diagonal entries in Λ are the eigenvalues of A and the columns of S are the corresponding eigenvectors of A.

Changing Coordinates

The formulas in this theorem (A = SΛS⁻¹, or equivalently AS = SΛ) allow you to switch from a problem involving A and x to an equivalent problem involving Λ and y:

Ax = b ⇐⇒ SΛS⁻¹x = b ⇐⇒ ΛS⁻¹x = S⁻¹b ⇐⇒ Λy = c

where y = S⁻¹x and c = S⁻¹b. Note that x = Sy, which gives a way to recover x once y is known. Now that we see the value of diagonalizing a matrix, can we always do it?
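As a quick numerical sketch of these steps (the A and b below are arbitrary illustrative choices, not taken from the notes):

```python
# Sketch: solve Ax = b by diagonalizing A, then compare with a direct solve.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 5.0])

# Columns of S are eigenvectors of A; lam holds the eigenvalues.
lam, S = np.linalg.eig(A)

# Change coordinates: c = S^{-1} b, then solve the diagonal system Λy = c.
c = np.linalg.solve(S, b)
y = c / lam                 # entrywise division solves the diagonal system

# Recover x = Sy and check against solving Ax = b directly.
x = S @ y
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```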

Ex:

$$A = \begin{bmatrix} 4 & 1 \\ 0 & 4 \end{bmatrix} \;\Rightarrow\; |A - \lambda I| = \begin{vmatrix} 4-\lambda & 1 \\ 0 & 4-\lambda \end{vmatrix} = (4-\lambda)^2 \;\Rightarrow\; \lambda = 4,\ 4$$

Now for the corresponding eigenvectors:

$$(A - \lambda I)\,x = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \;\Rightarrow\; b = 0 \;\Rightarrow\; x = a \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$

In this case we get only one linearly independent eigenvector, and so the theorem cannot be used.
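A quick NumPy check of this example (a sketch added for illustration, not part of the original notes):

```python
# NumPy reports the repeated eigenvalue 4 for A = [[4, 1], [0, 4]], and the
# matrix of eigenvectors it returns is numerically singular, so no
# invertible S exists and A cannot be diagonalized.
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 4.0]])
lam, S = np.linalg.eig(A)

print(lam)                # [4. 4.] -- the repeated eigenvalue
print(np.linalg.cond(S))  # enormous (~1e16): the columns of S are dependent
```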

Linear Independence of Eigenvectors

Fact: n × n matrices that DO NOT have n linearly independent eigenvectors cannot be diagonalized (the next best specialized form of a matrix is its Jordan normal form, and that can always be found; see the text). So we need n eigenvectors AND linear independence to be able to diagonalize. Here is a useful result:

Theorem: Let A have eigenvalues λi and corresponding eigenvectors xi, i = 1, ..., n. If the λ's are all different, then the eigenvectors are linearly independent.

Proof: Suppose that x1, ..., xp are linearly independent but x1, ..., xp, xp+1 are not. This means that there are coefficients ci, i = 1, ..., p + 1, not all zero, such that

$$c_1 x_1 + \cdots + c_p x_p + c_{p+1} x_{p+1} = 0, \qquad c_{p+1} \neq 0$$

Multiply by A and use Axi = λi xi to get

$$c_1 \lambda_1 x_1 + \cdots + c_p \lambda_p x_p + c_{p+1} \lambda_{p+1} x_{p+1} = 0$$

Subtracting λp+1 times the first equation from the second eliminates xp+1:

$$c_1(\lambda_1 - \lambda_{p+1}) x_1 + \cdots + c_p(\lambda_p - \lambda_{p+1}) x_p = 0$$

Since x1, ..., xp are linearly independent, each coefficient ci(λi − λp+1) must vanish, and since the λ's are distinct this forces ci = 0 for i = 1, ..., p. The first equation then reduces to cp+1 xp+1 = 0, which is impossible because cp+1 ≠ 0 and an eigenvector is never the zero vector. This contradiction proves the independence.
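A numerical illustration of the theorem (the matrix below is an arbitrary choice with distinct eigenvalues, not from the notes):

```python
# Distinct eigenvalues guarantee n linearly independent eigenvectors,
# so the eigenvector matrix S has full rank.
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])       # triangular: eigenvalues 1 and 3, distinct
lam, S = np.linalg.eig(A)

print(lam)                       # [1. 3.]
print(np.linalg.matrix_rank(S))  # 2 -- independent eigenvectors, A diagonalizable
```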

An Example

Theorem: Any matrix with distinct eigenvalues can be diagonalized.

Ex:

$$P = \begin{bmatrix} \tfrac{1}{2} & \tfrac{1}{2} \\ \tfrac{1}{2} & \tfrac{1}{2} \end{bmatrix} \;\Rightarrow\; Px = \begin{bmatrix} \tfrac{1}{2}x_1 + \tfrac{1}{2}x_2 \\ \tfrac{1}{2}x_1 + \tfrac{1}{2}x_2 \end{bmatrix} = \frac{x_1 + x_2}{2} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$

and so P is an orthogonal projection onto the vector with 1's as entries. This means that

$$P \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 1 \times \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \qquad P \begin{bmatrix} 1 \\ -1 \end{bmatrix} = 0 \times \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$

From these we read off the eigenvalues 1 and 0 and their corresponding eigenvectors, and see that

$$S = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \qquad P = S \Lambda S^{-1}$$
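A one-line NumPy check of this factorization (a sketch added for illustration):

```python
# Verify that P = S Λ S^{-1} for the projection example above.
import numpy as np

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
S = np.array([[1.0,  1.0],
              [1.0, -1.0]])       # columns: eigenvectors for eigenvalues 1 and 0
Lam = np.diag([1.0, 0.0])

print(np.allclose(P, S @ Lam @ np.linalg.inv(S)))  # True
```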

Another Example

Ex: For some matrices the eigenvalues and eigenvectors can be seen directly from how the matrix acts, with no characteristic-polynomial computation. Writing the eigenvectors into the columns of S and the eigenvalues into the diagonal of Λ immediately gives the factorization A = SΛS⁻¹.
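As a stand-in sketch (the S and Λ below are arbitrary assumptions chosen for illustration, not the matrices of this example): pick any invertible S and any diagonal Λ, form A = SΛS⁻¹, and the chosen eigenvalues come back out.

```python
# Build A = S Λ S^{-1} from chosen eigenvectors and eigenvalues, then
# confirm that NumPy recovers the eigenvalues that were put in.
import numpy as np

S = np.array([[1.0, 1.0],
              [0.0, 1.0]])        # any invertible matrix of eigenvectors
Lam = np.diag([2.0, 5.0])         # chosen eigenvalues on the diagonal
A = S @ Lam @ np.linalg.inv(S)

print(np.sort(np.linalg.eigvals(A)))  # [2. 5.]
```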

Eigenvalues of the Inverse of a Matrix

While the previous result (that Aᵏ = SΛᵏS⁻¹, so the eigenvalues of Aᵏ are the kth powers of the eigenvalues of A) seemed to require k to be a positive integer, it also works for negative integers provided we view A⁻ᵏ as the inverse of Aᵏ. This follows from the following observation.

Since det A = λ1λ2 · · · λn, we can see that an invertible matrix never has zero as an eigenvalue. Then

$$Ax = \lambda x \;\Rightarrow\; x = \lambda A^{-1} x \;\Rightarrow\; A^{-1} x = \frac{1}{\lambda}\, x$$

We conclude that the eigenvalues of A⁻¹ are the reciprocals of the eigenvalues of A, with the same eigenvectors. In terms of diagonalization we can see this equally well by

$$A^{-1} = (S \Lambda S^{-1})^{-1} = (S^{-1})^{-1} \Lambda^{-1} S^{-1} = S \,\mathrm{diag}\!\left(\frac{1}{\lambda_1}, \frac{1}{\lambda_2}, \ldots, \frac{1}{\lambda_n}\right) S^{-1}$$
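A quick numerical check of this reciprocal relationship (the A below is an arbitrary invertible example, not from the notes):

```python
# The eigenvalues of A^{-1} are the reciprocals of the eigenvalues of A.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam = np.linalg.eigvals(A)
lam_inv = np.linalg.eigvals(np.linalg.inv(A))

# Sort both lists since eigenvalue order is not guaranteed.
print(np.allclose(np.sort(lam_inv), np.sort(1.0 / lam)))  # True
```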

The Eigenvalues of a Matrix and Its Transpose

The diagonalization factorization formula gives a simple way to relate the eigenvalues of A and Aᵀ:

$$A = S \Lambda S^{-1} \;\Rightarrow\; A^T = (S^{-1})^T \Lambda^T S^T$$

Since Λ is diagonal, Λᵀ = Λ. Also, we already know that the transpose of an inverse is the inverse of the transpose: (S⁻¹)ᵀ = (Sᵀ)⁻¹. Thus

$$A^T = (S^T)^{-1} \Lambda S^T = R \Lambda R^{-1}, \qquad \text{where } R = (S^T)^{-1}$$

This gives us a diagonalization of Aᵀ with the same Λ as for A. This establishes:

Theorem: A and Aᵀ have the same eigenvalues. Thus det A = λ1 · · · λn = det Aᵀ.

Another way to see this is to recall that a matrix and its transpose have the same determinant:

$$0 = \det(A - \lambda I) = \det\!\big((A - \lambda I)^T\big) = \det(A^T - \lambda I^T) = \det(A^T - \lambda I)$$
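And a numerical sketch of the theorem (the A below is an arbitrary non-symmetric example chosen for illustration):

```python
# A and A^T have the same eigenvalues.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(A.T))))  # True
```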