Chapter 5: Eigenvalues and Eigenvectors
Section 2: Diagonalization of a Matrix
Ivan Contreras, Sergey Dyachenko and Bob Muncaster
University of Illinois at Urbana-Champaign
Applied Linear Algebra, April 9, 2018
One of the most important uses of eigenvalue analysis is in the process of diagonalizing a square matrix. For example, which of these two systems would you rather attempt to solve:
$$
A \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = b
\qquad \text{or} \qquad
\Lambda \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix} = c \;?
$$

The second is diagonal and clearly the easier of the two. It turns out that the two are linked by

$$
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = S \begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}.
$$
It is, in fact, the case that the diagonal entries in Λ are the eigenvalues of A and the columns of S are the corresponding eigenvectors of A. More precisely:

Theorem: If an n × n matrix A has n linearly independent eigenvectors, then A = S Λ S^{-1}, where the columns of S are those eigenvectors and Λ is the diagonal matrix of the corresponding eigenvalues.
The formulas in this theorem allow you to switch from a problem involving A and x to an equivalent problem involving Λ and y:

Ax = b ⇐⇒ S Λ S^{-1} x = b ⇐⇒ Λ S^{-1} x = S^{-1} b ⇐⇒ Λ y = c,

where y = S^{-1} x and c = S^{-1} b. Note that x = S y, which gives a way to recover x once y is known. Now that we see the value of diagonalizing a matrix, can we always do it?
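Before turning to that question, here is a short NumPy sketch (added for illustration, not part of the original notes) of the change of variables just described; the matrix A and right-hand side b are made up for the example:

```python
import numpy as np

# A made-up diagonalizable matrix and right-hand side (illustrative only).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 5.0])

# Eigen-decomposition: columns of S are eigenvectors, Lam the diagonal of eigenvalues.
eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)

# Switch to the diagonal problem  Lam y = c  with  c = S^{-1} b.
c = np.linalg.solve(S, b)
y = c / eigvals            # dividing by the diagonal entries solves Lam y = c

# Recover x = S y and compare with a direct solve of A x = b.
x = S @ y
print(np.allclose(A @ x, b))                   # True
print(np.allclose(x, np.linalg.solve(A, b)))   # True
```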
Ex:

$$
A = \begin{pmatrix} 4 & 1 \\ 0 & 4 \end{pmatrix}
\;\Rightarrow\;
|A - \lambda I| = \begin{vmatrix} 4-\lambda & 1 \\ 0 & 4-\lambda \end{vmatrix} = (4-\lambda)^2
\;\Rightarrow\; \lambda = 4,\, 4
$$

Now for the corresponding eigenvectors:

$$
(A - \lambda I)\,x = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}
\;\Rightarrow\; b = 0
\;\Rightarrow\; x = a \begin{pmatrix} 1 \\ 0 \end{pmatrix}
$$
In this case we get only one linearly independent eigenvector, and so the theorem cannot be used.
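A quick numerical check of this example (an added sketch, not from the original notes):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [0.0, 4.0]])

# Eigenvectors for lambda = 4 form the null space of A - 4I.
# rank(A - 4I) = 1, so that null space is only 1-dimensional.
print(np.linalg.matrix_rank(A - 4.0 * np.eye(2)))   # 1

# np.linalg.eig still returns two columns, but both are numerically
# parallel to (1, 0): there is no invertible eigenvector matrix S.
eigvals, S = np.linalg.eig(A)
print(eigvals)   # [4. 4.]
print(S)
```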
Fact: n × n matrices that DO NOT have n linearly independent eigenvectors cannot be diagonalized (the next best specialized form of a matrix is its Jordan normal form, which can always be found; see the text). So we need n eigenvectors AND linear independence to be able to diagonalize. Here is a useful result:
Theorem: Let A have eigenvalues λ_i and corresponding eigenvectors x_i, i = 1, ..., n. If the λ's are all different, then the eigenvectors are linearly independent.

Proof: Suppose that x_1, ..., x_p are linearly independent but x_1, ..., x_p, x_{p+1} are not. This means that there are coefficients c_i, i = 1, ..., p+1, not all zero, such that

c_1 x_1 + · · · + c_p x_p + c_{p+1} x_{p+1} = 0,  with c_{p+1} ≠ 0.

Multiply by A and use A x_i = λ_i x_i to get

c_1 λ_1 x_1 + · · · + c_p λ_p x_p + c_{p+1} λ_{p+1} x_{p+1} = 0.

Multiplying the first equation by λ_{p+1} and subtracting it from the second gives

c_1 (λ_1 − λ_{p+1}) x_1 + · · · + c_p (λ_p − λ_{p+1}) x_p = 0.

Since x_1, ..., x_p are linearly independent and the λ's are all different, this forces c_i = 0 for i = 1, ..., p. The first equation then reduces to c_{p+1} x_{p+1} = 0, which is impossible because c_{p+1} ≠ 0 and x_{p+1}, being an eigenvector, is nonzero. This contradiction proves the theorem.
Theorem: Any matrix with distinct eigenvalues can be diagonalized.
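As an added numerical illustration of this theorem (a sketch only; a randomly generated matrix has distinct eigenvalues with probability 1):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # random matrix: eigenvalues almost surely distinct

eigvals, S = np.linalg.eig(A)        # may be complex for a real non-symmetric matrix
Lam = np.diag(eigvals)

# Distinct eigenvalues -> independent eigenvectors -> S is invertible,
# and the diagonalization A = S Lam S^{-1} holds.
print(np.allclose(A, S @ Lam @ np.linalg.inv(S)))   # True
```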
Ex:

$$
P = \begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix}
\;\Rightarrow\;
P x = \begin{pmatrix} \tfrac{1}{2}x_1 + \tfrac{1}{2}x_2 \\[2pt] \tfrac{1}{2}x_1 + \tfrac{1}{2}x_2 \end{pmatrix}
= \frac{x_1 + x_2}{2} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
$$

and so P is an orthogonal projection onto the vector with 1's as entries. This means that

$$
P \begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix},
\qquad
P \begin{pmatrix} 1 \\ -1 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
$$

From these we read off the eigenvalues 1 and 0 and their corresponding eigenvectors, and see that

$$
S = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},
\qquad
\Lambda = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},
\qquad
P = S \Lambda S^{-1}.
$$
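A short numerical check of this example (added here, not in the original notes):

```python
import numpy as np

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])

# Eigenvectors read off above: (1, 1) for eigenvalue 1 and (1, -1) for eigenvalue 0.
S = np.array([[1.0,  1.0],
              [1.0, -1.0]])
Lam = np.diag([1.0, 0.0])

print(np.allclose(P, S @ Lam @ np.linalg.inv(S)))   # True: P = S Lam S^{-1}
print(np.allclose(P @ P, P))                        # True: a projection satisfies P^2 = P
```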
One powerful application of diagonalization is in computing powers of a matrix. Since A = S Λ S^{-1},

A^2 = (S Λ S^{-1})(S Λ S^{-1}) = S Λ^2 S^{-1}, and more generally A^k = S Λ^k S^{-1}.

Because Λ^k is diagonal with entries λ_1^k, ..., λ_n^k, we conclude in addition that the eigenvalues of A^k are the kth powers of the eigenvalues of A, with the same eigenvectors. Ex: for a matrix whose eigenvalues and eigenvectors can be seen directly (such as P above), A^k can be written down immediately from this formula.
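A small sketch (added for illustration; the matrix is made up) comparing A^k computed through the diagonalization with a direct matrix power:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # illustrative matrix with eigenvalues 3 and 1
k = 5

eigvals, S = np.linalg.eig(A)

# A^k = S Lam^k S^{-1}: just raise the diagonal entries to the k-th power.
Ak = S @ np.diag(eigvals ** k) @ np.linalg.inv(S)

print(np.allclose(Ak, np.linalg.matrix_power(A, k)))             # True
print(sorted(np.linalg.eigvals(np.linalg.matrix_power(A, k))))   # approx [1, 243] = eigenvalues**k
```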
While the previous result seemed to require k to be a positive integer, it also works for negative integers provided we view A^{-k} as the inverse of A^k. This follows from the following observation.
Since det A = λ_1 λ_2 · · · λ_n, we can see that an invertible matrix never has zero as an eigenvalue. Then

Ax = λx ⇒ x = λ A^{-1} x ⇒ A^{-1} x = (1/λ) x.

We conclude that the eigenvalues of A^{-1} are the reciprocals of the eigenvalues of A, with the same eigenvectors. In terms of diagonalization we can see this equally well by

A^{-1} = (S Λ S^{-1})^{-1} = (S^{-1})^{-1} Λ^{-1} S^{-1} = S diag(1/λ_1, 1/λ_2, ..., 1/λ_n) S^{-1}.
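An added numerical check of this observation (the matrix is made up for the example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, S = np.linalg.eig(A)

# Inverse through the diagonalization: A^{-1} = S diag(1/lambda_i) S^{-1}.
A_inv = S @ np.diag(1.0 / eigvals) @ np.linalg.inv(S)
print(np.allclose(A_inv, np.linalg.inv(A)))                     # True

# Eigenvalues of A^{-1} are the reciprocals of those of A (same eigenvectors).
print(np.allclose(sorted(np.linalg.eigvals(np.linalg.inv(A))),
                  sorted(1.0 / eigvals)))                       # True
```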
The diagonalization factorization formula gives a simple way to relate the eigenvalues of A and A^T:

A = S Λ S^{-1} ⇒ A^T = (S^{-1})^T Λ^T S^T.

Since Λ is diagonal, Λ^T = Λ. Also, we already know that the transpose of an inverse is the inverse of the transpose: (S^{-1})^T = (S^T)^{-1}. Thus

A^T = (S^T)^{-1} Λ S^T = R Λ R^{-1}, where R = (S^T)^{-1}.

This gives us a diagonalization of A^T with the same Λ as for A. This establishes:

Theorem: A and A^T have the same eigenvalues. Thus det A = λ_1 · · · λ_n = det A^T.

Another way to see this is to recall that a matrix and its transpose have the same determinant:

0 = det(A − λI) = det((A − λI)^T) = det(A^T − λI^T) = det(A^T − λI).
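A last added sketch verifying the theorem numerically on a made-up matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])

# Sort (possibly complex) eigenvalues so the two lists can be compared entrywise.
ev_A  = np.sort_complex(np.linalg.eigvals(A))
ev_AT = np.sort_complex(np.linalg.eigvals(A.T))

print(np.allclose(ev_A, ev_AT))                     # True: A and A^T have the same eigenvalues
print(np.isclose(np.linalg.det(A), np.prod(ev_A)))  # True: det A = product of the eigenvalues
```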