Symmetric Matrices - Lecture Notes - Introduction to Linear Algebra | MATH 311

Material Type: Notes; Class: Intro Linear Algebra; University: University of Hawaii at Hilo; Term: Unknown 1989;

Study notes, academic year 2009/2010. Uploaded on 04/12/2010 by koofers-user-hrd.

Math 311 Lecture 35
RECALL. D A is symmetric iff AT = A.
E For the standard column vector inner product,
[(u, v)] = uT[v.
F L:VéV is diagonalizable iff V has a basis of
eigenvectors for L.
LEMMA. (Av, u) = (v, Aᵀu).
PROOF. (Av, u) = (Av)ᵀu = (vᵀAᵀ)u = vᵀ(Aᵀu) = (v, Aᵀu). ∎
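The lemma can be sanity-checked numerically. A minimal sketch in plain Python; the matrix and vectors below are arbitrary examples, not from the notes:

```python
# Check the lemma (Av, u) = (v, A^T u) on a concrete example.

def matvec(A, x):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def transpose(A):
    """Rows become columns."""
    return [list(col) for col in zip(*A)]

def inner(u, v):
    """Standard column-vector inner product (u, v) = u^T v."""
    return sum(a * b for a, b in zip(u, v))

A = [[1, 2], [3, 4]]   # arbitrary example; need not be symmetric
v = [5, -1]
u = [2, 7]

# Both sides of the lemma agree.
assert inner(matvec(A, v), u) == inner(v, matvec(transpose(A), u))
```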
DEFINITION. A matrix A is orthogonal iff A⁻¹ = Aᵀ. A matrix
A or a linear transformation L is an isometry iff it
preserves inner products, i.e., for all u, v ∈ V,
(Au, Av) = (u, v) or (Lu, Lv) = (u, v) respectively.
THEOREM. A is orthogonal iff its columns are orthonormal
(orthogonal unit vectors) iff its rows are orthonormal iff
A is an isometry.
PARTIAL PROOF. Assume A is orthogonal. ∴ A⁻¹ = Aᵀ and so
AᵀA = A⁻¹A = I. We want orthonormal columns.
If vᵢ, vⱼ are columns of A, then vᵢᵀ is the ith row of Aᵀ.
∴ (vᵢ, vⱼ) = vᵢᵀvⱼ = the (i, j)th entry of AᵀA = the (i, j)th
entry of I (since AᵀA = I) = 0 if i ≠ j and 1 if i = j.
If i ≠ j, then (vᵢ, vⱼ) = 0 and vᵢ and vⱼ are orthogonal.
If i = j, then (vᵢ, vᵢ) = 1 and so vᵢ has length 1.
Finally we show that A is an isometry. For any u, v ∈ V,
(Au, Av) = (u, AᵀAv) = (u, A⁻¹Av) = (u, Iv) = (u, v). ∎
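To illustrate the theorem's equivalences, here is a sketch using a 2×2 rotation matrix (an assumed example, not from the notes); its columns are orthonormal and it acts as an isometry:

```python
# A rotation matrix P has orthonormal columns and preserves inner products.
import math

theta = 0.7  # arbitrary angle
P = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

cols = [list(c) for c in zip(*P)]
# Columns are orthonormal: distinct columns orthogonal, each of length 1.
assert abs(inner(cols[0], cols[1])) < 1e-12
assert abs(inner(cols[0], cols[0]) - 1) < 1e-12
assert abs(inner(cols[1], cols[1]) - 1) < 1e-12

# Isometry: (Pu, Pv) = (u, v) for sample vectors u, v.
u, v = [3.0, 4.0], [-1.0, 2.0]
assert abs(inner(matvec(P, u), matvec(P, v)) - inner(u, v)) < 1e-12
```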
THEOREM. A symmetric matrix has a basis of eigenvectors.
PROOF. Omitted.
THEOREM. Eigenvectors for distinct eigenvalues of a
symmetric matrix are orthogonal.
PROOF. Suppose A is symmetric and λ and μ are distinct
eigenvalues with eigenvectors v and u.
∴ A = Aᵀ, Av = λv, Au = μu and, since λ ≠ μ, (λ − μ) ≠ 0. ∴
λ(v, u) = (λv, u) = (Av, u) = (v, Aᵀu) = (v, Au) = (v, μu) = μ(v, u).
∴ λ(v, u) − μ(v, u) = 0. ∴ (λ − μ)(v, u) = 0. ∴ (v, u) = 0.
Hence v and u are orthogonal. ∎
THEOREM. Let A be a symmetric n × n matrix; let D be the
diagonal matrix of eigenvalues for A. Then A = PDPᵀ
where P is an orthogonal matrix whose ith column is an
eigenvector for the ith eigenvalue on D's diagonal.
PROOF. Let A be symmetric. By the second theorem, there
is a basis S = {v₁, v₂, ..., vₙ} of eigenvectors for A. Let D
be the diagonal matrix with diagonal entries λ₁, λ₂, ..., λₙ
where λᵢ = the eigenvalue of vᵢ. The eigenvectors of
different eigenvalues are already orthogonal. Apply
Gram-Schmidt to orthonormalize eigenvectors which
have the same eigenvalue. As before, apply the
"change-of-basis" theorem to get A = PDP⁻¹ where
P = P_{U←S} = [v₁ | v₂ | v₃ | ... | vₙ] = the orthogonal matrix with
columns vᵢ. P orthogonal ⇒ P⁻¹ = Pᵀ ⇒ A = PDPᵀ. ∎
When P is orthogonal, the inverse P -1 = PT is easy to
compute.
RECALL. For diagonalizable matrices, an eigenvalue of
degree k has k independent eigenvectors.
EXAMPLE. Find a diagonal matrix D and an orthogonal matrix P
such that A = PDP⁻¹ where

A = [ 0 2 2 ]
    [ 2 0 2 ]
    [ 2 2 0 ]

Characteristic polynomial = |λI − A| = (λ + 2)²(λ − 4).
Eigenvalues: λ = 4 with degree 1, λ = −2 with degree 2.
List the positives first, then the negatives, then the 0's.
Eigenvectors for λ = 4: [1, 1, 1]ᵀ.
Eigenvectors for λ = −2: [−1, 1, 0]ᵀ, [−1, 0, 1]ᵀ.
For λ = 4, the normalized vector is (1/√3)[1, 1, 1]ᵀ.
For λ = −2, apply Gram-Schmidt. We'll omit ᵀ until the end.
{w₁, w₂} = {[−1, 1, 0], [−1, 0, 1]}.
v₁ = (1/√2)[−1, 1, 0].
u₂ = w₂ − (w₂, v₁)v₁ = [−1, 0, 1] − ½([−1, 0, 1], [−1, 1, 0])[−1, 1, 0]
   = [−1, 0, 1] − ½ · 1 · [−1, 1, 0] = [−½, −½, 1] ~ [−1, −1, 2].
v₂ = (1/√6)[−1, −1, 2]ᵀ.
∴ for λ = −2, we have (1/√2)[−1, 1, 0]ᵀ and (1/√6)[−1, −1, 2]ᵀ.
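The Gram-Schmidt step above can be replayed in code. A minimal sketch with plain Python lists, confirming the hand computation:

```python
# Orthonormalize the two eigenvectors for lambda = -2 by Gram-Schmidt.
import math

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def scale(c, v):
    return [c * x for x in v]

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def normalize(v):
    return scale(1 / math.sqrt(inner(v, v)), v)

w1, w2 = [-1, 1, 0], [-1, 0, 1]
v1 = normalize(w1)                       # (1/sqrt(2))[-1, 1, 0]
u2 = sub(w2, scale(inner(w2, v1), v1))   # remove the v1 component: [-1/2, -1/2, 1]
v2 = normalize(u2)                       # (1/sqrt(6))[-1, -1, 2]

assert all(abs(a - b) < 1e-12 for a, b in zip(u2, [-0.5, -0.5, 1.0]))
assert abs(inner(v1, v2)) < 1e-12        # v1, v2 orthogonal
assert abs(inner(v2, v2) - 1) < 1e-12    # v2 has length 1
```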
Answer:

D = [ 4  0  0 ]     P = [ 1/√3  −1/√2  −1/√6 ]
    [ 0 −2  0 ]         [ 1/√3   1/√2  −1/√6 ]
    [ 0  0 −2 ]         [ 1/√3   0      2/√6 ]
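A quick numerical check of the answer: with D and P as above, P should be orthogonal and PDPᵀ should reproduce A.

```python
# Verify A = P D P^T and that P is orthogonal, for the example above.
import math

s3, s2, s6 = math.sqrt(3), math.sqrt(2), math.sqrt(6)

A = [[0, 2, 2], [2, 0, 2], [2, 2, 0]]
D = [[4, 0, 0], [0, -2, 0], [0, 0, -2]]
P = [[1/s3, -1/s2, -1/s6],
     [1/s3,  1/s2, -1/s6],
     [1/s3,  0,     2/s6]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

# A = P D P^T (up to floating-point roundoff).
PDPt = matmul(matmul(P, D), transpose(P))
assert all(abs(PDPt[i][j] - A[i][j]) < 1e-12
           for i in range(3) for j in range(3))

# P is orthogonal: P^T P = I.
PtP = matmul(transpose(P), P)
assert all(abs(PtP[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(3) for j in range(3))
```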
Hw 33 Answers
• [8, 2, 1]ᵀ
• (a) No  (b) Yes
• Consider a transition matrix

  A = [ 0  .2  0  ]
      [ 0  .3  .3 ]
      [ 1  .5  .7 ]

  (a) Suppose x = [0, 1, 0]ᵀ. Find x⁽¹⁾, x⁽²⁾, x⁽³⁾:
      x⁽¹⁾ = [.2, .3, .5]ᵀ
      x⁽²⁾ = [.06, .24, .70]ᵀ
      x⁽³⁾ = [.048, .282, .67]ᵀ
  (c) [3/53, 15/53, 35/53]ᵀ = [.0566, .2830, .6604]ᵀ
• For each matrix, find the steady-state probability
  vector, if any. Write "none" if there are none.
  (a) [2/3, 1/3]ᵀ  (c) [9/17, 4/17, 4/17]ᵀ
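The transition-matrix answers above can be reproduced by iterating x⁽ᵏ⁺¹⁾ = Ax⁽ᵏ⁾. A sketch, using the matrix A as reconstructed above:

```python
# Iterate the Markov chain x(k+1) = A x(k) from x = [0, 1, 0]^T.

A = [[0.0, 0.2, 0.0],
     [0.0, 0.3, 0.3],
     [1.0, 0.5, 0.7]]

def matvec(M, x):
    return [sum(a * b for a, b in zip(row, x)) for row in M]

x = [0.0, 1.0, 0.0]
x1 = matvec(A, x)    # [.2, .3, .5]
x2 = matvec(A, x1)   # [.06, .24, .70]
x3 = matvec(A, x2)   # [.048, .282, .67]

assert all(abs(a - b) < 1e-12 for a, b in zip(x1, [0.2, 0.3, 0.5]))
assert all(abs(a - b) < 1e-12 for a, b in zip(x2, [0.06, 0.24, 0.70]))
assert all(abs(a - b) < 1e-12 for a, b in zip(x3, [0.048, 0.282, 0.67]))

# The steady-state vector [3/53, 15/53, 35/53]^T is fixed by A.
s = [3/53, 15/53, 35/53]
assert all(abs(a - b) < 1e-12 for a, b in zip(matvec(A, s), s))
```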