Lecture 33

Andrei Antonenko

May 2, 2003

1 Direct sum of vector spaces

In the first part of this lecture we will consider some more concepts from the theory of vector spaces.

Definition 1.1. The sum of two vector spaces U and V, denoted U + V, is the vector space which consists of all vectors u + v, where u ∈ U and v ∈ V.

Definition 1.2. Vector spaces U1, U2, ..., Un are called linearly independent if from u1 + · · · + un = 0, where ui ∈ Ui, it follows that ui = 0 for all i.

The sum of linearly independent vector spaces is called a direct sum of these vector spaces and is denoted by U1 ⊕ · · · ⊕ Un.

Definition 1.3. The vector space V is said to be equal to a direct sum of vector spaces U1, ..., Un,

V = U1 ⊕ · · · ⊕ Un,

if any vector v from V can be represented as

v = u1 + · · · + un,   ui ∈ Ui

uniquely.

For example, the plane R^2 is equal to the direct sum of the x- and y-axes.
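As a quick numerical sanity check of this example (a sketch assuming NumPy; the vector is an arbitrary choice):

```python
import numpy as np

# Any vector in R^2 splits uniquely into its x-axis and y-axis components.
v = np.array([3.0, -2.0])      # arbitrary example vector
u = np.array([v[0], 0.0])      # component lying on the x-axis
w = np.array([0.0, v[1]])      # component lying on the y-axis

assert np.allclose(u + w, v)   # v = u + w, and this representation is unique
```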

Example 1.4. The space of all matrices is equal to a direct sum of the space of all symmetric matrices and the space of all skew-symmetric matrices, since any matrix A can be uniquely represented as

A = (A + A^T)/2 + (A − A^T)/2,

and one can check that (A + A^T)/2 is always symmetric, and (A − A^T)/2 is always skew-symmetric. Moreover, the sum is direct, since if a matrix is both symmetric and skew-symmetric, it is equal to the 0-matrix.
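To make Example 1.4 concrete, here is a small NumPy check (the matrix A below is an arbitrary illustration, not taken from the lecture):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])   # arbitrary square matrix

S = (A + A.T) / 2                  # symmetric part
K = (A - A.T) / 2                  # skew-symmetric part

assert np.allclose(S, S.T)         # S is symmetric
assert np.allclose(K, -K.T)        # K is skew-symmetric
assert np.allclose(S + K, A)       # A = S + K
```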

2 Invariant spaces

Definition 2.1. Let A be an operator in a vector space V. The subspace U ⊂ V is called an invariant subspace with respect to the operator A if

AU ⊂ U.

This definition means that the vectors from an invariant subspace remain in this subspace after application of the operator A.

Example 2.2. Consider the operator of rotation in 3-dimensional space about some axis. We can see that all the planes perpendicular to the axis of rotation are invariant. Moreover, the axis of rotation is itself invariant.

If the basis {e1, ..., en} of V is such that the first k vectors {e1, ..., ek} form a basis of U, then the matrix of the operator A in this basis has the following form:

( B   D )
( 0   C )

Moreover, if the space V is equal to a direct sum of two subspaces V = U ⊕ W, and {e1, ..., ek} is a basis of U, and {ek+1, ..., en} is a basis of W, then the matrix of A has the following form:

( B   0 )
( 0   C )

Example 2.3. Consider the rotation in 3-dimensional space about some axis by an angle α. In the basis {e1, e2, e3}, if the vector e3 is directed along the axis of rotation, the matrix of this operator has the following form:

( cos α   −sin α   0 )
( sin α    cos α   0 )
(   0        0     1 )

This matrix is consistent with the decomposition of R^3 into a direct sum of two invariant subspaces: R^3 = 〈e1, e2〉 ⊕ 〈e3〉.
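A short NumPy sketch of Example 2.3 (the angle α = 0.7 is an arbitrary choice): it checks that the plane 〈e1, e2〉 and the line 〈e3〉 are invariant under the rotation, which is exactly what the block-diagonal shape of the matrix expresses.

```python
import numpy as np

alpha = 0.7                                      # arbitrary rotation angle
R = np.array([[np.cos(alpha), -np.sin(alpha), 0.0],
              [np.sin(alpha),  np.cos(alpha), 0.0],
              [0.0,            0.0,           1.0]])

# The plane spanned by e1, e2 is invariant: images of e1 and e2 have zero e3-component.
assert np.allclose(R[2, :2], 0.0)

# The axis spanned by e3 is invariant: R e3 = e3.
e3 = np.array([0.0, 0.0, 1.0])
assert np.allclose(R @ e3, e3)
```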

3 Jordan canonical form

As we saw in previous lectures, some operators are not diagonalizable. But we are still able to simplify the matrix of the operator to some extent.

Thus pA(t) = (t − λ)^k det(tI − C).

Now let C be an operator in the space W = 〈ek+1, ..., en〉 with the matrix C. We need to prove that λ is not a root of the polynomial det(tI − C), i.e. it is not an eigenvalue of C. Let's assume the contrary. Then there exists v ∈ W such that Cv = λv. Then

Av = λv + u,   u ∈ V^λ(A),

and thus (A − λI)v = u is a root vector, but in this case v is also a root vector, which contradicts the definition of V^λ(A).

Proposition 3.4. The root spaces corresponding to different λi's are linearly independent.

Proof. Assume
c1e1 + c2e2 + · · · + ckek = 0,   ei ∈ V^λi(A).   (3)

Let's apply the operator (A − λkI)^m, where m is the height of ek, to this equality. We obtain:

(A − λkI)^m c1e1 + · · · + (A − λkI)^m ck−1ek−1 = 0.   (4)

Using induction on k we have

(A − λkI)^m c1e1 = · · · = (A − λkI)^m ck−1ek−1 = 0.   (5)

But since the operator (A − λkI) is not degenerate on any of V^λ1(A), ..., V^λk−1(A) (i.e., it does not map nonzero vectors to 0), we have c1 = · · · = ck−1 = 0, and thus ck = 0 also.

These two propositions lead to the following theorem:

Theorem 3.5. If the characteristic polynomial pA(t) can be factored into linear terms, then

V = V^λ1(A) ⊕ · · · ⊕ V^λs(A),   (6)

where λ1, ..., λs are the different roots of pA(t).
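The theorem can be checked on a small non-diagonalizable matrix (a SymPy sketch; the matrix is an arbitrary illustration). Each root space is computed as Ker(A − λI)^n, which suffices because the height of a root vector never exceeds n:

```python
import sympy as sp

A = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])                        # arbitrary example; eigenvalues 2, 2, 3

n = A.shape[0]
I = sp.eye(n)

total_dim = 0
for lam in A.eigenvals():                         # distinct roots of the characteristic polynomial
    root_space = ((A - lam * I) ** n).nullspace() # basis of V^lambda(A) = Ker (A - lambda I)^n
    total_dim += len(root_space)

assert total_dim == n                             # the root spaces together give all of V
```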

Now we will discuss the action of the operator A on any of the root spaces.

Definition 3.6. The linear operator N is called nilpotent if there exists a positive integer m such that N^m = 0. The minimal such m is called the height of the nilpotent operator N.

Example 3.7. The operator of taking the derivative in the space Pn(t) of polynomials of degree at most n is a nilpotent operator of height n + 1.

Since V^λ(A) = Ker(A − λI)^m for some m, the operator N = A − λI is nilpotent on V^λ(A). Thus we need to study nilpotent operators. Let N be a nilpotent operator in the space V. The height of the vector v with respect to N is the minimal number m such that N^m v = 0. Obviously, the height of any vector is less than or equal to the height of the nilpotent operator. We will denote the height of the vector v by ht v.
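A small NumPy illustration of Example 3.7 and of the height of a vector (the coordinates below are taken in the monomial basis {1, t, t^2, t^3}, an assumption of this sketch): the derivative operator D on P3(t) satisfies D^4 = 0 but D^3 ≠ 0, and the polynomial t^2 has height 3.

```python
import numpy as np

# Matrix of the derivative on P_3(t) in the basis {1, t, t^2, t^3}: D(t^k) = k t^(k-1).
D = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])

assert not np.allclose(np.linalg.matrix_power(D, 3), 0)      # D^3 != 0
assert np.allclose(np.linalg.matrix_power(D, 4), 0)          # D^4 = 0: the height of D is 4 = n + 1

p = np.array([0.0, 0.0, 1.0, 0.0])                           # coordinates of the polynomial t^2
assert not np.allclose(np.linalg.matrix_power(D, 2) @ p, 0)  # D^2 p != 0
assert np.allclose(np.linalg.matrix_power(D, 3) @ p, 0)      # D^3 p = 0, so ht p = 3
```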

Lemma 3.8. If v is a vector of height m, then the vectors

v, Nv, N^2 v, ..., N^(m−1) v

are linearly independent.

Proof. Assume
λ0 v + λ1 Nv + λ2 N^2 v + · · · + λm−1 N^(m−1) v = 0.   (7)

Let λk be the first nonzero coefficient. Then applying the operator N^(m−k−1) we obtain the incorrect equality
λk N^(m−1) v = 0.   (8)

Definition 3.9. The subspace 〈v, Nv, N^2 v, ..., N^(m−1) v〉 (m = ht v) is called a cyclic subspace of the nilpotent operator N, generated by the vector v.

Obviously the cyclic subspace is invariant with respect to N. The operator N on the cyclic subspace 〈v, Nv, N^2 v, ..., N^(m−1) v〉 has height m, and in the basis {N^(m−1) v, ..., N^2 v, Nv, v} it has the matrix

J(0) =

( 0  1  0  ...  0  0 )
( 0  0  1  ...  0  0 )
( .  .  .  ...  .  . )
( 0  0  0  ...  0  1 )
( 0  0  0  ...  0  0 )

which is called a nilpotent Jordan block.
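The nilpotent Jordan block is easy to build and test numerically (a NumPy sketch with an arbitrary block size m = 4): it has 1's just above the diagonal, its m-th power is zero, and its kernel is one-dimensional.

```python
import numpy as np

m = 4                                    # arbitrary block size
J0 = np.eye(m, k=1)                      # 1's on the superdiagonal: a nilpotent Jordan block

assert not np.allclose(np.linalg.matrix_power(J0, m - 1), 0)   # J0^(m-1) != 0
assert np.allclose(np.linalg.matrix_power(J0, m), 0)           # J0^m = 0: its height is m

assert np.linalg.matrix_rank(J0) == m - 1                      # dim Ker J0 = 1
```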

Theorem 3.10. The space V can be decomposed into a direct sum of cyclic subspaces of the operator N. The number of subspaces in such a decomposition is equal to dim Ker N.

Proof. The proof is done by induction on n = dim V. If n = 1 the theorem is obvious. If n > 1, let U ⊂ V be an arbitrary (n − 1)-dimensional subspace containing Im N. Obviously, U is invariant with respect to N. By the induction hypothesis,

U = U1 ⊕ · · · ⊕ Uk,

Now, getting back to an arbitrary linear operator A, we can see that on a cyclic subspace of the nilpotent operator A − λI restricted to V^λ(A), the operator A has a matrix of the following form:

J(λ) = J(0) + λI =

( λ  1  0  ...  0  0 )
( 0  λ  1  ...  0  0 )
( 0  0  λ  ...  0  0 )
( .  .  .  ...  .  . )
( 0  0  0  ...  λ  1 )
( 0  0  0  ...  0  λ )

This matrix is called a Jordan block with the eigenvalue λ.

Definition 3.11. A Jordan matrix is a matrix with Jordan blocks along the diagonal and zeros everywhere else.

Combining the previous results, we obtain the most important result of the theory of linear operators:

Theorem 3.12. If the characteristic polynomial of the operator can be factored into linear terms, then there exists a basis in which the matrix of the operator is a Jordan matrix.

Corollary 3.13. Over the complex numbers, the matrix of any operator can be brought to Jordan canonical form.
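As a closing illustration, the Jordan canonical form can be computed directly with SymPy (a sketch; the matrix is an arbitrary example with a repeated eigenvalue, and jordan_form is the library routine, not part of the lecture):

```python
import sympy as sp

A = sp.Matrix([[ 3, 1],
               [-1, 1]])                 # arbitrary example; characteristic polynomial (t - 2)^2

P, J = A.jordan_form()                   # A = P * J * P^(-1), J is a Jordan matrix
print(J)                                 # a single 2x2 Jordan block with eigenvalue 2

assert sp.simplify(A - P * J * P.inv()) == sp.zeros(2, 2)
```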