


Image and Kernel; Matrix of a Linear Function
Linear Algebra Lecture Notes, Andrei Antonenko, Department of Applied Math and Statistics, Stony Brook University, New York, USA
1 Image and kernel

In the last lecture we studied the image and kernel of a linear function. Now we will prove some of their properties. First let’s consider the kernel. Let f : V → U be a linear function, and let Ker f be its kernel, i.e. the set of all elements v of V which map to 0. Then we can state the following properties.
Existence of zero. The zero vector 0 belongs to the kernel of f, since f(0) = 0, so 0 maps to 0 and therefore lies in Ker f.
Summation. Let vectors v and u belong to the kernel, so f(v) = 0 and f(u) = 0. Then
f(v + u) = f(v) + f(u) = 0 + 0 = 0,
and thus v + u belongs to Ker f.
Multiplication by a scalar. Let a vector v belong to the kernel of f. Then we know that f(v) = 0. Now for any constant k we have:
f (kv) = kf (v) = k · 0 = 0 ,
thus kv belongs to Ker f.
So, we proved the following theorem:
Theorem 1.1. The kernel of a linear function f : V → U is a vector subspace of V.
Example 1.2. Consider the projection function f(x, y, z) = (x, y, 0). Its kernel consists of the vectors of the form (0, 0, c) for any constant c. Geometrically speaking, this is the z-axis in 3-dimensional space. This is a vector subspace.
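As a quick check of this example (a sketch not present in the original notes, assuming SymPy is available), we can take the standard matrix of this projection and ask for a basis of its null space, which is exactly Ker f:

import sympy as sp

# Standard matrix of the projection f(x, y, z) = (x, y, 0)
A = sp.Matrix([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])

# Basis of the kernel: the single vector (0, 0, 1), spanning the z-axis
print(A.nullspace())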
Now let’s consider the image. Let f : V → U be a linear function, and let Im f be its image, i.e. the set of all vectors of U that can be obtained by applying f to vectors from V. We’ll state some of its properties.
Existence of zero. The zero vector is in Im f, since f(0) = 0, so 0 is the image of the vector 0.
Addition. Let u_1 and u_2 be elements of the image of f, so there exist v_1 and v_2 in V such that f(v_1) = u_1 and f(v_2) = u_2. Now we can consider the element v_1 + v_2 of V. We have f(v_1 + v_2) = f(v_1) + f(v_2) = u_1 + u_2, and thus u_1 + u_2 belongs to Im f.
Multiplication by a scalar. Let u be a vector from Im f. Then there exists a vector v from V such that f (v) = u. So, let’s consider an element kv for any constant k. We have:
f (kv) = kf (v) = ku,
thus ku belongs to Im f.
As for the kernel, we proved the following theorem:
Theorem 1.3. The image of a linear function f : V → U is a vector subspace of U.
Example 1.4. Consider the projection function f(x, y, z) = (x, y, 0). Its image consists of the vectors of the form (x, y, 0) for all x, y ∈ R. Geometrically speaking, this is the xy-plane in 3-dimensional space. This is a vector subspace.
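In the same spirit (again only an illustrative sketch, not part of the notes; SymPy assumed), the image of this projection is the column space of its standard matrix:

import sympy as sp

# Standard matrix of the projection f(x, y, z) = (x, y, 0)
A = sp.Matrix([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 0]])

# Basis of the image: the vectors (1, 0, 0) and (0, 1, 0), spanning the xy-plane
print(A.columnspace())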
In order to continue studying the image and kernel, we would like to know more about linear functions.
2 Matrix of a linear function
When we studied linear functions for the first time, we considered the following example: if A is an m × n matrix, then we can define a linear function F_A : R^n → R^m by the formula F_A(x) = Ax for any vector x ∈ R^n. In this section we will see that this is essentially the general form of a linear function. Consider any linear function f : V → W. Let the vectors e_1, e_2, ..., e_n form a basis of the space V, and suppose we know the values f(e_1), f(e_2), ..., f(e_n). Then we can compute f at any vector of V using only these given values. To show this, note that since the e_i's form a basis, any vector v of V can be represented as a linear combination of them:
v = a_1e_1 + a_2e_2 + · · · + a_ne_n.
Now let’s show how to compute the value f (v):
f(v) = f(a_1e_1 + a_2e_2 + · · · + a_ne_n) = f(a_1e_1) + f(a_2e_2) + · · · + f(a_ne_n) = a_1f(e_1) + a_2f(e_2) + · · · + a_nf(e_n).
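A small numerical sketch of this idea (not from the notes; the helper f_from_basis_values is hypothetical, and NumPy is assumed): for the projection f(x, y, z) = (x, y, 0) with the standard basis, f(v) is recovered from the stored values f(e_1), f(e_2), f(e_3) alone.

import numpy as np

# Values of the projection f(x, y, z) = (x, y, 0) on the standard basis e1, e2, e3
f_e = [np.array([1, 0, 0]),   # f(e1)
       np.array([0, 1, 0]),   # f(e2)
       np.array([0, 0, 0])]   # f(e3)

# Hypothetical helper: rebuild f(v) from the coordinates of v and the stored values f(ei)
def f_from_basis_values(coords):
    return sum(a * fe for a, fe in zip(coords, f_e))

print(f_from_basis_values([2, 3, 5]))   # [2 3 0], i.e. f(2, 3, 5) = (2, 3, 0)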
Example 2.3. Now let’s consider the function of taking the derivative on the space P_2: D(at^2 + bt + c) = 2at + b. Let’s take the standard basis {t^2, t, 1} of the space of polynomials P_2 and compute the values of the function on the basis vectors:

D(t^2) = 2t = 0 · t^2 + 2 · t + 0 · 1,
D(t) = 1 = 0 · t^2 + 0 · t + 1 · 1,
D(1) = 0 = 0 · t^2 + 0 · t + 0 · 1.

So, the matrix, whose columns are the coordinate vectors of D(t^2), D(t) and D(1), is

A_D =
[ 0 0 0 ]
[ 2 0 0 ]
[ 0 1 0 ]

For example, let’s take the derivative of 3t^2 + 5t + 7. We write this polynomial as the column vector (3, 5, 7)^T and multiply A_D by it:

A_D (3, 5, 7)^T = (0 · 3 + 0 · 5 + 0 · 7, 2 · 3 + 0 · 5 + 0 · 7, 0 · 3 + 1 · 5 + 0 · 7)^T = (0, 6, 5)^T.

So, the derivative of this polynomial is 6t + 5.
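The computation can be double-checked with a short sketch (not in the original notes; NumPy assumed), representing a polynomial at^2 + bt + c by its coordinate vector (a, b, c) in the basis {t^2, t, 1}:

import numpy as np

# Matrix of the derivative D on P_2 in the basis {t^2, t, 1}
A_D = np.array([[0, 0, 0],
                [2, 0, 0],
                [0, 1, 0]])

p = np.array([3, 5, 7])   # coordinates of 3t^2 + 5t + 7
print(A_D @ p)            # [0 6 5], the coordinates of 6t + 5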
Let’s prove that if the matrix A_f is constructed by this method (its j-th column is the coordinate vector of f(e_j)), then
f(x) = A_f x.
Proof. Take any vector x = (x_1, x_2, ..., x_n) = x_1e_1 + x_2e_2 + · · · + x_ne_n. Since
f(e_j) = (a_1j, a_2j, ..., a_mj)
is the j-th column of the matrix A_f, we get
f(x) = x_1f(e_1) + x_2f(e_2) + · · · + x_nf(e_n)
= x_1(a_11, a_21, ..., a_m1) + · · · + x_n(a_1n, a_2n, ..., a_mn)
= (a_11x_1 + a_12x_2 + · · · + a_1nx_n, ..., a_m1x_1 + a_m2x_2 + · · · + a_mnx_n)
= (∑_j a_1jx_j, ∑_j a_2jx_j, ..., ∑_j a_mjx_j).
Comparing this with the formal definition of matrix multiplication, we get that
f(x) = A_f x.
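This construction is easy to test numerically (a sketch under the assumption that f is given by an explicit formula, here the projection from Example 1.2; not part of the notes): stack the vectors f(e_j) as columns to get A_f and compare A_f x with f(x).

import numpy as np

# A concrete linear function: the projection from Example 1.2
def f(x):
    return np.array([x[0], x[1], 0])

n = 3
E = np.eye(n, dtype=int)
# The j-th column of A_f is the vector f(e_j)
A_f = np.column_stack([f(E[:, j]) for j in range(n)])

x = np.array([4, -1, 2])
print(A_f @ x)   # [ 4 -1  0]
print(f(x))      # [ 4 -1  0], the same vector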