Mathematics Department Stanford University
Summary of Math 51H Linear Algebra Material

The following is a brief summary of the main results covered in the linear algebra part of 51H; you should of course know all these results and their proofs and be able to apply them in the manner required e.g. as in the homework problems.

Vectors in R^n, meaning of linear combinations, l.i., l.d., span, subspace. Dot product, Cauchy-Schwarz inequality and its proof. Angle between non-zero vectors.

Gaussian elimination, Underdetermined Systems Lemma. The Linear Dependence Lemma and its consequences: the Basis Theorem and the definition of dimension, and the facts that (a) k l.i. vectors in a k-dimensional subspace automatically span, (b) k vectors which span a k-dimensional subspace are automatically l.i.

For an m × n matrix A: definition of N(A), C(A). Rank/nullity theorem; basic matrix algebra (products and sums) and the fact that rank(AB) ≤ min{rank(A), rank(B)}. The transpose A^T of A and the formula (AB)^T = B^T A^T. Reduction of A to reduced row echelon form rref A and its consequences, including the alternate proof of the rank/nullity theorem and the fact that C(A) is the span of the columns of A with column numbers equal to the column numbers of the pivot columns of rref A.
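The rank/nullity theorem and the rank inequality above are easy to sanity-check numerically. The following is a minimal sketch, assuming NumPy is available; the matrices are my own illustrative examples, not from the course:

```python
import numpy as np

# Sketch: rank(A) + dim N(A) = n (rank/nullity) and
# rank(AB) <= min(rank A, rank B), on small example matrices.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # = 2 * first row, so rank(A) = 2
              [1.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])    # rank 2

rank_A = int(np.linalg.matrix_rank(A))
nullity_A = A.shape[1] - rank_A    # dim N(A) by the rank/nullity theorem
rank_AB = int(np.linalg.matrix_rank(A @ B))
```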

For each subspace V of R^n, the definition of the orthogonal complement V^⊥ of V, and the facts that V ∩ V^⊥ = {0}, V + V^⊥ = R^n and that this is a direct sum (i.e. for each z ∈ R^n there are unique x ∈ V, y ∈ V^⊥ with z = x + y), dim V + dim V^⊥ = n, (V^⊥)^⊥ = V. Existence of a unique orthogonal projection P : R^n → R^n with the properties (a) P(x) ∈ V and (b) x − P(x) ∈ V^⊥ ∀ x ∈ R^n. Proof that such a P automatically has the additional properties: (i) it is linear, (ii) P(x) = x ∀ x ∈ V, (iii) it is symmetric (i.e. x · P(y) = y · P(x) ∀ x, y ∈ R^n), (iv) P(V^⊥) = {0} and (v) ‖x − P(x)‖ gives the distance of a point x from V (i.e. P(x) is the nearest point of the subspace V to the vector x). (Terminology: P is called "the orthogonal projection onto V.")
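The defining properties (a), (b) and the distance property (v) can be illustrated in the simplest case, projection onto a line in R^3. A hedged sketch, with an example vector of my own choosing:

```python
import numpy as np

# Sketch: projection onto the line V = span{w} in R^3 via
# P(x) = ((x . w) / (w . w)) w, checking properties (b), (ii), and (v).
w = np.array([1.0, 2.0, 2.0])

def proj(x):
    return (x @ w) / (w @ w) * w      # P(x) lies in V = span{w}

x = np.array([3.0, 0.0, 1.0])
Px = proj(x)
orth = (x - Px) @ w                   # property (b): should be ~ 0
# property (v): no point t*w of V is closer to x than P(x) is
dists = [np.linalg.norm(x - t * w) for t in np.linspace(-2.0, 2.0, 401)]
```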

The fact that C(A^T) = (N(A))^⊥ for any m × n matrix A, and the consequence that row rank(A) = rank(A) = rank(A^T).

Affine spaces x_0 + V in R^n (where V is a subspace) and the fact that the nearest point of x_0 + V to 0 is given by x_0 − P(x_0), where P is the orthogonal projection onto V.

The main theorem of inhomogeneous systems: if A is m × n and y ∈ R^m is given, then (i) if Ax = y has at least one solution x_0, then the whole solution set is precisely the affine space x_0 + N(A); (ii) Ax = y has a solution ⇐⇒ y ∈ C(A) ⇐⇒ y ∈ (N(A^T))^⊥.

Permutations and the definition of even/odd permutations; the inverse permutation of a given permutation and the fact that the parity of a permutation and its inverse are the same.
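Part (i), that the solution set is x_0 + N(A), can be checked on a small underdetermined system. A sketch; the matrix and right-hand side are my own example:

```python
import numpy as np

# Sketch: for a consistent system A x = y, the full solution set is
# x0 + N(A). Here A is 2x3 with rank 2, so N(A) is a line.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
y = np.array([2.0, 3.0])

x0, *_ = np.linalg.lstsq(A, y, rcond=None)  # one particular solution
n = np.array([1.0, -1.0, 1.0])              # A @ n = 0, so n spans N(A)
x1 = x0 + 2.5 * n                           # another element of x0 + N(A)
```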

Definition of the determinant of an n × n matrix (in terms of the function D and the formula for det A as a sum of n! terms, each ± a product of n entries, one taken from each distinct row and column of A); the properties that (a) det(A) is linear in each row, (b) det(Ã) = −det(A) if Ã is obtained by interchanging two distinct rows of A, and (c) det(A) = det(A^T). Computation of det A by elementary row operations, and the fact that det A ≠ 0 ⇐⇒ rref A = I ⇐⇒ rref A has no zero rows. The formulae for the expansion of det A along the i'th row and j'th column of A and the corresponding formulae

∑_{j=1}^n (−1)^{i+j} a_{kj} det(A_{ij}) = det A · δ_{ik} for each i, k = 1, …, n,
∑_{i=1}^n (−1)^{i+j} a_{ik} det(A_{ij}) = det A · δ_{jk} for each j, k = 1, …, n,

where δ_{ij} is the i, j'th entry of the identity matrix (i.e. 1 if i = j and 0 if i ≠ j). The formula A^{−1} = (det A)^{−1} ((−1)^{i+j} det(A_{ji})) if det(A) ≠ 0. Computation of A^{−1} via elementary row operations.
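The cofactor expansion and the adjugate formula for A^{−1} translate almost verbatim into code. A naive sketch following the definitions above (exponential-time, so only for illustrating the formulas on small matrices; the test matrix is my own example):

```python
import numpy as np

# Sketch: determinant by cofactor expansion along the first row, and
# A^{-1} = (det A)^{-1} * adj(A), where adj(A)_{ij} = (-1)^{i+j} det(A_{ji}).
def minor(A, i, j):
    # delete row i and column j
    return np.delete(np.delete(A, i, axis=0), j, axis=1)

def det(A):
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    # expansion along the first row (i = 0)
    return sum((-1) ** j * A[0, j] * det(minor(A, 0, j)) for j in range(n))

def inverse(A):
    n = A.shape[0]
    cof = np.array([[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)]
                    for i in range(n)])
    return cof.T / det(A)     # transpose of the cofactor matrix = adjugate

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
```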

For an n × n matrix A: A^{−1} exists ⇐⇒ det A ≠ 0 ⇐⇒ N(A) = {0} ⇐⇒ rank(A) = n ⇐⇒ rref(A) = I ⇐⇒ the map x ↦ Ax is 1:1 ⇐⇒ the map x ↦ Ax is onto. The formula (AB)^{−1} = B^{−1} A^{−1} if A^{−1}, B^{−1} exist and if B is n × n.

Gram-Schmidt orthogonalization and the existence of an orthonormal basis for each non-trivial subspace V of R^n; the explicit formula for the orthogonal projection P onto V: P(x) = ∑_{j=1}^k (x · w_j) w_j, where w_1, …, w_k is any orthonormal basis for the non-trivial k-dimensional subspace V, and the formula matrix of P = W W^T, where W is the n × k matrix with j'th column = w_j.
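Gram-Schmidt and the formula matrix of P = W W^T can be sketched directly from the definitions; the input vectors below are my own example:

```python
import numpy as np

# Sketch: classical Gram-Schmidt straight from the definition, then the
# projection matrix W W^T built from the resulting orthonormal basis.
def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        # subtract the components of v along the basis found so far
        w = v - sum((v @ b) * b for b in basis)
        norm = np.linalg.norm(w)
        if norm > 1e-12:                 # drop linearly dependent inputs
            basis.append(w / norm)
    return np.array(basis)

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
W = gram_schmidt(vs).T     # n x k matrix whose columns are w_1, ..., w_k
P = W @ W.T                # matrix of the orthogonal projection onto V
```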

Definition of eigenvalues/eigenvectors of an n × n matrix.

The Spectral Theorem (if A is a symmetric matrix, then there is an orthonormal basis of R^n consisting of eigenvectors of A, and if Q is the matrix with columns given by such an orthonormal basis, then Q is orthogonal (i.e. Q^T Q = I) and Q^T A Q = the diagonal matrix with the eigenvalues of A along the leading diagonal).
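The Spectral Theorem is exactly what numpy.linalg.eigh computes for a symmetric matrix; a quick sketch on a 2 × 2 example of my own:

```python
import numpy as np

# Sketch: for symmetric A, numpy.linalg.eigh returns the orthonormal
# eigenvector basis of the Spectral Theorem as the columns of Q.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])
eigvals, Q = np.linalg.eigh(A)   # eigenvalues in ascending order
D = Q.T @ A @ Q                  # should equal diag(eigvals)
```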


Mathematics Department Stanford University
Summary of Math 51H Multivariable Calculus/Real Analysis Material

The following is a brief summary of the main results covered in the multivariable calculus and real analysis part of 51H; you should of course know all these results and their proofs and be able to apply them in the manner required e.g. in the homework assignments.

Open and closed sets in R^n. Theorem that a set C is closed if and only if its complement R^n \ C is open. (Equivalently, since R^n \ (R^n \ C) = C, a set U is open if and only if its complement R^n \ U is closed.) Bolzano-Weierstrass theorem for bounded sequences in R^n. Theorem that a continuous real-valued function on a compact set attains both its maximum and minimum values.

Definition of differentiability, and the fact that differentiability of f implies all partials and all directional derivatives exist, with D_v f(a) = ∑_{j=1}^n v_j D_j f(a) if f is differentiable at a. The chain rule for the composite of differentiable functions. Theorem that differentiability at a point a implies continuity at a.
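The formula D_v f(a) = ∑_j v_j D_j f(a) can be verified by finite differences on a concrete function; f(x, y) = x^2 y below is my own example:

```python
import numpy as np

# Sketch: checking D_v f(a) = sum_j v_j D_j f(a) by central differences
# for f(x, y) = x^2 * y at a = (1, 2) in the direction v = (3, -1).
def f(p):
    return p[0] ** 2 * p[1]

a = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
h = 1e-6

Dv = (f(a + h * v) - f(a - h * v)) / (2.0 * h)   # directional derivative
grad = np.array([2.0 * a[0] * a[1], a[0] ** 2])  # (D_1 f, D_2 f) = (2xy, x^2)
```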

Theorem that f of class C^1 on U implies f differentiable at each point of U, and f of class C^2 on U implies D_i D_j f = D_j D_i f at each point of U. The gradient ∇f of a real-valued C^1 function f and the fact that the gradient gives the direction of fastest increase of f at points where ∇f ≠ 0.

Quadratic forms Q(ξ) on R^n and the definition of positive definite and negative definite. The fact that Q positive definite implies that there is an m > 0 such that Q(ξ) ≥ m‖ξ‖^2 for all ξ ∈ R^n.
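For Q(ξ) = ξ · (Aξ) with A symmetric, the best constant m in Q(ξ) ≥ m‖ξ‖^2 is the smallest eigenvalue of A. A numerical sketch; the matrix is my own example:

```python
import numpy as np

# Sketch: Q(xi) = xi . (A xi) with A symmetric positive definite; the
# largest valid m in Q(xi) >= m * ||xi||^2 is the smallest eigenvalue of A.
A = np.array([[3.0, 1.0],
              [1.0, 3.0]])      # eigenvalues 2 and 4, so positive definite
m = float(np.linalg.eigvalsh(A).min())

rng = np.random.default_rng(1)
xis = rng.standard_normal((100, 2))
ratios = [float(xi @ A @ xi) / float(xi @ xi) for xi in xis]
```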

For a C^2 function on the ball B_ρ(x_0), the second derivative identity f(x) = f(x_0) + (x − x_0) · ∇f(x_0) + (1/2) Q_{x_0}(x − x_0) + E(x), with lim_{x→x_0} ‖x − x_0‖^{−2} E(x) = 0, where Q_{x_0}(ξ) = ∑_{i,j=1}^n D_i D_j f(x_0) ξ_i ξ_j is the Hessian quadratic form of f at x_0. The consequent facts that if x_0 is a critical point (i.e. ∇f(x_0) = 0) then (i) if the Hessian quadratic form Q_{x_0}(ξ) is positive definite, then f has a local minimum at x_0, and (ii) if Q_{x_0}(ξ) is negative definite, then f has a local maximum at x_0.
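The second-derivative test reduces to checking definiteness of the Hessian. A sketch on f(x, y) = x^2 + xy + y^2 (my own example, whose Hessian happens to be constant):

```python
import numpy as np

# Sketch: second-derivative test at the critical point x0 = (0, 0) of
# f(x, y) = x^2 + x*y + y^2; its Hessian matrix (D_i D_j f) is constant.
H = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # Hessian of f at x0
eigs = np.linalg.eigvalsh(H)
is_local_min = bool(eigs.min() > 0.0)    # positive definite => local minimum
```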

Length of a C^0 curve γ : [a, b] → R^n, and the fact that C^1 curves have finite length given by the formula ℓ(γ) = ∫_a^b ‖γ′(t)‖ dt. Arc-length parameter s = S(t) for C^1 curves γ(t) with γ′(t) ≠ 0, velocity and unit tangent vectors for such curves, and the curvature vector of such a curve assuming γ is C^2.
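The arc-length formula can be checked numerically on the unit circle, whose length must come out to 2π. A sketch using a simple trapezoidal sum:

```python
import numpy as np

# Sketch: the arc-length formula l(gamma) = integral over [a, b] of
# ||gamma'(t)|| dt, applied to gamma(t) = (cos t, sin t) on [0, 2*pi].
t = np.linspace(0.0, 2.0 * np.pi, 100001)
gamma_prime = np.stack([-np.sin(t), np.cos(t)])    # gamma'(t)
speed = np.linalg.norm(gamma_prime, axis=0)        # ||gamma'(t)|| (here = 1)
# trapezoidal sum approximating the integral of the speed
length = float(np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t)))
```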

Definition of a k-dimensional C^1 submanifold M of R^n and the tangent space T_a M. Proof that the tangent space can be expressed as the span of the partial derivatives of the relevant graph map. Tangential gradient ∇_M f of a C^1 function f defined in a neighborhood of M and the fact that ∇_M f = 0 at a local max/min of f|_M. Lagrange multiplier theorem as in 9.14 of Ch. 2 of the text.

Contraction mapping principle and its proof. The inverse function theorem and its proof. Implicit function theorem. (Note: the implicit function theorem will not be tested in the final examination.)
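The contraction mapping principle can be watched in action: g(x) = cos x is a contraction on [0, 1] (since |g′| ≤ sin 1 < 1 there and g maps [0, 1] into itself), so iterating it converges to the unique fixed point. My own illustrative example:

```python
import math

# Sketch: fixed-point iteration for the contraction g(x) = cos(x) on [0, 1].
# The contraction mapping principle guarantees a unique fixed point and
# convergence of the iterates from any starting point in the interval.
x = 0.5
for _ in range(200):
    x = math.cos(x)
fixed_point = x          # the unique solution of cos(x) = x
```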

Real Analysis Lecture 1: Basic properties of the real numbers, including the supremum axiom and the density of the rationals and the irrationals.

Real Analysis Lecture 2: Basic properties of real sequences, including the definition of convergence and the Bolzano-Weierstrass theorem for bounded sequences in R, and the fact that bounded monotone sequences are convergent.

Real Analysis Lecture 3: Definition and basic properties of continuous functions, including the proof that a continuous function on a closed interval attains its maximum and minimum values.

Real Analysis Lecture 4: Basic properties of series of real numbers, including the definition of convergence and the theorem that convergence implies the n'th term → 0. For series with non-negative terms, the theorem that the series converges if and only if the sequence of partial sums is bounded. Comparison and integral tests for convergence/divergence. Absolute convergence implies convergence. (Omit rearrangement of series, since we did not cover that topic this year.)
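The bounded-partial-sums criterion for non-negative series can be illustrated on ∑ 1/n^2, whose partial sums are bounded by 2 by comparison with the telescoping series ∑ 1/(n(n−1)). A sketch:

```python
# Sketch: for a series with non-negative terms, convergence is equivalent
# to boundedness of the partial sums. Here sum 1/n^2: partial sums are
# increasing and bounded by 2, since 1/n^2 <= 1/(n*(n-1)) for n >= 2 and
# the latter telescopes to 1.
partial_sums = []
s = 0.0
for n in range(1, 10001):
    s += 1.0 / (n * n)
    partial_sums.append(s)
bounded_by_two = max(partial_sums) < 2.0
```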

Real Analysis Lecture 5: Power series. Definition of the radius of convergence and the theorem that for a given power series ∑_{n=0}^∞ a_n x^n there are just 3 possibilities: (i) the series diverges at each point x ≠ 0, (ii) the series converges absolutely at each point of R, or (iii) there is ρ > 0 such that the series converges absolutely for each point x with |x| < ρ, and diverges for each point x with |x| > ρ.

Real Analysis Lecture 6: Taylor series: the change of base point theorem, the termwise differentiability theorem, Taylor's theorem, and the sufficient condition sup_{n=0,1,2,…} sup_{|x|<r} (|f^{(n)}(x)| / n!) r^n ≤ M to ensure convergence of the Taylor series to the function f on the interval |x| < r.