
Finite-Dimensional Spaces

Algebra, Geometry, and Analysis

Volume I

By

Walter Noll

Department of Mathematics, Carnegie Mellon University, Pittsburgh,

PA 15213 USA

This book was published originally by Martinus Nijhoff Publishers in 1987. This is a corrected reprint, posted in 2006 on my website math.cmu.edu/~wn0g/noll.


Introduction

A. Audience. This treatise (consisting of the present Vol.I and of Vol.II, to be published) is primarily intended to be a textbook for a core course in mathematics at the advanced undergraduate or the beginning graduate level. The treatise should also be useful as a textbook for selected students in honors programs at the sophomore and junior level. Finally, it should be of use to theoretically inclined scientists and engineers who wish to gain a better understanding of those parts of mathematics that are most likely to help them gain insight into the conceptual foundations of the scientific discipline of their interest.

B. Prerequisites. Before studying this treatise, a student should be familiar with the material summarized in Chapters 0 and 1 of Vol.I. Three one-semester courses in serious mathematics should be sufficient to gain such familiarity. The first should be an introduction to contemporary mathematics and should cover sets, families, mappings, relations, number systems, and basic algebraic structures. The second should be an introduction to rigorous real analysis, dealing with real numbers and real sequences, and with limits, continuity, differentiation, and integration of real functions of one real variable. The third should be an introduction to linear algebra, with emphasis on concepts rather than on computational procedures.

C. Organization. There are ten chapters in Vol.I, numbered from 0 to 9. A chapter contains from 4 to 12 sections. The first digit of a section number indicates the chapter to which the section belongs; for example, Sect.611 is the 11th section of Chap.6. The appropriate section title and number are printed on the top of each odd-numbered page. A descriptive name is used for each important theorem. Less important results are called Propositions and are enumerated in each section; for example, Prop.5 of Sect.83 refers to the 5th proposition of the 3rd section of Chap.8. Similar enumerations are used, if needed, for formal Definitions, Remarks, and Pitfalls. The term Pitfall is used for comments designed to prevent possible misconceptions. At the end of most sections there are notes in small print. Their purpose is to relate the notations and terms used in the text to other notations and terms in the mathematical literature, and to comment on symbols, terms, and procedures that appear in print here for the first time (to the best of my knowledge).


(renamed Carnegie-Mellon University in 1968). At first, the course was entitled “Tensor Analysis”. I soon realized that what usually passes for “Tensor Analysis” is really an undigested mishmash of linear and multilinear algebra, differential calculus in finite-dimensional spaces, manipulation of curvilinear coordinates, and differential geometry on manifolds, all treated with mindless formalisms and without real insight. As a result, I omitted the abstract differential geometry, which is too difficult to be treated properly at this level, and renamed the course “Multidimensional Algebra, Geometry, and Analysis”, and later “Finite-Dimensional Spaces”. The notes were rewritten several times. They were widely distributed and they served as the basis for appendices to the books Viscometric Flows of Non-Newtonian Fluids by B. D. Coleman, H. Markovitz, and W. Noll (Springer-Verlag 1966) and A First Course in Rational Continuum Mechanics by C. Truesdell (Academic Press 1977). Since 1973 my notes have also been used by J. J. Schäffer and me in an undergraduate honors program entitled “Mathematical Studies”. One of the purposes of the program has been to present mathematics as an integrated whole and to avoid its traditional division into separate and seemingly unrelated courses. In this connection, Schäffer and I gradually developed a system of notation and terminology that we believe is useful for all branches of mathematics. My involvement in the Mathematical Studies Program has had a profound influence on my thinking; it has led to radical revisions of my notes and finally to this treatise. Chapter 9 of Vol. I is an adaptation of notes entitled “On the Structure of Linear Transformations”, which were written for a course in “Modern Algebra”. (They were issued as Report 70–12 of the Department of Mathematics, Carnegie-Mellon University, in March 1970.)

F. Apologia. I wish to list certain features which make this treatise different from much, and in some cases most, of the existing literature. Much of the substance of this treatise is covered in textbooks with titles such as “Linear Algebra”, “Analytic Geometry”, “Finite-Dimensional Vector Spaces”, “Modern Algebra”, “Vector and Tensor Analysis”, “Advanced Calculus”, “Functions of Several Variables”, or “Elementary Differential Geometry”. However, I believe this treatise to be the first that deals with finite-dimensional spaces in a unified way and that emphasizes the interplay between algebra, geometry, and analysis.


The approach of this treatise is conceptual, geometric, and uncompromisingly “coordinate-free”. In some of the literature, “tensors” are still defined in terms of coordinates and their transformations. To me, this is like looking at shadows dancing on the wall rather than at reality itself. Coordinates have no place in the definition of concepts. Of course, when it comes to dealing with specific problems, coordinates are sometimes useful. For this reason, I have included a chapter in which I show how to handle coordinates efficiently. The space R^n, with n ∈ N, is very rarely mentioned in this treatise. It is misused nearly every time it appears in the literature, because it is only a special model for the structure that is appropriate in most situations, and as a special model R^n contains extraneous features that impede geometric insight. Thus any textbook on finite-dimensional calculus with a title like “Functions of Several Variables” must be defective. I consider it a travesty to call R^n “the Euclidean n-space”, as so many do. To quote N. D. Goodman: “Obviously, this is not what Euclid meant” (in “Mathematics as an objective science”, Am. Math. Monthly, Vol. 86, p. 549, 1979). In this treatise, I have tried to present every mathematical topic in a setting that fits the topic naturally and hence leads to a maximum of insight. For example, the structure of a flat (a.k.a. affine) space is the most natural setting for the differential and integral calculus. Most treatments use R^n, a linear space, a normed linear space, or a Euclidean space as the setting. Each of them has extraneous structure which conceals the true nature of the calculus. On the other hand, the structure of a differentiable manifold is too impoverished to be a setting for many aspects of calculus. In this treatise, a very careful distinction is made between a set and a family (see Sect.02). Almost all the literature is very sloppy on this point. I have found it liberating to resist the compulsion to think of finite sets always in enumerated form and thus to confuse them with lists. Also, I have found it very useful to be able to use a single symbol for a family and to use the same symbol, with an index, for the terms of the family. For example, I use Mi,j for the (i, j)-term of the matrix M. It seems nonsensical to me to change from an upper case letter M to the lower case letter m when changing from the matrix to its terms. A notation such as (mi,j) for a matrix, often seen in textbooks, is poison to me because it contains the dangling dummies i and j (dangling dummies are like cigarettes: both are poison, but I used


Chapter 0

Basic Mathematics

In this chapter, we introduce the notation and terminology used throughout the book. Also, we give a brief explanation of the basic concepts of contemporary mathematics to the extent needed in this book. Finally, we give a summary of those topics of elementary algebra and analysis that are a prerequisite for the remainder of the book.

00 Notations

The equality sign = is used to express the assertion that on either side of = are symbolic names (possibly very complicated) for one and the same object. Thus, a = b means that a and b are names for the same object; a ≠ b means that a and b are names for distinct objects. The symbol := is used to mean that the left side is defined by the right side, that the left side is an abbreviation of the right side, or that the right side is to be substituted for the left side. The symbol =: has an analogous meaning. The logical equivalence sign ⇔ is used to indicate logical equivalence of statements. The symbol :⇔ is used to define a phrase or property; it may be read as “means by definition that” or “is equivalent by definition to”. Given a set S and a property p that any given member of S may or may not have, we use the shorthand

? x ∈ S, x has the property p (00.1)

to describe the problem “Find all x ∈ S, if any, such that x has the property p”. An element of S having the property p is called a solution of the problem. Often, the property p involves an equality; then the problem is called an equation.
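For example, the problem “? x ∈ {1, . . . , 12}, x divides 12” has the six solutions 1, 2, 3, 4, 6 and 12. As a computational aside (not part of the formal development; the names are ad hoc), such a finite problem amounts to filtering the set S by the property p, as in the following Python sketch:

# Illustrative only: the problem "? x in S, x has the property p"
# for a finite S, solved by filtering S by p.
S = set(range(1, 13))                    # S := {1, ..., 12}
def p(x): return 12 % x == 0             # property p: "x divides 12"
solutions = {x for x in S if p(x)}       # the solutions of the problem
print(solutions)                         # {1, 2, 3, 4, 6, 12}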



the set of non-zero natural numbers; they do not consider zero to be a natural number. Sometimes P is used for what we call P×, the set of strictly positive real numbers. The notation R+ for what we denote by P is often used. Older textbooks often use boldface letters or script letters instead of the special letters now common.

(4) In most of the literature, S ≅ T is used to indicate that S is isomorphic to T. I prefer to use ≅ only when the isomorphism is natural and used for identification.

(5) The notations n[ and n] were invented by J. J. Schäffer in about 1973. I cannot understand any more how I ever got along without them. Some textbooks use the boldface n for what we call n]. I do not consider the change from lightface to boldface a legitimate notation for a functorial process, quite apart from the fact that it is impossible to produce on a blackboard.

(6) The use of the superscript × to indicate the removal of 0 was introduced by J. J. Schäffer and me in about 1973. It has turned out to be a very effective notation. It is consistent with the commonly found notation F× for the multiplicative group of a field F (see Sect.06).

01 Sets, Partitions

To specify a set S, one must have a criterion for deciding whether any given object x belongs to S. If it does, we write x ∈ S and say that x is a member or an element of S, that S contains x, or that x is in S or contained in S. If x does not belong to S we write x ∉ S. We use abbreviations such as “x, y ∈ S” for “x ∈ S and y ∈ S”. Let S and T be sets. If every member of S is also a member of T we write S ⊂ T or T ⊃ S and say that S is a subset of T, that S is included in T, or that T includes S. We have

S = T ⇐⇒ (S ⊂ T and T ⊂ S). (01.1)

If S ⊂ T but S ≠ T we write S ⊊ T and say that S is a proper subset of T or that S is properly included in T. There is exactly one set having no members at all; it is denoted by ∅ and called the empty set. The empty set is a subset of every set. A set having exactly one member is called a singleton; it is denoted by {a} if a denotes its only member. If the set S is known to be a singleton, we write a :∈ S to indicate that we wish to denote the only member of S by a. A set having exactly two members is called a doubleton; it is denoted by {a, b} if a and b denote its two (distinct) members. A set C whose members are themselves sets is often called a collection of sets. The collection of all subsets of a given set S is denoted by Sub S.


Hence, if T is a set, then

T ⊂ S ⇐⇒ T ∈ Sub S. (01.2)

Many sets in mathematics are specified by naming an encompassing set A and a property p that any given member of A may or may not have. The set S of all members of A that have this property p is denoted by

S := {x ∈ A | x has the property p}, (01.3)

which is read as “S is the set of all x in A such that x has the property p”. Occasionally, one has no encompassing set and p is a property that any object may or may not have. In this case, (01.3) is replaced by

S := {x | x has the property p}. (01.4)

Remark: Definitions of sets of the type (01.4) must be treated with caution. Indiscriminate use of (01.4) can lead to difficulties known as “paradoxes”.

Sometimes, a set with only few members is specified by an explicit listing of its members and by enclosing the list in braces {}. Thus, {a, b, c, d} denotes the set whose members are a, b, c and d. Given any sets S and T, one can form their union S ∪ T, consisting of all objects that belong either to S or to T (or to both), and one can form their intersection S ∩ T, consisting of all objects that belong to both S and T. We say that S and T are disjoint if they have no elements in common; i.e. if S ∩ T = ∅. The following rules (01.5)–(01.10) are valid for any sets S, T, U.

S ∪ S = S ∩ S = S ∪ ∅ = S, S ∩ ∅ = ∅, (01.5)
T ⊂ S ⇐⇒ T ∪ S = S ⇐⇒ T ∩ S = T. (01.6)

The following rules remain valid if ∩ and ∪ are interchanged.

S ∪ T = T ∪ S, (01.7)
(S ∪ T) ∪ U = S ∪ (T ∪ U), (01.8)
(S ∪ T) ∩ U = (S ∩ U) ∪ (T ∩ U), (01.9)
T ⊂ S =⇒ T ∪ U ⊂ S ∪ U. (01.10)

Given any sets S and T , the set of all members of S that do not belong to T is called the set-difference of S and T and is denoted by S \ T , so that

S \ T := {x ∈ S | x /∈ T }. (01.11)
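By way of illustration, the rules above are easy to check mechanically on small finite sets; the following Python sketch (an editorial aside, with frozensets standing in for the members of Sub S) verifies instances of (01.2), (01.6), (01.9), and (01.11):

from itertools import combinations

S, T, U = {1, 2, 3, 4}, {2, 3}, {3, 5}

# (01.6): T ⊂ S  <=>  T ∪ S = S  <=>  T ∩ S = T
assert (T <= S) == ((T | S) == S) == ((T & S) == T)

# (01.9): (S ∪ T) ∩ U = (S ∩ U) ∪ (T ∩ U)
assert (S | T) & U == (S & U) | (T & U)

# (01.11): the set-difference S \ T
assert S - T == {x for x in S if x not in T}

# Sub S, the collection of all subsets of S
SubS = {frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)}
assert frozenset(T) in SubS              # (01.2): T ⊂ S  <=>  T ∈ Sub S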


a partition of S, called the singleton-partition of S. If S ≠ ∅, then {S} is also a partition of S, called the trivial partition. If T is a non-empty proper subset of S, then {T, S \ T} is a partition of S. Let S be a set and let ∼ be a relation on S, i.e. a fragment that becomes a statement x ∼ y, true or not, when x, y ∈ S. We say that ∼ is an equivalence relation if for all x, y, z ∈ S we have

(i) x ∼ x (reflexivity),

(ii) x ∼ y =⇒ y ∼ x (symmetry), and

(iii) (x ∼ y and y ∼ z) =⇒ x ∼ z (transitivity).

If ∼ is an equivalence relation on S, then

P := {P ∈ Sub S | P = {x ∈ S | x ∼ y} for some y ∈ S}

is a partition of S; its pieces are called the equivalence classes of the relation ∼, and we have x ∼ y if and only if x and y belong to the same piece of P. Conversely, if P is a partition of S, then

x ∼ y : ⇐⇒ (for some P ∈ P, x, y ∈ P )

defines an equivalence relation on S whose equivalence classes are the pieces of P.
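The passage from an equivalence relation to its partition, and back, can be carried out mechanically on a finite set; the illustrative sketch below, which is not part of the text, uses congruence modulo 3 on {0, . . . , 9} as the relation ∼:

S = set(range(10))
def equiv(x, y): return (x - y) % 3 == 0     # x ~ y :<=> 3 divides x - y

# The partition P whose pieces are the equivalence classes {x in S | x ~ y}.
P = {frozenset(x for x in S if equiv(x, y)) for y in S}
print(P)                                     # pieces {0,3,6,9}, {1,4,7}, {2,5,8}

# Conversely, the relation recovered from P agrees with ~ on S.
def same_piece(x, y): return any(x in piece and y in piece for piece in P)
assert all(equiv(x, y) == same_piece(x, y) for x in S for y in S)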

Notes 01

(1) Some authors use ⊆ when we use ⊂, and ⊂ when we use ⊊. There is some confusion in the literature concerning the use of “contain” and “include”. We carefully observe the distinction.

(2) The term “null set” is often used for what we call the “empty set”. Also the phrase “S is void” instead of “S is empty” can often be found.

(3) The notation Sub S is used here for the first time. The notations P(S) and 2^S are common. The collection Sub S is often called the “power set” of S.

(4) The notation S–T instead of S \ T is used by some people. It clashes with the member-wise difference notation (06.16). If T is a subset of S, the notations C_S T or T^c are used by some people for the complement S \ T of T in S.


02 Families, Lists, Matrices

A family a is specified by a procedure by which one associates with each member i of a given set I an object ai. The given set I is called the index set of the family and the object ai is called the term of index i or simply the i-term of the family a. If a and b are families with the same index set I and if ai = bi for all i ∈ I, then a and b are considered to be the same, i.e. a = b. The notation (ai | i ∈ I) is often used to denote a family, especially if no name is available a priori. The set of all terms of a family a is called the range of a and is denoted by Rng a or {ai | i ∈ I}, so that

Rng a = Rng (ai | i ∈ I) = {ai | i ∈ I}. (02.1)

Many sets in mathematics are specified by naming a family a and by letting the set be the range (02.1) of a. We say that a family a = (ai | i ∈ I) is injective if, for all i, j ∈ I,

ai = aj =⇒ i = j.

Roughly, a family is injective if there is no repetition of terms. The concept of a family may be viewed as a generalization of the concept of a set. With each set S one can associate a family by letting the index set be S itself and by letting the term corresponding to any given x ∈ S be x itself. Thus, the family corresponding to S is (x | x ∈ S). We identify this family with S and refer to it as “S self-indexed.” In this manner, every assertion involving arbitrary families includes, as a special case, an assertion involving sets. The empty set ∅ is identified with the empty family, which is the only family whose index set is empty. If all the terms of a given family a belong to a given set S; i.e. if Rng a ⊂ S, we say that a is a family in S. The set of all families in S with a given index set I is denoted by S^I and called the I-set-power of S. If T ⊂ S, then T^I ⊂ S^I. We have S^∅ = {∅}. Let n ∈ N be given. A family whose index set is n] or n[ is called a list of length n. If n is small, a list a indexed on n] can often be specified by a bookkeeping scheme of the form

(a1, a2, ..., an) := (ai | i ∈ n]), (02.2)

where a1, a2, ..., an should be replaced by specific names of objects, to be filled in an actual use. For each i ∈ n], we call ai the i’th term of a. The only list of length 0 is the empty family ∅. A list of length 1 is called a singlet,
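Informally, a family with a finite index set can be pictured as a table of index–term pairs; the following sketch (illustrative only, with ad hoc names) records such a family as a dictionary, computes its range as in (02.1), and tests injectivity:

I = {"red", "green", "blue"}                 # the index set I
a = {"red": 3, "green": 1, "blue": 3}        # the family a, with term a_i = a[i]

Rng_a = set(a.values())                      # Rng a = {a_i | i in I}, cf. (02.1)
print(Rng_a)                                 # {1, 3}

# a is injective iff distinct indices always carry distinct terms.
print(len(Rng_a) == len(I))                  # False, since a_red = a_blue = 3

# A list of length n is a family indexed on n] = {1, ..., n}.
n = 4
b = {i: i * i for i in range(1, n + 1)}      # the list (1, 4, 9, 16)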


Notes 02

(1) A family is sometimes called an “indexed set”. The trouble with this term is that an “indexed set” is not a set. The notation {ai}i∈I is used by some people for what we denote by (ai | i ∈ I). Some people even use just {ai}, which is poison because of the dangling dummy i and because it also denotes a singleton with member ai.

(2) The terms “member” or “entry of a family” are often used instead of “term of a family”. The use of “member” can lead to confusion with “member of a set”.

(3) One often finds the barbarism “n-tuple” for what we call “list of length n”. The term “finite sequence” is sometimes used for what we call “list”. A list of numbers is very often called a “vector”. I prefer to use the term “vector” only when it has its original geometric meaning (see Def.1 of Sect.32).

(4) The terms “Cartesian product” and “direct product” are often used for what we call “set product”.

(5) In most of the literature, the use of the term “matrix” is confined to the case when the index set is of the form n] × m] and when the terms are numbers of some kind. The generalization used here turns out to be very useful.

03 Mappings

In order to specify a mapping f, one first has to prescribe two sets, say D and C, and then some kind of prescription, called the assignment rule of f, by which one can assign to every element x ∈ D an element f(x) ∈ C. We call f(x) the value of f at x. It is very important to distinguish very carefully between the mapping f and its values f(x), x ∈ D. The set D of objects to which the prescription embodied in f can be applied is called the domain of the mapping f and is denoted by Dom f := D. The set C to which the values of f must belong is called the codomain of f and is denoted by Cod f := C. In order to put C and D into full view, we often write f : D → C, or D → C with f written above the arrow,

instead of just f and we say that f maps D to C or that f is a mapping from D to C. The phrase “f is defined on D” expresses the assertion that D is the domain of f. If f and g are mappings with Dom f = Dom g, Cod f = Cod g, and f (x) = g(x) for all x ∈ Dom f , then f and g are considered to coincide, i.e. f = g. Terms such as “function”, “map”, “functional”, “transformation”, and “operator” are often used to mean the same thing as “mapping”. The term “function” is preferred when the codomain is the set of real or complex

xii CHAPTER 0. BASIC MATHEMATICS

numbers or a subset thereof. A still greater variety of names is used for mappings having special properties. Also, in some contexts, the value of f at x is not written f(x) but f x, xf, f_x, or x^f. In order to specify a mapping f explicitly without introducing unnecessary symbols, it is often useful to employ the notation (x ↦ f(x)) : Dom f → Cod f instead of just f. (Note the use of ↦ instead of →.) For example, (x ↦ 1/x) : R× → R denotes the function f with Dom f := R×, Cod f := R and evaluation rule

f(x) := 1/x for all x ∈ R×.

The graph of a mapping f : D → C is the subset Gr (f ) of the set-product D × C defined by

Gr (f ) := {(x, y) ∈ D × C | y = f (x)}. (03.1)

The mappings f and g coincide if and only if they have the same domain, codomain, and graph.
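A mapping between finite sets is determined by its domain, codomain, and graph; the small sketch below (again illustrative only, with ad hoc names) records these three pieces of data and computes the graph as in (03.1):

D = {0, 1, 2, 3}                              # domain D
C = {0, 1}                                    # codomain C
def f(x): return x % 2                        # assignment rule of f

Gr_f = {(x, f(x)) for x in D}                 # Gr(f), cf. (03.1)
print(Gr_f)                                   # {(0, 0), (1, 1), (2, 0), (3, 1)}

# Another rule with the same domain, codomain, and graph gives the same mapping.
def g(x): return (3 * x) % 2
assert {(x, g(x)) for x in D} == Gr_f         # hence f = g as mappings D -> C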

Remark: Very often, a mapping is specified by two sets D and C and a statement scheme F(x, y), which may become valid or not, depending on what elements of D and C are substituted for x and y, respectively. If, for every x ∈ D, there is exactly one y ∈ C such that F(x, y) is valid, then F defines a mapping f : D → C, namely by the prescription that assigns to x ∈ D the unique y ∈ C that makes F(x, y) valid. Then

Gr(f ) = {(x, y) | F (x, y) is valid }.

In some cases, given x ∈ D, one can define or obtain f(x) by a formula, algorithm, or other procedure. Finding an efficient procedure to this end is often a difficult task. With every mapping f we can associate the family (f(x) | x ∈ Dom f) of its values. Roughly, the family is obtained from the mapping by forgetting the codomain. Conversely, with every family a := (ai | i ∈ I) and every set C that includes Rng a we can associate the mapping (i ↦ ai) : I → C. Roughly, the mapping is obtained from the family by specifying the codomain C. For example, if U is a subset of a given set S, we can obtain from the characteristic family of U in S, defined by (02.5), the characteristic function of U in S, also denoted by ch_{U⊂S} or simply ch_U, by specifying a codomain, usually R.

xiv CHAPTER 0. BASIC MATHEMATICS

If f , g and h are mappings with Dom g = Cod f and Cod g = Dom h, then

(h ◦ g) ◦ f = h ◦ (g ◦ f ). (03.5)

Because of this rule, we may omit parentheses and write h ◦ g ◦ f. Let f be a mapping from a set D to itself. For every n ∈ N, the n’th iterate f^◦n : D → D of f is defined recursively by f^◦0 = 1D, and f^◦(k+1) := f ◦ f^◦k for all k ∈ N. We have f^◦1 = f, f^◦2 = f ◦ f, f^◦3 = f ◦ f ◦ f, etc. An element z ∈ D is called a fixed point of f if f(z) = z. We say that given mappings f and g, both from D to itself, commute if f ◦ g = g ◦ f. Let a mapping f : D → C be given. A mapping g : C → D is called a right-inverse of f if f ◦ g = 1C, a left-inverse of f if g ◦ f = 1D. If f has a right-inverse, it must be surjective; if f has a left-inverse, it must be injective. If g is a right-inverse of f and h a left-inverse of f, then f is invertible and g = h = f←. We always have

f ◦ 1_{Dom f} = 1_{Cod f} ◦ f = f. (03.6)
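Iterates and fixed points of a self-mapping of a finite set can be computed by applying f repeatedly; the sketch below (illustrative, ad hoc names) builds the n’th iterate so that n = 0 gives the identity:

def iterate(f, n):
    # the n'th iterate f^(on): apply f n times; n = 0 gives the identity
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

def f(x): return (2 * x) % 5                  # a mapping from D = {0,...,4} to itself
D = range(5)
print([iterate(f, 4)(x) for x in D])          # [0, 1, 2, 3, 4]: here f^(o4) = 1_D
print([x for x in D if f(x) == x])            # fixed points of f: [0]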

Again, let a mapping f : D → C be given. We define the image mapping f> : Sub D → Sub C of f by the evaluation rule

f>(U ) := {f (x) | x ∈ U } for all U ∈ Sub D (03.7)

and the pre-image mapping f < : Sub C → Sub D of f by the rule

f <(V ) := {x ∈ D | f (x) ∈ V } for all V ∈ Sub C. (03.8)

The mappings f> and f < satisfy the following rules for all subsets U and U′ of D and all subsets V and V′ of C:

U ⊂ U′ =⇒ f>(U) ⊂ f>(U′), (03.9)
V ⊂ V′ =⇒ f <(V) ⊂ f <(V′), (03.10)
U ⊂ f <(f>(U)), f>(f <(V)) = V ∩ Rng f, (03.11)
f>(U ∪ U′) = f>(U) ∪ f>(U′), f>(U ∩ U′) ⊂ f>(U) ∩ f>(U′), (03.12)
f <(V ∪ V′) = f <(V) ∪ f <(V′), f <(V ∩ V′) = f <(V) ∩ f <(V′), (03.13)
f <(C \ V) = D \ f <(V). (03.14)

The inclusions ⊂ in (03.11) and (03.12) become equalities if f is injective. If f is injective, so is f>, and f < is a left-inverse of f>. If f is surjective, so is f>, and f < is a right-inverse of f>. If f is invertible, then (f>)← = f <. If f and g are mappings such that Dom g = Cod f, then

(g ◦ f)> = g> ◦ f>, (g ◦ f)< = f < ◦ g<, (03.15)


Rng (g ◦ f) = g>(Rng f). (03.16)

If ch_{V⊂C} is the characteristic function of a subset V of a given set C and if f : D → C is given, we have

ch_{V⊂C} ◦ f = ch_{f<(V)⊂D}. (03.17)
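Several of the rules above can be checked on a finite example; the sketch below (illustrative only, with ad hoc names) implements f> and f < as in (03.7) and (03.8) and tests (03.11) and (03.14):

D = {0, 1, 2, 3, 4, 5}
C = {0, 1, 2}
def f(x): return x % 3

def image(U): return {f(x) for x in U}                # f_>(U), cf. (03.7)
def preimage(V): return {x for x in D if f(x) in V}   # f_<(V), cf. (03.8)

U, V = {0, 1}, {2}
Rng_f = image(D)

assert U <= preimage(image(U))                        # (03.11), first part
assert image(preimage(V)) == V & Rng_f                # (03.11), second part
assert preimage(C - V) == D - preimage(V)             # (03.14)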

Let a mapping f and sets A and B be given. We define the restriction f |A of f to A by Dom f |A := A ∩ Dom f, Cod f |A := Cod f , and the evaluation rule

f |A(x) := f (x) for all x ∈ A ∩ Dom f. (03.18)

We define the mapping

f|_A^B : A ∩ f <(B ∩ Cod f) → B (03.19)

by the rule

f|_A^B(x) := f(x) for all x ∈ A ∩ f <(B ∩ Cod f). (03.20)

We say that f|_A^B is an adjustment of f. We have f|_A = f|_A^{Cod f}. We use the abbreviations

f|^B := f|_{Dom f}^B, f|^{Rng} := f|_{Dom f}^{Rng f}. (03.21)

We have Dom (f|^B) = Dom f if and only if Rng f ⊂ B. We note that

f|_A^B = (f|_A)|^B = (f|^B)|_A. (03.22)
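Restriction and adjustment change only the domain and the codomain, never the values; the sketch below (illustrative, with ad hoc names) computes the domains of f|_A and f|_A^B as in (03.18)–(03.20) for a small example:

D, C = {0, 1, 2, 3, 4}, {0, 1, 2, 3, 4, 5, 6, 7, 8}
def f(x): return 2 * x                        # a mapping f : D -> C
def preimage(V): return {x for x in D if f(x) in V}

A, B = {1, 2, 3}, {0, 2, 4}

restr_dom = A & D                             # Dom f|_A = A ∩ Dom f, cf. (03.18)
adj_dom = A & preimage(B & C)                 # Dom f|_A^B, cf. (03.19)

print(restr_dom)                              # {1, 2, 3}
print(adj_dom)                                # {1, 2}: members of A whose value lies in B
print({(x, f(x)) for x in adj_dom})           # graph of the adjustment f|_A^B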

Let f be a mapping from a set D to itself. We say that a subset A of D is f-invariant if f>(A) ⊂ A. If this is the case, we define the A-adjustment f|A : A → A of f by

f|A := f|_A^A. (03.23)

Let a set A and a collection C of subsets of A be given such that A ∈ C. For every S ∈ Sub A, the subcollection {U ∈ C | S ⊂ U} of C then contains A and hence is not empty. We define Sp : Sub A → Sub A, the span-mapping corresponding to C, by the rule

Sp(S) := ⋂{U ∈ C | S ⊂ U} for all S ∈ Sub A. (03.24)

The following rules hold for all S, T ∈ Sub A:

S ⊂ Sp (S), (03.25)