


Material Type: Notes; Professor: Lambers; Class: Numerical Analysis I; Subject: Mathematics; University: University of Southern Mississippi; Term: Fall 2009;
Jim Lambers MAT 460/ Fall Semester 2009- Lecture 6 Notes
These notes correspond to Section 1.3 in the text.
Mathematical problems arising from scientific applications present a wide variety of difficulties that prevent us from solving them exactly. This has led to an equally wide variety of techniques for computing approximations to quantities occurring in such problems in order to obtain approximate solutions. In this lecture, we will describe the types of approximations that can be made, and learn some basic techniques for analyzing the accuracy of these approximations.
Suppose that we are attempting to solve a particular instance of a problem arising from a mathematical model of a scientific application. We say that such a problem is well-posed if it meets the following criteria:

1. The problem has exactly one solution for the given data.
2. The solution depends continuously on the problem data; that is, a small perturbation of the data produces a correspondingly small perturbation of the solution.
By the first condition, the process of solving a well-posed problem can be seen to be equivalent to the evaluation of some function f at some known value x, where x represents the problem data. Since, in many cases, knowledge of the function f is limited, the task of computing f(x) can be viewed, at least conceptually, as the execution of some (possibly infinite) sequence of steps that solves the underlying problem for the data x. The goal in numerical analysis is to develop a finite sequence of steps, i.e., an algorithm, for computing an approximation to the value f(x). There are two general types of error that occur in the process of computing this approximation to f(x):

1. Truncation error (also called discretization error), which arises from replacing the exact, possibly infinite, process of evaluating f by a finite sequence of steps; that is, from approximating f by some computable function f̂.
2. Roundoff error, which arises because the arithmetic in each step is carried out in finite-precision floating-point arithmetic rather than exactly.
Intuitively, it is not difficult to conclude that any scientific computation can include several approximations, each of which introduces error in the computed solution. Therefore, it is necessary to understand the effects of these approximations on accuracy. The study of these effects is known as error analysis. Error analysis will be a recurring theme in this course. In this lecture, we will introduce some basic concepts that will play a role in error analyses of specific algorithms in later lectures.
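The interplay between these error sources can be seen in a small numerical experiment. The following sketch (an illustration added here, not part of the original notes) approximates the derivative of sin with a forward difference: shrinking the step h reduces the truncation error, but past a point roundoff error in the subtraction dominates, so the total error grows again.

```python
import math

# Illustrative example: approximate f'(x) for f(x) = sin(x) using the
# forward difference (f(x + h) - f(x)) / h. The truncation error shrinks
# with h, but the roundoff error in the subtraction grows as h shrinks,
# so the total error is smallest at an intermediate h.
def forward_difference(f, x, h):
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # the true derivative of sin at x = 1
for h in (1e-1, 1e-5, 1e-12):
    approx = forward_difference(math.sin, x, h)
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")
```

Running this shows the error at h = 1e-5 is far smaller than at either h = 1e-1 (truncation dominates) or h = 1e-12 (roundoff dominates).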
Forward Error and Backward Error
Suppose that we compute an approximation ŷ = f̂(x) of the value y = f(x) for a given function f and given problem data x. Before we can analyze the accuracy of this approximation, we must define precisely what we mean by the error in such an approximation.
Definition (Forward Error) Let x be a real number and let f : ℝ → ℝ be a function. If ŷ is a real number that is an approximation to y = f(x), then the forward error in ŷ is the difference ∆y = ŷ − y. If y ≠ 0, then the relative forward error in ŷ is defined by

    ∆y / y = (ŷ − y) / y.
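As a concrete illustration (with hypothetical numbers, not taken from the notes), take f(x) = √x with data x = 2, and suppose an algorithm returns the truncated value ŷ = 1.414:

```python
import math

# Hypothetical example: f(x) = sqrt(x), x = 2, and a computed
# approximation y_hat = 1.414 (the true value truncated to 3 decimals).
x = 2.0
y = math.sqrt(x)      # exact value y = f(x)
y_hat = 1.414         # computed approximation

forward_error = y_hat - y                     # Delta y = y_hat - y
relative_forward_error = forward_error / y    # Delta y / y, defined since y != 0

print(f"forward error          = {forward_error:.3e}")
print(f"relative forward error = {relative_forward_error:.3e}")
```

Both errors are negative here because ŷ underestimates y = 1.41421356….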
Clearly, our primary goal in error analysis is to obtain an estimate of the forward error ∆y. Un- fortunately, it can be difficult to obtain this estimate directly. An alternative approach is to instead view the computed value ˆy as the exact solution of a problem with modified data; i.e., ˆy = f (ˆx) where ˆx is a perturbation of x.
Definition (Backward Error) Let x be a real number and let f : ℝ → ℝ be a function. Suppose that the real number ŷ is an approximation to y = f(x), and that ŷ is in the range of f; that is, ŷ = f(x̂) for some real number x̂. Then, the quantity ∆x = x̂ − x is the backward error in ŷ. If x ≠ 0, then the relative backward error in ŷ is defined by

    ∆x / x = (x̂ − x) / x.
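Continuing the hypothetical square-root example from above (illustrative numbers, not from the notes): ŷ = 1.414 is the exact square root of the perturbed input x̂ = ŷ², so the backward error is readily computed:

```python
import math

# Hypothetical example continued: y_hat = 1.414 approximates sqrt(2).
# It is the *exact* square root of x_hat = y_hat**2, so the backward
# error is Delta x = x_hat - x.
x = 2.0
y_hat = 1.414

x_hat = y_hat ** 2                    # the data for which y_hat is exact
backward_error = x_hat - x            # Delta x = x_hat - x
relative_backward_error = backward_error / x   # Delta x / x, defined since x != 0

print(f"backward error          = {backward_error:.3e}")
print(f"relative backward error = {relative_backward_error:.3e}")
```

A small backward error says the computed value solves a nearby problem exactly, even though we never needed to estimate the forward error directly.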
The sensitivity of the problem of computing f(x) is measured by the relative condition number κ_rel, the ratio of the size of the relative forward error to the size of the relative backward error. It is useful to estimate this ratio. To that end, we assume, for simplicity, that f : ℝ → ℝ is differentiable and obtain
    κ_rel = |x ∆y| / |y ∆x|
          = |x (f(x + ∆x) − f(x))| / |f(x) ∆x|
          ≈ |x f′(x) ∆x| / |f(x) ∆x|
          = |x f′(x) / f(x)|.
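The estimate κ_rel ≈ |x f′(x)/f(x)| can be checked numerically. The sketch below (an added illustration using the hypothetical √2 example, not from the notes) compares the analytic estimate for f(x) = √x, which works out to exactly 1/2, against the observed ratio of relative forward to relative backward error:

```python
import math

# Check the estimate kappa_rel ≈ |x f'(x) / f(x)| for f(x) = sqrt(x),
# where f'(x) = 1 / (2 sqrt(x)). For this f the estimate is exactly 1/2.
def f(t):
    return math.sqrt(t)

def fprime(t):
    return 1.0 / (2.0 * math.sqrt(t))

x = 2.0
kappa_rel = abs(x * fprime(x) / f(x))   # analytic estimate

# Compare with the observed ratio for the perturbed-input example
# y_hat = 1.414, which is the exact square root of x_hat = y_hat**2.
y_hat = 1.414
rel_fwd = (y_hat - f(x)) / f(x)         # relative forward error
rel_bwd = (y_hat**2 - x) / x            # relative backward error

print(f"estimated kappa_rel = {kappa_rel:.6f}")
print(f"observed  ratio     = {abs(rel_fwd / rel_bwd):.6f}")
```

The observed ratio agrees with the estimate to several digits, as the first-order approximation predicts for a small perturbation.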
Therefore, if we can estimate the backward error ∆x, and if we can bound f and f′ near x, we can then bound the condition number and obtain an estimate of the relative forward error. Of course, the condition number is undefined if the exact value f(x) is zero. In this case, we can instead use the absolute condition number. Using the same approach as before, the absolute condition number can be estimated using the derivative of f. Specifically, we have κ_abs ≈ |f′(x)|.
Determining the condition, or sensitivity, of a problem is an important task in the error analysis of an algorithm designed to solve the problem, but it does not provide sufficient information to determine whether an algorithm will yield an accurate approximate solution. Recall that the condition number of a function f depends on, among other things, the absolute forward error f(x̂) − f(x). However, an algorithm for evaluating f(x) actually evaluates a function f̂ that approximates f, producing an approximation ŷ = f̂(x) to the exact solution y = f(x). In our definition of backward error, we have assumed that f̂(x) = f(x̂) for some x̂ that is close to x; i.e., our approximate solution to the original problem is the exact solution to a "nearby" problem. This assumption has allowed us to define the condition number of f independently of any approximation f̂. This independence is necessary, because the sensitivity of a problem depends solely on the problem itself and not any algorithm that may be used to approximately solve it.

Is it always reasonable to assume that any approximate solution is the exact solution to a nearby problem? Unfortunately, it is not. It is possible that an algorithm that yields an accurate approximation for given data may be unreasonably sensitive to perturbations in that data. This leads to the concept of a stable algorithm: an algorithm applied to a given problem with given data x is said to be stable if it computes an approximate solution that is the exact solution to the same problem with data x̂, where x̂ is a small perturbation of x.

It can be shown that if a problem is well-conditioned, and if we have a stable algorithm for solving it, then the computed solution can be considered accurate, in the sense that the relative error in the computed solution is small. On the other hand, a stable algorithm applied to an ill-conditioned problem cannot be expected to produce an accurate solution.
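The difference between a stable and an unstable algorithm for the same well-conditioned problem can be made concrete. A classic illustration (added here, not from the notes) is evaluating f(x) = 1 − cos(x) for small x: the direct formula subtracts two nearly equal numbers and loses most of its significant digits, while the mathematically equivalent form 2 sin²(x/2) avoids the cancellation entirely.

```python
import math

# Two algorithms for the same function f(x) = 1 - cos(x).
def unstable(x):
    # Direct formula: for tiny x, cos(x) is so close to 1 that the
    # subtraction cancels nearly all significant digits.
    return 1.0 - math.cos(x)

def stable(x):
    # Equivalent identity 1 - cos(x) = 2 sin(x/2)**2: no subtraction
    # of nearly equal quantities, so no catastrophic cancellation.
    return 2.0 * math.sin(x / 2.0) ** 2

x = 1.0e-8
# Leading term of the Taylor series: 1 - cos(x) ≈ x**2 / 2 for tiny x.
reference = x**2 / 2.0
print(f"unstable:  {unstable(x):.6e}")
print(f"stable:    {stable(x):.6e}")
print(f"reference: {reference:.6e}")
```

The direct formula returns a result with essentially no correct digits, while the rewritten formula matches the Taylor-series reference: both compute the same f, but only the second is stable at small x.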