These lecture notes cover the topics of numerical differentiation and integration in the computational physics course taught by Dr. Donald Luttermoser at East Tennessee State University: the limitations of numerical differentiation, the Taylor series expansion, and the forward and central difference methods for approximating derivatives, as well as numerical integration, including the trapezoid rule, Simpson's rule, and Gaussian quadrature.
Dr. Donald G. Luttermoser East Tennessee State University
Version 4.
Abstract
These class notes are designed for use of the instructor and students of the course PHYS-4007/5007: Computational Physics taught by Dr. Donald Luttermoser at East Tennessee State University.
VII–2 PHYS-4007/5007: Computational Physics
f′_c(x) ≃ f′(x) + (h/2) f′′(x) + · · · , (VII-4)
where the subscript c denotes the computed expression.
i) The approximation of Eq. (VII-3) has an error proportional to h, as shown in Eq. (VII-4).
ii) We can make the approximation error smaller and smaller by making h smaller and smaller.
iii) We can't make h too small, however, since all precision will be lost once the subtractive cancellation error becomes larger than the truncation error itself.
c) Consider, for example,
f(x) = a + bx^2 . (VII-5)
The exact derivative is
f′(x) = 2bx , (VII-6)
and the computed derivative is
f′_c(x) ≈ [f(x + h) − f(x)]/h = 2bx + bh . (VII-7)
This is only a good approximation if h ≪ 2x.
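This error behavior is easy to check numerically. The following is a short Python illustration (not part of the original Fortran listings in these notes); the values of a, b, and x are arbitrary choices:

```python
# Forward-difference derivative of f(x) = a + b*x^2 (Eq. VII-5).
# The values a = 1.0, b = 2.0, x = 3.0 are arbitrary illustrative choices.
def f(x, a=1.0, b=2.0):
    return a + b * x**2

def forward_diff(f, x, h):
    # Eq. (VII-3): f'_c(x) ~ [f(x + h) - f(x)] / h
    return (f(x + h) - f(x)) / h

b, x = 2.0, 3.0
exact = 2 * b * x                       # f'(x) = 2bx
for h in (1.0, 0.1, 0.01):
    approx = forward_diff(f, x, h)
    print(h, approx, approx - exact)    # error is b*h, per Eq. (VII-7)
```

The printed error shrinks linearly with h, exactly as Eq. (VII-7) predicts.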
f′_c(x) ≈ [f(x + h/2) − f(x − h/2)]/h ≡ D_c f(x, h) . (VII-8)
[Figure VII–1: Forward difference (solid line) and central difference (dashed line) methods for numerical differentiation; the function is sampled at x − h/2, x, x + h/2, and x + h.]
i) The symbol D_c represents the central difference.
ii) Carrying out the Taylor series for f(x ± h/2) gives
f′_c(x) ≃ f′(x) + (h^2/24) f^(3)(x) + · · · . (VII-9)
iii) The important difference from Eq. (VII-3) is that when f (x − h/2) is subtracted from f (x + h/2), all terms containing an odd power of h in the Taylor series cancel.
iv) Therefore, the central-difference algorithm becomes accurate to one order higher than h =⇒ h^2.
v) If (f^(3) h^2)/24 ≪ (f^(2) h)/2, then the error with the central-difference method should be smaller than that of the forward difference.
b) For the polynomial given in Eq. (VII-5), the central difference gives the exact derivative, f′_c(x) = 2bx, since all derivatives beyond the second vanish and the odd powers of h cancel.
e) As an example, for the e^x and cos x functions, f′ ≈ f^(2) ≈ f^(3); in single precision with ε_m ≈ 10^−7, one would get h_fd ≈ 0.0005 and h_cd ≈ 0.01. This makes the central-difference algorithm better for calculating derivatives, since a larger h means a smaller error =⇒ here the error in the central-difference method is 20 times smaller than the error in the forward-difference method.
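The same qualitative behavior is easy to reproduce in double precision. The Python sketch below (the step size h = 10^−6 is an arbitrary illustrative choice, not the single-precision optimum quoted above) compares the two methods at equal h:

```python
import math

def forward_diff(f, x, h):
    return (f(x + h) - f(x)) / h          # Eq. (VII-3)

def central_diff(f, x, h):
    return (f(x + h/2) - f(x - h/2)) / h  # Eq. (VII-8)

# Test on f(x) = e^x at x = 1, where the exact derivative is e.
x, h = 1.0, 1.0e-6
exact = math.exp(x)
err_fd = abs(forward_diff(math.exp, x, h) - exact)
err_cd = abs(central_diff(math.exp, x, h) - exact)
print(err_fd, err_cd)   # at equal h, the central difference is far more accurate
```

The forward-difference error is of order h/2 times f′′, while the central-difference error is of order h^2/24 times f^(3), so the gap grows as h shrinks (until round-off takes over).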
b) We can remedy this with a little algebra in the central-difference method. Writing the second derivative as a central difference of first derivatives of Eq. (VII-8), and then rewriting those first derivatives with forward and backward difference equations, we get
f^(2)(x) ≃ [f′(x + h/2) − f′(x − h/2)]/h
≃ (1/h^2) {[f(x + h) − f(x)] − [f(x) − f(x − h)]} . (VII-19)
Note, however, that one must keep the second-derivative formula in this form to minimize cancellation error.

B. Integration.
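A short Python sketch of Eq. (VII-19), using e^x as a test function (the step size h is an arbitrary illustrative choice):

```python
import math

def second_diff(f, x, h):
    # Eq. (VII-19): keep the two first differences separate and subtract
    # them last, to minimize subtractive cancellation error.
    return ((f(x + h) - f(x)) - (f(x) - f(x - h))) / h**2

# Test on e^x, whose second derivative is also e^x.
x, h = 1.0, 1.0e-3
approx = second_diff(math.exp, x, h)
print(abs(approx - math.exp(x)))   # small: truncation error ~ (h^2/12) e^x
```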
∫_a^b f(x) dx = lim_{h→0} h Σ_{i=1}^{(b−a)/h} f(x_i) . (VII-20)
b) If we ignore the limit, the integral just becomes a summation of boxes or quadrilaterals lying below the curve:
∫_a^b f(x) dx ≈ Σ_{i=1}^{N} f(x_i) w_i , (VII-21)
where N is the number of points in the interval [a, b] and f is evaluated at those interval points i, f_i ≡ f(x_i). The w_i's are integration weights, which are proportional to h, the distance between points i and i + 1.
c) The different integration schemes presented here will all make use of Eq. (VII-21).
d) In the simplest integration schemes, the integrand is approximated by a few terms in the Taylor series expansion of f, and the terms are integrated =⇒ typically, adding more terms in the series yields higher precision.
e) This is the basis of the Newton-Cotes methods =⇒ the total interval is divided into equal subintervals with the integrand evaluated at equally spaced points xi. Two such Newton-Cotes methods include the: i) Trapezoid rule (a first-order method).
ii) Simpson's (d'oh!) rule (a second-order method).
f) More accurate integration methods involve the use of non-equally spaced intervals =⇒ these are the methods of Gaussian quadrature. i) Gaussian quadrature methods are typically more precise than Newton-Cotes methods as long as there are no singularities (i.e., denominators going to infinity, non-continuous functions) in the integrand or its derivative.
[a, b] (the square brackets mean we include the endpoints, i.e., there are N − 2 points in between the endpoints). Note that the trapezoid rule itself works for any N ≥ 2; it is Simpson's rule, below, that requires an odd number of points N. i) Straight lines connect the points, and this piecewise linear function serves as our fitting curve.
ii) The integral of this fitting function is easy to compute since it is the sum of the areas of trapezoids. The area of a single trapezoid is
T_i = (1/2)(x_{i+1} − x_i)(f_{i+1} + f_i) . (VII-26)
iii) The true integral is estimated as the sum of the areas of the trapezoids, so
I ≈ I_T = T_1 + T_2 + · · · + T_{N−1} = Σ_{i=1}^{N−1} T_i . (VII-27)
Notice that the last term in the sum is N − 1 since there is one fewer panel than grid points.
iv) The general formula simplifies if we take equally spaced grid points as given by Eq. (VII-25).
v) Then the area of an individual trapezoid (i.e., the area of the one trapezoid within that interval) becomes
T_i = (1/2) h (f_{i+1} + f_i) , (VII-28)
or, in our original integral notation,
∫_{x_i}^{x_i+h} f(x) dx ≃ h (f_i + f_{i+1})/2 = (1/2) h f_i + (1/2) h f_{i+1} . (VII-29)
As such, our weights in Eq. (VII-21) for the individual points are w_i = (1/2) h.
d) We now need to add up all of the trapezoids in the subin- tervals across the entire interval [a, b] =⇒ the trapezoid rule:
I_T(h) = (1/2) h f_1 + h f_2 + h f_3 + · · · + h f_{N−1} + (1/2) h f_N
= (h/2)(f_1 + f_N) + h Σ_{i=2}^{N−1} f_i (VII-30)
or
∫_a^b f(x) dx ≈ (h/2) f_1 + h f_2 + h f_3 + · · · + h f_{N−1} + (h/2) f_N . (VII-31)
e) Note that since each internal point gets counted twice, it has a weight of h, whereas the endpoints get counted just once and thus have weights of h/2:
w_i = { h/2, h, . . . , h, h/2 } . (VII-32)
f) Our approximation error, also called the truncation error or the algorithmic error here, for the trapezoid rule can be written either as
ε_apprx = I − I_T(h) = −(1/12)(b − a) h^2 f′′(ζ) (VII-33)
for some value x = ζ that lies in [a, b], or as
ε_apprx = −(1/12) h^2 [f′(b) − f′(a)] + O(h^4) . (VII-34)
g) This error is proportional to h^2 and Eq. (VII-34) warns you that the trapezoidal rule will have difficulties if the derivative diverges at the end points.
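The h^2 scaling is easy to verify. Below is a minimal Python sketch of the trapezoid rule of Eq. (VII-31) (the test function and grid sizes are illustrative choices); halving h should reduce the error by about a factor of 4:

```python
import math

def trapezoid(f, a, b, N):
    # Composite trapezoid rule, Eq. (VII-31): endpoint weights h/2,
    # interior weights h, with N points and h = (b - a)/(N - 1).
    h = (b - a) / (N - 1)
    total = 0.5 * (f(a) + f(b))
    for i in range(1, N - 1):
        total += f(a + i * h)
    return h * total

# Illustrative test: integral of e^x over [0, 1] is e - 1.
exact = math.e - 1.0
err_coarse = abs(trapezoid(math.exp, 0.0, 1.0, 11) - exact)   # h = 0.1
err_fine = abs(trapezoid(math.exp, 0.0, 1.0, 21) - exact)     # h = 0.05
print(err_coarse / err_fine)   # ~4, confirming the h^2 error scaling
```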
i) The method builds a triangular table of the form
R(1, 1)
R(2, 1)  R(2, 2)
R(3, 1)  R(3, 2)  R(3, 3)
  ...      ...      ...
ii) The formula for the first column is just the recur- sive trapezoidal rule:
R(n, 1) = IT (hn). (VII-38)
iii) The successive columns to the right are computed using the Richardson extrapolation formula:
R(n + 1, m + 1) = R(n + 1, m) + [R(n + 1, m) − R(n, m)]/(4^m − 1) . (VII-39)
iv) The most accurate estimate for the integral is R(n, m), the value at the bottom right corner of the table.
v) To understand why the Romberg scheme works, consider the approximation (i.e., truncation) error, ε_apprx(h_n) = I − I_T(h_n). Then, using Eq. (VII-34):
ε_apprx(h_n) = −(1/12) h_n^2 [f′(b) − f′(a)] + O(h_n^4) . (VII-40)
Since h_{n+1} = h_n/2,
ε_apprx(h_{n+1}) = −(1/48) h_n^2 [f′(b) − f′(a)] + O(h_n^4) . (VII-41)
vi) Consider the second column of the Romberg table. The approximation error for R(n + 1, 2) is
I − R(n + 1, 2) = I − { I_T(h_{n+1}) + (1/3)[I_T(h_{n+1}) − I_T(h_n)] }
= ε_apprx(h_{n+1}) + (1/3)[ε_apprx(h_{n+1}) − ε_apprx(h_n)]
= −[ 1/48 + (1/3)(1/48 − 1/12) ] h_n^2 [f′(b) − f′(a)] + O(h_n^4) = O(h_n^4) . (VII-42)
vii) Notice that the h^2 n term serendipitously cancels out, leaving us with a truncation error that is of order h^4 n. The next (third) column of the Romberg table removes this term, and so forth.
c) The following is an example of an integration subroutine using the Romberg method. If you are interested in using this routine, let me know and I will send you a copy. The routine shown below is written in Fortran 77. Note that func is an externally supplied function that contains the function to be integrated.
      subroutine rombf(a, b, N, func, param, R, NR)
!
!  Routine to compute integrals by Romberg algorithm
!
      real R(NR,NR), param(*)
      external func
      h = b - a               ! This is the coarsest panel.
      R(1,1) = h/2 * (func(a,param) + func(b,param))
      np = 1                  ! np is the number of panels.
      do i = 2, N
         h = h / 2.
         np = 2 * np
         sum = 0.
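For comparison, the same Romberg scheme — the recursive trapezoid rule down the first column (Eq. VII-38) and Richardson extrapolation across the columns (Eq. VII-39) — can be sketched in Python. This is an independent illustration, not a translation of the full Fortran routine:

```python
import math

def romberg(f, a, b, n):
    # Build the n-row Romberg table R(i, j) for the integral of f on [a, b].
    R = [[0.0] * n for _ in range(n)]
    h = b - a
    R[0][0] = h / 2 * (f(a) + f(b))      # coarsest trapezoid, one panel
    panels = 1
    for i in range(1, n):
        h /= 2
        panels *= 2
        # Recursive trapezoid rule: halve h and add only the new midpoints.
        new = sum(f(a + (2*k - 1) * h) for k in range(1, panels // 2 + 1))
        R[i][0] = R[i-1][0] / 2 + h * new
        # Richardson extrapolation across the row, Eq. (VII-39).
        for m in range(1, i + 1):
            R[i][m] = R[i][m-1] + (R[i][m-1] - R[i-1][m-1]) / (4**m - 1)
    return R[n-1][n-1]                   # bottom-right corner of the table

print(romberg(math.exp, 0.0, 1.0, 5))   # integral of e^x on [0, 1] = e - 1
```

Even with only five rows (16 panels at the finest level), the extrapolated result is accurate to many digits for a smooth integrand.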
i) Note, however, for the function itself,
f(−1) = α − β + γ ,    α = [f(1) + f(−1)]/2 − f(0) ,
f(0) = γ ,             β = [f(1) − f(−1)]/2 ,
f(1) = α + β + γ ,     γ = f(0) . (VII-46)
ii) Using the results of Eq. (VII-46) in Eq. (VII-45) yields
∫_{−1}^{1} (αx^2 + βx + γ) dx = f(−1)/3 + 4f(0)/3 + f(1)/3 . (VII-47)
iii) Because 3 values of the function are needed, we evaluate the integral over two adjacent intervals =⇒ evaluate the function at the two endpoints and the middle:
∫_{x_i−h}^{x_i+h} f(x) dx = ∫_{x_i}^{x_i+h} f(x) dx + ∫_{x_i−h}^{x_i} f(x) dx
≃ (h/3) f_{i−1} + (4h/3) f_i + (h/3) f_{i+1} . (VII-48)
e) Simpson’s rule requires the elementary integration to be over pairs of intervals =⇒ this requires the number of intervals to be even, and hence, the number of points N to be odd.
f) To integrate across the entire range [a, b], we add up contributions from each pair of subintervals, counting all but the first and last endpoints twice:
∫_a^b f(x) dx ≈ (h/3) f_1 + (4h/3) f_2 + (2h/3) f_3 + (4h/3) f_4 + · · · + (4h/3) f_{N−1} + (h/3) f_N . (VII-49)
i) From this integral, we see that the total set of weights is
w_i = { h/3, 4h/3, 2h/3, 4h/3, . . . , 2h/3, 4h/3, h/3 } . (VII-50)
ii) The sum of your weights provides a useful check on your integration:
Σ_{i=1}^{N} w_i = (N − 1) h . (VII-51)
Remember, N must be odd.
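A minimal Python sketch of the composite Simpson's rule of Eq. (VII-49), with the odd-N requirement enforced (the test function and N are illustrative choices):

```python
import math

def simpson(f, a, b, N):
    # Composite Simpson's rule, Eq. (VII-49): weights h/3, 4h/3, 2h/3, ...
    if N % 2 == 0:
        raise ValueError("Simpson's rule requires an odd number of points N")
    h = (b - a) / (N - 1)
    total = f(a) + f(b)
    for i in range(1, N - 1):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * total / 3

# Illustrative test: integral of e^x over [0, 1] is e - 1.
err = abs(simpson(math.exp, 0.0, 1.0, 11) - (math.e - 1.0))
print(err)   # ~1e-6, versus ~1e-3 for the trapezoid rule at the same N
```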
b) The approximation (i.e., truncation or algorithmic) error ε_apprx can be estimated from the next higher term in the Taylor series that is not used in the evaluation of the integral:
ε_apprx-t = O( [b − a]^3 / N^2 ) f^(2) , (VII-52)
ε_apprx-s = O( [b − a]^5 / N^4 ) f^(4) , (VII-53)
where the subscripts t and s refer to the trapezoid and Simpson's rules, respectively.
c) The relative error is just these approximation errors divided by the value of the function:
ε_t,s = ε_apprx-t,s / f .
ii) The corresponding errors are
ε_ro ≈ √N ε_m = 3 × 10^−6 for single precision, or 10^−12 for double precision. (VII-63)
g) For Simpson's rule, Eq. (VII-57) becomes
√N ε_m ≈ f^(4) (b − a)^5 / (f N^4) ,
so that, taking f^(4) (b − a)^5 / f ≈ 1, N^{9/2} ≈ 1/ε_m, or N ≈ (1/ε_m)^{2/9}.
i) Then, the optimum numbers of steps N for Simpson's rule are
N = (1/ε_m)^{2/9} = (1/10^−7)^{2/9} = 36 for single precision,
or (1/10^−15)^{2/9} = 2154 for double precision. (VII-66)
ii) The corresponding errors are
ε_ro ≈ √N ε_m = 6 × 10^−7 for single precision, or 5 × 10^−14 for double precision. (VII-67)
h) These results are illuminating because they show that i) Simpson's (why you little ...) rule is an improvement over the trapezoid rule.
ii) It is possible to obtain an error close to the machine precision with Simpson's rule (and with other higher-order integration algorithms).
iii) The best numerical approximation to an integral is obtained not by letting N → ∞, but with a relatively small N ≤ 1000.
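This saturation is easy to see numerically. A Python sketch using Simpson's rule on the integral of e^x over [0, 1] (the grid sizes are illustrative choices):

```python
import math

def simpson(f, a, b, N):
    # Composite Simpson's rule, Eq. (VII-49); N must be odd.
    h = (b - a) / (N - 1)
    total = f(a) + f(b)
    for i in range(1, N - 1):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return h * total / 3

# Error versus N for the integral of e^x over [0, 1] (exact value e - 1).
exact = math.e - 1.0
errs = {N: abs(simpson(math.exp, 0.0, 1.0, N) - exact)
        for N in (11, 101, 1001, 10001)}
for N, err in errs.items():
    print(N, err)
# The error falls roughly as 1/N^4 and then flattens near machine
# precision: beyond N of order a thousand, more points buy essentially nothing.
```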
b) This question leads us to formulate a new class of integration formulas, known collectively as Gaussian quadrature. i) In this class, we will use only the most common formula, namely Gauss-Legendre quadrature.
ii) The theory of Gaussian integration is based on the following theorem.
∫_{−1}^{1} P_N(x) x^k dx = 0 for k = 0, 1, . . . , N − 1 , (VII-69)
where P_N(x) is the Legendre polynomial of degree N.
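As a concrete preview of where this leads: the two-point Gauss-Legendre rule has nodes at ±1/√3 with unit weights, and it integrates any polynomial up to degree 2N − 1 = 3 over [−1, 1] exactly using only two function evaluations. A quick Python check (the test cubic is an arbitrary choice):

```python
import math

# Two-point Gauss-Legendre rule on [-1, 1]: nodes at +/- 1/sqrt(3),
# both weights equal to 1.
nodes = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))
weights = (1.0, 1.0)

def gauss2(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

# An arbitrary cubic; the rule is exact for every polynomial of degree <= 3.
poly = lambda x: 5*x**3 - 2*x**2 + 3*x - 1
exact = -10.0 / 3.0      # odd terms integrate to 0; -2*(2/3) - 1*(2) = -10/3
print(gauss2(poly), exact)
```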