

Material Type: Assignment; Professor: McNelis; Class: Intro Scientific Comp; Subject: Computer Science; University: Western Carolina University; Term: Spring 2005;
CS 340 – Introduction to Scientific Computing
Section 6.4: Multistep Methods
Monday, April 19, 2005
Definition 1 (m-step multistep method) An m-step multistep method for solving the initial-value problem
dy/dx = f(x, y),  a ≤ x ≤ b,  y(x_0) = y_0
uses past values of y and f(x, y) to construct a polynomial that approximates f, integrates the polynomial to approximate the solution y, and extrapolates its values. The number of past points used sets the degree of the interpolating polynomial and is therefore responsible for the truncation error. The order of the global error of the method equals the number of previous terms in the formula, which is one more than the degree of the interpolating polynomial.
Thus the difference equation for finding the approximation y_{i+1} at the mesh point x_{i+1} using the previous m values (where m is an integer greater than 1) is given by:

y_{i+1} = a_i y_i + a_{i-1} y_{i-1} + ··· + a_{i-(m-1)} y_{i-(m-1)}
          + h[b_i f(x_i, y_i) + ··· + b_{i-(m-1)} f(x_{i-(m-1)}, y_{i-(m-1)})]
with the PREVIOUSLY SPECIFIED values

y_0, y_1, y_2, ···, y_{m-1}

which are often generated by some higher-order one-step method such as Runge-Kutta-Fehlberg. This method has global error O(h^m).
Method 1 (Adams-Bashforth Second Order Method (Linear Interpolating Polynomial))
y_{i+1} = y_i + (h/2)[3 f(x_i, y_i) − f(x_{i−1}, y_{i−1})]

given y_0 and y_1 to start. This method has global truncation error O(h^2) and local truncation error O(h^3).
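As a sketch of how the two-step formula above might be implemented (Python; the starting value y_1 is generated here with a classical RK4 step, one of the one-step options mentioned below):

```python
import math

def rk4_step(f, x, y, h):
    # One classical fourth-order Runge-Kutta step, used only to
    # generate the starting value y_1 that the multistep formula needs.
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def adams_bashforth2(f, a, b, y0, n):
    # Two-step Adams-Bashforth on [a, b] with n steps of size h.
    h = (b - a) / n
    xs = [a + i*h for i in range(n + 1)]
    ys = [y0, rk4_step(f, a, y0, h)]  # y_0 given, y_1 from RK4
    for i in range(1, n):
        # y_{i+1} = y_i + (h/2)[3 f(x_i, y_i) - f(x_{i-1}, y_{i-1})]
        ys.append(ys[i] + h/2*(3*f(xs[i], ys[i]) - f(xs[i-1], ys[i-1])))
    return xs, ys

# Example: y' = y, y(0) = 1 on [0, 1]; the exact solution is e^x.
xs, ys = adams_bashforth2(lambda x, y: y, 0.0, 1.0, 1.0, 100)
print(abs(ys[-1] - math.e))  # global error of order h^2
```

Note that each step reuses slope values from earlier steps, so only one new evaluation of f is needed per step, compared with four for RK4.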
Method 2 (Adams-Bashforth Third Order Method (Quadratic Interpolating Polynomial))
y_{i+1} = y_i + (h/12)[23 f(x_i, y_i) − 16 f(x_{i−1}, y_{i−1}) + 5 f(x_{i−2}, y_{i−2})]

given y_0, y_1 and y_2 to start. This method has global truncation error O(h^3) and local truncation error O(h^4).
Method 3 (Adams-Bashforth Fourth Order Method (Cubic Interpolating Polynomial))
y_{i+1} = y_i + (h/24)[55 f(x_i, y_i) − 59 f(x_{i−1}, y_{i−1}) + 37 f(x_{i−2}, y_{i−2}) − 9 f(x_{i−3}, y_{i−3})]

given y_0, y_1, y_2 and y_3 to start. This method has global truncation error O(h^4) and local truncation error O(h^5).
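The stated O(h^4) global error can be checked numerically: halving h should shrink the error by roughly 2^4 = 16. A minimal sketch (Python, with RK4 supplying the three extra starting values, as suggested above):

```python
import math

def rk4_step(f, x, y, h):
    # Classical RK4 step; supplies the starting values y_1, y_2, y_3.
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def adams_bashforth4(f, a, b, y0, n):
    # Four-step Adams-Bashforth on [a, b] with n steps of size h.
    h = (b - a) / n
    xs = [a + i*h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):
        ys.append(rk4_step(f, xs[i], ys[i], h))
    for i in range(3, n):
        # y_{i+1} = y_i + (h/24)[55 f_i - 59 f_{i-1} + 37 f_{i-2} - 9 f_{i-3}]
        ys.append(ys[i] + h/24*(55*f(xs[i], ys[i]) - 59*f(xs[i-1], ys[i-1])
                                + 37*f(xs[i-2], ys[i-2]) - 9*f(xs[i-3], ys[i-3])))
    return xs, ys

# Error at x = 1 for y' = y, y(0) = 1 (exact value e), at two step sizes.
err = lambda n: abs(adams_bashforth4(lambda x, y: y, 0.0, 1.0, 1.0, n)[1][-1] - math.e)
print(err(40) / err(80))  # ratio should be near 16 for a fourth-order method
```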
Method 4 (Adams-Moulton Predictor-Corrector Method)
Predictor Step
Premise: Use the Adams-Bashforth Fourth Order Method to get an approximation ŷ_{i+1}, then use another interpolating cubic through the four points (x_{i−2}, y_{i−2}), (x_{i−1}, y_{i−1}), (x_i, y_i), and the new approximation (x_{i+1}, ŷ_{i+1}) to get an even more accurate approximation for y_{i+1}.
ŷ_{i+1} = y_i + (h/24)[55 f(x_i, y_i) − 59 f(x_{i−1}, y_{i−1}) + 37 f(x_{i−2}, y_{i−2}) − 9 f(x_{i−3}, y_{i−3})]

which has local truncation error O(h^5).
Corrector Step
y_{i+1} = y_i + (h/24)[9 f(x_{i+1}, ŷ_{i+1}) + 19 f(x_i, y_i) − 5 f(x_{i−1}, y_{i−1}) + f(x_{i−2}, y_{i−2})]

which also has local truncation error O(h^5).
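The predictor and corrector steps above can be sketched together as follows (Python; RK4 starting values, and the slope values f(x_i, y_i) are cached so each step costs only two new evaluations of f — one inside the corrector and one after accepting y_{i+1}):

```python
import math

def rk4_step(f, x, y, h):
    # Classical RK4 step; supplies the starting values y_1, y_2, y_3.
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def adams_moulton_pc(f, a, b, y0, n):
    # Adams-Bashforth four-step predictor / Adams-Moulton corrector.
    h = (b - a) / n
    xs = [a + i*h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):                        # y_1, y_2, y_3 via RK4
        ys.append(rk4_step(f, xs[i], ys[i], h))
    F = [f(xs[i], ys[i]) for i in range(4)]   # cached slopes f(x_i, y_i)
    for i in range(3, n):
        # Predictor: explicit Adams-Bashforth step gives yhat_{i+1}.
        yp = ys[i] + h/24*(55*F[i] - 59*F[i-1] + 37*F[i-2] - 9*F[i-3])
        # Corrector: Adams-Moulton formula evaluated at the prediction.
        ys.append(ys[i] + h/24*(9*f(xs[i+1], yp) + 19*F[i] - 5*F[i-1] + F[i-2]))
        F.append(f(xs[i+1], ys[-1]))
    return xs, ys

# Example: y' = y, y(0) = 1 on [0, 1]; exact value at x = 1 is e.
xs, ys = adams_moulton_pc(lambda x, y: y, 0.0, 1.0, 1.0, 50)
print(abs(ys[-1] - math.e))  # fourth-order global accuracy
```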
Method 5 (Milne’s Predictor-Corrector Method)
Predictor Step
ŷ_{i+1} = y_{i−3} + (4h/3)[2 f(x_i, y_i) − f(x_{i−1}, y_{i−1}) + 2 f(x_{i−2}, y_{i−2})]

which has local truncation error O(h^5).
Corrector Step
y_{i+1} = y_{i−1} + (h/3)[f(x_{i+1}, ŷ_{i+1}) + 4 f(x_i, y_i) + f(x_{i−1}, y_{i−1})]

which also has local truncation error O(h^5).
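Milne's method follows the same predictor-corrector pattern; only the two formulas change (the corrector is Simpson's rule applied over [x_{i−1}, x_{i+1}]). A minimal sketch, again using RK4 for the starting values:

```python
import math

def rk4_step(f, x, y, h):
    # Classical RK4 step; supplies the starting values y_1, y_2, y_3.
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

def milne_pc(f, a, b, y0, n):
    # Milne's predictor-corrector method on [a, b] with n steps.
    h = (b - a) / n
    xs = [a + i*h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):                        # y_1, y_2, y_3 via RK4
        ys.append(rk4_step(f, xs[i], ys[i], h))
    F = [f(xs[i], ys[i]) for i in range(4)]   # cached slopes f(x_i, y_i)
    for i in range(3, n):
        # Predictor: yhat_{i+1} = y_{i-3} + (4h/3)[2 F_i - F_{i-1} + 2 F_{i-2}]
        yp = ys[i-3] + 4*h/3*(2*F[i] - F[i-1] + 2*F[i-2])
        # Corrector: Simpson's rule over [x_{i-1}, x_{i+1}].
        ys.append(ys[i-1] + h/3*(f(xs[i+1], yp) + 4*F[i] + F[i-1]))
        F.append(f(xs[i+1], ys[-1]))
    return xs, ys

# Example: y' = y, y(0) = 1 on [0, 1]; exact value at x = 1 is e.
xs, ys = milne_pc(lambda x, y: y, 0.0, 1.0, 1.0, 50)
print(abs(ys[-1] - math.e))  # fourth-order global accuracy
```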
Definition 2 (Stable and Unstable Methods) In a stable method for approximating the solution to a differential equation, early errors (due to imprecision of the method or an initial value that is slightly incorrect) are damped out as the computations proceed; they do not grow without bound. The opposite is true for an unstable method.
Euler’s method is a stable method, while predictor-corrector methods such as Milne’s can exhibit weak instability on some problems: the corrector admits a spurious solution whose errors grow as the computation proceeds.
Homework: pp. 395-396: #29 (analytical solution is y(t) = t^2 + 2t + 2 − e^t), 30, 33