Root-Finding Algorithms: Bisection, Fixed-Point, and Newton-Raphson - Prof. Mary Kathryn C, Study notes of Statistics

These notes cover several root-finding algorithms, including the bisection method, fixed-point iteration, and the Newton-Raphson method. The bisection method is a numerical technique for finding a root of a function using the Intermediate Value Theorem. Fixed-point iteration finds a fixed point of a function g, that is, a solution to the equation g(x) = x. The Newton-Raphson method is a powerful root-finding algorithm that uses a Taylor series approximation to locate a root of a function. Examples and theorems illustrate the concepts.


22S:166
Lecture 10
Sept. 22, 2006
Root-finding

Root finding algorithms

  • problem: to find values of variable x that satisfy f(x) = 0 for given function f
  • solution is called “zero of f” or “root of f”
  • when is this an important problem in statistics?


The bisection method

  • also called “binary-search method”
  • conditions for use
    • f continuous, defined on interval [a, b]
    • f(a) and f(b) of opposite sign
  • by Intermediate Value Theorem, there exists a p, a < p < b, such that f(p) = 0
  • procedure works when f(a) and f(b) are of opposite sign even if there is more than one root in [a, b]
  • for simplicity, we’ll assume unique root in interval
  • method consists of
    • repeated halving of subintervals of [a, b]
    • at each step, locating half containing p
  • requires following inputs (a code sketch follows the list)
    • endpoints a, b
    • tolerance TOL
    • maximum number of iterations N0
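
A minimal Python sketch of this procedure, assuming a unique sign change in [a, b]; the function name bisection and the exact stopping rule (interval half-width below the tolerance) are illustrative choices rather than part of the lecture:

    def bisection(f, a, b, tol=1e-8, n0=100):
        """Locate a root of f in [a, b] by repeated halving of the interval."""
        fa = f(a)
        if fa * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(n0):
            p = a + (b - a) / 2.0        # midpoint of the current interval
            fp = f(p)
            if fp == 0 or (b - a) / 2.0 < tol:
                return p                 # root located to within the tolerance
            if fa * fp > 0:              # f(a) and f(p) have the same sign,
                a, fa = p, fp            #   so the root lies in the right half [p, b]
            else:
                b = p                    #   otherwise it lies in the left half [a, p]
        raise RuntimeError("maximum number of iterations exceeded")

For example, bisection(lambda x: x**3 + 4*x**2 - 10, 1, 2) returns roughly 1.3652, the root used in the fixed-point example later in these notes.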

Fixed-point iteration

  • solution to g(x) = x is called fixed point of function g
  • Theorem
    • conditions
      ∗ g continuous on [a, b]
      ∗ g(x) ∈ [a, b] ∀x ∈ [a, b]
    • conclusions
      ∗ g has a fixed point in [a, b]
    • if further
      ∗ g′(x) exists on (a, b) and a positive constant k < 1 exists such that |g′(x)| ≤ k < 1, ∀x ∈ (a, b)
    • then
      ∗ g has a unique fixed point p in [a, b]

Example 1

g(x) = (x^2 − 1)/3 on [−1, 1]

  • absolute minimum of g is g(0) = −1/3
  • absolute maximum of g is g(±1) = 0
  • |g′(x)| = |(2/3)x| ≤ 2/3 ∀x ∈ [−1, 1]
  • so g has unique fixed point p in interval
  • in this case, can be determined exactly by quadratic formula (worked out below)
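
Working out that quadratic-formula step: setting p = g(p) gives p = (p^2 − 1)/3, i.e. p^2 − 3p − 1 = 0, whose roots are (3 ± √13)/2 ≈ 3.303 and −0.303. Only the negative root lies in [−1, 1], so the unique fixed point is p = (3 − √13)/2 ≈ −0.3028.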

Example 2

g(x) = 3^(−x) on [0, 1]

  • g(1) = 1/3 ≤ g(x) ≤ 1 = g(0) ∀ 0 ≤ x ≤ 1, so fixed point exists in interval
  • theorem cannot be used to determine uniqueness of fixed point since |g′(0)| = ln 3 ≈ 1.0986 > 1
  • but fixed point must be unique since g is a decreasing function (a quick numerical check follows)
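
A quick numerical check of this example, offered as a sketch rather than part of the notes; locating the fixed point by bisecting h(x) = g(x) − x on [0, 1] is just one convenient choice:

    # the fixed point of g(x) = 3**(-x) on [0, 1] is a root of h(x) = g(x) - x,
    # which changes sign on the interval: h(0) = 1 > 0 and h(1) = 1/3 - 1 < 0
    h = lambda x: 3.0 ** (-x) - x

    a, b = 0.0, 1.0
    for _ in range(50):              # 50 halvings is plenty for double precision
        p = (a + b) / 2.0
        if h(a) * h(p) > 0:          # root lies in the right half
            a = p
        else:                        # root lies in the left half
            b = p
    print(p)                         # approximately 0.548, the unique fixed point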


Fixed-point iteration

  • choose initial approximation p0
  • set pn = g(pn−1) for each n ≥ 1

Example

x^3 + 4x^2 − 10 = 0 has unique root in [1, 2].
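
A sketch of the iteration for this example. The rearrangement x = sqrt(10/(4 + x)) (obtained from x^2(x + 4) = 10) and the starting value p0 = 1.5 are illustrative choices, not necessarily the ones used in lecture; this particular g maps [1, 2] into itself with |g′(x)| well below 1 there, so the iteration converges:

    import math

    # one convergent rearrangement of x**3 + 4*x**2 - 10 = 0 into the form x = g(x):
    #   x**2 * (x + 4) = 10   =>   x = sqrt(10 / (4 + x))
    g = lambda x: math.sqrt(10.0 / (4.0 + x))

    p = 1.5                          # initial approximation p0 in [1, 2]
    for n in range(1, 31):
        p_next = g(p)                # p_n = g(p_{n-1})
        if abs(p_next - p) < 1e-10:  # stop when successive iterates agree
            break
        p = p_next
    print(p_next)                    # converges to the root, roughly 1.3652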


Fixed-Point Theorem

  • conditions
    • g continuous on [a, b]
    • g(x) ∈ [a, b] ∀x ∈ [a, b]
    • g′(x) exists on (a, b) and |g′(x)| ≤ k < 1, ∀x ∈ (a, b)
  • then for any p0 in [a, b], the sequence defined by pn = g(pn−1), n ≥ 1, converges to the unique fixed point p in [a, b]

The Newton-Raphson Method

  • one of the most powerful and well-known numerical methods for solving the root-finding problem f(x) = 0
  • one derivation: Taylor series approximation
    • suppose f′ and f′′ are continuous on [a, b]
    • let x0 ∈ [a, b] be an approximation to p such that f′(x0) ≠ 0 and |x0 − p| is “small”
    • first-order Taylor approximation for f(x) expanded around x0:

f(x) = f(x0) + (x − x0)f′(x0) + ((x − x0)^2 / 2) f′′(ξ(x))

where ξ(x) is between x and x0.

  • with x = p this gives

0 = f(x0) + (p − x0)f′(x0) + ((p − x0)^2 / 2) f′′(ξ(p))

  • since |x0 − p| is “small”, (x0 − p)^2 should be negligible and 0 ≈ f(x0) + (p − x0)f′(x0)
  • solving for p yields

p ≈ x0 − f(x0)/f′(x0)
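
Iterating this update gives the Newton-Raphson sequence pn = pn−1 − f(pn−1)/f′(pn−1), n ≥ 1. A minimal sketch, reusing the earlier example f(x) = x^3 + 4x^2 − 10; the starting value and stopping rule are illustrative assumptions:

    f      = lambda x: x**3 + 4*x**2 - 10
    fprime = lambda x: 3*x**2 + 8*x  # f'(x)

    p = 1.5                          # initial approximation x0 in [1, 2]
    for _ in range(20):
        step = f(p) / fprime(p)
        p = p - step                 # p_new = p - f(p)/f'(p)
        if abs(step) < 1e-12:        # stop once the update is negligible
            break
    print(p)                         # roughly 1.3652, the same root found by the other methods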