Review of Numerical Methods - Numerical Methods in Engineering - Review 1 - Civil Engineering and Geological Sciences, Study notes of Numerical Methods in Engineering

Topics: Review of Numerical Methods, Taylor Series, Systems of Linear Equations, Numerical Solution, Direct Methods, Gauss Elimination, LU Decomposition, Cholesky Decomposition, Factor Method, Matrix Conditioning, Matrix Storage, Iterative Methods, Interpolation, Lagrange Interpolation, Chebyshev Roots, Extrapolation, Hermite Interpolation, Root Finding Algorithms, Bisection Method, Newton-Raphson Method

Typology: Study notes · 2011/2012 · Uploaded 02/20/2012 by damyen

CE 341/441 - Review 1 - Fall 2004                                        p. R1.1

REVIEW NO. 1

TAYLOR SERIES

• Find f(x) away from x = a, given f(a) and the derivatives of f(x) evaluated at x = a:

    f(x) = f(a) + (x - a) df/dx|_{x=a} + [(x - a)^2 / 2!] d^2f/dx^2|_{x=a}
         + [(x - a)^3 / 3!] d^3f/dx^3|_{x=a} + [(x - a)^4 / 4!] d^4f/dx^4|_{x=a} + ...
         + [(x - a)^n / n!] d^nf/dx^n|_{x=a}
         + [1 / (n+1)!] (x - a)^{n+1} d^{n+1}f/dx^{n+1}|_{x=ξ},      a ≤ ξ ≤ x

• Notes
  • f(a) and the derivatives of f(x) evaluated at x = a are constants and not x-dependent.
  • When we use Taylor series we do not carry all terms!
  • Our derivations typically carry enough terms to allow us to establish the error in our
    formula.


• Depending on what our purposes are, we may:

  • Truncate the series and only carry the order of the first truncated term, O((x - a)^n).
    For example, truncating after the second-derivative term:

      f(x) = f(a) + (x - a) df/dx|_{x=a} + [(x - a)^2 / 2!] d^2f/dx^2|_{x=a} + O((x - a)^3)

    The error term is E = O((x - a)^3).

  • Carry enough terms so that we have a detailed form of the largest portion of the error
    term:

      f(x) = f(a) + (x - a) df/dx|_{x=a} + [(x - a)^2 / 2!] d^2f/dx^2|_{x=a}
           + [(x - a)^3 / 3!] d^3f/dx^3|_{x=a} + O((x - a)^4)

    The error term is E = [(x - a)^3 / 3!] d^3f/dx^3|_{x=a} + O((x - a)^4).

    The O((x - a)^4) term is carried to ensure that we know that this next term is where we
    systematically truncate all terms.

• Carry only the remainder term, which represents all terms in the truncated series:

    f(x) = f(a) + (x - a) df/dx|_{x=a} + [(x - a)^2 / 2!] d^2f/dx^2|_{x=a}
         + [(x - a)^3 / 3!] d^3f/dx^3|_{x=ξ},      a ≤ ξ ≤ x

  The error term is E = [(x - a)^3 / 3!] d^3f/dx^3|_{x=ξ}.

• We note that ξ = ξ(x), and therefore depends on where you're evaluating f(x).

• Typically we just estimate ξ as a starting point or a mid point of the interval, i.e. as a
  constant!

• However, when you differentiate or integrate error terms which involve ξ, you must be very
  careful. It is best to consider a sequence of terms evaluated at x = a.
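The error orders above can be checked numerically. The sketch below (an illustration, not part of the original notes) expands f(x) = e^x about a = 0, truncates after the (x - a)^3/3! term, and confirms that halving the distance (x - a) shrinks the error by roughly 2^4 = 16, consistent with a remainder term E = O((x - a)^4):

```python
import math

def taylor_exp(x, a, n):
    """n-term Taylor expansion of f(x) = e^x about x = a,
    truncated after the (x - a)^n / n! term."""
    return sum((x - a) ** k / math.factorial(k) * math.exp(a) for k in range(n + 1))

a, n = 0.0, 3
# Halving (x - a) should shrink the error by about 2^(n+1) = 16,
# consistent with a remainder term E = O((x - a)^(n+1)).
e1 = abs(math.exp(a + 0.2) - taylor_exp(a + 0.2, a, n))
e2 = abs(math.exp(a + 0.1) - taylor_exp(a + 0.1, a, n))
print(e1 / e2)  # close to 16
```

The ratio is slightly above 16 because ξ differs between the two evaluations, which is exactly the ξ = ξ(x) dependence noted above.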

NUMERICAL SOLUTION TO LINEAR SYSTEMS OF ALGEBRAIC EQUATIONS

• Solve the system of linear algebraic equations A X = B

Direct Methods

• All direct methods are based on some type of triangulation

Gauss elimination

• Develop an upper triangular matrix by manipulating A and B → O(n^3) operations
• Perform backward solution sweep → O(n^2) operations
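The two sweeps can be sketched in a few lines. This is a minimal illustration (the function name and the 3×3 test system are my own, not from the notes); partial pivoting is included since the notes discuss pivoting under matrix conditioning:

```python
def gauss_solve(A, B):
    """Solve A X = B by Gauss elimination with partial pivoting.
    Forward elimination (O(n^3)) builds an upper triangular system;
    the backward sweep (O(n^2)) then solves for X."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    B = B[:]
    for k in range(n - 1):
        # Partial pivoting: bring the largest pivot candidate into row k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        B[k], B[p] = B[p], B[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            B[i] -= m * B[k]
    X = [0.0] * n
    for i in range(n - 1, -1, -1):          # backward solution sweep
        s = sum(A[i][j] * X[j] for j in range(i + 1, n))
        X[i] = (B[i] - s) / A[i][i]
    return X

A = [[4.0, -1.0, 1.0], [2.0, 5.0, 2.0], [1.0, 2.0, 4.0]]
B = [8.0, 3.0, 11.0]
X = gauss_solve(A, B)   # X ≈ [1.0, -1.0, 3.0]
```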

Matrix conditioning

• Ill-conditioned matrices lead to inaccurate solutions for X
• Diagonally dominant matrices are not ill-conditioned.
• We use pivoting to improve the structure/conditioning of the matrix
• Roundoff affects how badly ill-conditioning degrades the solution
  • Use a larger word size
  • Use iterative methods → only O(n^2) operations per iteration versus O(n^3) operations
    for a direct solve

Matrix storage

• full
• banded
• symmetrical
• skyline
• non-zero locations only (must still store pointers to identify matrix locations)
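A hypothetical sketch (my own example, not from the notes) of what ill-conditioning looks like in practice: a nearly singular 2×2 system is solved twice, with the right-hand side perturbed by 1e-4 the second time; the solution moves by O(1), while a well-conditioned system with the same perturbation barely moves:

```python
def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] [x, y] = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return (e * d - b * f) / det, (a * f - e * c) / det

# Nearly singular (ill-conditioned): the two rows are almost parallel.
x1, y1 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0)       # exact RHS
x2, y2 = solve2(1.0, 1.0, 1.0, 1.0001, 2.0, 2.0001)    # RHS perturbed by 1e-4
print(abs(x2 - x1), abs(y2 - y1))   # both ≈ 1.0: a 1e-4 data change moves X by O(1)

# Well-conditioned (diagonally dominant): same perturbation, tiny response.
x3, y3 = solve2(4.0, 1.0, 1.0, 3.0, 2.0, 2.0)
x4, y4 = solve2(4.0, 1.0, 1.0, 3.0, 2.0, 2.0001)
print(abs(x4 - x3), abs(y4 - y3))   # both ≈ 1e-5 to 1e-4
```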

Iterative Methods

• Based on using a starting approximation for all terms in every equation except the term on
  the diagonal, which you solve for
  • Point Jacobi Method: iterate using values from the previous iteration. It is the
    simplest method.
  • Gauss-Seidel: like Point Jacobi except you update all values with the most recently
    computed value.
  • Point Relaxation Methods: improve estimates based on previous estimates (either average
    or extrapolate out).
• Stability of iterative methods is only guaranteed if the matrix is diagonally dominant.
  The solution may or may not be stable if the matrix is not diagonally dominant.
• Iterative methods are useful for
  • very large sparse systems (benefits include storage and number of operations)
  • ill-conditioned systems
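The Jacobi/Gauss-Seidel distinction above can be sketched directly; the only difference is whether updated values are used within the current sweep. The test system is my own diagonally dominant example, so convergence is guaranteed:

```python
def jacobi(A, B, iters=50):
    """Point Jacobi: each sweep solves equation i for the diagonal unknown
    using only values from the previous iteration."""
    n = len(A)
    x = [0.0] * n                      # starting approximation
    for _ in range(iters):
        x = [(B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, B, iters=50):
    """Gauss-Seidel: identical, except each updated value is used immediately."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            x[i] = (B[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

# Diagonally dominant system: both iterations converge to [1.0, 2.0, 3.0].
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
B = [2.0, 4.0, 10.0]
```

Gauss-Seidel typically needs fewer sweeps than Jacobi on such systems, since each sweep works with fresher information.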

Method 1 to derive g(x): Power series

• Set up a generic Nth degree polynomial

    g(x) = a_0 + a_1 x + a_2 x^2 + ... + a_N x^N

• Force g(x_i) = f_i at the i = 0, ..., N nodes (or data or interpolation points)
• Solve for the unknown coefficients a_i, i = 0, ..., N from the resulting constraint
  equations

Method 2 to derive g(x): Lagrange basis functions

• Each Lagrange basis function is associated with a data point/node
• Results in the same function g(x) as Method 1:

    g(x) = Σ_{i=0}^{N} f_i V_i(x)

    V_i(x) = [(x - x_0)(x - x_1)(x - x_2)...(x - x_{i-1})(x - x_{i+1})...(x - x_N)] /
             [(x_i - x_0)(x_i - x_1)(x_i - x_2)...(x_i - x_{i-1})(x_i - x_{i+1})...(x_i - x_N)]

• Note the interpolating property of the basis functions:

    V_i(x_j) = 1 for i = j,    V_i(x_j) = 0 for i ≠ j
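Method 1 can be sketched as literally stated: write the constraint equations g(x_i) = f_i as a linear (Vandermonde) system in the a_i and solve it. The function name and the sample data below are illustrative, not from the notes:

```python
def power_series_coeffs(xs, fs):
    """Method 1: solve the constraint equations g(x_i) = f_i, where
    g(x) = a_0 + a_1 x + ... + a_N x^N, for the coefficients a_i."""
    n = len(xs)
    # Augmented Vandermonde system: row i is [1, x_i, ..., x_i^N | f_i].
    M = [[x ** j for j in range(n)] + [f] for x, f in zip(xs, fs)]
    for k in range(n):                       # forward elimination
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    a = [0.0] * n
    for i in range(n - 1, -1, -1):           # backward sweep
        s = sum(M[i][j] * a[j] for j in range(i + 1, n))
        a[i] = (M[i][n] - s) / M[i][i]
    return a

# Quadratic through (3, 1), (4, 2), (5, 4):  g(x) = 4 - 2.5 x + 0.5 x^2
a = power_series_coeffs([3.0, 4.0, 5.0], [1.0, 2.0, 4.0])
```

Note that the Vandermonde matrix itself becomes badly ill-conditioned as N grows, which is one reason the Lagrange and Newton forms are preferred in practice.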

• Example for a 3 data point quadratic interpolation function:

    g(x) = f_0 V_0(x) + f_1 V_1(x) + f_2 V_2(x)

  [Figure: the quadratic Lagrange basis functions V_0(x), V_1(x) and V_2(x) plotted versus x
  for the nodes x_0 = 3, x_1 = 4, x_2 = 5]
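The basis-function definition and its V_i(x_j) property are easy to sketch in code. This is a generic illustration using the example's nodes x_0 = 3, x_1 = 4, x_2 = 5 (the data values here are made up for the demonstration):

```python
def lagrange_basis(xs, i, x):
    """V_i(x): product of (x - x_j)/(x_i - x_j) over all j != i."""
    v = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            v *= (x - xj) / (xs[i] - xj)
    return v

def lagrange_interp(xs, fs, x):
    """g(x) = sum_i f_i V_i(x)."""
    return sum(f * lagrange_basis(xs, i, x) for i, f in enumerate(fs))

xs = [3.0, 4.0, 5.0]           # the three nodes from the example
fs = [1.0, 2.0, 4.0]           # illustrative data values
# Each V_i is 1 at its own node and 0 at the others, so g passes through the data.
g_mid = lagrange_interp(xs, fs, 4.0)
```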

Method 3 to derive g(x): Newton forward interpolation

    g(x) = f_0 + (x - x_0) Δf_0/h + [(x - x_0)(x - x_1) / 2!] Δ^2 f_0 / h^2
         + [(x - x_0)(x - x_1)(x - x_2) / 3!] Δ^3 f_0 / h^3 + ...
         + [1/N!] (x - x_0)(x - x_1)(x - x_2)...(x - x_{N-1}) Δ^N f_0 / h^N

  where the forward differences are built up in a table:

    i    f_i    Δf_i               Δ^2 f_i                   Δ^3 f_i                     Δ^4 f_i
    0    f_0    Δf_0 = f_1 - f_0   Δ^2 f_0 = Δf_1 - Δf_0     Δ^3 f_0 = Δ^2 f_1 - Δ^2 f_0   Δ^4 f_0 = Δ^3 f_1 - Δ^3 f_0
    1    f_1    Δf_1 = f_2 - f_1   Δ^2 f_1 = Δf_2 - Δf_1     Δ^3 f_1 = Δ^2 f_2 - Δ^2 f_1
    2    f_2    Δf_2 = f_3 - f_2   Δ^2 f_2 = Δf_3 - Δf_2
    3    f_3    Δf_3 = f_4 - f_3
    4    f_4
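The table and the evaluation of g(x) can be sketched as follows (a minimal illustration; the function names are my own, and equally spaced nodes x_i = x_0 + i h are assumed, as in the formula above):

```python
import math

def forward_difference_table(fs):
    """Columns f_i, Δf_i, Δ^2 f_i, ...; column k+1 holds differences of
    consecutive entries of column k, exactly as in the table above."""
    table = [list(fs)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(x, x0, h, fs):
    """g(x) = f_0 + (x - x_0) Δf_0/h + (x - x_0)(x - x_1) Δ^2 f_0/(2! h^2) + ...
    Only the top entry (Δ^k f_0) of each column is needed."""
    table = forward_difference_table(fs)
    g, poly = table[0][0], 1.0
    for k in range(1, len(fs)):
        poly *= x - (x0 + (k - 1) * h)       # append the factor (x - x_{k-1})
        g += poly * table[k][0] / (math.factorial(k) * h ** k)
    return g

# Data sampled from f(x) = x^2 at x = 0, 1, 2, 3; the cubic form reproduces it exactly.
g = newton_forward(2.5, 0.0, 1.0, [0.0, 1.0, 4.0, 9.0])   # g = 6.25 = 2.5^2
```

Note the efficiency claim in the notes: extending the data set by one point only adds one new difference column, rather than forcing a full re-derivation as in Methods 1 and 2.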

• Establishes the same Nth degree polynomial interpolating function as Lagrange, except
  using forward difference operators
• However, it is much more efficient to implement computationally as compared to the power
  series or Lagrange basis function methods
• The error is again given by:

    e(x) = f(x) - g(x) = [(x - x_0)(x - x_1)...(x - x_N) / (N+1)!] f^{(N+1)}(ξ),
    x_0 ≤ ξ ≤ x_N

  and can be approximated as:

    e(x) ≈ [(x - x_0)(x - x_1)...(x - x_N) / (N+1)!] Δ^{N+1} f_0 / h^{N+1}

Extrapolation

• Extend the interpolation to a range outside of the range of the data points

Hermite Interpolation

• Develop an interpolating function which passes through the functional values as well as
  the 1st, 2nd and subsequent derivatives at the data points

  [Figure: nodes x_0, x_1, ..., x_N on the x-axis, with f_0, f_0^{(1)}, ..., f_0^{(P)};
  f_1, f_1^{(1)}, ..., f_1^{(P)}; ...; f_N, f_N^{(1)}, ..., f_N^{(P)} specified at the nodes]

• Need to set up a polynomial of degree (p + 1)(N + 1) - 1 to match up to the pth
  derivative at the N + 1 data points.
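As a small check on the degree count: matching the value and first derivative (p = 1) at two points (N = 1) requires (1+1)(1+1) - 1 = 3, i.e. a cubic. The sketch below uses the standard cubic Hermite basis functions on [x_0, x_1] (this closed form is not derived in these notes; it is one common way to build the Hermite cubic):

```python
def hermite_cubic(x0, x1, f0, f1, d0, d1, x):
    """Cubic matching f and f' at x0 and x1 (degree (p+1)(N+1)-1 = 3
    for p = 1, N = 1), via the standard Hermite basis functions."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1      # value 1, slope 0 at t = 0
    h10 = t**3 - 2 * t**2 + t          # value 0, slope 1 at t = 0
    h01 = -2 * t**3 + 3 * t**2         # value 1, slope 0 at t = 1
    h11 = t**3 - t**2                  # value 0, slope 1 at t = 1
    return f0 * h00 + d0 * h * h10 + f1 * h01 + d1 * h * h11

# Cubic data f(x) = x^3 on [0, 1] (f' = 3x^2) is reproduced exactly.
y = hermite_cubic(0.0, 1.0, 0.0, 1.0, 0.0, 3.0, 0.5)   # y = 0.125 = 0.5^3
```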

ROOT FINDING ALGORITHMS

Bisection Method for Finding Roots

• Solve for the roots of nonlinear algebraic equations, f(x) = 0
• Based on interval halving and the sign of the function changing on the interval
• Problems with double roots, multiple roots in an interval, and singularities

Newton-Raphson Method

• Iterative formula for finding roots, derived by developing a Taylor series expansion for
  f(x), using only two terms of the Taylor series and setting this to zero:

    f(x) = f(x_0) + f^{(1)}(x_0) (x - x_0) + H.O.T.

• Since we are trying to find the root x_r such that f(x_r) = 0:

    f(x_r) = f(x_0) + f^{(1)}(x_0) (x_r - x_0) ≈ 0
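Solving that truncated expansion for x_r gives the familiar update x_{k+1} = x_k - f(x_k)/f'(x_k). Both methods can be sketched as below (an illustration with my own test function, not from the notes):

```python
def bisect(f, a, b, tol=1e-10):
    """Interval halving; requires a sign change of f on [a, b].
    Note this fails to detect double roots, where f does not change sign."""
    fa = f(a)
    while b - a > tol:
        c = 0.5 * (a + b)
        if fa * f(c) <= 0.0:        # sign change in [a, c]: keep the left half
            b = c
        else:                       # otherwise the root is in [c, b]
            a, fa = c, f(c)
    return 0.5 * (a + b)

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """x_{k+1} = x_k - f(x_k)/f'(x_k), from truncating the Taylor series
    after two terms and setting f(x_r) ≈ 0."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

f = lambda x: x**2 - 2.0            # root at sqrt(2)
root_b = bisect(f, 1.0, 2.0)
root_n = newton_raphson(f, lambda x: 2.0 * x, 1.0)
```

Bisection halves the error each step (linear convergence) but is robust; Newton-Raphson converges quadratically near a simple root but needs f' and a good starting guess.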