

Contents

31  Numerical Methods of Approximation

31.1 Polynomial Approximations 2
31.2 Numerical Integration 28
31.3 Numerical Differentiation 58
31.4 Nonlinear Equations 67

Learning outcomes

In this Workbook you will learn about some numerical methods widely used in engineering applications.

You will learn how certain data may be modelled, how integrals and derivatives may be approximated and how estimates for the solutions of non-linear equations may be found.

31.1 Polynomial Approximations

1. Introduction

Polynomials are functions with useful properties. Their relatively simple form makes them an ideal candidate to use as approximations for more complex functions. In this second Workbook on Numerical Methods, we begin by showing some ways in which certain functions of interest may be approximated by polynomials.









Prerequisites

Before starting this Section you should...

  • revise material on maxima and minima of functions of two variables
  • be familiar with polynomials and Taylor series

Learning Outcomes

On completion you should be able to...

  • interpolate data with polynomials
  • find the least squares best fit straight line to experimental data

2 HELM (2008):

Workbook 31: Numerical Methods of Approximation

We have in fact already seen, in Workbook 16, one way in which some functions may be approximated by polynomials. We review this next.

2. Taylor series

In Workbook 16 we encountered Maclaurin series and their generalisation, Taylor series. Taylor series are a useful way of approximating functions by polynomials. The Taylor series expansion of a function f(x) about x = a may be stated

f(x) = f(a) + (x − a)f′(a) + (1/2!)(x − a)²f″(a) + (1/3!)(x − a)³f‴(a) + ...

(The special case called Maclaurin series arises when a = 0.) The general idea when using this formula in practice is to consider only points x which are near to a. Given this it follows that (x − a) will be small, (x − a)^2 will be even smaller, (x − a)^3 will be smaller still, and so on. This gives us confidence to simply neglect the terms beyond a certain power, or, to put it another way, to truncate the series.

Example 2

Find the Taylor polynomial of degree 2 about the point x = 1, for the function f (x) = ln(x).

Solution

In this case a = 1 and we need to evaluate the following terms:

f(a) = ln(a) = ln(1) = 0,  f′(a) = 1/a = 1,  f″(a) = −1/a² = −1.

Hence

ln(x) ≈ 0 + (x − 1) − (1/2)(x − 1)² = −3/2 + 2x − x²/2,

which will be reasonably accurate for x close to 1, as you can readily check on a calculator or computer. For example, for all x between 0.9 and 1.1, the polynomial and logarithm agree to at least 3 decimal places.
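The truncation can be checked numerically. The sketch below (plain Python; the function name taylor_ln is our own) evaluates the degree-2 Taylor polynomial found above and confirms the 3-decimal-place agreement claimed for x between 0.9 and 1.1.

```python
import math

def taylor_ln(x):
    # Degree-2 Taylor polynomial of ln(x) about a = 1:
    # ln(x) ≈ 0 + (x - 1) - (1/2)(x - 1)**2
    return (x - 1) - 0.5 * (x - 1) ** 2

# Largest disagreement with ln(x) over a fine grid on [0.9, 1.1]
grid = [0.9 + 0.001 * k for k in range(201)]
worst = max(abs(taylor_ln(x) - math.log(x)) for x in grid)
print(worst < 0.0005)  # agreement to at least 3 decimal places
```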

One drawback with this approach is that we need to find (possibly many) derivatives of f. Also, there can be some doubt over what is the best choice of a. The statement of Taylor series is an extremely useful piece of theory, but it can sometimes have limited appeal as a means of approximating functions by polynomials.

Next we will consider two alternative approaches.


3. Polynomial approximations - exact data

Here and in subsections 4 and 5 we consider cases where, rather than knowing an expression for the function, we have a list of point values. Sometimes it is good enough to find a polynomial that passes near these points (like putting a straight line through experimental data). Such a polynomial is an approximating polynomial and this case follows in subsection 4. Here and in subsection 5 we deal with the case where we want a polynomial to pass exactly through the given data, that is, an interpolating polynomial.

Lagrange interpolation

Suppose that we know (or choose to sample) a function f exactly at a few points and that we want to approximate how the function behaves between those points. In its simplest form this is equivalent to a dot-to-dot puzzle (see Figure 1(a)), but it is often more desirable to seek a curve that does not have "corners" in it (see Figure 1(b)).

(a) Linear, or "dot-to-dot", interpolation, with corners at all of the data points.

(b) A smoother interpolation of the data points.

Figure 1

Let us suppose that the data are in the form (x1, f1), (x2, f2), (x3, f3), ...; these are the points plotted as crosses on the diagrams above. (For technical reasons, and those of common sense, we suppose that the x-values in the data are all distinct.) Our aim is to find a polynomial which passes exactly through the given data points. We want to find p(x) such that

p(x1) = f1, p(x2) = f2, p(x3) = f3, ...

There is a mathematical trick we can use to achieve this. We define Lagrange polynomials L1, L2, L3, ... which have the following properties:

L1(x) = 1 at x = x1, and L1(x) = 0 at x = x2, x3, x4, ...
L2(x) = 1 at x = x2, and L2(x) = 0 at x = x1, x3, x4, ...
L3(x) = 1 at x = x3, and L3(x) = 0 at x = x1, x2, x4, ...
...

Each of these functions acts like a filter which "turns off" if you evaluate it at a data point other than its own. For example, if you evaluate L2 at any data point other than x2, you will get zero. Furthermore, if you evaluate any of these Lagrange polynomials at its own data point, the value you get is 1. These two properties are enough to be able to write down what p(x) must be:

p(x) = f1 L1(x) + f2 L2(x) + f3 L3(x) + ...

HELM (2008): Section 31.1: Polynomial Approximations


Figure 2 shows L 1 and L 2 in the case where there are five data points (the x positions of these data points are shown as large dots). Notice how both L 1 and L 2 are equal to zero at four of the data points and that L 1 (x 1 ) = 1 and L 2 (x 2 ) = 1.

In an implementation of this idea, things are simplified by the fact that we do not generally require an expression for p(x). (This is good news, for imagine trying to multiply out all the algebra in the expressions for L 1 , L 2 ,... .) What we do generally require is p evaluated at some specific value. The following Example should help show how this can be done.

Example 3

Let p(x) be the polynomial of degree 3 which interpolates the data

x     0.8    1     1.4    1.6
f(x)  −1.82  −1.73  −1.40  −1.11

Evaluate p(1.1).

Solution

We are interested in the Lagrange polynomials at the point x = 1.1, so we consider

L1(1.1) = (1.1 − x2)(1.1 − x3)(1.1 − x4) / ((x1 − x2)(x1 − x3)(x1 − x4))
        = (0.1)(−0.3)(−0.5) / ((−0.2)(−0.6)(−0.8)) = −0.15625.

Similar calculations for the other Lagrange polynomials give

L2(1.1) = 0.93750, L3(1.1) = 0.31250, L4(1.1) = −0.09375,

and we find that our interpolating polynomial, evaluated at x = 1.1, is

p(1.1) = f1 L1(1.1) + f2 L2(1.1) + f3 L3(1.1) + f4 L4(1.1)
       = (−1.82)(−0.15625) + (−1.73)(0.93750) + (−1.40)(0.31250) + (−1.11)(−0.09375)
       = −1.670938
       = −1.67 to the number of decimal places to which the data were given.
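This evaluate-without-expanding strategy is easy to automate. A minimal sketch (plain Python; the function name lagrange_eval is our own) reproduces the calculation of Example 3 directly from the data:

```python
def lagrange_eval(xs, fs, x):
    """Evaluate the polynomial interpolating the points (xs[i], fs[i]) at x."""
    p = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        # Build L_i(x): zero at every other node, one at x_i
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        p += fi * Li
    return p

# Data from Example 3
xs = [0.8, 1.0, 1.4, 1.6]
fs = [-1.82, -1.73, -1.40, -1.11]
print(round(lagrange_eval(xs, fs, 1.1), 2))  # -1.67
```

Note that no expression for p(x) is ever formed; the Lagrange polynomials are evaluated only at the single point required.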

Key Point 2

Quote the answer only to the same number of decimal places as the given data (or to fewer places).


Task

Let p(x) be the polynomial of degree 3 which interpolates the data

x     0.1   0.2   0.3   0.4
f(x)  0.91  0.70  0.43  0.52

Evaluate p(0.15).

Your solution

Answer

We are interested in the Lagrange polynomials at the point x = 0.15, so we consider

L1(0.15) = (0.15 − x2)(0.15 − x3)(0.15 − x4) / ((x1 − x2)(x1 − x3)(x1 − x4))
         = (−0.05)(−0.15)(−0.25) / ((−0.1)(−0.2)(−0.3)) = 0.3125.

Similar calculations for the other Lagrange polynomials give

L2(0.15) = 0.9375, L3(0.15) = −0.3125, L4(0.15) = 0.0625,

and we find that our interpolating polynomial, evaluated at x = 0.15, is

p(0.15) = f1 L1(0.15) + f2 L2(0.15) + f3 L3(0.15) + f4 L4(0.15)
        = 0.91 × 0.3125 + 0.70 × 0.9375 + 0.43 × (−0.3125) + 0.52 × 0.0625
        = 0.838750
        = 0.84, to 2 decimal places.

The next Example is very much the same as Example 3 and the Task above. Try not to let the specific application, and the slight change of notation, confuse you.

Example 4

A designer wants a curve on a diagram he is preparing to pass through the points

x  0.25  0.5   0.75  1
y  0.32  0.65  0.43  0.10

He decides to do this by using an interpolating polynomial p(x). What is the y-value corresponding to x = 0.8?


Answer

We are interested in the Lagrange polynomials at the point v = 2.5, so we consider

L1(2.5) = (2.5 − v2)(2.5 − v3)(2.5 − v4)(2.5 − v5) / ((v1 − v2)(v1 − v3)(v1 − v4)(v1 − v5)) = 1.0

Similar calculations for the other Lagrange polynomials give

L2(2.5) = −5.0, L3(2.5) = 10.0, L4(2.5) = −10.0, L5(2.5) = 5.0,

and we find that our interpolating polynomial, evaluated at v = 2.5, is

p(2.5) = f1 L1(2.5) + f2 L2(2.5) + f3 L3(2.5) + f4 L4(2.5) + f5 L5(2.5)
       = 0.00 × 1.0 + 19.32 × (−5.0) + 90.62 × 10.0 + 175.71 × (−10.0) + 407.11 × 5.0
       = 1088.05

This gives us the approximation that the hull drag on the yacht at 2.5 m s⁻¹ is about 1100 N.

The following Example has time t as the independent variable, and two quantities, x and y, as dependent variables to be interpolated. We will see however that exactly the same approach as before works.

Example 5

An animator working on a computer generated cartoon has decided that her main character's right index finger should pass through the following (x, y) positions on the screen at the following times t

t  0     0.2   0.4   0.6
x  1.00  1.20  1.30  1.25
y  2.00  2.10  2.30  2.60

Use Lagrange polynomials to interpolate these data and hence find the (x, y) position at time t = 0. 5. Give x and y to 2 decimal places.

Solution

In this case t is the independent variable, and there are two dependent variables: x and y. We are interested in the Lagrange polynomials at the time t = 0.5, so we consider

L1(0.5) = (0.5 − t2)(0.5 − t3)(0.5 − t4) / ((t1 − t2)(t1 − t3)(t1 − t4))
        = (0.3)(0.1)(−0.1) / ((−0.2)(−0.4)(−0.6)) = 0.0625.

Similar calculations for the other Lagrange polynomials give

L2(0.5) = −0.3125, L3(0.5) = 0.9375, L4(0.5) = 0.3125.


Solution (contd.)

These values for the Lagrange polynomials can be used for both of the interpolations we need to do. For the x-value we obtain

x(0.5) = x1 L1(0.5) + x2 L2(0.5) + x3 L3(0.5) + x4 L4(0.5)
       = 1.00 × 0.0625 + 1.20 × (−0.3125) + 1.30 × 0.9375 + 1.25 × 0.3125
       = 1.30 to 2 decimal places

and for the y-value we get

y(0.5) = y1 L1(0.5) + y2 L2(0.5) + y3 L3(0.5) + y4 L4(0.5)
       = 2.00 × 0.0625 + 2.10 × (−0.3125) + 2.30 × 0.9375 + 2.60 × 0.3125
       = 2.44 to 2 decimal places.
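Because each Li(0.5) depends only on the times t1, ..., t4 and not on the data values, the four weights can be computed once and reused for both coordinates. A minimal sketch (plain Python; the helper name lagrange_weights is our own):

```python
def lagrange_weights(ts, t):
    """L_i(t) for each node t_i: these depend only on the nodes, not the data."""
    ws = []
    for i, ti in enumerate(ts):
        w = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                w *= (t - tj) / (ti - tj)
        ws.append(w)
    return ws

# Data from Example 5
ts = [0.0, 0.2, 0.4, 0.6]
xs = [1.00, 1.20, 1.30, 1.25]
ys = [2.00, 2.10, 2.30, 2.60]

w = lagrange_weights(ts, 0.5)  # [0.0625, -0.3125, 0.9375, 0.3125]
x05 = sum(wi * xi for wi, xi in zip(w, xs))
y05 = sum(wi * yi for wi, yi in zip(w, ys))
print(round(x05, 2), round(y05, 2))
```

A useful sanity check is that the weights always sum to 1, since the interpolant must reproduce a constant function exactly.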

Error in Lagrange interpolation

When using Lagrange interpolation through n points (x1, f1), (x2, f2), ..., (xn, fn), the error in the estimate of f(x) is given by

E(x) = ((x − x1)(x − x2) ... (x − xn) / n!) f⁽ⁿ⁾(η),

where η lies somewhere in the interval spanned by x, x1, ..., xn.

N.B. The value of η is not known precisely, only the interval in which it lies. Normally x will lie in the interval [x1, xn] (that's interpolation). If x lies outside the interval [x1, xn] then that's called extrapolation and a larger error is likely.

Of course we will not normally know what f is (indeed no f may exist for experimental data). However, sometimes f can at least be estimated. In the following (somewhat artificial) example we will be told f and use it to check that the above error formula is reasonable.

Example 6

In an experiment to determine the relationship between power gain (G) and power output (P ) in an amplifier, the following data were recorded.

P  5     7     8     11
G  0.00  1.46  2.04  3.42

(a) Use Lagrange interpolation to fit an appropriate quadratic, q(x), to estimate the gain when the output is 6.5. Give your answer to an appropriate accuracy.

(b) Given that G ≡ 10 log10(P/5), show that the actual error which occurred in the Lagrange interpolation in (a) lies within the theoretical error limits.


Task

(a) Use Lagrange interpolation to estimate f(8) to appropriate accuracy, given the table of values below, by means of the appropriate cubic interpolating polynomial.

x     2         5         7         9         10
f(x)  0.980067  0.877583  0.764842  0.621610  0.540302

Your solution

Answer

The most appropriate cubic passes through x at 5, 7, 9, 10.

x = 8, x1 = 5, x2 = 7, x3 = 9, x4 = 10

p(8) = L1(8) × 0.877583 + L2(8) × 0.764842 + L3(8) × 0.621610 + L4(8) × 0.540302

with

L1(8) = (8 − 7)(8 − 9)(8 − 10) / ((5 − 7)(5 − 9)(5 − 10)) = −0.05

and, similarly, L2(8) = 0.5, L3(8) = 0.75, L4(8) = −0.2, so that

p(8) = −0.05 × 0.877583 + 0.5 × 0.764842 + 0.75 × 0.621610 − 0.2 × 0.540302
     = 0.696689

Suitable accuracy is 0.6967 (rounded to 4 d.p.).


(b) Given that the table in (a) represents f (x) ≡ cos(x/10), calculate theoretical bounds for the estimate obtained:

Your solution

Answer

E(8) = ((8 − 5)(8 − 7)(8 − 9)(8 − 10) / 4!) f⁽⁴⁾(η),  5 ≤ η ≤ 10

f(η) = cos(η/10), so f⁽⁴⁾(η) = (1/10⁴) cos(η/10)

Hence

E(8) = (6/4!) × (1/10⁴) cos(η/10) = cos(η/10) / (4 × 10⁴),  η ∈ [5, 10]

Emin = cos(1) / (4 × 10⁴),  Emax = cos(0.5) / (4 × 10⁴)

This leads to

0.696689 + 0.000014 ≤ True Value ≤ 0.696689 + 0.000022
⇒ 0.696703 ≤ True Value ≤ 0.696711

We can conclude that the True Value is 0.69670 or 0.69671 to 5 d.p., or 0.6967 to 4 d.p. (actual value is 0.696707).
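The whole of (a) and (b) can be checked numerically. The sketch below (plain Python; the helper name lagrange_eval is our own) rebuilds the cubic estimate of cos(0.8) and confirms that the actual error falls inside the theoretical bounds.

```python
import math

def lagrange_eval(xs, fs, x):
    """Evaluate the polynomial interpolating (xs[i], fs[i]) at x."""
    p = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        Li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        p += fi * Li
    return p

nodes = [5.0, 7.0, 9.0, 10.0]
vals = [math.cos(t / 10) for t in nodes]
p8 = lagrange_eval(nodes, vals, 8.0)        # the estimate from part (a)

# Error formula: E(8) = (8-5)(8-7)(8-9)(8-10)/4! * f''''(eta),
# with f''''(x) = cos(x/10)/10**4 and eta somewhere in [5, 10].
coeff = (8 - 5) * (8 - 7) * (8 - 9) * (8 - 10) / math.factorial(4)
e_min = coeff * math.cos(1.0) / 10**4
e_max = coeff * math.cos(0.5) / 10**4
actual_error = math.cos(0.8) - p8
print(e_min <= actual_error <= e_max)  # True
```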


In order to minimise R (the sum of the squares of the vertical distances between the line y = mx + c and the data points) we can imagine sliding the clear ruler around on the page until the line looks right; that is, we can imagine varying the slope m and y-intercept c of the line. We therefore think of R as a function of the two variables m and c and, as we know from our earlier work on maxima and minima of functions, the minimisation is achieved when

∂R/∂c = 0  and  ∂R/∂m = 0.

(We know that this will correspond to a minimum because R has no maximum, for whatever value R takes we can always make it bigger by moving the line further away from the data points.) Differentiating R with respect to m and c gives

∂R/∂c = 2(mx1 + c − f1) + 2(mx2 + c − f2) + 2(mx3 + c − f3) + ... + 2(mxn + c − fn)

and

∂R/∂m = 2(mx1 + c − f1)x1 + 2(mx2 + c − f2)x2 + 2(mx3 + c − f3)x3 + ... + 2(mxn + c − fn)xn,

respectively. Setting both of these quantities equal to zero (and cancelling the factor of 2) gives a pair of simultaneous equations for m and c. This pair of equations is given in the Key Point below.

Key Point 3

The least squares best fit straight line to the experimental data

(x1, f1), (x2, f2), (x3, f3), ..., (xn, fn)

is

y = mx + c

where m and c are found by solving the pair of equations

c(Σ 1) + m(Σ xk) = Σ fk,
c(Σ xk) + m(Σ xk²) = Σ xk fk,

in which each sum Σ runs over the data points k = 1, ..., n. (The term Σ 1 is simply equal to the number of data points, n.)
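Key Point 3 is just a pair of simultaneous linear equations, so it can be solved directly. A minimal sketch (plain Python, using Cramer's rule; the function name least_squares_line is our own), checked against the data of Example 7:

```python
def least_squares_line(xs, fs):
    """Solve the pair of equations in Key Point 3 for the slope m and intercept c."""
    n = len(xs)
    sum_x = sum(xs)
    sum_xx = sum(x * x for x in xs)
    sum_f = sum(fs)
    sum_xf = sum(x * f for x, f in zip(xs, fs))
    # n*c + sum_x*m = sum_f  and  sum_x*c + sum_xx*m = sum_xf
    det = n * sum_xx - sum_x * sum_x
    c = (sum_f * sum_xx - sum_x * sum_xf) / det
    m = (n * sum_xf - sum_x * sum_f) / det
    return m, c

# Data of Example 7: the text obtains c = 5.17 and m = -16.45
m, c = least_squares_line([0.24, 0.26, 0.28, 0.30], [1.25, 0.80, 0.66, 0.20])
print(round(m, 2), round(c, 2))
```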


Example 7

An experiment is carried out and the following data obtained:

xn  0.24  0.26  0.28  0.30
fn  1.25  0.80  0.66  0.20

Obtain the least squares best fit straight line, y = mx + c, to these data. Give c and m to 2 decimal places.

Solution

For a hand calculation, tabulating the data makes sense:

xn    fn    xn²     xn fn
0.24  1.25  0.0576  0.3000
0.26  0.80  0.0676  0.2080
0.28  0.66  0.0784  0.1848
0.30  0.20  0.0900  0.0600
1.08  2.91  0.2936  0.7528

The bottom row gives the totals. The quantity Σ 1 counts the number of data points and in this case is equal to 4. It follows that the pair of equations for m and c are:

4c + 1.08m = 2.91
1.08c + 0.2936m = 0.7528

Solving these gives c = 5.17 and m = −16.45, and we see that the least squares best fit straight line to the given data is

y = 5.17 − 16.45x

Figure 4 shows how well the straight line fits the experimental data.

Figure 4


Task

An experiment is carried out and the data obtained are as follows:

xn  0.2   0.3   0.5   0.9
fn  5.54  4.02  3.11  2.16

Obtain the least squares best fit straight line, y = mx + c, to these data. Give c and m to 2 decimal places.

Your solution

Answer

Tabulating the data gives

xn   fn     xn²   xn fn
0.2  5.54   0.04  1.108
0.3  4.02   0.09  1.206
0.5  3.11   0.25  1.555
0.9  2.16   0.81  1.944
1.9  14.83  1.19  5.813

The bottom row gives the totals. The quantity Σ 1 counts the number of data points and in this case is equal to 4. It follows that the pair of equations for m and c are:

4c + 1.9m = 14.83
1.9c + 1.19m = 5.813

Solving these gives c = 5.74 and m = −4.28, and we see that the least squares best fit straight line to the given data is

y = 5.74 − 4.28x


Task

Power output P of a semiconductor laser diode, operating at 35 °C, as a function of the drive current I is measured to be

I  70    72    74    76
P  1.33  2.08  2.88  3.31

(Here I and P are measured in mA and mW respectively.) It is known that, above a certain threshold current, the laser power increases linearly with drive current. Use the least squares approach to fit a straight line, P = mI + c, to these data. Give c and m to 2 decimal places.

Your solution

Answer

Tabulating the data gives

I    P     I²     I × P
70   1.33  4900   93.10
72   2.08  5184   149.76
74   2.88  5476   213.12
76   3.31  5776   251.56
292  9.60  21336  707.54

The bottom row gives the totals. The quantity Σ 1 counts the number of data points and in this case is equal to 4. It follows that the pair of equations for m and c are:

4c + 292m = 9.6
292c + 21336m = 707.54

Solving these gives c = −22.20 and m = 0.34, and we see that the least squares best fit straight line to the given data is

P = −22.20 + 0.34I.
