
Approximating a Function


    Finding The Best Quadratic Approximation of a Function

    Yajun Yang and Sheldon P. Gordon

    Farmingdale State College of New York

    yangy@farmingdale.edu gordonsp@farmingdale.edu

    Abstract  This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is $f(x) = e^x$.

    Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact that three non-collinear points determine a unique quadratic function. Three different techniques for measuring the error in the approximations are considered.

    Keywords quadratic approximation, Taylor polynomial, error in the approximation, total error, root-square error, maximum error

     When most mathematicians think about the concept of approximating a function, they invariably think of it either in terms of local linearity or its natural extension, the Taylor polynomial approximations to the function. In a previous article [5], the authors investigated a variety of ways that a function can be approximated by a linear function. In this article, we extend some of those ideas to consider a variety of different ways to think about approximating a function using a quadratic function.

     To make things simple, it is standard procedure in numerical analysis to consider continuous functions that are monotonic with fixed concavity on an interval [a, b]; any function

    that has turning points or inflection points can be broken down into a set of such functions and each portion is treated separately. To make things concrete, we will consider the function

    $f(x) = e^x$ on the interval $[0, 1]$. Most of the ideas and developments that follow can be applied

    to most other functions with similar behavior properties, namely that they are either increasing or

    decreasing with fixed concavity on an interval. However, the conclusions regarding which approximation is best will depend both on the function and on the interval selected.

     Let's begin with the Taylor polynomial approximation $T_0(x)$ of degree 2 at $x_0 = 0$:

    $f(x) = e^x \approx 1 + x + \frac{x^2}{2} = T_0(x)$.


     How accurate is this approximation? Well, it is a perfect fit at $x_0$ and, if we stay close to $x_0$, it is extremely good. However, if our objective is to approximate the function across the entire interval, it is clearly not as good an approximation the farther we move away from $x_0$,

    as seen in Figure 1.

    Figure 1  Taylor polynomial approximation $T_0(x)$ of degree 2 at $x_0 = 0$; the solid curve is $f$, the dotted curve is $T_0$.

    Before proceeding, though, we need a way to measure how well an approximating function $P(x)$ fits a function $f(x)$ on an interval $[a, b]$. There are actually several different ways of doing this. Two of the simplest ways to measure the error in the approximation over the entire interval involve the vertical differences between the function $f$ and the approximating polynomial $P$ at a fixed number of points $x_1, x_2, \ldots, x_n$, for some positive integer $n$. One

    approach is to calculate the sum of the absolute values of these vertical differences and the other is to use the sum of the squares of the vertical differences. Both methods are used to avoid the cancellations that could occur if the two curves cross each other. Notice that both sums

    $\sum_{i=1}^{n} |f(x_i) - P(x_i)|$ and $\sum_{i=1}^{n} [f(x_i) - P(x_i)]^2$ increase as $n$ increases. An easy way to eliminate

    the effect of n from the error calculations is to divide either sum by n. We then have the

    following error formulas:

    $\frac{1}{n}\sum_{i=1}^{n} |f(x_i) - P(x_i)|$ and $\frac{1}{n}\sum_{i=1}^{n} [f(x_i) - P(x_i)]^2$.


    Both of these expressions can be thought of as finding the average of the vertical discrepancies between the original function and the approximating function at the n points.
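
    To make these discrete error formulas concrete, here is a minimal computational sketch (not from the article; it assumes Python with only the standard library) that evaluates both averaged sums for $f(x) = e^x$ and the Taylor quadratic $P(x) = 1 + x + x^2/2$ at $n$ equally spaced sample points on $[0, 1]$. The equally spaced sampling is one convenient choice; the article leaves the points unspecified.

    import math

    def f(x):
        return math.exp(x)

    def P(x):
        # quadratic Taylor polynomial T_0(x) of e^x at x_0 = 0
        return 1 + x + x**2 / 2

    def averaged_errors(n, a=0.0, b=1.0):
        # n equally spaced sample points in [a, b]
        xs = [a + i * (b - a) / (n - 1) for i in range(n)]
        abs_avg = sum(abs(f(x) - P(x)) for x in xs) / n
        sq_avg = sum((f(x) - P(x)) ** 2 for x in xs) / n
        return abs_avg, sq_avg

    print(averaged_errors(100))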

     One main drawback of using either formula is that in most cases the values of $f(x_i)$ are

    difficult to compute. However, both formulas lead to somewhat more sophisticated and more useful approaches, as we discuss. For instance, with the absolute value formulation,

    $\frac{1}{n}\sum_{i=1}^{n} |f(x_i) - P(x_i)| \;\to\; \frac{1}{b-a}\int_a^b |f(x) - P(x)|\,dx$, as $n \to \infty$.

    Suppose we partition the interval $[a, b]$ into $n$ uniformly spaced subintervals, so that the width of an individual subinterval is $\Delta x = \frac{b-a}{n}$, and so $\frac{1}{n} = \frac{\Delta x}{b-a}$. Define the points $x_i$ by

    $x_i = a + \left(i - \tfrac{1}{2}\right)\Delta x$,  $i = 1, 2, \ldots, n$.

    These are the midpoints of the $n$ evenly spaced subintervals of $[a, b]$. Then

    $\frac{1}{n}\sum_{i=1}^{n} |f(x_i) - P(x_i)| = \frac{1}{b-a}\sum_{i=1}^{n} |f(x_i) - P(x_i)|\,\Delta x$.

    The quantity $\sum_{i=1}^{n} |f(x_i) - P(x_i)|\,\Delta x$ is a Riemann sum associated with the definite integral

    $\int_a^b |f(x) - P(x)|\,dx$.
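
    This limiting behavior can be checked numerically. The following sketch (again an assumption about tooling, plain Python, not part of the article) evaluates the midpoint-based average $\frac{1}{n}\sum_{i=1}^{n} |f(x_i) - P(x_i)|$ for increasing $n$ and watches it settle toward the value of the integral $\int_0^1 |f(x) - P(x)|\,dx$ (here $b - a = 1$).

    import math

    f = math.exp
    P = lambda x: 1 + x + x**2 / 2                   # Taylor quadratic at x_0 = 0

    def midpoint_average(n, a=0.0, b=1.0):
        dx = (b - a) / n
        xs = [a + (i + 0.5) * dx for i in range(n)]  # midpoints of the n subintervals
        return sum(abs(f(x) - P(x)) for x in xs) / n

    for n in (10, 100, 1000, 10000):
        print(n, midpoint_average(n))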

    We define

    Error$_{\text{Total}} = \int_a^b |f(x) - P(x)|\,dx = \int_0^1 |e^x - P(x)|\,dx$.

    This error represents the area bounded between the curve and the approximating function. It measures the total error of the approximation over the interval. For our quadratic approximation $e^x \approx 1 + x + x^2/2$, we use the fact that $e^x \ge 1 + x + x^2/2$ across the interval $[0, 1]$, so

    Error$_{\text{Total}} = \int_0^1 \left|e^x - \left(1 + x + \tfrac{x^2}{2}\right)\right| dx = \int_0^1 \left[e^x - \left(1 + x + \tfrac{x^2}{2}\right)\right] dx$

    $= (e - 1) - 1 - \tfrac{1}{2} - \tfrac{1}{6} \approx 0.051615$.
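
    As a quick check on this arithmetic, the closed form $(e - 1) - 1 - \tfrac{1}{2} - \tfrac{1}{6}$ can be compared with a direct numerical integration of $|e^x - P(x)|$; the sketch below assumes SciPy is available and is not part of the original article.

    import math
    from scipy.integrate import quad

    P = lambda x: 1 + x + x**2 / 2
    closed_form = (math.e - 1) - 1 - 0.5 - 1/6
    numerical, _ = quad(lambda x: abs(math.exp(x) - P(x)), 0, 1)
    print(closed_form, numerical)    # both should print approximately 0.051615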

    If the approximating function is not always above or below the function f, however, integrating

    the absolute value of the difference can cause some significant computational problems.

     Alternatively, if we use the squares of the vertical differences instead of the absolute values of the vertical differences in the above approach, we have $\frac{1}{n}\sum_{i=1}^{n} [f(x_i) - P(x_i)]^2$ for the error of the approximation. As the number of subdivisions increases, we have

    $\frac{1}{n}\sum_{i=1}^{n} [f(x_i) - P(x_i)]^2 \;\to\; \frac{1}{b-a}\int_a^b [f(x) - P(x)]^2\,dx$, as $n \to \infty$.

    We can think of this integral as representing the average value of the function $[f(x) - P(x)]^2$ over the entire interval. This formula leads to a second measure of the error in the approximation:

    Error$_{\text{Root-Square}} = \left[\int_a^b [f(x) - P(x)]^2\,dx\right]^{1/2} = \left[\int_0^1 [e^x - P(x)]^2\,dx\right]^{1/2}$.

    This is essentially the $L^2$ norm from topology. Although it might seem somewhat unreasonable

    to do this, the root-square error is used quite frequently in numerical analysis because it is easier to program the square of the differences than the absolute value function and, more significantly, it is considerably faster to execute. Using this measure, we find that the error associated with the

    quadratic approximation $e^x \approx 1 + x + x^2/2$ on the interval $[0, 1]$ is

    Error$_{\text{Root-Square}} = \left[\int_0^1 \left(e^x - 1 - x - \tfrac{x^2}{2}\right)^2 dx\right]^{1/2} \approx 0.079682$,

    using Derive to evaluate the definite integral.
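
    Readers without Derive can reproduce this value with any quadrature routine; a minimal sketch, assuming SciPy is available, is the following.

    import math
    from scipy.integrate import quad

    P = lambda x: 1 + x + x**2 / 2
    integral, _ = quad(lambda x: (math.exp(x) - P(x)) ** 2, 0, 1)
    print(math.sqrt(integral))       # approximately 0.079682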

    The problem with the above two error estimates is that they provide a representative value to indicate how good, or poor, the fit is across the entire interval, but they do not provide information on the worst-case scenario. Thus, if a quadratic function were actually a very good approximation to a curve across most of the interval, but deviated sharply near some particular point, neither criterion would likely reveal that. So, a third measure of the error in the approximation is to use the maximum difference between the function and the polynomial. We define

    Error$_{\text{Max}} = \max_{a \le x \le b} |f(x) - P(x)| = \max_{0 \le x \le 1} |e^x - P(x)|$.

    This is equivalent to finding the maximum deviation between the function and the approximating function on the entire interval. It is known variously as the maximum norm, the uniform norm,

    the infinity norm, and the Chebyshev norm in numerical analysis. For our Taylor quadratic

    approximation $e^x \approx 1 + x + x^2/2$ to the exponential function on the interval $[0, 1]$, Figure 1

    obviously suggests that the maximum deviation occurs at the right endpoint of the interval and so

    this error, correct to six decimal places, would be $e - (1 + 1 + \tfrac{1}{2}) \approx 0.218282$. We show the difference function $e^x - (1 + x + \tfrac{x^2}{2})$ in Figure 2, which provides somewhat better insight into

    the behavior of the error than Figure 1 does. More generally, one would typically have to apply calculus to determine this maximum if it were not visually evident. Essentially, then, this error estimate is the formalization of the kind of error measurement we make when we talk about an approximation being good to a certain number of decimal places.
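
    For this measure, a dense grid scan over the interval is usually enough in practice; the sketch below (an assumed implementation in plain Python, not part of the article) recovers the maximum deviation for the Taylor quadratic, which in this case occurs at the right endpoint $x = 1$.

    import math

    P = lambda x: 1 + x + x**2 / 2
    grid = (i / 100000 for i in range(100001))          # 0, 0.00001, ..., 1
    err_max = max(abs(math.exp(x) - P(x)) for x in grid)
    print(err_max)                    # approximately 0.218282, i.e. e - 2.5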

    It is important to note that these three error estimates are not in the least commensurate in the sense that we cannot compare the level of accuracy based on the three different measures.

    Figure 2  Difference function $f - T_0$.

Improving on the Accuracy

     We next consider several ways to obtain a more accurate quadratic approximation to the exponential function on this interval. Perhaps the most obvious alternative to the quadratic Taylor polynomial approximation $T_0(x)$ at the left endpoint is to use the quadratic Taylor polynomial at some point within the interval. In particular, we consider the point $x = \tfrac{1}{2}$ at the center of the interval, as shown in Figure 3. The corresponding quadratic function, $T_{1/2}(x)$, is

    $T_{1/2}(x) = e^{1/2}\left[1 + \left(x - \tfrac{1}{2}\right) + \tfrac{1}{2}\left(x - \tfrac{1}{2}\right)^2\right]$.


    Figure 3  Taylor polynomial approximation $T_{1/2}(x)$ of degree 2 at $x = \tfrac{1}{2}$; the solid curve is $f$, the dotted curve is $T_{1/2}$.

    This parabola appears to be an almost perfect match to the exponential function across the entire interval. If you zoom in on different portions of this graph, you will observe that the quadratic polynomial crosses the exponential curve at $x = \tfrac{1}{2}$ on this interval. The corresponding three

    error values are then

    Error$_{\text{Total}} \approx 0.008659$ (after a simple integration),

    Error$_{\text{Root-Square}} \approx 0.013188$ (using Derive),

    Error$_{\text{Max}} \approx 0.039110$ (using Derive).
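
    These three values can also be reproduced numerically; the sketch below (assuming SciPy in place of Derive, an assumption about tooling) evaluates all three error measures for $T_{1/2}(x)$.

    import math
    from scipy.integrate import quad

    def T_half(x):
        h = x - 0.5
        return math.exp(0.5) * (1 + h + h**2 / 2)       # Taylor quadratic centered at 1/2

    diff = lambda x: math.exp(x) - T_half(x)
    total = quad(lambda x: abs(diff(x)), 0, 1, points=[0.5])[0]   # curves cross at x = 1/2
    root_square = math.sqrt(quad(lambda x: diff(x) ** 2, 0, 1)[0])
    err_max = max(abs(diff(i / 100000)) for i in range(100001))
    print(total, root_square, err_max)   # should agree with the values above, up to rounding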

Table 1

                           T_0(x)      T_{1/2}(x)   T_1(x)
    Error_Total            0.051615    0.008659     0.093906
    Error_Root-Square      0.079682    0.013188     0.139598
    Error_Max              0.218282    0.039110     0.359141

    To help in comparing the different error values, we tabulate the results in Table 1, and also add the corresponding results based on the quadratic Taylor polynomial $T_1(x)$ at the right-hand endpoint $x = 1$. From the table, we see that all three errors with the quadratic Taylor polynomial at the midpoint of the interval are significantly smaller than the corresponding values based on the quadratic approximation at the left endpoint, let alone those based on $x = 1$.

    Therefore, among the three possibilities, $T_{1/2}(x)$ is by far the best approximation to $e^x$, no matter which error measurement we use.

     The significant improvement in the accuracy for $T_{1/2}(x)$ compared to the other two choices suggests that we might gain even greater accuracy by choosing a suitable point $x = x_0$ somewhere other than at one of the endpoints or the midpoint of the interval as the point where we construct the quadratic Taylor approximation. However, there is no reason to expect that the same choice for $x_0$ will necessarily minimize all three of the error estimates; in fact, we should expect that each of the three will likely be smallest at a different point in the interval.

     For example, if we conduct a simple computer search routine, we find that Error$_{\text{Max}}$ achieves a minimal value of 0.034961, which corresponds to the point $x_0 = 0.5208$ (correct to four decimal places). This is somewhat better than what we obtained above by using the Taylor polynomial at $x_0 = 0.5$. For the function $f(x) = e^x$, it is possible to optimize the error analytically, but the ability to do this is a rarity and so we focus only on results from search routines, which apply to any function.

    Using a different search routine, we find that Error$_{\text{Total}}$ achieves a minimal value of 0.008659, which corresponds to the point $x_0 = 0.5$. Finally, a search approach yields a minimal value for Error$_{\text{Root-Square}}$ of 0.013145, which occurs when $x_0 = 0.5089$, and this is significantly smaller than the other values displayed in Table 1. We display all of these results, in addition to the ones we obtained above, in Table 2. Interestingly, all three of these improved error estimates occur relatively close to the midpoint $x = 0.5$.
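
    The article does not give the search code, but a simple grid search of the kind described is easy to set up. The following sketch is an assumed implementation (not the authors' routine), using SciPy for the integrals; it scans candidate centers $x_0$ and records which one minimizes each error measure.

    import math
    from scipy.integrate import quad

    def taylor2(x0):
        # quadratic Taylor polynomial of e^x centered at x0
        c = math.exp(x0)
        return lambda x: c * (1 + (x - x0) + (x - x0) ** 2 / 2)

    def three_errors(P):
        diff = lambda x: math.exp(x) - P(x)
        total = quad(lambda x: abs(diff(x)), 0, 1, limit=200)[0]
        root_square = math.sqrt(quad(lambda x: diff(x) ** 2, 0, 1)[0])
        err_max = max(abs(diff(i / 2000)) for i in range(2001))
        return total, root_square, err_max

    best = [(float("inf"), None)] * 3            # (error, x0) for Total, Root-Square, Max
    for k in range(1001):                        # candidate centers 0.000, 0.001, ..., 1.000
        x0 = k / 1000
        errs = three_errors(taylor2(x0))
        best = [min(b, (e, x0)) for b, e in zip(best, errs)]

    for name, (e, x0) in zip(("Total", "Root-Square", "Max"), best):
        print(name, "minimized near x0 =", x0, "with error about", round(e, 6))

    A finer grid, or a one-dimensional minimizer, would be needed to pin $x_0$ down to the four decimal places quoted above.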

Table 2

                           T_0(x)      T_{1/2}(x)   T_1(x)      x_0       Error at x_0
    Error_Total            0.051615    0.008659     0.093906    0.5       0.008659
    Error_Root-Square      0.079682    0.013188     0.139598    0.5089    0.013145
    Error_Max              0.218282    0.039110     0.359141    0.5208    0.034961


Using Parabolas through Three Points

     Instead of using a tangent parabola as our approximating quadratic, we next use the fact that a quadratic function is uniquely determined by any three non-collinear points. Suppose, for instance, that we take the two endpoints, $x = 0$ and $x = 1$, along with the midpoint of the interval. The parabola that passes through the three points $(0, 1)$, $(0.5, e^{0.5})$, and $(1, e)$ is, correct to six

    decimal places,

    $Q_{\{0,0.5,1\}}(x) = 0.841679x^2 + 0.876603x + 1$,

    so that

    $e^x \approx 0.841679x^2 + 0.876603x + 1$.
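
    As a check on these coefficients (a sketch assuming NumPy; not part of the article), note that a degree-2 fit through exactly three points is simply the interpolating parabola, which is why a quadratic regression routine returns the same answer.

    import numpy as np

    xs = np.array([0.0, 0.5, 1.0])
    ys = np.exp(xs)
    a, b, c = np.polyfit(xs, ys, 2)      # coefficients, highest degree first
    print(a, b, c)                       # approximately 0.841679, 0.876603, 1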

    (We can find such a polynomial very simply using the quadratic regression routine in any graphing calculator or in Excel, for instance.) We show this quadratic approximation in Figure 4 and observe that the parabola appears to be an excellent match to the exponential function across

    the entire interval. We also show the difference between $e^x$ and the approximating quadratic in

    Figure 5 and observe how small the difference remains across the entire interval. We should therefore expect that all three of the error estimates will likely be considerably smaller than the

    Figure 4  Parabola approximation $Q_{\{0,0.5,1\}}(x)$; the solid curve is $Q_{\{0,0.5,1\}}$, the dotted curve is $f$.


    Figure 5  Difference function $f - Q_{\{0,0.5,1\}}$.

    preceding values based on the Taylor approximations. In particular, we obtain

    Error$_{\text{Total}} \approx 0.008731$,

    Error$_{\text{Root-Square}} \approx 0.009665$,

    Error$_{\text{Max}} \approx 0.014421$.

    To help compare the different quadratics and how well they fit the exponential function, we extend Table 1 to include these values as well in Table 3.

Table 3

                           T_0(x)      T_{1/2}(x)   T_1(x)      Q_{0,0.5,1}(x)
    Error_Total            0.051615    0.008659     0.093906    0.008731
    Error_Root-Square      0.079682    0.013188     0.139598    0.009665
    Error_Max              0.218282    0.039110     0.359141    0.014421

We therefore see that Error$_{\text{Max}}$ and Error$_{\text{Root-Square}}$ are both significantly smaller for this quadratic than with any of the three Taylor approximations. The value for Error$_{\text{Total}}$ is 0.008731, and it is very close to the smallest value of Error$_{\text{Total}}$ associated with the Taylor approximations.

     Can we improve on this quadratic approximation? The parabola through three points is a perfect fit to the function at each of those three points and is extremely close to the function for values of $x$ close to those three points. Therefore, using the two endpoints of the interval actually results in our losing some of the advantages of such an approximation, because the nearby points outside the interval are not "counted". It therefore makes sense to select three points that are all inside the interval. For instance, suppose we take the uniformly spaced points $x = 0.25$, $0.50$, and $0.75$.

     The resulting quadratic function is $Q_{\{0.25,0.5,0.75\}}(x)$ and the corresponding approximation is

    $e^x \approx 0.828663x^2 + 0.837296x + 1.022912$.

    Note that the coefficients of this parabola haven't changed much compared to the preceding approximation.

     Now let's see what happens to the values of the error estimates. We extend the above

    table to include these results in Table 4.

Table 4

                           T_0(x)      T_{1/2}(x)   T_1(x)      Q_{0,0.5,1}(x)   Q_{0.25,0.5,0.75}(x)
    Error_Total            0.051615    0.008659     0.093906    0.008731         0.005430
    Error_Root-Square      0.079682    0.013188     0.139598    0.009665         0.008840
    Error_Max              0.218282    0.039110     0.359141    0.014421         0.029420

We therefore see that most of the error estimates have been reduced, other than Error$_{\text{Max}}$, which is roughly twice as large for $Q_{\{0.25,0.5,0.75\}}(x)$ as for $Q_{\{0,0.5,1\}}(x)$. Thus, in two of the three cases, we have achieved a greater degree of accuracy by using the parabola through these three points rather than the one through the endpoints and the midpoint of the interval.

     This suggests that we can likely improve on the error estimates still further by examining other sets of three points to determine a parabolic approximation. For instance, interested readers might want to explore what happens if the three points are 1/6, 1/2, and 5/6. The idea here is to choose one point from each of the equally divided subintervals [0, 1/3], [1/3, 2/3], and [2/3, 1] of [0, 1]. By selecting the three midpoints, one from each subinterval, we likely maximize the potential benefit of the approximating function being a perfect fit at the midpoints.
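
    For readers who want to carry out this exploration, the sketch below (assumed code, relying on NumPy and SciPy, not part of the article) fits the parabola through any chosen set of three nodes and reports the three error measures, here for the nodes 1/6, 1/2, and 5/6.

    import math
    import numpy as np
    from scipy.integrate import quad

    def parabola_through(nodes):
        # interpolating parabola through the points (x, e^x) at the three given nodes
        coeffs = np.polyfit(nodes, np.exp(nodes), 2)
        return lambda x: np.polyval(coeffs, x)

    def three_errors(P):
        diff = lambda x: math.exp(x) - P(x)
        total = quad(lambda x: abs(diff(x)), 0, 1, limit=200)[0]
        root_square = math.sqrt(quad(lambda x: diff(x) ** 2, 0, 1)[0])
        err_max = max(abs(diff(i / 100000)) for i in range(100001))
        return total, root_square, err_max

    print(three_errors(parabola_through(np.array([1/6, 1/2, 5/6]))))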

    Naturally, the above improvements lead to a search for a quadratic approximation that could minimize each of the three errors by finding, in some sense, the "best" three points in the interval, though what that "best" is will depend on the choice of the error estimate selected. For example, if we want the smallest approximation error as measured by Error$_{\text{Max}}$, we will look for three points
