"); grf.document.close(); } function dontshow(){ grf.window.close(); } function hlts(){ hg=window.open('','','toolbar=0,width=240,height=170'); hg.document.open(); hg.document.writeln("newton's highlights"); hg.document.writeln(" "); hg.document.close(); } function hltsclose(){ hg.window.close(); }
Curve Fitting
Curve fitting means approximating a given function f(x) by simpler functions such as polynomials, trigonometric functions, exponential functions and rational functions. The main difference between interpolation and curve fitting is that in interpolation the approximating curve must pass through the given data points. Here again, polynomial functions are used far more widely in applications than the other functions. The existence of a polynomial P(x) that approximates any continuous function f(x) on a finite interval [a, b] is guaranteed by the Weierstrass approximation theorem.
Weierstrass approximation theorem : If the function f(x) is continuous on a finite interval [a, b], then for any ε > 0 there exist an n = n(ε) and a polynomial P(x) of degree n such that
| f(x) - P(x) | < ε  for all x ∈ [a, b].
In curve fitting f(x) is taken as
f(x) ≅ P(x; c0, c1, . . ., cn) = c0φ0(x) + c1φ1(x) + . . . + cnφn(x)
where φi(x), i = 0, 1, . . ., n are (n + 1) chosen linearly independent functions (x^i for polynomials) and
ci, i = 0, 1, . . ., n are parameters to be determined.
The error in the approximation is defined as
E(f; c) = || f(x) - (c0φ0 + . . . + cnφn) ||
where || · || is a norm. The problem now is to find ci, i = 0, 1, . . ., n such that E(f; c) is as small as possible in the sense of the norm || · ||.
The most commonly used norms :

For discrete data, the Lp-norm, defined as

|| x ||p = ( Σ | xi |^p )^(1/p),   p ≥ 1,   the sum running over i = 1, . . ., n,

where x = {xi} is a sequence of real or complex numbers, is the most commonly used.
The special cases of the Lp-norm are p = 2 and p = ∞, i.e., respectively the Euclidean norm

|| x ||2 = ( Σ | xi |² )^(1/2)   (sum over i = 1, . . ., n)

and the infinity norm

|| x ||∞ = max | xi |   (maximum over i = 1, . . ., n).
Of these norms, the Euclidean norm is the most widely used, and it leads to least squares curve fitting. The error function in the least squares sense is defined as
E(f; ci) = Σ W( xi ) [ f( xi ) - ( c0φ0( xi ) + c1φ1( xi ) + . . . + cnφn( xi ) ) ]²

where the sum runs over the given data points xi and W( xi ) is some weighting function.
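
As an illustration (not part of the original notes), the short Python sketch below evaluates this weighted error for a trial coefficient vector, assuming the polynomial basis φj(x) = x^j and unit weights W(xi) = 1 by default; the data and coefficients in the last line are made up.

# Weighted least squares error E(f; c) for a trial coefficient vector c.
def lsq_error(x, f, c, w=None):
    # E = Σ W(xi) * ( f(xi) - Σj cj * xi**j )**2
    if w is None:
        w = [1.0] * len(x)                                   # W(xi) = 1, as in the notes
    E = 0.0
    for xi, fi, wi in zip(x, f, w):
        pxi = sum(cj * xi ** j for j, cj in enumerate(c))    # P(xi) in the monomial basis
        E += wi * (fi - pxi) ** 2
    return E

# error of the trial line 9 - 1.5x on a small made-up data set
print(lsq_error([-1, 0, 1, 2], [10, 9, 7, 5], [9.0, -1.5]))  # prints 1.5
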
Then minimization of E(f; ci) with φi(x) = x^i, i = 0, 1, . . ., n and W( xi ) = 1 gives the conditions

∂E/∂ci = 0   for i = 0, 1, . . ., n

called the normal equations. This gives a system of (n + 1) linear equations in the (n + 1) unknowns ci, i = 0, 1, . . ., n.

Normal equations :

Case(i)

If f(x) is approximated with a linear polynomial, i.e., P1(x) = c0 + c1x, and W(x) = 1, then

E(f; ci) = Σ W( xi ) [ f( xi ) - ( c0 + c1xi ) ]² = Σ [ f( xi ) - ( c0 + c1xi ) ]²

∂E/∂c0 = Σ 2 [ f( xi ) - c0 - c1xi ] ( -1 ) = 0

∂E/∂c1 = Σ 2 [ f( xi ) - c0 - c1xi ] ( -xi ) = 0

⇒ c0n + c1Σxi = Σfi   and   c0Σxi + c1Σxi² = Σxifi
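
A minimal Python sketch of Case (i), not taken from the notes: it forms the sums appearing in the two normal equations and solves the resulting 2 x 2 system by Cramer's rule; the data in the last line are illustrative only.

# Least squares straight line c0 + c1*x from the Case (i) normal equations.
def fit_line(x, f):
    n   = len(x)
    sx  = sum(x)                                  # Σxi
    sx2 = sum(xi * xi for xi in x)                # Σxi²
    sf  = sum(f)                                  # Σfi
    sxf = sum(xi * fi for xi, fi in zip(x, f))    # Σxifi
    det = n * sx2 - sx * sx                       # determinant of the 2x2 system
    c0 = (sf * sx2 - sx * sxf) / det
    c1 = (n * sxf - sx * sf) / det
    return c0, c1

# made-up data lying near the line 1 + x; expect roughly c0 ≈ 1.07, c1 ≈ 0.97
print(fit_line([0, 1, 2, 3], [1.1, 1.9, 3.2, 3.9]))
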
 
Case(ii)

If f(x) is approximated with a second degree polynomial, i.e., P2(x) = c0 + c1x + c2x², with W(x) = 1, then

E(f; ci) = Σ W( xi ) [ f( xi ) - ( c0 + c1xi + c2xi² ) ]² = Σ [ f( xi ) - ( c0 + c1xi + c2xi² ) ]²

∂E/∂c0 = Σ 2 [ f( xi ) - c0 - c1xi - c2xi² ] ( -1 ) = 0

∂E/∂c1 = Σ 2 [ f( xi ) - c0 - c1xi - c2xi² ] ( -xi ) = 0

∂E/∂c2 = Σ 2 [ f( xi ) - c0 - c1xi - c2xi² ] ( -xi² ) = 0

⇒ c0n + c1Σxi + c2Σxi² = Σfi
   c0Σxi + c1Σxi² + c2Σxi³ = Σfixi
   c0Σxi² + c1Σxi³ + c2Σxi⁴ = Σfixi²
 
Case(iii)

If f(x) is approximated with an mth degree polynomial, i.e., Pm(x) = c0 + c1x + . . . + cmx^m, with W(x) = 1, then the normal equations are:

c0n + c1Σxi + c2Σxi² + . . . + cmΣxi^m = Σfi
c0Σxi + c1Σxi² + c2Σxi³ + . . . + cmΣxi^(m+1) = Σfixi
. . .
c0Σxi^m + c1Σxi^(m+1) + c2Σxi^(m+2) + . . . + cmΣxi^(2m) = Σfixi^m
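
In matrix form these (m + 1) equations read A c = b with A[j][k] = Σxi^(j+k) and b[j] = Σfixi^j. The sketch below builds and solves this system; it is an illustration only and assumes NumPy is available for the linear solve.

import numpy as np

# Least squares polynomial of degree m via the Case (iii) normal equations.
def polyfit_normal(x, f, m):
    x = np.asarray(x, dtype=float)
    f = np.asarray(f, dtype=float)
    A = np.array([[np.sum(x ** (j + k)) for k in range(m + 1)] for j in range(m + 1)])
    b = np.array([np.sum(f * x ** j) for j in range(m + 1)])
    return np.linalg.solve(A, b)                  # c0, c1, ..., cm

# sampling y = 1 + 2x + 3x² exactly recovers the coefficients [1, 2, 3]
print(polyfit_normal([0, 1, 2, 3, 4], [1, 6, 17, 34, 57], 2))
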

Example - 1 :

Find the least squares line for the data points

xi :  -1    0    1    2    3    4    5    6
fi :  10    9    7    5    4    3    0   -1

Σxi = 20,  Σfi = 37,  Σxi² = 92,  Σxifi = 25

The normal equations for P1(x) = c0 + c1x are   c0n + c1Σxi = Σfi   and   c0Σxi + c1Σxi² = Σxifi

⇒ 8c0 + 20c1 = 37
    20c0 + 92c1 = 25

⇒ c0 = 8.643,  c1 = -1.607

The required line is P1(x) = 8.643 - 1.607x
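
The result can be reproduced with a few lines of plain Python (this check is not part of the original notes):

# Example 1: sums and the solution of the 2x2 normal equations.
x = [-1, 0, 1, 2, 3, 4, 5, 6]
f = [10, 9, 7, 5, 4, 3, 0, -1]

n   = len(x)                                  # 8
sx  = sum(x)                                  # Σxi   = 20
sx2 = sum(xi * xi for xi in x)                # Σxi²  = 92
sf  = sum(f)                                  # Σfi   = 37
sxf = sum(xi * fi for xi, fi in zip(x, f))    # Σxifi = 25

det = n * sx2 - sx * sx                       # 8*92 - 20² = 336
c0  = (sf * sx2 - sx * sxf) / det             # ≈  8.643
c1  = (n * sxf - sx * sf) / det               # ≈ -1.607
print(c0, c1)
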
 

Example - 2 :

Find the least squares second degree curve for the data

xi :  2       4       6       8       10
fi :  3.07    12.85   31.47   57.38   91.29

Let the curve be P2(x) = c0 + c1x + c2x². The normal equations are

c0n + c1Σxi + c2Σxi² = Σfi
c0Σxi + c1Σxi² + c2Σxi³ = Σfixi
c0Σxi² + c1Σxi³ + c2Σxi⁴ = Σfixi²

Consider the transformation X = (x - 6) / 2

xi :  2       4       6       8       10
Xi : -2      -1       0       1       2
fi :  3.07    12.85   31.47   57.38   91.29

ΣXi = 0,  ΣXi² = 10,  ΣXi³ = 0,  ΣXi⁴ = 34
Σfi = 196.06,  ΣfiXi = 220.97,  ΣfiXi² = 447.67
Now, using the above normal equations with xi replaced by Xi, we get

5c0 + 0 + 10c2 = 196.06

0 + 10c1 + 0 = 220.97

10c0 + 0 + 34c2 = 447.67

c0 = 31.276,  c1 = 22.097,  c2 = 3.9679

P(X) = 31.276 + 22.097X + 3.9679X²

or, substituting X = (x - 6) / 2 back, the required curve is P2(x) = 0.696 - 0.855x + 0.992x²
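
Example 2 can be checked the same way (a verification sketch, not part of the original notes): in the shifted variable X, ΣXi = ΣXi³ = 0, so c1 decouples and only a 2 x 2 system remains for c0 and c2.

# Example 2: fit in the shifted variable X = (x - 6)/2.
x = [2, 4, 6, 8, 10]
f = [3.07, 12.85, 31.47, 57.38, 91.29]
X = [(xi - 6) / 2 for xi in x]                    # [-2, -1, 0, 1, 2]

n    = len(X)                                     # 5
S2   = sum(Xi ** 2 for Xi in X)                   # ΣXi²   = 10
S4   = sum(Xi ** 4 for Xi in X)                   # ΣXi⁴   = 34
Sf   = sum(f)                                     # Σfi    = 196.06
SfX  = sum(fi * Xi for fi, Xi in zip(f, X))       # ΣfiXi  = 220.97
SfX2 = sum(fi * Xi ** 2 for fi, Xi in zip(f, X))  # ΣfiXi² = 447.67

c1  = SfX / S2                                    # ≈ 22.097
det = n * S4 - S2 * S2                            # 5*34 - 10² = 70
c0  = (Sf * S4 - S2 * SfX2) / det                 # ≈ 31.276
c2  = (n * SfX2 - S2 * Sf) / det                  # ≈ 3.968
print(c0, c1, c2)
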
 


