Least Squares Fitting

Least squares fitting for the physical sciences.


Often, experiments cannot be designed such that, e.g., a reading on a voltmeter can be directly converted into what you want to measure via a simple linear model. Instead, a model of the experiment containing free parameters must be used. These free parameters can be varied, and when the model matches the experiment it can be claimed that these parameters have been measured.



Inherent in any analysis of this sort is the assumption that the model we are using is correct. If the model is unable to describe the data, or is simply incorrect, then no amount of fitting can draw valid conclusions from the experiment.
Errors and covariances

Fitting procedures of the sort used here can also yield errors in the parameterised quantities and covariances between these parameters.

The covariance between two parameters is a measure of how much the parameters can compensate for one another. Analysis of these covariances is of key importance when understanding a fit.
Basis of the method

Firstly, a model simulation is performed for an initial set of input parameters to evaluate the available measurements; we call these predicted quantities f_n, and the corresponding actual measurements y_n with errors ∆y_n. We give each experimental point a weight, w_n, based upon the error attributed to that particular data point, according to

$$ w_n = \frac{1}{(\Delta y_n)^2} . \quad (1) $$

The model is then re-run with each free parameter p_i varied in turn by a small amount ∆p_i. From these solutions and the initial solution, partial derivatives can be found at each point, n, from

$$ \frac{\partial f_n}{\partial p_i} = \frac{f_n' - f_n}{\Delta p_i} , \quad (2) $$

where f_n' is the prediction obtained with p_i shifted by ∆p_i.

Using the method to fit data

A matrix and a vector are then assembled in order to perform the fit, viz.

$$ M_{ij} = \sum_{n=1}^{N} \frac{\partial f_n}{\partial p_i} \frac{\partial f_n}{\partial p_j} w_n , \quad (3) $$

$$ b_i = -\sum_{n=1}^{N} \frac{\partial f_n}{\partial p_i} \left( f_n - y_n \right) w_n . \quad (4) $$

The suggested changes to the parameters are then found by solving the matrix equation M ∆p = b. Here, ∆p_i are the suggested changes in the initial parameters p_i.
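In matrix form, equations (3) and (4) amount to assembling weighted normal equations. The following is a minimal numpy sketch; the function name and array layout are illustrative, not taken from the notes:

```python
import numpy as np

def assemble_normal_equations(dfdp, residuals, w):
    """Build the matrix M (equation 3) and vector b (equation 4).

    dfdp      : (N, P) array of partial derivatives df_n/dp_i
    residuals : (N,) array of (f_n - y_n)
    w         : (N,) array of weights w_n = 1/(dy_n)**2
    """
    M = dfdp.T @ (w[:, None] * dfdp)   # M_ij = sum_n w_n df_n/dp_i df_n/dp_j
    b = -dfdp.T @ (w * residuals)      # b_i = -sum_n w_n df_n/dp_i (f_n - y_n)
    return M, b

# The suggested parameter step then solves M dp = b:
# dp = np.linalg.solve(M, b)
```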

A χ² merit function is introduced to grade the accuracy of a solution, defined as

$$ \chi^2 = \sum_{n=1}^{N} w_n \left( y_n - f_n \right)^2 . $$

A more useful quantity to look at is the normalised χ² (χ²_N), which is χ² normalised to the number of degrees of freedom (the number of data points minus the number of free parameters).

In practice a damping factor is also applied to the suggested parameter changes; at each iteration the step is tried with the damping factor unchanged, increased and decreased. This increases the speed of convergence. These three cases are processed and the χ² evaluated for each one, allowing the best damping factor to be used.
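Putting the pieces together, one possible implementation of the whole scheme combines the finite-difference derivatives of equation (2), the normal equations of (3) and (4), and a three-trial damping factor. The sketch below is an interpretation, not the notes' exact recipe: the 1% derivative steps and the choice to damp by simply scaling the suggested step are assumptions.

```python
import numpy as np

def fit(model, p, y, dy, n_iter=20, damping=1.0):
    """Iterative weighted least-squares fit (a sketch of the scheme above).

    model : callable mapping a parameter vector p to predictions f (length N)
    p     : initial parameter guesses
    y, dy : measurements and their errors
    """
    p = np.asarray(p, dtype=float)
    w = 1.0 / dy**2                                   # equation (1)

    def chi2(params):
        r = model(params) - y
        return np.sum(w * r**2)                       # merit function

    for _ in range(n_iter):
        f = model(p)
        # Finite-difference partial derivatives, equation (2), ~1% steps
        dp = 0.01 * np.where(p != 0, np.abs(p), 1.0)
        dfdp = np.column_stack([
            (model(p + dp[i] * np.eye(len(p))[i]) - f) / dp[i]
            for i in range(len(p))
        ])
        M = dfdp.T @ (w[:, None] * dfdp)              # equation (3)
        b = -dfdp.T @ (w * (f - y))                   # equation (4)
        step = np.linalg.solve(M, b)

        # Three trial damping factors (unchanged, increased, decreased);
        # keep whichever gives the lowest chi-squared.
        trials = [damping, damping * 2.0, damping / 2.0]
        damping = min(trials, key=lambda d: chi2(p + step / d))
        p = p + step / damping
    return p
```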


Two types of error can be taken into account:

• Errors in the measured data points, i.e. the statistical errors σ_n on the measurements y_n, together with any covariances ρ_n,m between them.

• Errors in fixed model parameters. The error in the ith fixed parameter (p_i) is denoted by σ_i, and the covariance between two fixed parameters (p_i and p_j) is given by ρ_i,j.


In order to take into account both types of errors, we define a new χ² merit function given by

$$ \chi^2 = \sum_{n=1}^{N} \sum_{m=1}^{N} w_{n,m} \left( f_n - y_n \right) \left( f_m - y_m \right) , \quad (8) $$

where w_{n,m} are the elements of the weighting matrix (W) given by

$$ W = S^{-1} , \quad (9) $$

where S is given by

$$ S_{n,m} = \rho_{n,m} \sigma_n \sigma_m + \sum_{i=1}^{P} \sum_{j=1}^{P} \rho_{i,j} \, \sigma_i \, \sigma_j \, \frac{\partial f_n}{\partial p_i} \frac{\partial f_m}{\partial p_j} . \quad (10) $$
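As a sketch, the weighting matrix of equations (9) and (10) can be assembled directly with numpy; all names here are illustrative:

```python
import numpy as np

def weight_matrix(sig_y, rho_y, dfdp_fixed, sig_p, rho_p):
    """Build S (equation 10) and return W = S^-1 (equation 9).

    sig_y      : (N,) errors on the measurements
    rho_y      : (N, N) correlation matrix of the measurements
    dfdp_fixed : (N, P) derivatives of predictions w.r.t. the fixed parameters
    sig_p      : (P,) errors on the fixed parameters
    rho_p      : (P, P) correlations between the fixed parameters
    """
    S = rho_y * np.outer(sig_y, sig_y)           # rho_nm sig_n sig_m
    cov_p = rho_p * np.outer(sig_p, sig_p)       # rho_ij sig_i sig_j
    S = S + dfdp_fixed @ cov_p @ dfdp_fixed.T    # fixed-parameter error term
    return np.linalg.inv(S)                      # W = S^-1
```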

In order to calculate the error in each fit parameter (and also the error in any modelled quantity) we need to construct a matrix, M, similar to that of equation 3,

$$ M_{ij} = \sum_{n=1}^{N} \sum_{m=1}^{N} w_{n,m} \frac{\partial f_n}{\partial p_i} \frac{\partial f_m}{\partial p_j} . \quad (11) $$

In either case (with or without correlated errors), we construct a covariance matrix, C, by inverting M (from equation 3 or equation 11),

$$ C = M^{-1} . \quad (12) $$

The error in the ith fit parameter is √C_ii, and it is convenient to define a normalised covariance matrix with these errors on its diagonal and the normalised covariances off it,

$$ c_{i,j} = \frac{C_{ij}}{\sqrt{C_{ii} C_{jj}}} \ (i \neq j) , \qquad c_{i,i} = \sqrt{C_{ii}} . \quad (14) $$

In addition, the covariance matrix can also be used to give an error in any modelled quantity, f_n, from

$$ \Delta f_n = \sqrt{ \sum_{i=1}^{P} \sum_{j=1}^{P} \frac{\partial f_n}{\partial p_i} \frac{\partial f_n}{\partial p_j} C_{ij} } . \quad (15) $$
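A short numpy sketch of equations (12), (14) and (15), again with illustrative names:

```python
import numpy as np

def fit_errors(M, dfdp):
    """Covariance matrix and derived errors (equations 12, 14 and 15).

    M    : (P, P) matrix from equation 3 or 11
    dfdp : (N, P) partial derivatives of the predictions
    """
    C = np.linalg.inv(M)                 # equation (12)
    err = np.sqrt(np.diag(C))            # errors on the fit parameters
    norm_cov = C / np.outer(err, err)    # normalised covariances (equation 14)
    # Error in each modelled quantity f_n, equation (15)
    df = np.sqrt(np.einsum('ni,ij,nj->n', dfdp, C, dfdp))
    return err, norm_cov, df
```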

Measurement   Index   Value    Error
x @ t=3s      1        11.3    0.71
x @ t=6s      2         …      0.71
x @ t=9s      3        35.3    2.0
y @ t=1s      4         …      2.0
y @ t=6s      5       −59.2    2.0
y @ t=7s      6         …      2.0

Table 1: Experimental measurements for the horizontal and vertical position of a ball rolling off a table.

The treatment of the covariance matrix, and the derivation of the errors from it, implicitly assumes that all the statistical errors have a Gaussian distribution.

The covariance matrix as defined by equations 12 and 14 contains an important set of numbers which must be studied carefully before drawing conclusions from any fit. We will also consider an example where the solution can be found analytically.
Consider a ball rolling off a table, with its horizontal and vertical motion measured at several times. Table 1 shows these six measurements; we will label them y_1, y_2, …, y_6. The aim here is to determine the initial speed at which the ball was rolling, v, and also the acceleration due to gravity, a, from the measurements. The measurements are modelled as

$$ x = v\,t , \qquad y = -a\,t , \quad (16) $$

so the model is linear in both free parameters. If we take initial guesses of v = 3 ms−1 and a = 8 ms−2 then running our model gives the results shown in table 2.


Measurement   Index   Modelled Value
x @ t=3s      1         9.0000
x @ t=6s      2        18.0000
x @ t=9s      3        27.0000
y @ t=1s      4        −8.0000
y @ t=6s      5       −48.0000
y @ t=7s      6       −56.0000

Table 2: Modelled data for an initial guess of v = 3 ms−1 and a = 8 ms−2.

Moving v and a each in turn by 10% gives two new sets of modelled data, shown in tables 3 and 4 respectively.

Measurement   Index   Modelled Value
x @ t=3s      1         9.9000
x @ t=6s      2        19.8000
x @ t=9s      3        29.7000
y @ t=1s      4        −8.0000
y @ t=6s      5       −48.0000
y @ t=7s      6       −56.0000

Table 3: Modelled data with v moved by 10%. The parameters used here are v = 3.3 ms−1 and a = 8 ms−2.

Measurement   Index   Modelled Value
x @ t=3s      1         9.0000
x @ t=6s      2        18.0000
x @ t=9s      3        27.0000
y @ t=1s      4        −8.8000
y @ t=6s      5       −52.8000
y @ t=7s      6       −61.6000

Table 4: Modelled data with a moved by 10%. The parameters used here are v = 3 ms−1 and a = 8.8 ms−2.


We can now construct our partial derivatives (equation 2), where p_1 = v and p_2 = a:

$$ \frac{\partial f_1}{\partial p_1} = 3.0 \quad \frac{\partial f_2}{\partial p_1} = 6.0 \quad \frac{\partial f_3}{\partial p_1} = 9.0 \quad \frac{\partial f_4}{\partial p_1} = \frac{\partial f_5}{\partial p_1} = \frac{\partial f_6}{\partial p_1} = 0.0 $$

$$ \frac{\partial f_1}{\partial p_2} = \frac{\partial f_2}{\partial p_2} = \frac{\partial f_3}{\partial p_2} = 0.0 \quad \frac{\partial f_4}{\partial p_2} = -1.0 \quad \frac{\partial f_5}{\partial p_2} = -6.0 \quad \frac{\partial f_6}{\partial p_2} = -7.0 \quad (17) $$

and our weights (equation 1),

$$ w_1 = 2.00 \quad w_2 = 2.00 \quad w_3 = 0.25 \quad w_4 = 0.25 \quad w_5 = 0.25 \quad w_6 = 0.25 . \quad (18) $$

Hence our matrix (equation 3),

$$ M = \begin{pmatrix} 110.25 & 0 \\ 0 & 21.5 \end{pmatrix} \quad (19) $$

and vector (equation 4),

$$ b = \begin{pmatrix} 109.3 \\ 39.3 \end{pmatrix} . \quad (20) $$

We can now solve the equation M ∆p = b:

$$ \Delta p = M^{-1} b = \begin{pmatrix} 110.25 & 0 \\ 0 & 21.5 \end{pmatrix}^{-1} \begin{pmatrix} 109.3 \\ 39.3 \end{pmatrix} = \begin{pmatrix} 0.99 \\ 1.83 \end{pmatrix} . \quad (25) $$
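The arithmetic above can be checked in a few lines of numpy, taking the derivatives and weights of equations (17) and (18) and the vector b of equation (20) as given:

```python
import numpy as np

# Partial derivatives of the six modelled points with respect to (v, a),
# equation (17): x depends only on v, y only on a.
dfdp = np.zeros((6, 2))
dfdp[:3, 0] = [3.0, 6.0, 9.0]      # df/dv = t for the x measurements
dfdp[3:, 1] = [-1.0, -6.0, -7.0]   # df/da = -t for the y measurements

# Weights from equation (18)
w = np.array([2.0, 2.0, 0.25, 0.25, 0.25, 0.25])

M = dfdp.T @ (w[:, None] * dfdp)   # equation (19): diag(110.25, 21.5)
b = np.array([109.3, 39.3])        # equation (20), as quoted above
dp = np.linalg.solve(M, b)         # equation (25): roughly (0.99, 1.83)
print(M)
print(dp)
```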

Thus our improved parameters are v = 4.0 ms−1 and a = 9.83 ms−2.

The normalised χ² for our initial guess was about 52; for the improved parameters it is 0.86. It is worth noting a special feature of this example:

• The system was completely linear, so the solution was found without the need for iteration; again, a situation which will seldom occur in a real experiment.

The next example must instead be fitted iteratively. This is more indicative of real life.


Figure 1: Experimental data which will be fitted with a Gaussian.

Fitting a Gaussian

Figure 1 shows an example of data which could be measured during an experiment. Our model of the data is now

$$ f_i = p_1 + p_2 \exp\left( -\frac{(x_i - p_3)^2}{2 p_4^2} \right) . \quad (26) $$
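As an illustration, a model of this form can be fitted with scipy's curve_fit, which implements a damped least-squares scheme of the same family as the method described above. Since the measured data of figure 1 are not reproduced here, the sketch below generates stand-in data, and the true parameter values and noise level are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, p1, p2, p3, p4):
    """Equation (26): a flat background plus a Gaussian."""
    return p1 + p2 * np.exp(-(x - p3)**2 / (2.0 * p4**2))

# Synthetic stand-in for the data in figure 1 (illustrative values only)
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
dy = np.full_like(x, 0.5)
y = gaussian(x, 1.3, 13.3, 5.0, 0.6) + rng.normal(0.0, dy)

# Initial guesses in the spirit of table 5 (the height guess is assumed)
p0 = [3.0, 15.0, 5.0, 0.8]
popt, C = curve_fit(gaussian, x, y, p0=p0, sigma=dy, absolute_sigma=True)

err = np.sqrt(np.diag(C))            # parameter errors
norm_cov = C / np.outer(err, err)    # normalised covariances (equation 14)
print(popt, err)
```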

For our initial estimates, we can look at the data (as shown in figure 1) and guess some parameters; these guesses are listed in table 5. Figure 2 shows how this guess compares to the measured data.

Parameter   Physical meaning        Initial guess
p1          Height of Background    3.0
p2          Height of Gaussian      …
p3          Position of Gaussian    5.0
p4          Width of Gaussian       0.8

Table 5: Initial parameters used to attempt to fit the Gaussian shown in figure 1.
Parameter   Physical meaning        Fitted Value   Initial guess
p1          Height of Background    1.…            3.0
p2          Height of Gaussian      13.316413      …
p3          Position of Gaussian    5.…            5.0
p4          Width of Gaussian       0.59928457     0.8

Table 6: Final fit parameters for the data shown in figure 1; the fit is shown in figure 3.

The normalised χ² for the fit is below one, and the parameters it finds are given in table 6. The fit shown in figure 3 gives a normalised covariance matrix (equation 14) of:
       p1       p2       p3       p4
p1    0.184   −0.244   −0.421     …
p2   −0.244    0.540    0.466   −0.458
p3   −0.421    0.466    0.030   −0.372
p4     …      −0.458   −0.372    0.030      (27)

The diagonal terms of this matrix are the errors in our fit parameters, so we can quote our derived values as:

Height of Background    1.…  ± 0.184
Height of Gaussian      13.3 ± 0.540
Position of Gaussian    5.…  ± 0.030
Width of Gaussian       0.599 ± 0.030
The off-diagonal elements are also important: these give the covariance between free parameters. Small absolute values indicate a more independent measurement of the free parameters. From simply looking at the fit, it is clear that one can't really compensate a change in the background by changing the width of the Gaussian (and vice versa), hence there is a weak covariance between these parameters.


Figure 4: Experimental data which will be fitted with a Gaussian; these data define a Gaussian less well than the original data shown in figure 1.

There is a positive covariance between the position and the height of the Gaussian. This covariance says that if you increase the position then you must also increase the height to satisfy the data.

There is a negative covariance between the width and the height of the Gaussian. This also isn't unexpected: a taller, narrower Gaussian can describe the data almost as well as a shorter, wider one.

The covariances in this fit are not particularly high; depending on the experiment, one should be aiming for (normalised) covariances below 0.4.

Applying the same methodology as before, we can fit the same 4-parameter model to the data shown in figure 4; this fit is shown in figure 5. The diagonals of the resulting normalised covariance matrix are now very high, meaning that the parameters have not been measured accurately: the error in the height of the background (2.854) is comparable to the baseline height of the data itself, the error in the height of the Gaussian is 4.231, and the normalised covariances between these parameters approach unity (0.993 to 0.999). We can see that the only parameter we can really measure is the position of the Gaussian; this has a low error (0.150) and also a low covariance with the other parameters.
Fixing and coupling parameters

It is possible to both 'fix' and 'couple' parameters. Fixing a parameter implies setting it equal to a constant (i.e. it is no longer a free parameter). Coupling parameters means expressing one parameter in terms of another, so that only one of them is varied by the fit. For example, in spectroscopy we often know an expected line ratio, so when performing a fit we can say that the height of one line is always, e.g., double the height of another line.
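As a sketch of coupling, the known ratio can simply be built into the model, so the fitter never sees the second height as a separate parameter; the model and numbers here are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lines(x, height, pos1, pos2, width):
    """Spectrum model with a coupled height ratio: the second line is
    always double the height of the first, so only one height is fitted.
    The background has been fixed by leaving it out of the parameter list."""
    line1 = height * np.exp(-(x - pos1)**2 / (2 * width**2))
    line2 = 2.0 * height * np.exp(-(x - pos2)**2 / (2 * width**2))
    return line1 + line2

# curve_fit then varies only the four remaining free parameters:
# popt, pcov = curve_fit(two_lines, x, y, p0=[10.0, 4.0, 6.0, 0.5])
```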


References

Marquardt D W 1963 J. Soc. for Ind. and Appl. Math. 11 431–441