Title: Method of Moments
Description: To show how the method of moments determines an estimator, we first consider the case of one parameter. We start with independent random variables X1, X2, . . . chosen according to the probability density fX(x|θ) associated to an unknown parameter value θ. The common mean of the Xi, μX, is a function k(θ) of θ. For example, if the Xi are continuous random variables, then
Topic 13
Method of Moments
...
be independent random variables having a common distribution possessing a mean μM
...
$$\bar{M}_n = \frac{1}{n}\sum_{i=1}^{n} M_i \to \mu_M \qquad \text{as } n \to \infty$$
...
We start with independent random variables X1, X2, . . .
The common mean of the Xi, μX, is a function k(θ) of θ.
The law of large numbers states that
$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i \to \mu_X \qquad \text{as } n \to \infty$$
...
This can be turned into an estimator θ̂ by setting
$$\bar{X} = k(\hat{\theta})$$
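As a quick, hypothetical illustration of this one-parameter recipe (an exponential example, not drawn from this extract): if the Xi are exponential with rate θ, then k(θ) = EθX = 1/θ, and setting X̄ = k(θ̂) inverts immediately.

```latex
% Hypothetical illustration (not from the text): X_i ~ Exponential(\theta),
% so the common mean is k(\theta) = E_\theta X = 1/\theta.
\bar{X} = k(\hat{\theta}) = \frac{1}{\hat{\theta}}
\qquad \Longrightarrow \qquad
\hat{\theta} = \frac{1}{\bar{X}}
```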
...
We shall next describe the procedure in the case of a vector of parameters and then give several examples
...
Introduction to the Science of Statistics
...
chosen according to the probability distribution derived from the parameter value θ and m a real valued function, if k(θ) = Eθ m(X1), then
$$\frac{1}{n}\sum_{i=1}^{n} m(X_i) \to k(\theta) \qquad \text{as } n \to \infty$$
...
Write
$$\mu_m = E X^m = k_m(\theta) \qquad (13.1)$$
for the m-th moment.
...
• Step 1. Determine the functions km in equation (13.1) for the first d moments,
$$\mu_1 = k_1(\theta_1, \theta_2, \ldots, \theta_d), \quad \ldots, \quad \mu_d = k_d(\theta_1, \theta_2, \ldots, \theta_d),$$
obtaining d equations in d unknowns.
...
We then solve for the d parameters as a function of the moments
...
$$\theta_1 = g_1(\mu_1, \mu_2, \ldots, \mu_d), \quad \ldots, \quad \theta_d = g_d(\mu_1, \mu_2, \ldots, \mu_d) \qquad (13.2)$$
• Step 3. Given the data x = (x1, x2, . . . , xn), we compute the first d sample moments,
$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \quad \overline{x^2} = \frac{1}{n}\sum_{i=1}^{n} x_i^2, \quad \ldots, \quad \overline{x^d} = \frac{1}{n}\sum_{i=1}^{n} x_i^d.$$
Using the law of large numbers, we have, for each moment, m = 1,
...
• Step 4. Substituting the sample moments for the distributional moments, the solutions in (13.2) give us formulas for the method of moment estimators (θ̂1, θ̂2, . . . , θ̂d). For the data x, these estimates are
$$\hat{\theta}_1(x) = g_1(\bar{x}, \overline{x^2}, \ldots, \overline{x^d}), \quad \hat{\theta}_2(x) = g_2(\bar{x}, \overline{x^2}, \ldots, \overline{x^d}), \quad \ldots, \quad \hat{\theta}_d(x) = g_d(\bar{x}, \overline{x^2}, \ldots, \overline{x^d}).$$
How this abstract description works in practice can be best seen through examples
...
Examples
Example 13.
Let X1, X2, . . . be independent random variables, each with the Pareto density
$$f_X(x|\beta) = \frac{\beta}{x^{\beta+1}}, \qquad x > 1.$$
In this situation, we have one parameter, namely β.
For step 2, we solve for β as a function of the mean μ:
$$\beta = g_1(\mu) = \frac{\mu}{\mu - 1}.$$
A good estimator should have a small variance
...
we compute
$$g_1'(\mu) = -\frac{1}{(\mu - 1)^2},$$
giving, by the delta method, that β̂ has mean approximately equal to β and variance
$$\sigma_{\hat{\beta}}^2 \approx g_1'(\mu)^2 \frac{\sigma_X^2}{n} = \frac{1}{(\mu-1)^4}\cdot\frac{1}{n}\cdot\frac{\beta}{(\beta-1)^2(\beta-2)} = \frac{(\beta-1)^4}{n}\cdot\frac{\beta}{(\beta-1)^2(\beta-2)} = \frac{\beta(\beta-1)^2}{n(\beta-2)}.$$
As an example, let's consider the case with β = 3 and n = 100. Then
$$\sigma_{\hat{\beta}} \approx \sqrt{\frac{3\cdot 2^2}{100\cdot 1}} = \sqrt{0.12} \approx 0.346.$$
Recall that the probability transform states that
if the Xi are independent Pareto random variables, then Ui = FX (Xi ) are independent uniform random variables on
the interval [0, 1]
...
If
$$u = F_X(x) = 1 - x^{-3}, \quad \text{then} \quad x = (1-u)^{-1/3} = v^{-1/3}, \quad \text{where } v = 1 - u.$$
Consequently, $V_1^{-1/3}, V_2^{-1/3}, \ldots$ have the appropriate Pareto distribution.
...
> mean(betahat)
[1] 3.053254
> sd(betahat)
[1] 0.
...
The value 3.053 is close to the simulated value of 3.
...
The sample standard deviation value is close to the value of 0.346 estimated by the delta method.
...
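The chapter carries out this simulation in R; as a hedged cross-check, the same experiment can be sketched in Python (standard library only; the name betahat mirrors the R code, and the seed is arbitrary):

```python
# Sketch of the text's simulation: 1000 method of moments estimates of beta
# from samples of 100 Pareto(beta = 3) random variables, generated with the
# probability transform X = V^(-1/3), V uniform on (0, 1].
import random
from statistics import mean, stdev

random.seed(1)
beta, n, reps = 3, 100, 1000
betahat = []
for _ in range(reps):
    v = [1.0 - random.random() for _ in range(n)]   # V uniform on (0, 1]
    x = [vi ** (-1.0 / beta) for vi in v]           # Pareto(beta) draws
    xbar = mean(x)
    betahat.append(xbar / (xbar - 1.0))             # beta-hat = xbar/(xbar - 1)

print(mean(betahat), stdev(betahat))  # should be close to 3 and to 0.346
```

Each xbar exceeds 1 (every Pareto draw does), so the division is safe; the simulated standard deviation sits near the 0.346 delta-method value above.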
Exercise 13
...
The muon is an elementary particle with an electric charge of −1 and a spin (an intrinsic angular momentum) of 1/2.
...
2 µs
...
Since the muon's charge and spin are the same as those of the electron, a muon can be viewed as a much heavier version of the electron.
$$p + p \to p + n + \pi^+ \quad\text{or}\quad p + n \to n + n + \pi^+$$
From the subsequent decay of the pions (mean lifetime 26
...
The decay of a muon into a positron (e⁺), an electron neutrino (νe), and a muon antineutrino (ν̄μ),
$$\mu^+ \to e^+ + \nu_e + \bar{\nu}_\mu,$$
has a distribution angle t with density given by
$$f(t|\alpha) = \frac{1}{2\pi}(1 + \alpha\cos t), \qquad 0 \le t \le 2\pi,$$
with t the angle between the positron trajectory and the μ⁺-spin. The anisometry parameter α ∈ [−1/3, 1/3] depends on the polarization of the muon beam and the positron energy.
...
tn, give the method of moments estimate α̂ for α.
...
Example 13
...
The size of an animal population in a habitat of
interest is an important question in conservation biology
...
One estimation technique is to capture some of the animals, mark them and release them back
into the wild to mix randomly with the population
...
In this case, some of the animals were not in the
first capture and some, which are tagged, are recaptured
...
Thus, t and k are under the control of the experimenter.
...
We will use a method of moments strategy to estimate N
...
the proportion of the tagged fish in the second capture ≈ the proportion of tagged fish in the population,
$$\frac{r}{k} \approx \frac{t}{N}.$$
This can be solved for N to find N ≈ kt/r.
...
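For instance, with the values used in the simulation later in this section (t = 200 tagged fish, k = 400 in the second capture, and r = 40 recaptured):

```latex
% Plugging t = 200, k = 400, r = 40 into N \approx kt/r:
\hat{N} = \frac{kt}{r} = \frac{400 \times 200}{40} = 2000
```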
To begin, let
$$X_i = \begin{cases} 1 & \text{if the } i\text{-th individual in the second capture has a tag,} \\ 0 & \text{otherwise.} \end{cases}$$
The Xi are Bernoulli random variables with success probability
$$P\{X_i = 1\} = \frac{t}{N}.$$
We are sampling without replacement.
...
In words, we are saying that the probability model behind mark and recapture is one where the number recaptured is random and follows a hypergeometric distribution.
...
The proportion of tagged individuals, X̄ = (X1 + · · · + Xk)/k, has expected value
$$E\bar{X} = \frac{\mu}{k} = \frac{t}{N}.$$
Thus, N = kt/μ.
Now in this case, we are estimating μ, the mean number recaptured, with r, the actual number recaptured.
...
we replace μ in the previous equation by r.
...
We perform 1000
simulations of this experimental design
...
> r<-rep(0,1000)
> fish<-c(rep(1,200),rep(0,1800))
> for (j in 1:1000){r[j]<-sum(sample(fish,400))}
> Nhat<-200*400/r
The command sample(fish,400) creates a vector of length 400 of zeros and ones for, respectively, untagged
and tagged fish
...
This is repeated 1000 times and stored in the vector r.
...
> mean(r)
[1] 40.245705
> mean(Nhat)
[1] 2031.6233
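As a hedged cross-check of the R run above, the same design can be sketched in Python (standard library only; variable names mirror the R code, and the seed is arbitrary):

```python
# Sketch of the mark-recapture simulation: N = 2000 fish, t = 200 tagged,
# k = 400 captured the second time, repeated 1000 times.
import random
from statistics import mean

random.seed(2)
t, k, N, reps = 200, 400, 2000, 1000
fish = [1] * t + [0] * (N - t)                          # 1 = tagged, 0 = untagged
r = [sum(random.sample(fish, k)) for _ in range(reps)]  # tagged in each recapture
Nhat = [k * t / ri for ri in r]                         # method of moments kt/r

print(mean(r), mean(Nhat))  # near 40, and a little above 2000
```

The mean of Nhat lands a bit above the true N = 2000, the same upward bias visible in the R output.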
(Histograms of r and of Nhat from the 1000 simulations; vertical axes show Frequency.)
To estimate the population of pink salmon in Deep Cove Creek in southeastern Alaska, 1709 fish were tagged
...
The estimate for the population size is
$$\hat{N} = \frac{6375 \times 1709}{138} \approx 78948.4.$$
...
Relative fitness is quantified as the average number of surviving progeny of a particular genotype compared with the average number of surviving progeny of competing genotypes after a single generation.
A basic understanding of the distribution of fitness
effects is still in its early stages
...
His approach used a gamma-family of random variables and gave the estimate of α̂ = 0.23 and β̂ = 5.35.
...
Because we have two parameters, the method of
moments methodology requires us, in step 1, to determine the first two moments
...
Note that
$$\mu_1 = \frac{\alpha}{\beta} \qquad\text{and}\qquad \mu_2 = \frac{\alpha(\alpha+1)}{\beta^2} = \mu_1\cdot\frac{\alpha+1}{\beta}.$$
So set
$$\frac{\mu_1}{\mu_2 - \mu_1^2} = \frac{\alpha/\beta}{\alpha/\beta^2} = \beta \qquad\text{and}\qquad \frac{\mu_1^2}{\mu_2 - \mu_1^2} = \frac{\alpha^2/\beta^2}{\alpha/\beta^2} = \alpha$$
to obtain estimators
$$\hat{\beta} = \frac{\bar{X}}{\overline{X^2} - (\bar{X})^2}, \qquad\text{where}\qquad \overline{X^2} = \frac{1}{n}\sum_{i=1}^{n} X_i^2, \qquad\text{and}\qquad \hat{\alpha} = \hat{\beta}\bar{X} = \frac{(\bar{X})^2}{\overline{X^2} - (\bar{X})^2}.$$
...
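Before turning to the R run below, the estimators can be checked on simulated data; here is a hedged Python sketch (standard library only; note that random.gammavariate takes shape and scale, so the rate β enters as 1/β, and the seed is arbitrary):

```python
# Sketch: 1000 repetitions of 100 observations from a Gamma random variable
# with shape alpha = 0.23 and rate beta = 5.35, using the method of moments
# estimators beta-hat = m1/(m2 - m1^2) and alpha-hat = m1^2/(m2 - m1^2).
import random
from statistics import mean

random.seed(3)
alpha, beta = 0.23, 5.35
n, reps = 100, 1000
alphahat, betahat = [], []
for _ in range(reps):
    x = [random.gammavariate(alpha, 1.0 / beta) for _ in range(n)]  # scale = 1/rate
    m1 = mean(x)                        # first sample moment
    m2 = sum(xi * xi for xi in x) / n   # second sample moment
    betahat.append(m1 / (m2 - m1 * m1))
    alphahat.append(m1 * m1 / (m2 - m1 * m1))

print(mean(alphahat), mean(betahat))
```

The mean of alphahat should fall near the true 0.23; betahat is noticeably biased upward for samples this skewed, which foreshadows the large sd(betahat) in the R output below.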
(Figure 13.: densities for the Γ(0.23, 5.35) example; horizontal axis x.)
To investigate the method of moments on simulated data using R, we consider 1000 repetitions of 100 independent observations of a Γ(0.23, 5.35) random variable.
...
> mean(alphahat)
[1] 0.2599894
> sd(alphahat)
[1] 0.315644
> sd(betahat)
[1] 2.
...
> hist(alphahat,probability=TRUE)
> hist(betahat,probability=TRUE)
(Histograms of alphahat and of betahat.)
We will revisit this example using maximum likelihood
estimation in the hopes of reducing this variance
...
Indeed, from
the simulation, we have an estimate
...
Moreover, the two estimators α̂ and β̂ are fairly strongly positively correlated.
...
> cor(alphahat,betahat)
[1] 0.8120864
Answers to Selected Exercises
13.1. The first moment does not involve α, so we use the second moment,
$$\mu_2 = E T^2 = \frac{1}{2\pi}\int_0^{2\pi} t^2(1 + \alpha\cos t)\,dt = \frac{4\pi^2}{3} + 2\alpha.$$
Thus, α = (μ2 − 4π²/3)/2, and we obtain the method of moments estimate
$$\hat{\alpha} = \frac{1}{2}\left(\overline{t^2} - \frac{4\pi^2}{3}\right),$$
where $\overline{t^2}$ is the sample mean of the square of the observations.
(Figure 13.: the densities f(t|α) for α = −1/3 (red), 0 (black), 1/3 (blue), 1 (light blue).)
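As a sanity check on the exercise answer above (a Python sketch, not from the text; it uses simple acceptance-rejection sampling and an arbitrary test value α = 0.25):

```python
# Check: draw from f(t|alpha) = (1 + alpha*cos(t))/(2*pi) on [0, 2*pi] by
# acceptance-rejection, then recover alpha with (t2bar - 4*pi^2/3)/2.
import math
import random

random.seed(4)
alpha, n = 0.25, 100000
ts = []
while len(ts) < n:
    t = random.uniform(0.0, 2.0 * math.pi)
    # the density is bounded by (1 + alpha)/(2*pi), so accept t with
    # probability (1 + alpha*cos(t))/(1 + alpha)
    if random.random() * (1.0 + alpha) <= 1.0 + alpha * math.cos(t):
        ts.append(t)

t2bar = sum(t * t for t in ts) / n          # sample mean of the squares
alphahat = (t2bar - 4.0 * math.pi ** 2 / 3.0) / 2.0
print(alphahat)  # should be close to 0.25
```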
13.2. Let X be the random variable for the number of tagged fish in the second capture. Then X is a hypergeometric random variable with mean and variance
$$\mu_X = k\frac{t}{N}, \qquad \sigma_X^2 = k\frac{t}{N}\cdot\frac{N-t}{N}\cdot\frac{N-k}{N-1}.$$
Since N = g(μX) = kt/μX, we have
$$g'(\mu_X) = -\frac{kt}{\mu_X^2}.$$
By the delta method, replacing μX by r,
$$\hat{\sigma}_{\hat{N}} \approx |g'(r)|\,\hat{\sigma}_X \approx \sqrt{\frac{kt(k-r)(t-r)}{r^3}}.$$
For t = 200, k = 400 and r = 40, we have the estimate σ̂N̂ ≈ 268. This compares to the estimate of 276 from the simulation. For t = 1709, k = 6375 and r = 138, we have the estimate σ̂N̂ ≈ 6373.