Generating Functions


Properties of the moment generating function
(i) The moment generating function of X is unique in the sense that, if two random variables X and Y have the same moment generating function (m.g.f), M_X(t) = M_Y(t) for t in an interval containing 0, then X and Y have the same distribution.

(ii) If X and Y are independent random variables, then M_{X+Y}(t) = M_X(t) M_Y(t). That is, the m.g.f of the sum of two independent random variables is the product of the m.g.fs of the individual random variables.

(iii) Let Y = aX + b, where a and b are constants. Then M_Y(t) = E[e^{t(aX+b)}] = e^{bt} M_X(at).
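To make property (ii) concrete, here is a minimal Monte Carlo sketch (an illustration added here, not part of the notes, assuming numpy is available). It compares the empirical m.g.f of a sum of independent variables with the product of the individual empirical m.g.fs; the choice of Exp(1) and N(0,1) is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
t = 0.3

x = rng.exponential(scale=1.0, size=n)  # X ~ Exp(1), with M_X(t) = 1/(1-t) for t < 1
y = rng.normal(size=n)                  # Y ~ N(0,1), with M_Y(t) = exp(t^2/2), independent of X

# Property (ii): M_{X+Y}(t) should equal M_X(t) * M_Y(t)
lhs = np.exp(t * (x + y)).mean()
rhs = np.exp(t * x).mean() * np.exp(t * y).mean()
print(lhs, rhs, np.exp(t**2 / 2) / (1 - t))  # all three agree up to Monte Carlo error
```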

Deriving moments using the moment generating function
An advantage of the moment generating function is its ability to give the moments of the random variable. The Maclaurin series of the function e^{tx} is given by

e^{tx} = 1 + tx + \frac{(tx)^2}{2!} + \frac{(tx)^3}{3!} + \cdots + \frac{(tx)^n}{n!} + \cdots

By using the fact that the expected value of the sum equals the sum of the expected values, the moment generating function can be written as

M_X(t) = E[e^{tX}] = E\Big[1 + tX + \frac{(tX)^2}{2!} + \frac{(tX)^3}{3!} + \cdots + \frac{(tX)^n}{n!} + \cdots\Big]
= 1 + tE[X] + \frac{t^2}{2!}E[X^2] + \frac{t^3}{3!}E[X^3] + \cdots + \frac{t^n}{n!}E[X^n] + \cdots

Taking the derivative of M_X(t) with respect to t, we obtain

\frac{dM_X(t)}{dt} = E[X] + tE[X^2] + \frac{t^2}{2!}E[X^3] + \cdots + \frac{t^{n-1}}{(n-1)!}E[X^n] + \cdots

Substituting t = 0: M_X'(0) = E[X].
Similarly, evaluating the second derivative at t = 0, we obtain M_X''(0) = E[X^2]. In general, the r-th derivative evaluated at t = 0 gives the r-th moment:

\frac{d^r M_X(t)}{dt^r}\Big|_{t=0} = M_X^{(r)}(0) = E[X^r]
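As an illustrative aside (not from the notes, assuming sympy is available), a computer algebra system can carry out this differentiation directly; the sketch below uses the m.g.f \lambda/(\lambda - t) of an Exponential(\lambda) random variable as an example:

```python
import sympy as sp

t = sp.symbols('t')
lam = sp.symbols('lambda', positive=True)

# m.g.f of an Exponential(lambda) random variable
M = lam / (lam - t)

# r-th raw moment: E[X^r] = M^{(r)}(0)
for r in (1, 2, 3):
    print(r, sp.simplify(sp.diff(M, t, r).subs(t, 0)))  # 1/lambda, 2/lambda**2, 6/lambda**3
```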
Examples
(a) Let X be a random variable having p.d.f

f(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2}, \quad -\infty < x < \infty

Find the m.g.f of X.

M_X(t) = E[e^{tX}] = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}x^2} e^{tx} \, dx
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x^2 - 2tx)} \, dx
= \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x^2 - 2tx + t^2 - t^2)} \, dx
= e^{\frac{t^2}{2}} \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x - t)^2} \, dx
= e^{\frac{t^2}{2}}

This follows because \frac{1}{\sqrt{2\pi}} e^{-\frac{1}{2}(x-t)^2} is the p.d.f of a random variable having a normal distribution with mean t and variance 1, so the remaining integral equals 1.
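As a quick numerical sanity check (an addition for illustration, assuming numpy and scipy are available), the integral defining M_X(t) can be evaluated directly and compared with e^{t^2/2}:

```python
import numpy as np
from scipy.integrate import quad

def mgf_std_normal(t):
    """E[e^{tX}] for X ~ N(0,1), by direct numerical integration of e^{tx} f(x)."""
    f = lambda x: np.exp(t * x - 0.5 * x**2) / np.sqrt(2 * np.pi)
    value, _ = quad(f, -np.inf, np.inf)
    return value

for t in (0.0, 0.5, 1.0):
    print(t, mgf_std_normal(t), np.exp(t**2 / 2))  # the last two columns agree
```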
...

Finally, let us use the m.g.f of Y to find E[Y] and Var(Y).
...
(b) Let X be a binomial random variable with p.d.f

P(X = x) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, \ldots, n, \quad 0 < p < 1

Find the m.g.f of X.

M_X(t) = \sum_{x=0}^{n} P(X = x) e^{tx}
= \sum_{x=0}^{n} \binom{n}{x} p^x (1-p)^{n-x} e^{tx}
= \sum_{x=0}^{n} \binom{n}{x} (pe^t)^x (1-p)^{n-x}
= [(1-p) + pe^t]^n
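The closed form can be checked against the defining sum; a small illustrative sketch (not part of the notes), with n = 10, p = 0.3, t = 0.7 chosen arbitrarily:

```python
from math import comb, exp

def mgf_binomial_sum(n, p, t):
    # direct sum: E[e^{tX}] = sum_x C(n,x) p^x (1-p)^(n-x) e^{tx}
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) * exp(t * x) for x in range(n + 1))

def mgf_binomial_closed(n, p, t):
    # the closed form derived above: [(1-p) + p e^t]^n
    return ((1 - p) + p * exp(t))**n

print(mgf_binomial_sum(10, 0.3, 0.7), mgf_binomial_closed(10, 0.3, 0.7))  # agree
```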


Joint moment generating function for a random vector:
Let X = (X_1, X_2, \ldots, X_p)' be a p-dimensional random vector. The joint moment generating function of the vector is defined by

M_X(t) = E[e^{t'X}]

if the expectation exists for all values t = (t_1, t_2, \ldots, t_p)'. For a continuous random vector,

M_X(t) = E[\exp(t_1 X_1 + t_2 X_2 + \cdots + t_p X_p)] = \int_{x_1} \int_{x_2} \cdots \int_{x_p} \exp\Big(\sum_{k=1}^{p} t_k x_k\Big) f(x_1, x_2, \ldots, x_p) \, dx_1 \cdots dx_p

or, for a discrete random vector,

M_X(t) = E[\exp(t_1 X_1 + t_2 X_2 + \cdots + t_p X_p)] = \sum_{x_1} \sum_{x_2} \cdots \sum_{x_p} \exp\Big(\sum_{k=1}^{p} t_k x_k\Big) f(x_1, x_2, \ldots, x_p)
Marginal m.g.fs: the marginal m.g.f of X_i, i = 1, 2, \ldots, p, can be obtained by substituting t_j = 0 for all j \neq i in the joint m.g.f; i.e.

M_{X_i}(t_i) = M_X(0, \ldots, 0, t_i, 0, \ldots, 0)



Example Suppose that X = (X1 , X2 , X3 ) has the joint m
...
f


1 2
2
2
MX1 ,X2 ,X3 (t1 , t2 , t3 ) = exp (5t1 + 3t2 + 2t3 + 4t1 t2 + 6t1 t3 )
2
Find the marginal m
...
f of X2 and X1 , X3
Setting t_1 = t_3 = 0 gives M_{X_2}(t_2) = \exp(\frac{3}{2}t_2^2), and setting t_2 = 0 gives M_{X_1,X_3}(t_1, t_3) = \exp[\frac{1}{2}(5t_1^2 + 2t_3^2 + 6t_1 t_3)].

Moments from the joint m.g.f: E[X_i^r] can be obtained by differentiating the joint m.g.f r times with respect to t_i and then setting t_j, j = 1, 2, \ldots, p, equal to zero:

E[X_i^r] = \frac{\partial^r}{\partial t_i^r} M_X(t) \Big|_{t=0}

E[X_i^r X_j^s] = \frac{\partial^r}{\partial t_i^r} \frac{\partial^s}{\partial t_j^s} M_X(t) \Big|_{t=0}

Example: Using the previous example, find E[X_1^2] and E[X_2 X_3].

To find E[X_1^2]: differentiating the joint m.g.f twice with respect to t_1 gives

\frac{\partial^2}{\partial t_1^2} M_X(t) = \big[5 + (5t_1 + 2t_2 + 3t_3)^2\big] M_X(t)

so, at t = 0, E[X_1^2] = 5.

To find E[X_2 X_3]:

E[X_2 X_3] = \frac{\partial}{\partial t_2} \frac{\partial}{\partial t_3} \exp\Big[\frac{1}{2}(5t_1^2 + 3t_2^2 + 2t_3^2 + 4t_1 t_2 + 6t_1 t_3)\Big] \Big|_{(t_1,t_2,t_3)=(0,0,0)}

Now

\frac{\partial}{\partial t_3} \exp\Big[\frac{1}{2}(5t_1^2 + 3t_2^2 + 2t_3^2 + 4t_1 t_2 + 6t_1 t_3)\Big] = (2t_3 + 3t_1) \exp\Big[\frac{1}{2}(5t_1^2 + 3t_2^2 + 2t_3^2 + 4t_1 t_2 + 6t_1 t_3)\Big]

and

\frac{\partial}{\partial t_2} (2t_3 + 3t_1) \exp\Big[\frac{1}{2}(5t_1^2 + 3t_2^2 + 2t_3^2 + 4t_1 t_2 + 6t_1 t_3)\Big] = (2t_3 + 3t_1)(3t_2 + 2t_1) \exp\Big[\frac{1}{2}(5t_1^2 + 3t_2^2 + 2t_3^2 + 4t_1 t_2 + 6t_1 t_3)\Big]

Putting t_1 = 0, t_2 = 0, t_3 = 0 gives E[X_2 X_3] = 0.
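The same moments can be computed symbolically; a minimal sympy sketch (added here as an illustration, not part of the notes):

```python
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
M = sp.exp(sp.Rational(1, 2) * (5*t1**2 + 3*t2**2 + 2*t3**2 + 4*t1*t2 + 6*t1*t3))

at_zero = {t1: 0, t2: 0, t3: 0}
print(sp.diff(M, t1, 2).subs(at_zero))   # E[X1^2] = 5
print(sp.diff(M, t2, t3).subs(at_zero))  # E[X2 X3] = 0
```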


CHARACTERISTIC FUNCTIONS:
Definition: The characteristic function of a random variable X, denoted by \phi(t), is defined by

\phi(t) = E[e^{itX}] = \int_x e^{itx} f(x) \, dx

if X is a continuous random variable, and

\phi(t) = E[e^{itX}] = \sum_x e^{itx} f(x)

if X is a discrete random variable. Since |e^{itx}| = 1 for all real t and x, the defining integral (or sum) always converges absolutely; this implies that \phi(t) exists for all distributions. A characteristic function is, in general, a complex-valued function of a real argument, and its values lie within the closed unit disc of the complex plane.
Properties: Let X be a continuous random variable with p.d.f f(x). Then

(i) \phi(0) = 1 and |\phi(t)| \leq 1, -\infty < t < \infty, since

\phi(0) = \int_{-\infty}^{\infty} 1 \cdot f(x) \, dx = 1

and

|\phi(t)| = \Big| \int_{-\infty}^{\infty} e^{itx} f(x) \, dx \Big| \leq \int_{-\infty}^{\infty} |e^{itx}| f(x) \, dx = \int_{-\infty}^{\infty} f(x) \, dx = 1



(ii) Uniform continuity: Consider the difference

\phi(t+h) - \phi(t) = \int_{-\infty}^{\infty} e^{itx} (e^{ihx} - 1) f(x) \, dx

where h is any positive constant. Then |\phi(t+h) - \phi(t)| \leq \int_{-\infty}^{\infty} |e^{ihx} - 1| f(x) \, dx, a bound that does not depend on t and tends to 0 as h \to 0; hence \phi is uniformly continuous.

(iii) Uniqueness: Two distributions F_1(x) and F_2(x) are identical if and only if their characteristic functions \phi_1(t) and \phi_2(t) are identical.
(iv) Moments: If we differentiate the characteristic function r times with respect to t, we have

\frac{d^r}{dt^r} \phi(t) = \frac{d^r}{dt^r} \int_x e^{itx} f(x) \, dx = \int_x \Big(\frac{d^r}{dt^r} e^{itx}\Big) f(x) \, dx = \int_x i^r x^r e^{itx} f(x) \, dx

Letting t \to 0,

\frac{d^r \phi(0)}{dt^r} = \int_x i^r x^r f(x) \, dx = i^r E[X^r]

so that

E[X^r] = \frac{\phi^{(r)}(0)}{i^r}

The same formula holds, with the integral replaced by a sum, if X is a discrete random variable.

Examples
(i) Exponential distribution:
Let X be a random variable having probability density function

f(x) = \lambda e^{-\lambda x}, \quad x > 0

Then the characteristic function of this distribution is given by

\phi_X(t) = E[e^{itX}] = \int_0^{\infty} \lambda e^{-\lambda x} e^{itx} \, dx = \int_0^{\infty} \lambda e^{-(\lambda - it)x} \, dx = \lambda \Big[\frac{-e^{-(\lambda - it)x}}{\lambda - it}\Big]_{x=0}^{x=\infty} = \frac{\lambda}{\lambda - it}

The first moment about the point zero is obtained from

\frac{d\phi(t)}{dt} = \frac{d}{dt}\Big(\frac{\lambda}{\lambda - it}\Big) = \frac{\lambda i}{(\lambda - it)^2}

Hence the expected value of X is

E[X] = \frac{\phi'(0)}{i} = \frac{1}{i} \cdot \frac{\lambda i}{\lambda^2} = \frac{1}{\lambda}

For the second moment,

\frac{d^2 \phi(t)}{dt^2} = \frac{d}{dt}\Big(\frac{\lambda i}{(\lambda - it)^2}\Big) = \frac{2\lambda i^2}{(\lambda - it)^3}

so that

E[X^2] = \frac{\phi''(0)}{i^2} = \frac{2}{\lambda^2}

Thus

Var(X) = E[X^2] - (E[X])^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}
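These moments can be double-checked symbolically; a short sympy sketch (an illustrative addition, not from the notes):

```python
import sympy as sp

t = sp.symbols('t', real=True)
lam = sp.symbols('lambda', positive=True)
phi = lam / (lam - sp.I * t)  # c.f of the Exponential(lambda) distribution

# E[X^r] = phi^{(r)}(0) / i^r
EX  = sp.simplify(sp.diff(phi, t, 1).subs(t, 0) / sp.I)     # 1/lambda
EX2 = sp.simplify(sp.diff(phi, t, 2).subs(t, 0) / sp.I**2)  # 2/lambda**2
print(EX, EX2, sp.simplify(EX2 - EX**2))                    # variance = 1/lambda**2
```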

(ii) Poisson distribution:
Let X have a Poisson distribution with parameter \lambda:

P(X = x) = \frac{e^{-\lambda} \lambda^x}{x!}, \quad x = 0, 1, 2, \ldots

Then

\phi_X(t) = E[e^{itX}] = \sum_{x=0}^{\infty} e^{itx} \, \frac{e^{-\lambda} \lambda^x}{x!} = e^{-\lambda} \sum_{x=0}^{\infty} \frac{(\lambda e^{it})^x}{x!} = e^{-\lambda} e^{\lambda e^{it}} = e^{\lambda(e^{it} - 1)}
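As an illustrative check (an addition, with \lambda = 2.5 and t = 0.7 chosen arbitrarily, assuming numpy is available), the closed form can be compared with a truncation of the defining sum:

```python
import numpy as np
from math import factorial

def cf_poisson_sum(lam, t, terms=100):
    # direct (truncated) sum: E[e^{itX}] = sum_x e^{itx} e^{-lam} lam^x / x!
    return sum(np.exp(1j * t * x) * np.exp(-lam) * lam**x / factorial(x)
               for x in range(terms))

def cf_poisson_closed(lam, t):
    # the closed form derived above: exp(lam * (e^{it} - 1))
    return np.exp(lam * (np.exp(1j * t) - 1))

print(cf_poisson_sum(2.5, 0.7), cf_poisson_closed(2.5, 0.7))  # agree
```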
Joint characteristic function for a random vector:
Let X = (X_1, X_2, \ldots, X_p)' be a p-dimensional random vector, and let f(x_1, x_2, \ldots, x_p) be the joint probability distribution function of the random vector. The joint characteristic function is defined by

\phi(t_1, t_2, \ldots, t_p) = E[\exp(it_1 X_1 + it_2 X_2 + \cdots + it_p X_p)] = \int_{x_1} \int_{x_2} \cdots \int_{x_p} \exp\Big(i \sum_{k=1}^{p} t_k x_k\Big) f(x_1, x_2, \ldots, x_p) \, dx_1 \cdots dx_p

i.e.

\phi(t) = \int_x \exp(it'x) f(x) \, dx

or, in the discrete case,

\phi(t_1, t_2, \ldots, t_p) = E[\exp(it_1 X_1 + \cdots + it_p X_p)] = \sum_{x_1} \sum_{x_2} \cdots \sum_{x_p} \exp\Big(i \sum_{k=1}^{p} t_k x_k\Big) f(x_1, x_2, \ldots, x_p)

where t = (t_1, t_2, \ldots, t_p)' and t_k, k = 1, 2, \ldots, p, are real variables.
Properties:
(i) \phi(0, 0, \ldots, 0) = 1 and |\phi(t)| \leq 1, t \in R^p
(ii) Marginal characteristic functions (m.c.f):
Let X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix} be partitioned into the sub-vectors X_1 and X_2. Further, let t be partitioned similarly into t_1 and t_2. Then the marginal c.f of X_1 is obtained from the joint c.f by setting t_2 = 0:

\phi_{X_1}(t_1) = \phi(t_1, 0)
(iii) Independence:
If the random vectors X_1 and X_2 are independent, then their joint characteristic function \phi(t_1, t_2) is given by \phi(t_1, t_2) = \phi(t_1)\phi(t_2), where \phi(t_i) is the c.f of X_i.

(iv) Moments:
E[X_i^r] can be obtained by differentiating the joint c.f r times with respect to t_i and substituting all the t_j, j = 1, 2, \ldots, p, equal to zero. Also, E[X_i^r X_j^s] can be obtained from the joint c.f:

E[X_i^r] = \frac{1}{i^r} \frac{\partial^r}{\partial t_i^r} \phi_X(t) \Big|_{t=0}

E[X_i^r X_j^s] = \frac{1}{i^r i^s} \frac{\partial^r}{\partial t_i^r} \frac{\partial^s}{\partial t_j^s} \phi_X(t) \Big|_{t=0}

