Title: Some exercises in probability
Description: Exercises to practise at home, from low levels to high ones, for students from school to college. Good luck!
Random variables I
Probability Examples c-2
Leif Mejlbro
Contents

Introduction
1 Some theoretical results
2 Simple introducing examples
3 Frequencies and distribution functions in 1 dimension
4 Frequencies and distribution functions, 2 dimensions
5 Functions of random variables, in general
6 Inequalities between two random variables
7 Functions Y = f(X) of random variables
8 Functions of two random variables, f(X, Y)
9 Means and moments of higher order
10 Mean and variance in special cases
Index
Introduction

This topic is not my favourite; however, thanks to my former colleague, Ole Jørsboe, I somehow managed to get an idea of what it is all about. On the other hand, it will probably also be closer to the way of thinking which is more common among many readers, because I also had to start from scratch.

We shall here deal with the basic stuff, i.e. frequencies and distribution functions in 1 and 2 dimensions, functions of random variables and inequalities between random variables, as well as means and variances.

Unfortunately errors cannot be avoided in a first edition of a work of this type.

Leif Mejlbro
25th October 2009
1 Some theoretical results

A random variable X is a real function defined on a probability field (Ω, F, P). This definition leads to the concept of a distribution function for the random variable X, which is the function F : R → R defined by

F(x) = P{X ≤ x}  (= P{ω ∈ Ω | X(ω) ≤ x}),

where the latter expression is the mathematically precise definition which, however, for obvious reasons will everywhere in the following be replaced by the former expression.

The function F is weakly increasing, i.e. F(x) ≤ F(y) for x ≤ y. The function F is continuous from the right, i.e. lim_{h→0+} F(x + h) = F(x) for every x ∈ R.
We define a median of a random variable X with the distribution function F(x) as a real number a = (X) ∈ R, for which

P{X ≤ a} ≥ 1/2  and  P{X ≥ a} ≥ 1/2.

In general we define a p-quantile, p ∈ ]0, 1[, of the random variable as a number a_p ∈ R, for which

P{X ≤ a_p} ≥ p  and  P{X ≥ a_p} ≥ 1 − p,

which can also be expressed by

F(a_p) ≥ p  and  F(a_p−) ≤ p.

If the random variable X only takes values in a finite or countable set, we call it discrete, and we say that X has a discrete distribution. If X only takes one value, we say that X is causally distributed, or that X is constant.
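The p-quantile condition above can be checked mechanically for a discrete distribution. A minimal sketch (the helper `p_quantile` and the fair-die example are illustrative additions, not from the text); exact `Fraction` arithmetic avoids floating-point ties at the boundary cases:

```python
import bisect
from fractions import Fraction

def p_quantile(values, probs, p):
    # Smallest value a with F(a) = P{X <= a} >= p; such an a also
    # satisfies P{X >= a} >= 1 - p, so it is a valid p-quantile.
    cdf = []
    total = Fraction(0)
    for q in probs:
        total += q
        cdf.append(total)
    i = bisect.bisect_left(cdf, p)   # first index with cdf[i] >= p
    return values[i]

# Fair die: F(3) = 1/2 and P{X >= 3} = 2/3, so 3 is a median.
vals = [1, 2, 3, 4, 5, 6]
probs = [Fraction(1, 6)] * 6
print(p_quantile(vals, probs, Fraction(1, 2)))  # 3
print(p_quantile(vals, probs, Fraction(1, 4)))  # 2
```

Note that for the fair die every a with 3 ≤ a ≤ 4 is a median; the function simply returns the smallest one.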
If there exists a nonnegative integrable function f : R → R, such that F(x) = ∫_{−∞}^{x} f(t) dt, then X is called continuous. In this case we also say that X has a continuous distribution, and the integrand f : R → R is called a frequency of the random variable X.

Let us consider two random variables X and Y, which are both defined on Ω. We say that the simultaneous distribution, or just the distribution, of (X, Y) is known, if we know

P{(X, Y) ∈ A}

for every Borel set A ⊆ R².

Notice that we can always find the marginal distributions from the simultaneous distribution, while it is far from always possible to find the simultaneous distribution from the marginal distributions.
The simultaneous distribution function of the 2-dimensional random variable (X, Y) is defined as the function F : R² → R, given by

F(x, y) := P{X ≤ x ∧ Y ≤ y}.

• If y ∈ R is kept fixed, then F(x, y) is a weakly increasing function in x, which is continuous from the right and which satisfies the condition lim_{x→−∞} F(x, y) = 0.

• If x ∈ R is kept fixed, then F(x, y) is a weakly increasing function in y, which is continuous from the right and which satisfies the condition lim_{y→−∞} F(x, y) = 0.

• When both x and y tend towards infinity, then

lim_{x,y→+∞} F(x, y) = 1.

Given the simultaneous distribution function F(x, y) of (X, Y) we can find the distribution functions of X and Y by the formulæ

F_X(x) = F(x, +∞) = lim_{y→+∞} F(x, y),  for x ∈ R,
F_Y(y) = F(+∞, y) = lim_{x→+∞} F(x, y),  for y ∈ R.

The 2-dimensional random variable (X, Y) is called continuous, or we say that it has a continuous distribution, if there exists a nonnegative integrable function (a frequency) f : R² → R, such that the distribution function F(x, y) can be written in the form

F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(t, u) du dt,  for (x, y) ∈ R².
At every point of continuity of f we have moreover

∂²F/∂x∂y = f(x, y).

It should now be obvious why one should know something about the theory of integration in more variables.

We note that if f(x, y) is a frequency of the continuous 2-dimensional random variable (X, Y), then X and Y are both continuous 1-dimensional random variables, and we get their (marginal) frequencies by

f_X(x) = ∫_{−∞}^{+∞} f(x, y) dy,  for x ∈ R,

and

f_Y(y) = ∫_{−∞}^{+∞} f(x, y) dx,  for y ∈ R.
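The marginal formula can be illustrated numerically. A minimal sketch, under the assumption of a concrete joint frequency f(x, y) = e^{−x−y} on the positive quadrant (two independent Exp(1) variables, chosen only for the illustration), whose marginal should be f_X(x) = e^{−x}:

```python
import math

def joint(x, y):
    # assumed joint frequency: exp(-x-y) for x, y > 0 (illustrative choice)
    return math.exp(-x - y) if x > 0 and y > 0 else 0.0

def marginal_x(x, y_max=40.0, n=4000):
    # f_X(x) = ∫ joint(x, y) dy, midpoint rule, truncated at y_max
    # (the tail beyond y_max is negligible for this integrand)
    h = y_max / n
    return sum(joint(x, (j + 0.5) * h) for j in range(n)) * h

x = 1.3
print(abs(marginal_x(x) - math.exp(-x)) < 1e-3)  # True
```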
It is, however, possible in the case when the two random variables X and Y are independent. We say that X and Y are independent, if for all pairs of Borel sets A, B ⊆ R,

P{X ∈ A ∧ Y ∈ B} = P{X ∈ A} · P{Y ∈ B},

which can also be put in the simpler form

F(x, y) = F_X(x) · F_Y(y)  for every (x, y) ∈ R².

In two special cases we can obtain more information on independent random variables:

If the 2-dimensional random variable (X, Y) is discrete, with h_ij = P{X = x_i ∧ Y = y_j} and the marginals f_i = P{X = x_i} and g_j = P{Y = y_j}, then X and Y are independent, if

h_ij = f_i · g_j  for every i and j.

If the 2-dimensional random variable (X, Y) is continuous, then X and Y are independent, if their frequencies satisfy

f(x, y) = f_X(x) · f_Y(y)  almost everywhere.

Roughly speaking this means that the relation above holds outside a set in R² of area zero, a so-called null set. Every countable set is a null set; there exist, however, also non-countable null sets.
Concerning maps of random variables we have the following very important results.

Theorem 1.1 Let X and Y be independent random variables. Let ϕ : R → R and ψ : R → R be given functions. Then the random variables ϕ(X) and ψ(Y) are also independent.

If X is a continuous random variable with the frequency f, then we have the following important theorem, where it should be pointed out that one always shall check all assumptions in order to be able to conclude that the result holds:

Theorem 1.2
1) Let I be an open interval, such that P{X ∈ I} = 1.
2) Let τ : I → J be a bijective map of I onto an open interval J.
3) Furthermore, assume that τ is differentiable with a continuous derivative τ′, which satisfies τ′(x) ≠ 0 for all x ∈ I.
Then Y := τ(X) is a continuous random variable with the frequency

g(y) = f(τ⁻¹(y)) · |(τ⁻¹)′(y)|  for y ∈ J,  and g(y) = 0 otherwise.

We note that if just one of the assumptions above is not fulfilled, then we shall instead find the distribution function G(y) of Y := τ(X) by the general formula

G(y) = P{τ(X) ∈ ]−∞, y]} = P{X ∈ τ°⁻¹(]−∞, y])},

where τ°⁻¹ = τ⁻¹ denotes the inverse set map.
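The transformation theorem can be cross-checked against the general distribution-function formula. A sketch under an assumed example (X with frequency f(x) = e^{−x} on I = ]0, ∞[, and τ(x) = x², which is bijective onto J = ]0, ∞[):

```python
import math

# Theorem 1.2 gives Y = X² the frequency
#   g(y) = f(√y) · |d√y/dy| = exp(-√y) / (2·√y)  for y > 0.
def g(y):
    return math.exp(-math.sqrt(y)) / (2.0 * math.sqrt(y))

# The general formula gives G(y) = P{X ≤ √y} = 1 - exp(-√y);
# g should coincide with the (numerical) derivative of G.
def G(y):
    return 1.0 - math.exp(-math.sqrt(y))

y = 2.0
h = 1e-6
numeric = (G(y + h) - G(y - h)) / (2 * h)   # central difference
print(abs(numeric - g(y)) < 1e-8)  # True
```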
At a first glance it may be strange that we at this early stage introduce 2-dimensional random variables. The reason is that the sum, the difference, the product and the quotient of two random variables are functions of the pair (X, Y). Thus we have the following general result for a continuous 2-dimensional random variable.

Theorem 1.3 Let (X, Y) be a continuous 2-dimensional random variable with the frequency h(x, y).

The frequency of the sum X + Y is
k₁(z) = ∫_{−∞}^{+∞} h(x, z − x) dx.

The frequency of the difference X − Y is
k₂(z) = ∫_{−∞}^{+∞} h(x, x − z) dx.

The frequency of the product X · Y is
k₃(z) = ∫_{−∞}^{+∞} (1/|x|) h(x, z/x) dx.

The frequency of the quotient X / Y is
k₄(z) = ∫_{−∞}^{+∞} |x| h(zx, x) dx.

If we furthermore assume that X and Y are independent, and f(x) is a frequency of X, and g(y) is a frequency of Y, then we get an even better result:
Theorem 1.4 Let X and Y be independent continuous random variables with the frequencies f(x) and g(y).

The frequency of the sum X + Y is
k₁(z) = ∫_{−∞}^{+∞} f(x) g(z − x) dx.

The frequency of the difference X − Y is
k₂(z) = ∫_{−∞}^{+∞} f(x) g(x − z) dx.

The frequency of the product X · Y is
k₃(z) = ∫_{−∞}^{+∞} (1/|x|) f(x) g(z/x) dx.

The frequency of the quotient X / Y is
k₄(z) = ∫_{−∞}^{+∞} |x| f(zx) g(x) dx.

Let X and Y be independent random variables with the distribution functions F_X and F_Y, resp., and consider U := max{X, Y} and V := min{X, Y}. Then these distribution functions are given by

F_U(u) = F_X(u) · F_Y(u)  for u ∈ R,

and

F_V(v) = 1 − (1 − F_X(v)) · (1 − F_Y(v))  for v ∈ R.
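The convolution formula k₁ can be checked numerically. A sketch with an assumed pair of frequencies f = g = Exp(1), for which the sum has the closed-form frequency z·e^{−z} (the Gamma(2, 1) frequency):

```python
import math

def f(x):
    # assumed frequency: Exp(1), i.e. exp(-x) for x > 0
    return math.exp(-x) if x > 0 else 0.0

def k1(z, n=2000):
    # k1(z) = ∫ f(x)·f(z - x) dx; the integrand vanishes outside 0 < x < z
    if z <= 0:
        return 0.0
    h = z / n
    s = 0.0
    for i in range(1, n):
        x = i * h
        s += f(x) * f(z - x)
    # trapezoid endpoints: both equal exp(-z) in the limit
    s += math.exp(-z)
    return s * h

z = 1.5
print(abs(k1(z) - z * math.exp(-z)) < 1e-4)  # True
```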
The results above can also be extended to bijective maps ϕ = (ϕ₁, ϕ₂) : R² → R², or subsets of R². It is important here to define the notation and the variables in the most convenient way. Let D and D̃ be open subsets of R². Then let ϕ = (ϕ₁, ϕ₂) be a bijective map of D̃ onto D, i.e. the opposite of what one probably would expect:

ϕ = (ϕ₁, ϕ₂) : D̃ → D,  with (x₁, x₂) = ϕ(y₁, y₂).

Then recall the theorem of transform of plane integrals,

∫∫_D h(x₁, x₂) dx₁ dx₂ = ∫∫_{D̃} h(ϕ₁(y₁, y₂), ϕ₂(y₁, y₂)) |∂(x₁, x₂)/∂(y₁, y₂)| dy₁ dy₂.

The mnemonic dx₁ dx₂ = |∂(x₁, x₂)/∂(y₁, y₂)| dy₁ dy₂ is of course not mathematically correct; but it shows intuitively what is going on: roughly speaking we “delete the y-s”.
Theorem 1.5 Let (X₁, X₂) be a continuous 2-dimensional random variable with the frequency h(x₁, x₂). Let D ⊆ R² be an open domain, such that P{(X₁, X₂) ∈ D} = 1, and let τ = (τ₁, τ₂) : D → D̃ be a bijective map with the inverse ϕ = (ϕ₁, ϕ₂) : D̃ → D. Then the 2-dimensional random variable

(Y₁, Y₂) = τ(X₁, X₂) = (τ₁(X₁, X₂), τ₂(X₁, X₂))

has the frequency k(y₁, y₂), given by

k(y₁, y₂) = h(ϕ₁(y₁, y₂), ϕ₂(y₁, y₂)) · |∂(x₁, x₂)/∂(y₁, y₂)|  for (y₁, y₂) ∈ D̃,

and k(y₁, y₂) = 0 otherwise.
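Theorem 1.5 can be sketched on an assumed example: X₁, X₂ independent Exp(1) and τ(x₁, x₂) = (x₁ + x₂, x₁ − x₂). The inverse is ϕ(y₁, y₂) = ((y₁ + y₂)/2, (y₁ − y₂)/2) with Jacobian −1/2, so k(y₁, y₂) = (1/2)e^{−y₁} on D̃ = {|y₂| < y₁}; the code below checks numerically that this k integrates to 1:

```python
import math

def k(y1, y2):
    # transformed frequency from Theorem 1.5 for the assumed example
    return 0.5 * math.exp(-y1) if abs(y2) < y1 else 0.0

# ∫∫ k dy2 dy1: the inner integral over y2 in ]-y1, y1[ of a constant
# is 2*y1*k(y1, 0); integrate the result over y1 by the midpoint rule.
n = 400
h = 40.0 / n          # truncate y1 at 40 (the tail is negligible)
total = 0.0
for i in range(n):
    y1 = (i + 0.5) * h
    total += 2 * y1 * k(y1, 0.0) * h
print(abs(total - 1.0) < 1e-2)  # True
```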
We have previously introduced the concept of conditional probability. If X and Y are discrete, we define the conditional distribution of X for given Y = y_j by

P{X = x_i | Y = y_j} = P{X = x_i ∧ Y = y_j} / P{Y = y_j} = h_ij / g_j.

We note in particular that we have the law of total probability

P{X = x_i} = Σ_j P{X = x_i | Y = y_j} · P{Y = y_j}.

If (X, Y) is continuous, we analogously define the conditional distribution of X for given Y = y by means of the frequency f(x, y) and the marginal frequency f_Y(y), whenever f_Y(y) > 0. Note that the conditional distribution function is not defined at points in which f_Y(y) = 0. We shall use the convention that “0 times undefined = 0”.
We now introduce the mean, or expectation, of a random variable, provided that it exists.

1) Let X be a discrete random variable with the values x_i of the probabilities p_i = P{X = x_i}. The mean, or expectation, of X is defined by

E{X} := Σ_i x_i p_i,

provided that the series is absolutely convergent. If this is not the case, the mean does not exist.

2) Let X be a continuous random variable with the frequency f(x). The mean, or expectation, of X is defined by

E{X} := ∫_{−∞}^{+∞} x f(x) dx,

provided that the integral is absolutely convergent. If this is not the case, the mean does not exist.
Concerning maps of random variables, means are transformed according to the theorem below, provided that the given expressions are absolutely convergent.

Theorem 1.6 Let the random variable Y = ϕ(X) be a function of X.

1) If X is a discrete random variable with the values x_i of the probabilities p_i, then the mean of Y = ϕ(X) is given by

E{ϕ(X)} = Σ_i ϕ(x_i) p_i,

provided that the series is absolutely convergent.

2) If X is a continuous random variable with the frequency f(x), then the mean of Y = ϕ(X) is given by

E{ϕ(X)} = ∫_{−∞}^{+∞} ϕ(x) f(x) dx,

provided that the integral is absolutely convergent.
We add the following concepts, where k ∈ N and μ = E{X}:

The k-th moment, E{X^k}.
The k-th central moment, E{(X − μ)^k}.
The variance, i.e. the second central moment,

V{X} = E{(X − μ)²},

provided that the defining series or integrals are absolutely convergent. We mention

Theorem 1.7 Let X be a random variable with the mean μ and the variance V{X}. Then

E{(X − c)²} = V{X} + (μ − c)²  for every c ∈ R,
V{X} = E{X²} − (E{X})²  (the case c = 0),
E{aX + b} = a E{X} + b  for every a, b ∈ R,
V{aX + b} = a² V{X}  for every a, b ∈ R.
We have the following result which gives an estimate of the probability that a random variable X differs more than some given a > 0 from the mean E{X}.

Theorem 1.8 (Čebyšev's inequality). If the random variable X has the mean μ and the variance σ², then for every a > 0,

P{|X − μ| ≥ a} ≤ σ²/a².

If we here put a = kσ, we get the equivalent statement

P{μ − kσ < X < μ + kσ} ≥ 1 − 1/k².
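Čebyšev's inequality is usually far from tight. A sketch comparing the bound with the exact tail probability for an assumed Exp(1) variable (μ = σ = 1, so the bound reads P{|X − 1| ≥ a} ≤ 1/a²):

```python
import math

def exact_tail(a):
    # P{|X - 1| >= a} for X ~ Exp(1): upper tail plus (for a < 1) lower tail
    upper = math.exp(-(1 + a))                         # P{X >= 1 + a}
    lower = 1 - math.exp(-(1 - a)) if a < 1 else 0.0   # P{X <= 1 - a}
    return upper + lower

for a in (0.5, 1.0, 2.0, 3.0):
    assert exact_tail(a) <= 1 / a**2   # the bound holds in every case
print("bound verified")
```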
The means of functions of 2-dimensional random variables are computed analogously. Thus,

Theorem 1.9 Let Z = ϕ(X, Y) be a function of the 2-dimensional random variable (X, Y).

1) If (X, Y) is discrete, then the mean of Z = ϕ(X, Y) is given by

E{ϕ(X, Y)} = Σ_{i,j} ϕ(x_i, y_j) · P{X = x_i ∧ Y = y_j},

provided that the series is absolutely convergent.

2) If (X, Y) is continuous with the frequency f(x, y), then the mean of Z = ϕ(X, Y) is given by

E{ϕ(X, Y)} = ∫∫_{R²} ϕ(x, y) f(x, y) dx dy,

provided that the integral is absolutely convergent.

It is easily proved that if (X, Y) is a 2-dimensional random variable, and ϕ(x, y) = ϕ₁(x) + ϕ₂(y), then

E{ϕ₁(X) + ϕ₂(Y)} = E{ϕ₁(X)} + E{ϕ₂(Y)},

provided that E{ϕ₁(X)} and E{ϕ₂(Y)} exist.

If we furthermore assume that X and Y are independent and choose ϕ(x, y) = ϕ₁(x) · ϕ₂(y), then also

E{ϕ₁(X) · ϕ₂(Y)} = E{ϕ₁(X)} · E{ϕ₂(Y)},

provided that E{ϕ₁(X)} and E{ϕ₂(Y)} exist.

These formulæ are easily generalized to n random variables.
If two random variables X and Y are not independent, we shall find a measure of how much they “depend” on each other. Consider a 2-dimensional random variable (X, Y), where the means and variances

E{X} = μ_X,  E{Y} = μ_Y,  V{X} = σ_X² > 0,  V{Y} = σ_Y² > 0,

all exist. We define the covariance between X and Y as

Cov(X, Y) := E{(X − μ_X)(Y − μ_Y)},

and the correlation between X and Y, denoted by ϱ(X, Y), as

ϱ(X, Y) := Cov(X, Y) / (σ_X σ_Y).

Theorem 1.10 Let X and Y be two random variables, where

E{X} = μ_X,  E{Y} = μ_Y,  V{X} = σ_X² > 0,  V{Y} = σ_Y² > 0,

all exist. Let Z be another random variable, for which the mean and the variance both exist. Then for every a, b ∈ R,

Cov(aX + bY, Z) = a Cov(X, Z) + b Cov(Y, Z),

and if U = aX + b and V = cY + d, where a > 0 and c > 0, then

ϱ(U, V) = ϱ(aX + b, cY + d) = ϱ(X, Y).
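The invariance of ϱ under positive affine maps can be seen on simulated data (an illustration, not a proof; the Gaussian sample and the coefficients a = 3, b = 7, c = 0.5, d = −2 are arbitrary choices):

```python
import random

random.seed(1)

def corr(xs, ys):
    # sample correlation: Cov(X, Y) / (σ_X σ_Y) on empirical moments
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx ** 0.5 * vy ** 0.5)

xs = [random.gauss(0, 1) for _ in range(5000)]
ys = [x + random.gauss(0, 1) for x in xs]   # a dependent pair
us = [3 * x + 7 for x in xs]                # U = aX + b with a > 0
vs = [0.5 * y - 2 for y in ys]              # V = cY + d with c > 0
print(abs(corr(xs, ys) - corr(us, vs)) < 1e-9)  # True
```

The sample correlation is algebraically invariant under these maps, so the two values agree up to floating-point rounding.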
By the obvious generalization,

V{Σ_{i=1}^{n} X_i} = Σ_{i=1}^{n} V{X_i} + 2 Σ_{j=2}^{n} Σ_{i=1}^{j−1} Cov(X_i, X_j).

If X₁, …, X_n are independent of each other, this is of course reduced to

V{Σ_{i=1}^{n} X_i} = Σ_{i=1}^{n} V{X_i}.
We consider a sequence X_n of random variables, defined on the same probability field (Ω, F, P).

1) We say that X_n converges in probability towards the random variable X, if for every fixed ε > 0,

P{|X_n − X| ≥ ε} → 0  for n → +∞.

2) We say that X_n converges in probability towards a constant c, if for every fixed ε > 0,

P{|X_n − c| ≥ ε} → 0  for n → +∞.

3) We say that X_n converges in distribution towards the random variable X with the distribution function F, if

lim_{n→+∞} F_{X_n}(x) = F(x)  at every point of continuity x of F.

Finally, we mention the following theorems which are connected with these concepts of convergence.
Theorem 1.11 (The weak law of large numbers). Let X_n be a sequence of independent random variables, all defined on (Ω, F, P), and assume that they all have the same mean and variance,

E{X_i} = μ  and  V{X_i} = σ².

Then (1/n) Σ_{i=1}^{n} X_i converges in probability towards μ for n → +∞.

A slightly different version of the weak law of large numbers is the following

Theorem 1.12 Let X_n be a sequence of independent identically distributed random variables with the mean μ. Then (1/n) Σ_{i=1}^{n} X_i converges in probability towards μ for n → +∞.
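The weak law of large numbers can be illustrated by simulation (an illustration of the concentration, not a proof; Uniform(0, 1) with μ = 1/2 is an arbitrary choice):

```python
import random

random.seed(7)

def sample_mean(n):
    # mean of n independent Uniform(0, 1) draws; μ = 1/2
    return sum(random.random() for _ in range(n)) / n

# With n = 10000 the standard deviation of the sample mean is about
# 1/sqrt(12 * 10000) ≈ 0.003, so deviations from 1/2 are tiny.
deviations = [abs(sample_mean(10_000) - 0.5) for _ in range(20)]
print(max(deviations) < 0.05)  # True
```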
We have concerning convergence in distribution,

Theorem 1.13 Assume that the sequence X_n of random variables converges in distribution towards the random variable X, and assume that there are real constants a and b, such that

P{a ≤ X_n ≤ b} = 1  for every n ∈ N.

Then in particular

lim_{n→+∞} E{X_n} = E{X}  and  lim_{n→+∞} V{X_n} = V{X}.

Theorem 1.14 1) If X_n converges in probability towards X, then X_n also converges in distribution towards X.
2) If X_n converges in distribution towards a constant c, then X_n also converges in probability towards c.
2 Simple introducing examples

Example 2.1 A driver passes 4 traffic lights on his way to work. We assume that at each of the traffic lights there is the probability p that he must stop, and that the lights function independently of each other. Let X be the random variable which indicates the number of stops.

1) Find P{X = k}, k = 0, 1, 2, 3, 4, and sketch in the case p = 1/2 the corresponding diagram.

2) Let Y denote the number of the traffic light at which the driver stops the first time. Is Y a random variable?

In this case the model is given by the binomial distribution, X ∈ B(4, p), thus

P{X = k} = C(4, k) p^k (1 − p)^{4−k},  k = 0, 1, 2, 3, 4,

where C(4, k) denotes the binomial coefficient.
For p = 1/2 we get in particular

P{X = k} = C(4, k) (1/2)⁴,  k = 0, 1, 2, 3, 4,

thus

p₀ = p₄ = 1/16,  p₁ = p₃ = 4/16 = 1/4,  p₂ = 6/16 = 3/8.

Then

Σ_{k=1}^{4} P{Y = k} = Σ_{k=1}^{4} p(1 − p)^{k−1} = 1 − (1 − p)⁴ < 1,  when p < 1.

The reason why Y is not a random variable is that we have in the setup forgotten the possibility of “no stops at all”, which has the probability (1 − p)⁴. If we assign Y some value, e.g. Y = 0, in this case, then Y becomes a random variable. A more reasonable definition would of course be Y = 5.
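The B(4, 1/2) probabilities of Example 2.1 can be checked exactly:

```python
from fractions import Fraction
from math import comb

# P{X = k} = C(4, k) p^k (1-p)^(4-k) for p = 1/2, computed exactly
p = Fraction(1, 2)
pmf = [comb(4, k) * p**k * (1 - p)**(4 - k) for k in range(5)]
print(pmf)        # [Fraction(1, 16), Fraction(1, 4), Fraction(3, 8), Fraction(1, 4), Fraction(1, 16)]
print(sum(pmf) == 1)  # True
```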
Example 2.2 Find the constant A, such that we have a distribution given by means of the probabilities

P{X = k} = A q^k / k,  k ∈ N  (where q ∈ ]0, 1[).

We put p = 1 − q. Since

Σ_{k=1}^{∞} q^k / k = −ln(1 − q) = ln(1/p),

and from 1/p > 1 follows that ln(1/p) > 0, hence

A = 1 / ln(1/p) = 1 / |ln p|,

and thus

P{X = k} = q^k / (k |ln(1 − q)|),  k ∈ N.
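The normalization A = 1/|ln(1 − q)| can be checked numerically (this is the logarithmic series distribution; q = 0.4 is an arbitrary choice):

```python
import math

q = 0.4
A = 1.0 / abs(math.log(1.0 - q))
# the series converges geometrically, so truncation at k = 200 is exact
# to machine precision
total = sum(A * q**k / k for k in range(1, 200))
print(abs(total - 1.0) < 1e-12)  # True
```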
3 Frequencies and distribution functions in 1 dimension

Example 3.1 Check if the function

f(x) = 1/2 − kx  for x ∈ [0, 6],  f(x) = 0  otherwise,

is a frequency for some k.

If f(x) is a frequency, then f(x) ≥ 0, so by putting x = 6 we get 1/2 − 6k ≥ 0, thus k ≤ 1/12. On the other hand the integral must be 1, i.e.

1 = ∫₀⁶ (1/2 − kx) dx = 3 − 18k,  thus  k = 1/9.

The two requirements can never be satisfied simultaneously, because 1/9 > 1/12, so f(x) is not a frequency for any k ∈ R.
Example 3.2 Find k, such that

f(x) = kx²(1 − x³)  for x ∈ [0, 1],  f(x) = 0  otherwise,

is a frequency of a random variable, and sketch the function. Then find the distribution function and the median.

Obviously, f(x) ≥ 0 for x ∈ [0, 1], when k ≥ 0, and

1 = k ∫₀¹ x²(1 − x³) dx = k (1/3 − 1/6) = k/6.

If we choose k = 6, then f(x) becomes a frequency, thus

f(x) = 6x²(1 − x³) = 6x² − 6x⁵  for x ∈ [0, 1],  f(x) = 0  otherwise.

The distribution function F(x) is in the interval [0, 1] given by

F(x) = ∫₀ˣ f(t) dt = ∫₀ˣ (6t² − 6t⁵) dt = 2x³ − x⁶.

The median is the solution in [0, 1] of F(x) = 2x³ − x⁶ = 1/2, i.e. x³ = 1 − 1/√2, hence

(X) = (1 − 1/√2)^{1/3} ≈ 0.66.
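The median of Example 3.2 can be found numerically from F(x) = 2x³ − x⁶ by bisection, since F is increasing on [0, 1]:

```python
def F(x):
    # distribution function from Example 3.2
    return 2 * x**3 - x**6

lo, hi = 0.0, 1.0
for _ in range(60):            # bisection for F(x) = 1/2
    mid = (lo + hi) / 2
    if F(mid) < 0.5:
        lo = mid
    else:
        hi = mid

median = (lo + hi) / 2
print(round(median, 2))  # 0.66
```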
It is possible to apply MAPLE to plot the frequency and to compute and plot the distribution function; the former figure shows the graph of the frequency, and the latter figure shows the graph of the distribution function. With the exception of the sketches of the graphs we see that it is easy to perform the same computations without using MAPLE.
Example 3.3 A random variable X has the frequency of the triangular distribution over ]a, b[,

f(x) = C(x − a)  for a < x ≤ (a + b)/2,
f(x) = C(b − x)  for (a + b)/2 < x < b,
f(x) = 0  otherwise.

Find the constant C and the distribution function.

1) By considering the graph we immediately get

1 = ∫_{−∞}^{∞} f(x) dx = (1/2) · C(b − a)/2 · (b − a) = C ((b − a)/2)²,

because the integral can be interpreted as the area of a triangle of base b − a and height C(b − a)/2. Hence

C = 4/(b − a)².

2) For a < x ≤ (a + b)/2 the distribution function is given by

F(x) = ∫_a^x C(t − a) dt = C [(t − a)²/2]_a^x = 2 ((x − a)/(b − a))².
For (a + b)/2 < x < b we analogously get F(x) = 1 − 2((b − x)/(b − a))². Summing up we get

F(x) = 0  for x ≤ a,
F(x) = 2 ((x − a)/(b − a))²  for a < x ≤ (a + b)/2,
F(x) = 1 − 2 ((b − x)/(b − a))²  for (a + b)/2 < x ≤ b,
F(x) = 1  for x > b.

3) By considering the graph (or by insertion of x = (a + b)/2) we get

P{X ≤ (a + b)/2} = F((a + b)/2) = 1/2.
Example 3.4 A random variable X has the frequency

f(x) = k e^{−|x−2|},  x ∈ R.

Find k and the distribution function of X, and compute P{−1 ≤ X ≤ 3} and P{X ≥ 0}.

Obviously, f(x) is continuous, and f(x) > 0, when k > 0. Then by a computation,

∫_{−∞}^{∞} f(x) dx = k (∫_{−∞}^{2} e^{x−2} dx + ∫_{2}^{∞} e^{−(x−2)} dx) = 2k,

which is equal to 1 for k = 1/2. Thus the random variable X has the frequency

f(x) = (1/2) e^{−|x−2|},  x ∈ R.

Figure 4: The graph of the frequency f.

The distribution function is

F(x) = (1/2) e^{x−2}  for x ≤ 2,  F(x) = 1 − (1/2) e^{−(x−2)}  for x > 2.

Figure 5: The graph of the distribution function F.

Finally,

P{−1 ≤ X ≤ 3} = F(3) − F(−1) = 1 − (1/2) e^{−1} − (1/2) e^{−3} ≈ 0.79,

and

P{X ≥ 0} = 1 − F(0) = 1 − (1/2) e^{−2} ≈ 0.93.

Example 3.5 A random variable X has the frequency

f(x) = k x exp(−x²/2)  for x > 0,  f(x) = 0  for x ≤ 0.

1) Find k. 2) Find the distribution function F of X. 3) Sketch the graphs of f and F. 4) Find the median of X.

Figure 6: The graph of the frequency f.

1) The requirement for f(x) being a frequency is reduced to

1 = ∫_{−∞}^{∞} f(x) dx = k ∫₀^{∞} x exp(−x²/2) dx = k ∫₀^{∞} e^{−u} du = k,

where we have used the substitution u = x²/2 with du = x dx. Hence k = 1.

2) If x > 0, we use the substitution u = t²/2 to obtain

F(x) = ∫₀ˣ f(t) dt = ∫₀ˣ t exp(−t²/2) dt = ∫₀^{x²/2} e^{−u} du = 1 − exp(−x²/2),

while F(x) = 0 for x ≤ 0.

3) Consider the previous figures (the graphs of f and F).

4) The median is found from the equation

F(x) = 1 − exp(−x²/2) = 1/2,  i.e.  exp(x²/2) = 2,

thus x²/2 = ln 2, and hence

(X) = √(2 ln 2) ≈ 1.18.
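The two probabilities of Example 3.4 can be checked directly from the distribution function F of that example:

```python
import math

def F(x):
    # distribution function of Example 3.4: f(x) = (1/2)·exp(-|x-2|)
    if x <= 2:
        return 0.5 * math.exp(x - 2)
    return 1.0 - 0.5 * math.exp(-(x - 2))

p1 = F(3) - F(-1)   # P{-1 <= X <= 3}
p2 = 1 - F(0)       # P{X >= 0}
print(round(p1, 2), round(p2, 2))  # 0.79 0.93
```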
Example 3.6 Let X have a frequency f(x), which is 0 for x ≤ 0 and which depends on positive constants b and θ. Prove that P{X ≤ θ} does not depend on b.

Clearly, f(x) ≥ 0, and

F(x) = ∫₀ˣ f(t) dt  for x > 0,  F(x) = 0  for x ≤ 0,

is the distribution function of a random variable X with f(x) as its frequency.
Example 3.7 A patient arrives at a doctor. The probability is p, where p ∈ ]0, 1[, that he will be treated immediately; if he is not, the probability that he must wait longer than the time x is equal to e^{−ax}, where a is some positive constant. Let X denote the waiting time.

1) If the patient is treated immediately, then the waiting time is X = 0, thus

P{X = 0} = p.

2) If x > 0, then

P{X > x} = P{patient must wait} · P{waiting time > x | patient must wait} = (1 − p) e^{−ax}.

3) The distribution function F(x) = P{X ≤ x} is here

F(x) = 0  for x < 0,  F(x) = 1 − (1 − p) e^{−ax}  for x ≥ 0.

Figure 8: The graph of the distribution function F(x) when a = 1. Note that F has a jump of size p at x = 0, so X is neither discrete nor continuous.
4 Frequencies and distribution functions, 2 dimensions

Example 4.1 Let X and Y be independent random variables with the frequencies

f(x) = x e^{−x}  for x > 0,  g(y) = e^{−y}  for y > 0

(both frequencies are otherwise 0). Find the frequency k(z) of X + Y. Find the means E{X}, E{Y} and E{X + Y}.

By the convolution formula,

k(z) = ∫₀^z f(x) g(z − x) dx = ∫₀^z x e^{−x} e^{−(z−x)} dx = e^{−z} ∫₀^z x dx = (z²/2) e^{−z}.

This expression is only > 0, when z > 0. The means are

E{X} = ∫₀^{∞} x f(x) dx = ∫₀^{∞} x² e^{−x} dx = 2,
E{Y} = ∫₀^{∞} y g(y) dy = ∫₀^{∞} y e^{−y} dy = 1,
E{X + Y} = ∫₀^{∞} z k(z) dz = (1/2) ∫₀^{∞} z³ e^{−z} dz = 3,

where we have used that ∫₀^{∞} xⁿ e^{−x} dx = n! for n ∈ N₀; this is proved by induction, the case n = 0 being trivial and the step following by partial integration.
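The closed form k(z) = (z²/2)e^{−z} of Example 4.1 can be sanity-checked numerically: it should integrate to 1 and have mean 3 (= E{X} + E{Y}):

```python
import math

def k(z):
    # frequency of X + Y from Example 4.1
    return 0.5 * z * z * math.exp(-z) if z > 0 else 0.0

# midpoint rule on [0, 60]; the tail beyond 60 is negligible
h, zmax = 0.001, 60.0
n = int(zmax / h)
mass = sum(k((i + 0.5) * h) for i in range(n)) * h
mean = sum(((i + 0.5) * h) * k((i + 0.5) * h) for i in range(n)) * h
print(abs(mass - 1.0) < 1e-6, abs(mean - 3.0) < 1e-5)  # True True
```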
Example 4.2 Prove that the function

F(x, y) = 1 − e^{−(x+y)}  for x > 0, y > 0,  F(x, y) = 0  otherwise,

is not a distribution function of a 2-dimensional random variable.

Since F ∈ C^∞(R₊ × R₊), we have

∂²F/∂x∂y = −e^{−(x+y)} < 0  for (x, y) ∈ R₊ × R₊,

so F cannot be a distribution function (a frequency would have to be ≥ 0).

Alternatively we prove that one of the necessary conditions is not fulfilled. Let α > 0. Then

F(1+α, 1+α) − F(1, 1+α) − F(1+α, 1) + F(1, 1)
= −e^{−(2+2α)} + 2 e^{−(2+α)} − e^{−2}
= e^{−2} (2 e^{−α} − e^{−2α} − 1) = −e^{−2} (1 − e^{−α})² < 0,

and not ≥ 0, as it should be, because for a distribution function the left hand side equals P{1 < X ≤ 1 + α ∧ 1 < Y ≤ 1 + α} ≥ 0.
Example 4.3 Prove that the function

f(x, y) = x e^{−x(y+1)}  for x > 0, y > 0,  f(x, y) = 0  otherwise,

is a frequency of a 2-dimensional random variable (X, Y). Find the marginal frequencies, the marginal distribution functions and the medians of X and Y. Check if the random variables X and Y are independent.

1) If x > 0 is kept fixed, it follows by a vertical integration that

f_X(x) = e^{−x} ∫₀^{∞} x e^{−xy} dy = e^{−x},

and f_X(x) = 0 for x ≤ 0.

2) If y > 0 is kept fixed, we get by a horizontal integration, where we use the substitution z = x(y+1),

f_Y(y) = ∫₀^{∞} x e^{−x(y+1)} dx = (1/(y+1)²) ∫₀^{∞} z e^{−z} dz = 1/(y+1)²,

and f_Y(y) = 0 for y ≤ 0.

3) It follows from

∫₀^{∞} e^{−x} dx = 1,  and possibly  ∫₀^{∞} 1/(y+1)² dy = 1,

that f(x, y) is a frequency of a 2-dimensional random variable (X, Y), and that X and Y have the marginal frequencies f_X(x) and f_Y(y), given in (1) and (2), resp.

4) The marginal distribution functions are, for x, y > 0,

F_X(x) = 1 − e^{−x}  and  F_Y(y) = 1 − 1/(y+1),

and both are 0 otherwise.

5) Medians:
a) F_X(x) = 1 − e^{−x} = 1/2 for e^{−x} = 1/2, hence x = (X) = ln 2.
b) F_Y(y) = 1 − 1/(y+1) = 1/2 for y + 1 = 2, hence y = (Y) = 1.

6) Since

f_X(x) · f_Y(y) = e^{−x}/(y+1)² ≠ x e^{−x(y+1)} = f(x, y)  for x, y > 0,

X and Y are not independent.
Example 4.4 A 2-dimensional random variable (X, Y) has the frequency

f(x, y) = c x y  for 0 < x < y < 1,  f(x, y) = 0  otherwise.

1) Find the constant c.
2) Find the frequencies and the distribution functions of the random variables X and Y.
3) Check if X and Y are independent.
4) Finally, find the distribution function of the 2-dimensional random variable (X, Y).
1) It follows from

1 = c ∫₀¹ (∫₀^y x y dx) dy = c ∫₀¹ (y³/2) dy = c/8,

that c = 8, hence the frequency is given by

f(x, y) = 8xy  for 0 < x < y < 1,  f(x, y) = 0  otherwise.
Figure 10: The graph of the frequency f(x, y) over 0 < x < y < 1.

2) Clearly, f_X(x) = 0 for x ∉ ]0, 1[. For x ∈ ]0, 1[ we get by a vertical integration

f_X(x) = ∫_x^1 8xy dy = 4x(1 − x²) = 4x − 4x³.

When x ∈ ]0, 1[, we get

F_X(x) = ∫₀^x f_X(t) dt = ∫₀^x (4t − 4t³) dt = 2x² − x⁴,

thus the marginal distribution function is

F_X(x) = 0  for x ≤ 0,
F_X(x) = 2x² − x⁴  for 0 < x < 1,
F_X(x) = 1  for x ≥ 1.
If y ∈ ]0, 1[, we get by a horizontal integration that

f_Y(y) = ∫₀^y 8xy dx = 8y [x²/2]₀^y = 4y³,

and the marginal frequency is

f_Y(y) = 4y³  for y ∈ ]0, 1[,  f_Y(y) = 0  otherwise,

with the marginal distribution function F_Y(y) = y⁴ for 0 < y < 1 (0 for y ≤ 0, and 1 for y ≥ 1).

3) Since f_X(x) · f_Y(y) ≠ f(x, y), we see that X and Y are not stochastically independent.
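The constant c = 8 and the marginal f_X(x) = 4x − 4x³ of Example 4.4 can be checked with a crude double integral (midpoint rule; the tolerance is loose because the triangular boundary x = y is only resolved to grid accuracy):

```python
def f(x, y):
    # frequency of Example 4.4
    return 8 * x * y if 0 < x < y < 1 else 0.0

n = 400
h = 1.0 / n
mass = sum(f((i + 0.5) * h, (j + 0.5) * h)
           for i in range(n) for j in range(n)) * h * h

x0 = 0.3
fX = sum(f(x0, (j + 0.5) * h) for j in range(n)) * h
print(abs(mass - 1.0) < 1e-2, abs(fX - (4 * x0 - 4 * x0**3)) < 1e-2)  # True True
```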
Remark 4.2 If in general the domain, in which the frequency f(x, y) is strictly positive, is not a rectangle (possibly with infinite sides, so e.g. R × R is in this sense considered as a degenerated rectangle), then the random variables X and Y are never stochastically independent.

4) Since f(t, u) = 8tu ≠ 0 precisely for 0 < t < u < 1, we have 0 < t < min{x, u} in the inner integral, and thus for 0 < y < 1,

F(x, y) = ∫₀^y ∫₀^{min{x,u}} 8tu dt du = ∫₀^y 4u · (min{x, u})² du.

If x ≥ y, we get 0 < u < y ≤ x, hence min{x, u} = u, and thus

F(x, y) = ∫₀^y 4u · u² du = y⁴.

If 0 ≤ x ≤ y, then we get instead

F(x, y) = ∫₀^y 4u · (min{x, u})² du = ∫₀^x 4u³ du + ∫_x^y 4u x² du = x⁴ + 2x²(y² − x²) = 2x²y² − x⁴.
Figure: The graph of the distribution function F(x, y) = 2x²y² − x⁴ over 0 < x ≤ y < 1.
Example 4.5 A 2-dimensional random variable (X, Y) has the frequency

f(x, y) = c y²  for 0 < x < y < 1,  f(x, y) = 0  otherwise.

1) Find the constant c.
2) Find the frequencies and the distribution functions of the random variables X and Y.
3) Find the simultaneous distribution function F(x, y).

1) If c > 0, then obviously f(x, y) ≥ 0, and the condition

1 = c ∫₀¹ (∫₀^y y² dx) dy = c ∫₀¹ y³ dy = c/4

gives c = 4.

2) By a vertical integration, x ∈ ]0, 1[ fixed, we obtain the marginal frequency of X,

f_X(x) = ∫_x^1 4y² dy = (4/3)(1 − x³),

thus

f_X(x) = (4/3)(1 − x³)  for x ∈ ]0, 1[,  f_X(x) = 0  otherwise.

Figure 14: The domain 0 < x < y < 1.

By a horizontal integration, y ∈ ]0, 1[ fixed, we get the marginal frequency of Y,

f_Y(y) = ∫₀^y 4y² dx = 4y³,

hence

f_Y(y) = 4y³  for y ∈ ]0, 1[,  f_Y(y) = 0  otherwise.
3) When the plane is divided into the five sub-domains I–V of the figure, it follows that

I: F(x, y) = 1 for x ≥ 1 and y ≥ 1.
II: F(x, y) = 0 for x ≤ 0 or y ≤ 0.
III: F(x, y) = F_X(x) = (1/3)(4x − x⁴) for 0 < x < 1 and y ≥ 1.
IV: F(x, y) = F_Y(y) = y⁴ for x ≥ 1 and 0 < y < 1, and likewise F(x, y) = y⁴ for 0 < y ≤ x < 1, because the support lies in x < y.
V: Only here we need some computations. For 0 < x < y < 1 we get

F(x, y) = ∫₀^x (∫_t^y 4u² du) dt = ∫₀^x (4/3)(y³ − t³) dt = (4/3) x y³ − (1/3) x⁴,

hence

F(x, y) = (1/3)(4xy³ − x⁴)  for 0 < x < y < 1.
Example 4.6 A 2-dimensional random variable (X, Y) has the frequency

f(x, y) = c x  for 0 < y < 2x < 2,  f(x, y) = 0  otherwise.

1) Find the constant c.
2) Find the marginal frequencies and the distribution functions of the random variables X and Y.
3) Find the simultaneous distribution function F(x, y) of the 2-dimensional random variable (X, Y).

Figure 16: The graph of f(x, y), and its projection A, where f(x, y) > 0.

1) By means of a plane integral we get the condition

1 = c ∫₀¹ (∫₀^{2x} x dy) dx = c ∫₀¹ 2x² dx = (2/3) c.

Therefore, if we choose c = 3/2, then f(x, y) ≥ 0 everywhere, and its integral is 1, so the frequency is

f(x, y) = (3/2) x  for 0 < y < 2x < 2,  f(x, y) = 0  otherwise.
2) By a vertical integration we get for 0 < x < 1,

f_X(x) = ∫₀^{2x} (3/2) x dy = 3x²,

and by a horizontal integration for 0 < y < 2,

f_Y(y) = ∫_{y/2}^{1} (3/2) x dx = 3/4 − (3/16) y².

We find the corresponding distribution functions by an integration:

F_X(x) = 0  for x < 0,  F_X(x) = x³  for 0 ≤ x ≤ 1,  F_X(x) = 1  for x > 1;
F_Y(y) = 0  for y < 0,  F_Y(y) = (3/4) y − (1/16) y³  for 0 ≤ y ≤ 2,  F_Y(y) = 1  for y > 2.

3) We divide R² into the five domains I–V, cf. the figure. Clearly,

F(x, y) = 0  in domain I,  F(x, y) = 1  in domain II.

In domain III, i.e. for 0 < y < 2x and 0 < x < 1, we get

F(x, y) = ∫₀^y (∫_{u/2}^{x} (3/2) t dt) du = ∫₀^y ((3/4) x² − (3/16) u²) du = (3/4) x² y − (1/16) y³.

Figure 18: The domains I–V, separated by the line y = 2x.

We get in domain IV,

F(x, y) = F(1, y) = F_Y(y) = (3/4) y − (1/16) y³,

and in domain V,

F(x, y) = F(x, 2x) = (3/4) x² · 2x − (1/16) · 8x³ = x³  (= F_X(x)).
Example 4.7 A 2-dimensional random variable (X, Y) has the frequency

f(x, y) = c x  for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1 − x,  f(x, y) = 0  otherwise.

1) Find the constant c.
2) Find the marginal frequencies and the marginal distribution functions.
3) Find the simultaneous distribution function of (X, Y).

Figure 20: The domain of integration of the frequency, bounded by the axes and the line y = 1 − x.

1) It follows from the condition

1 = c ∫₀¹ (∫₀^{1−x} x dy) dx = c ∫₀¹ (x − x²) dx = c (1/2 − 1/3) = c/6,

that if c = 6, then the frequency of (X, Y) is given by

f(x, y) = 6x  for 0 ≤ x ≤ 1, 0 ≤ y ≤ 1 − x,  f(x, y) = 0  otherwise.
2) It follows by a vertical integration, x ∈ ]0, 1[ fixed, that

f_X(x) = ∫₀^{1−x} 6x dy = 6x − 6x²,

so the frequency of X is

f_X(x) = 6x − 6x²  for x ∈ ]0, 1[,  f_X(x) = 0  otherwise,

with the distribution function F_X(x) = 3x² − 2x³ for 0 ≤ x ≤ 1.

By a horizontal integration, y ∈ ]0, 1[ fixed, we get

f_Y(y) = ∫₀^{1−y} 6x dx = 3(1 − y)²,

so f_Y(y) = 3(1 − y)² for y ∈ ]0, 1[, f_Y(y) = 0 otherwise, with the distribution function F_Y(y) = 1 − (1 − y)³ for 0 ≤ y ≤ 1.
3) If we divide the plane into the domains I–VI (cf. Figure 25), it follows that

I: F(x, y) = 1 for x ≥ 1 and y ≥ 1.
II: F(x, y) = 0 for x ≤ 0 or y ≤ 0.
III: F(x, y) = F_X(x) = 3x² − 2x³ for 0 < x < 1 and y ≥ 1.
IV: F(x, y) = F_Y(y) = 1 − (1 − y)³ for x ≥ 1 and 0 < y < 1.

Figure 25: The domains I–VI.
Figure 26: The domain of integration in case V.

V: For 0 ≤ x ≤ 1 and 1 − x ≤ y ≤ 1 we get

F(x, y) = ∫₀^{1−y} 6t y dt + ∫_{1−y}^{x} 6t(1 − t) dt
= 3y(1 − y)² + [3t² − 2t³]_{1−y}^{x}
= 3y(1 − y)² + 3x² − 2x³ − 3(1 − y)² + 2(1 − y)³
= 3x² − 2x³ − (1 − y)³.

VI: For 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1 − x we get

F(x, y) = ∫₀^x 6t y dt = 3x² y.
Summing up we get

F(x, y) = 1  for x ≥ 1 and y ≥ 1,
F(x, y) = 0  for x ≤ 0 or y ≤ 0,
F(x, y) = 3x² − 2x³  for 0 < x < 1 and y ≥ 1,
F(x, y) = 1 − (1 − y)³  for x ≥ 1 and 0 < y < 1,
F(x, y) = 3x² − 2x³ − (1 − y)³  for 0 ≤ x ≤ 1 and 1 − x ≤ y ≤ 1,
F(x, y) = 3x² y  for 0 ≤ x ≤ 1 and 0 ≤ y ≤ 1 − x.
Example 4.8 Let X1 and X2 be independent random variables, both of the frequency
f(x) = x/2 for x ∈ ]0, 2[, and f(x) = 0 otherwise,
and put Y1 = X1 X2 and Y2 = X1/X2.
1. Compute the means E{X1} and E{1/X1}.
2. The vector function τ, given by
τ(x1, x2) = (x1 x2, x1/x2),
maps D = ]0, 2[ × ]0, 2[ bijectively onto
D′ = {(y1, y2) ∈ R² | 0 < y1 < 4y2, y1 y2 < 4}.
3. Sketch D′, and find the simultaneous frequency k(y1, y2) of (Y1, Y2).
4. Find the marginal frequencies of Y1 and Y2.
5. Are Y1 and Y2 independent?
1) It follows that
E{X1} = ∫_0^2 x · (x/2) dx = (1/2)[x³/3]_0^2 = 4/3,
and
E{1/X1} = ∫_0^2 (1/x) · (x/2) dx = (1/2) ∫_0^2 dx = 1.
Figure 27: The domain D′ lies between the y2-axis, the hyperbola y1 y2 = 4 and the straight line y1 = 4y2.
3) Here
x1 = √(y1 y2) and x2 = √(y1/y2).
Hence the Jacobian becomes
∂(x1, x2)/∂(y1, y2) = −1/(4y2) − 1/(4y2) = −1/(2y2).
The simultaneous frequency of (X1, X2) is
g(x1, x2) = f(x1) · f(x2) = (1/4) x1 x2 for (x1, x2) ∈ D, and g(x1, x2) = 0 otherwise,
so the simultaneous frequency of (Y1, Y2) is
k(y1, y2) = (1/4) √(y1 y2) · √(y1/y2) · 1/(2y2) = y1/(8y2) for (y1, y2) ∈ D′, and k(y1, y2) = 0 otherwise.
4) The marginal frequency of Y1 for 0 < y1 < 4 is given by
kY1(y1) = (y1/8) ∫_{y1/4}^{4/y1} dy2/y2 = (y1/8) [ln y2]_{y1/4}^{4/y1} = (y1/8) ln(16/y1²) = (y1/4) ln(4/y1),
and kY1(y1) = 0 otherwise.
For y2 ∈ ]0, 1] the marginal frequency of Y2 is
kY2(y2) = (1/(8y2)) ∫_0^{4y2} y1 dy1 = [y1²/(16y2)]_0^{4y2} = y2.
If instead y2 ∈ ]1, ∞[, then
kY2(y2) = (1/(8y2)) ∫_0^{4/y2} y1 dy1 = 1/y2³.
Summing up we get
kY1(y1) = (y1/4) ln(4/y1) for 0 < y1 < 4, and 0 otherwise,
and
kY2(y2) = y2 for 0 < y2 ≤ 1, 1/y2³ for y2 > 1, and 0 otherwise.
5) Since D is not a rectangular domain, we conclude that Y1 and Y2 cannot be independent
...
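The means and the marginal of Y2 can be checked by simulation. A small sketch (not from the book; names are illustrative): since F(x) = x²/4 on ]0, 2[, inversion gives X = 2√U, and by the marginal kY2 we have P{Y2 ≤ 1} = ∫_0^1 y2 dy2 = 1/2.

```python
import math
import random

random.seed(1)
N = 200_000
x1 = [2 * math.sqrt(random.random()) for _ in range(N)]
x2 = [2 * math.sqrt(random.random()) for _ in range(N)]

mean_x1 = sum(x1) / N                                   # expect E{X1} = 4/3
mean_y1 = sum(a * b for a, b in zip(x1, x2)) / N        # expect E{X1 X2} = (4/3)^2 = 16/9
p_y2_le_1 = sum(1 for a, b in zip(x1, x2) if a <= b) / N  # expect P{Y2 <= 1} = 1/2
```

(The estimator of E{1/X1} is deliberately avoided here: 1/X1 has infinite variance near 0, so its Monte Carlo estimate converges poorly.)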
Example 4.9 Let (X1, X2) be a 2-dimensional random variable with the simultaneous frequency
h(x1, x2) = (1/2)(x1 + x2) e^{−x1−x2} for x1, x2 > 0, and h(x1, x2) = 0 otherwise.
1. Find the marginal frequency of X1.
Then introduce the random variables Y1 and Y2 by
Y1 = X1 + X2, Y2 = X1 − X2.
2. Prove that Y1 and Y2 are non-correlated.
3. Are Y1 and Y2 independent?
4. Find the simultaneous frequency k(y1, y2) of (Y1, Y2).
5. Find the mean E{Y1} and the marginal frequency of Y2.
1) If x1 > 0, then
hX1(x1) = (1/2) e^{−x1} ∫_0^∞ (x1 e^{−x2} + x2 e^{−x2}) dx2 = (1/2)(x1 + 1) e^{−x1},
and hX1(x1) = 0 for x1 ≤ 0.
2) It follows from (1) that V{X1} = V{X2} by symmetry, thus
Cov(Y1, Y2) = Cov(X1 + X2, X1 − X2) = V{X1} − V{X2} + Cov(X2, X1) − Cov(X1, X2) = 0,
which shows that Y1 and Y2 are non-correlated.
3) Y1 and Y2 are not independent, cf. (4), because the domain |y2| < y1, on which k(y1, y2) > 0, is not a rectangle parallel to the coordinate axes.
4) With x1 = (y1 + y2)/2 and x2 = (y1 − y2)/2 the Jacobian is 1/2. Hence,
k(y1, y2) = (1/4) y1 e^{−y1} for |y2| < y1, and k(y1, y2) = 0 otherwise.
5) Since Y1 ∈ Γ(3, 1) is gamma distributed, we get
E{Y1} = 3 · 1 = 3,
which can also be found directly from
E{Y1} = (1/2) ∫_0^∞ y1³ e^{−y1} dy1 = 3!/2 = 3.
The marginal frequency of Y2 is
kY2(y2) = ∫_{|y2|}^{∞} (1/4) y1 e^{−y1} dy1 = (1/4)(|y2| + 1) e^{−|y2|}, y2 ∈ R.
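A numerical sketch of this example (not part of the original text): the density h(x1, x2) = ((x1 + x2)/2) e^{−x1−x2} is the equal mixture of Γ(2)⊗Exp(1) and Exp(1)⊗Γ(2), which makes exact sampling easy and lets us check E{Y1} = 3 and Cov(Y1, Y2) = 0.

```python
import random

random.seed(2)
N = 200_000
y1, y2 = [], []
for _ in range(N):
    # equal mixture representation of h(x1, x2)
    if random.random() < 0.5:
        a, b = random.gammavariate(2, 1), random.expovariate(1)
    else:
        a, b = random.expovariate(1), random.gammavariate(2, 1)
    y1.append(a + b)
    y2.append(a - b)

m1 = sum(y1) / N                                        # expect E{Y1} = 3
m2 = sum(y2) / N                                        # expect E{Y2} = 0 by symmetry
cov = sum(p * q for p, q in zip(y1, y2)) / N - m1 * m2  # expect ~ 0 (non-correlated)
```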
Example 4.10 Let (X1, X2) be a 2-dimensional random variable of the simultaneous frequency
h(x1, x2) = (1/2)(x1 + 1) e^{−x1−x2} for x1 > 0 and x2 > 0, and h(x1, x2) = 0 otherwise.
1. Find the marginal frequencies of X1 and X2, and check whether X1 and X2 are independent.
2. Find the means E{X1} and E{X2}.
We introduce the random variables Y1 and Y2 by
Y1 = X1 + X2, Y2 = X2/(X1 + X2).
3. Find the simultaneous frequency k(y1, y2) of (Y1, Y2).
4. Find the marginal frequencies of Y1 and Y2. Are Y1 and Y2 independent?
5. Find the mean E{Y1}.
1) Since h(x1, x2) has a nice factorization,
h(x1, x2) = fX1(x1) · fX2(x2),
where
fX1(x1) = (1/2)(x1 + 1) e^{−x1} for x1 > 0, and 0 for x1 ≤ 0,
fX2(x2) = e^{−x2} for x2 > 0, and 0 for x2 ≤ 0,
and fX1(x1) ≥ 0 and fX2(x2) ≥ 0, where
∫_{−∞}^{∞} fX1(x1) dx1 = 1 and ∫_{−∞}^{∞} fX2(x2) dx2 = 1,
we have
a) found the marginal frequencies,
b) and shown that X1 and X2 are stochastically independent.
Alternatively, for x1 > 0,
fX1(x1) = ∫_0^∞ h(x1, x2) dx2 = (1/2)(x1 + 1) e^{−x1},
and for x2 > 0,
fX2(x2) = ∫_0^∞ h(x1, x2) dx1 = (1/2) e^{−x2} ∫_0^∞ (x1 + 1) e^{−x1} dx1 = e^{−x2}.
2) The means are
E{X1} = ∫_0^∞ x1 fX1(x1) dx1 = (1/2) ∫_0^∞ (x1² + x1) e^{−x1} dx1 = (1/2)(2! + 1!) = 3/2,
and
E{X2} = ∫_0^∞ x2 fX2(x2) dx2 = ∫_0^∞ x2 e^{−x2} dx2 = 1! = 1.
3) The simultaneous frequency of (Y1, Y2) is
k(y1, y2) = h(x1, x2) |∂(x1, x2)/∂(y1, y2)|.
This formula shows that the task is to find x1 and x2 expressed by (y1, y2). From y1 = x1 + x2 and y2 = x2/(x1 + x2) we get
x1 = y1(1 − y2) and x2 = y1 y2.
Thus we get the weight function
∂(x1, x2)/∂(y1, y2) = y1 − y1 y2 + y1 y2 = y1 > 0 on D′ = R+ × ]0, 1[,
hence
k(y1, y2) = (1/2)(y1(1 − y2) + 1) y1 e^{−y1} for y1 > 0 and 0 < y2 < 1, and k(y1, y2) = 0 otherwise.
4) The marginal frequencies of Y1 and Y2 are computed for y1 > 0, resp. y2 ∈ ]0, 1[. (Otherwise they are 0.)
kY1(y1) = ∫_0^1 k(y1, y2) dy2 = (1/2) y1 e^{−y1} (y1/2 + 1) = (1/4) y1² e^{−y1} + (1/2) y1 e^{−y1},
and
kY2(y2) = ∫_0^∞ k(y1, y2) dy1 = (1/2){2(1 − y2) + 1} = 3/2 − y2.
Since k(y1, y2) ≠ kY1(y1) · kY2(y2), we see that Y1 and Y2 are not independent.
5) The mean of Y1 is
E{Y1} = E{X1} + E{X2} = 3/2 + 1 = 5/2.
Alternatively,
E{Y1} = ∫_0^∞ y1 kY1(y1) dy1 = ∫_0^∞ ((1/4) y1³ + (1/2) y1²) e^{−y1} dy1 = 3!/4 + 2!/2 = 3/2 + 1 = 5/2.
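A simulation sketch for this example (illustrative, not from the book): fX1 = (1/2)(x + 1)e^{−x} is the equal mixture of Exp(1) and Γ(2, 1), X2 is Exp(1). We check E{Y1} = 5/2 and E{Y2} = ∫_0^1 y(3/2 − y) dy = 5/12.

```python
import random

random.seed(3)
N = 200_000
s1 = s2 = 0.0
for _ in range(N):
    # mixture representation of fX1(x) = (1/2)(x+1)e^{-x}
    x1 = random.expovariate(1) if random.random() < 0.5 else random.gammavariate(2, 1)
    x2 = random.expovariate(1)
    s1 += x1 + x2
    s2 += x2 / (x1 + x2)
m_y1 = s1 / N   # expect 5/2
m_y2 = s2 / N   # expect 5/12 ≈ 0.4167
```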
Example 4.11 Let (X, Y) be a 2-dimensional random variable of the simultaneous frequency
h(x, y) = e^{−y} for 0 ≤ x ≤ y, and h(x, y) = 0 otherwise.
1) Find the marginal frequencies of the random variables X and Y.
2) Find the means E{X} and E{Y}.
3) Find the variances V{X} and V{Y} of the random variables X and Y.
4) Find the covariance Cov(X, Y) and the correlation coefficient ρ(X, Y).
5) Find the frequency of Z = X + Y.
1) The marginal frequency of X is
fX(x) = ∫_x^∞ e^{−y} dy = e^{−x} for x ≥ 0,
and fX(x) = 0 for x < 0. Analogously the marginal frequency of Y is given by
fY(y) = ∫_{x=0}^{y} e^{−y} dx = y e^{−y} for y ≥ 0,
and fY(y) = 0 for y ≤ 0.
2) Then
E{X} = ∫_0^∞ x e^{−x} dx = [−(x + 1) e^{−x}]_0^∞ = 1,
and
E{Y} = ∫_0^∞ y · y e^{−y} dy = ∫_0^∞ y² e^{−y} dy = [(−y² − 2y − 2) e^{−y}]_0^∞ = 2.
3) We first compute
E{X²} = ∫_0^∞ x² e^{−x} dx = [(−x² − 2x − 2) e^{−x}]_0^∞ = 2
and
E{Y²} = ∫_0^∞ y² · y e^{−y} dy = ∫_0^∞ y³ e^{−y} dy = 3! = 6.
Hence
V{X} = E{X²} − (E{X})² = 2 − 1 = 1 and V{Y} = E{Y²} − (E{Y})² = 6 − 4 = 2.
4) Moreover,
E{XY} = ∫_0^∞ y e^{−y} (∫_0^y x dx) dy = (1/2) ∫_0^∞ y³ e^{−y} dy = 3.
Then
Cov(X, Y) = E{XY} − E{X} · E{Y} = 3 − 2 · 1 = 1,
hence
ρ(X, Y) = Cov(X, Y)/√(V{X} V{Y}) = 1/√2.
5) If z ≤ 0, then clearly fZ(z) = 0. If z > 0, then the frequency is given by
fZ(z) = ∫_{−∞}^{∞} h(x, z − x) dx,
where the condition 0 ≤ x ≤ y = z − x is reformulated to 0 ≤ x ≤ z/2, hence
fZ(z) = ∫_0^{z/2} e^{−(z−x)} dx = e^{−z/2} − e^{−z}.
Alternatively we compute the distribution function of Z by the following double integral,
FZ(z) = ∫_{x=0}^{z/2} (∫_{y=x}^{z−x} e^{−y} dy) dx = ∫_0^{z/2} (e^{−x} − e^{x−z}) dx
= 1 − exp(−z/2) − e^{−z}(exp(z/2) − 1) = 1 + e^{−z} − 2 exp(−z/2) = (1 − exp(−z/2))²,
from which fZ(z) = F′Z(z) = e^{−z/2} − e^{−z} again.
Additional remark. From the frequency of Z one computes V{Z} = V{X + Y} = 5. This gives
Cov(X, Y) = (1/2)(V{X + Y} − V{X} − V{Y}) = (1/2){5 − 1 − 2} = 1,
in accordance with (4).
Example 4.12 Let (X, Y) be a 2-dimensional random variable of the simultaneous frequency
h(x, y) = (1/2) xy for 0 < y < x < 2, and h(x, y) = 0 otherwise.
1) Compute the marginal frequencies of X and Y.
2) Find the distribution functions FX and FY.
3) Find the means of X and Y.
4) Find the medians of X and Y.
1) The marginal frequencies:
a) For fixed x ∈ ]0, 2[ we integrate with respect to y ∈ ]0, x[, which gives
fX(x) = ∫_{y=0}^{x} (1/2) xy dy = (1/4) x³, 0 < x < 2,
and fX(x) = 0 otherwise.
b) Analogously, for fixed y ∈ ]0, 2[,
fY(y) = ∫_{x=y}^{2} (1/2) xy dx = y − (1/4) y³, 0 < y < 2,
and fY(y) = 0 otherwise.
2) We get the distribution functions by integrating the frequencies,
FX(x) = 0 for x ≤ 0, (1/16) x⁴ for 0 < x < 2, and 1 for x ≥ 2,
and
FY(y) = 0 for y ≤ 0, (1/2) y² − (1/16) y⁴ for 0 < y < 2, and 1 for y ≥ 2.
3) The means are
E{X} = ∫_0^2 x fX(x) dx = ∫_0^2 (1/4) x⁴ dx = 32/20 = 8/5,
and
E{Y} = ∫_0^2 y fY(y) dy = ∫_0^2 (y² − (1/4) y⁴) dy = 8/3 − 8/5 = 16/15.
4) The median of X is found from the equation
FX(x) = (1/16) x⁴ = 1/2,
i.e. x⁴ = 8, hence the median of X is ⁴√8.
The median of Y is found from
FY(y) = (1/2) y² − (1/16) y⁴ = 1/2,
i.e. y⁴ − 8y² + 8 = 0, hence y² = 4 ± √8. Since y² ≤ 2² = 4, we get y² = 4 − √8, so the median of Y is √(4 − √8).
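The means and the median of X can be confirmed by simulation. This sketch (illustrative, not from the book) uses FX(x) = x⁴/16, so X = 2U^{1/4} by inversion, and the conditional frequency of Y given X = x is (xy/2)/(x³/4) = 2y/x² on (0, x), i.e. Y = X√V for an independent uniform V.

```python
import random

random.seed(5)
N = 200_000
xs, ys = [], []
for _ in range(N):
    x = 2 * random.random() ** 0.25     # inversion of F_X
    y = x * random.random() ** 0.5      # inversion of the conditional CDF y^2/x^2
    xs.append(x)
    ys.append(y)

ex = sum(xs) / N            # expect E{X} = 8/5
ey = sum(ys) / N            # expect E{Y} = 16/15
med = sorted(xs)[N // 2]    # expect 8**0.25 ≈ 1.682
```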
Example 4.13 A rectangle has the edge lengths X1 and X2, where X1 and X2 are independent random variables, both of the frequency
f(x) = 3x² for 0 < x < 1, and f(x) = 0 otherwise.
1. Find the mean E{X1}.
2. Find the mean of the circumference of the rectangle, E{2X1 + 2X2}, and the mean of the area of the rectangle, E{X1 X2}.
Then put Y1 = X1 X2 and Y2 = X1/X2.
The vector function τ, given by
τ(x1, x2) = (x1 x2, x1/x2),
maps D = ]0, 1[ × ]0, 1[ bijectively onto
D′ = {(y1, y2) ∈ R² | 0 < y1 < y2, y1 y2 < 1}.
3. Sketch D′ and find the simultaneous frequency k(y1, y2) for (Y1, Y2).
4. Compute the marginal frequencies of Y1 and Y2.
5. Are Y1 and Y2 independent?
6. Find the mean and the median of Y2, and give an intuitive explanation of that the median is smaller than the mean.
1) We get
E{X1} = ∫_0^1 x · 3x² dx = 3/4.
2) Since X1 and X2 are independent, we get
E{2X1 + 2X2} = 4 E{X1} = 3,
and
E{X1 X2} = E{X1} · E{X2} = (3/4)² = 9/16.
3) From
x1 = √(y1 y2) and x2 = √(y1/y2),
we get the Jacobian
∂(x1, x2)/∂(y1, y2) = −1/(2y2) < 0,
and the simultaneous frequency for (y1, y2) ∈ D′ is given by
k(y1, y2) = 3(√(y1 y2))² · 3(√(y1/y2))² · 1/(2y2) = 9 · y1 y2 · (y1/y2) · 1/(2y2) = (9/2) · y1²/y2,
and k(y1, y2) = 0 otherwise.
4) The marginal frequency of Y1 for 0 < y1 < 1 is
kY1(y1) = (9/2) y1² ∫_{y1}^{1/y1} dy2/y2 = (9/2) y1² ln(1/y1²) = 9 y1² ln(1/y1),
and kY1(y1) = 0 otherwise.
If y2 ∈ ]0, 1], then
kY2(y2) = (9/(2y2)) ∫_0^{y2} y1² dy1 = (3/2) y2²,
and if y2 ∈ ]1, ∞[, then
kY2(y2) = (9/(2y2)) ∫_0^{1/y2} y1² dy1 = (3/2) · 1/y2⁴;
kY2(y2) = 0 otherwise.
5) Since D is not a rectangular domain, we conclude that Y1 and Y2 are not independent
...
6) The mean of Y2 is
E{Y2} = (3/2) ∫_0^1 y2³ dy2 + (3/2) ∫_1^∞ y2 · (1/y2⁴) dy2 = 3/8 + 3/4 = 9/8.
The median is determined by FY2 = 1/2; since ∫_0^1 (3/2) y2² dy2 = 1/2, the median of Y2 is 1, which is smaller than the mean 9/8: the distribution of Y2 has a long tail to the right, which pulls the mean above the median.
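A simulation sketch (illustrative, not from the book): f(x) = 3x² on ]0, 1[ gives X = U^{1/3} by inversion, so Y2 = X1/X2 is easy to sample; we check E{Y2} = 9/8 and median 1.

```python
import random

random.seed(6)
N = 200_000
y2 = []
for _ in range(N):
    x1 = random.random() ** (1 / 3)   # frequency 3x^2 by inversion
    x2 = random.random() ** (1 / 3)
    y2.append(x1 / x2)

mean_y2 = sum(y2) / N              # expect 9/8 = 1.125
median_y2 = sorted(y2)[N // 2]     # expect 1
```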
Example 4.14 Let X1 and X2 be independent random variables, where X1 has the frequency
fX1(x1) = x1 e^{−x1} for x1 > 0, and fX1(x1) = 0 otherwise,
and X2 has the frequency
fX2(x2) = 9 x2 e^{−3x2} for x2 > 0, and fX2(x2) = 0 otherwise.
We introduce the random variables Y1 and Y2 by
Y1 = X1 + 3X2, Y2 = X1/(X1 + 3X2).
1. Find the means E{X1} and E{X2}.
2. Express x1 and x2 by means of (y1, y2).
3. Find the simultaneous frequency k(y1, y2) of (Y1, Y2).
4. Are Y1 and Y2 independent?
5. Find the mean E{Y2}.
1) Since X1 ∈ Γ(2, 1), we have E{X1} = 2 · 1 = 2, and since X2 ∈ Γ(2, 1/3), we have E{X2} = 2 · (1/3) = 2/3.
2) It follows from
y1 = x1 + 3x2 and y2 = x1/(x1 + 3x2)
that
x1 = y1 y2 and x2 = (1/3)(y1 − x1) = (1/3) y1 (1 − y2).
Since |∂(x1, x2)/∂(y1, y2)| = y1/3, the simultaneous frequency for (y1, y2) ∈ R+ × ]0, 1[ is given by
k(y1, y2) = y1 y2 e^{−y1 y2} · 9 · (1/3) y1 (1 − y2) e^{−y1(1−y2)} · (1/3) y1
= y1³ e^{−y1} · y2(1 − y2) = (1/6) y1³ e^{−y1} · {6 y2 (1 − y2)},
and k(y1, y2) = 0 otherwise.
4) It follows from the factorization
k(y1, y2) = kY1(y1) · kY2(y2),
where kY1(y1) = (1/6) y1³ e^{−y1} for y1 > 0 (i.e. Y1 ∈ Γ(4, 1)) and kY2(y2) = 6 y2(1 − y2) for 0 < y2 < 1,
that Y1 and Y2 are independent.
5) Since Y2 is beta distributed with frequency 6 y2(1 − y2), we get E{Y2} = 2/(2 + 2) = 1/2.
Alternatively,
E{Y2} = 6 ∫_0^1 (y2² − y2³) dy2 = 6 (1/3 − 1/4) = 6/12 = 1/2.
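A simulation sketch (illustrative, not from the book): with X1 ~ Γ(2, 1) and X2 ~ Γ(2, 1/3), Y1 = X1 + 3X2 should be Γ(4, 1) (mean 4) and Y2 = X1/Y1 beta distributed with mean 1/2 and independent of Y1.

```python
import random

random.seed(7)
N = 200_000
y1s, y2s = [], []
for _ in range(N):
    x1 = random.gammavariate(2, 1)       # frequency x e^{-x}
    x2 = random.gammavariate(2, 1 / 3)   # frequency 9 x e^{-3x}
    t = x1 + 3 * x2
    y1s.append(t)
    y2s.append(x1 / t)

m1 = sum(y1s) / N                                        # expect E{Y1} = 4
m2 = sum(y2s) / N                                        # expect E{Y2} = 1/2
cov = sum(a * b for a, b in zip(y1s, y2s)) / N - m1 * m2 # expect ~ 0 (independent)
```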
Example 4.15 Let X1 and X2 be independent random variables of the frequencies
fX1(x1) = e^{−x1} for x1 > 0, and fX1(x1) = 0 for x1 ≤ 0,
and
fX2(x2) = x2 e^{−x2} for x2 > 0, and fX2(x2) = 0 for x2 ≤ 0.
We introduce the random variables Y1 = X1 + X2 and Y2 = X1/X2.
1. Find the means E{X1} and E{X2}.
The vector function τ, given by
τ(x1, x2) = (x1 + x2, x1/x2),
maps R+ × R+ bijectively onto itself.
2. Find the simultaneous frequency k(y1, y2) of (Y1, Y2).
3. Find the marginal frequencies of Y1 and Y2.
4. Are Y1 and Y2 independent?
5. Find the mean E{Y2}.
6. Find the median of Y2.
1) Clearly, E{X1} = 1 and E{X2} = 2.
Alternatively,
E{X1} = ∫_0^∞ x1 e^{−x1} dx1 = 1 and E{X2} = ∫_0^∞ x2² e^{−x2} dx2 = 2! = 2.
2) It follows from y1 = x1 + x2 and y2 = x1/x2 that
x2 = y1/(1 + y2) and x1 = y1 y2/(1 + y2) = y1 − y1/(1 + y2).
From (y1, y2) ∈ R+ × R+ follows that (x1, x2) ∈ R+ × R+, and vice versa, so τ maps the domain R+ × R+ bijectively onto itself. The Jacobian is
∂(x1, x2)/∂(y1, y2) = −(1/(1 + y2)³)(y2 y1 + y1) = −y1/(1 + y2)² < 0.
...
360°
thinking
...
deloitte
...
Discover the truth at www
...
ca/careers
© Deloitte & Touche LLP and affiliated entities
...
com
74
Discover the truth at www
...
ca/careers
© Deloitte & Touche LLP and affiliated entities
...
D
4
...
Thus the simultaneous frequency of (Y1, Y2) is
k(y1, y2) = e^{−y1} · (y1/(1 + y2)) · (y1/(1 + y2)²) = (1/2) y1² e^{−y1} · 2/(1 + y2)³ for y1 > 0 and y2 > 0,
and k(y1, y2) = 0 otherwise.
4) It follows that Y1 and Y2 are independent, because k(y1, y2) is the product of a function of y1 alone and a function of y2 alone on R+ × R+.
3) The marginal frequency of Y1 is
kY1(y1) = (1/2) y1² e^{−y1} for y1 > 0, i.e. Y1 ∈ Γ(3, 1), and kY1(y1) = 0 otherwise.
If y2 > 0, we get by the quotient formula that
kY2(y2) = ∫_0^∞ fX1(y2 t) fX2(t) |t| dt = ∫_0^∞ e^{−y2 t} · t e^{−t} · t dt = ∫_0^∞ t² e^{−(1+y2)t} dt = 2!/(1 + y2)³ = 2/(1 + y2)³,
and kY2(y2) = 0 for y2 ≤ 0.
5) Since X1 and X2 are independent, the mean is
E{Y2} = E{X1/X2} = E{X1} · E{1/X2} = 1 · 1 = 1,
where E{1/X2} = ∫_0^∞ (1/x2) · x2 e^{−x2} dx2 = 1.
Alternatively,
E{Y2} = ∫_0^∞ y2 · 2/(1 + y2)³ dy2 = 1.
Alternatively, 2Y2 ∈ F(2, 4), so
E{2Y2} = n2/(n2 − 2) = 4/2 = 2, hence E{Y2} = 1.
6) The distribution function of Y2 for y2 > 0 is given by
FY2(y2) = ∫_0^{y2} 2/(1 + t)³ dt = [−1/(1 + t)²]_0^{y2} = 1 − 1/(1 + y2)²,
so the median is determined by
1/(1 + y2)² = 1/2,
hence the median of Y2 is √2 − 1.
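A simulation sketch (illustrative, not from the book): with X1 ~ Exp(1) and X2 ~ Γ(2, 1), Y2 = X1/X2 should have distribution function 1 − 1/(1 + y)², so its median is √2 − 1 and F(1) = 3/4.

```python
import math
import random

random.seed(8)
N = 200_000
y2 = sorted(random.expovariate(1) / random.gammavariate(2, 1) for _ in range(N))

median_y2 = y2[N // 2]                          # expect sqrt(2) - 1 ≈ 0.4142
p_le_1 = sum(1 for v in y2 if v <= 1) / N       # expect F(1) = 1 - 1/4 = 0.75
```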
5 Functions of random variables, in general
Example 5.1 Let X1 and X2 be random variables, and let ϕ1 and ϕ2 be measurable functions.
1) Assume that X1 and X2 are independent. Does it follow that Y1 = ϕ1(X1) and Y2 = ϕ2(X2) are independent?
2) Assume that X1 and X2 are dependent. Does it follow that Y1 and Y2 are dependent?
1) The answer is 'yes'. Let A and B be Borel sets. Then
P{ϕ1(X1) ∈ A ∧ ϕ2(X2) ∈ B} = P{X1 ∈ ϕ1⁻¹(A) ∧ X2 ∈ ϕ2⁻¹(B)}
= P{X1 ∈ ϕ1⁻¹(A)} · P{X2 ∈ ϕ2⁻¹(B)} = P{ϕ1(X1) ∈ A} · P{ϕ2(X2) ∈ B},
because X1, X2 are independent, and we conclude that ϕ1(X1) and ϕ2(X2) are stochastically independent.
2) The answer is 'no'. Let
ϕ1(X1) = c1 and ϕ2(X2) = c2
be constant maps. Then Y1 = c1 and Y2 = c2 are trivially independent, no matter whether X1 and X2 are dependent or not.
Example 5.2 A 2-dimensional random variable (X, Y) has its distribution given by a table of the probabilities P{X = i ∧ Y = j}.
Compute P{X · Y is even}. Are X and Y independent?
1) All probabilities are ≥ 0, and their sum is 1, so the table describes a distribution.
2) By a counting of the table we get
P{X · Y is even} = P{X = 2} + P{X = 1 ∧ Y = 2} + P{X = 3 ∧ Y = 2} = 1/3 + 1/4 + 1/12 = 2/3.
4) The random variables X and Y are not independent, since e.g. P{X = 1 ∧ Y = 1} = 0 ≠ P{X = 1} · P{Y = 1}.
Example 5.3 (Buffon's needle.) In the plane we are given two parallel lines ℓ1 and ℓ2 with the distance a between them. A needle of length 2b, where b < a, is thrown such that it falls randomly between the two lines in the following sense: The midpoint of the needle has the distance X from ℓ1, where X is rectangularly distributed over ]0, a[, and the needle forms an angle Y with the two parallel lines, where Y is rectangularly distributed over ]0, π[; X and Y are independent.
1) Find a condition – expressed by X, Y, b – which describes that the needle intersects the line ℓ1.
2) Prove that the probability that the needle intersects ℓ1 is (2b/a) · (1/π).
Remark 5.1 This gives a method of finding an approximation of π: if a needle is thrown at random many times, then the fraction f of throws in which the needle intersects ℓ1 is approximately equal to (2b/a) · (1/π). Since then many people have tried to find π in this way. One early experimenter obtained intersection in 2532 of the cases, which gave the approximate value 3.160 of π, which is quite fair. Lazzarini used a = 3 cm, b = 2.5 cm; the needle was thrown 3408 times, and he obtained intersection 1808 times, hence
π ≈ (5/3) · (3408/1808) = 3.1415926...
Some mathematicians have later pointed out the fairly strange number 3408 of throws, and they also noted that Lazzarini's fraction can be reduced to 355/113, which long has been known as one of the very best rational approximations of π.
1) It follows by the geometry that the needle intersects ℓ1, if X ≤ b · sin Y.
2) Since X is rectangularly distributed over ]0, a[, and Y is rectangularly distributed over ]0, π[, we get
fX(x) = 1/a for x ∈ ]0, a[, and fY(y) = 1/π for y ∈ ]0, π[,
both 0 otherwise. By independence,
P{X ≤ b · sin Y} = ∫_0^π (1/π) (∫_0^{b sin y} (1/a) dx) dy = (1/(aπ)) ∫_0^π b sin y dy = (2b/a) · (1/π).
Alternatively,
P {X ≤ b · sin Y } = P {X − b · sin Y ≤ 0},
so we can instead find the distribution function of Z = X − b · sin Y
...
com/Mitas
www
...
com
M
Month 16
I was a construction
M
supervisor ina cons
I was
the North Sea super
advising and the No
he
helping foremen advis
s Real work
solve problems
he
helping f
International
Internationa opportunities
al
�ree wo placements
work
or
s
solve p
Download free eBooks at bookboon
...
Functions and random variables, in general
Random variables I
We shall, however, first find the distribution function G(y) of −b · sin Y,
G(y) = P{−b · sin Y ≤ y} = P{sin Y ≥ −y/b}.
If y ≥ 0, then G(y) = 1, and if y ≤ −b, then G(y) = 0. For −b < y < 0 we get, Y being rectangularly distributed over ]0, π[,
G(y) = 1 − (2/π) Arcsin(−y/b),
hence the frequency of −b · sin Y is
g(y) = G′(y) = (2/π) · 1/√(b² − y²) for −b < y < 0, and g(y) = 0 otherwise.
Since X and Y, and hence also X and −b · sin Y, are independent, we conclude that Z = X − b · sin Y has the frequency
h(s) = ∫_{−∞}^{∞} fX(s − x) g(x) dx, s ∈ R.
Now,
∫_0^x χ_{[0,a]}(s) ds = a for x ≥ a, x for x ∈ [0, a], and 0 for x < 0,
so FX(x) = x/a on [0, a], and we get for b < a,
P{X ≤ b · sin Y} = P{Z ≤ 0} = ∫_{−b}^{0} FX(−x) g(x) dx = (2/(πab)) ∫_0^b x · 1/√(1 − (x/b)²) dx = 2b²/(πab) = (2b/a) · (1/π),
which is the searched result
...
Remark 5.2 If the needle is thrown a great number of times, then the relative frequency f of throws in which it intersects ℓ1 will approximately be (2b/a) · (1/π), so we conclude that
π ≈ (2b/a) · (1/f).
The results obtained in practice have either been too poor, or one has cheated (like e.g. Lazzarini).
Remark 5.3 One can also go through this example without the assumption that b < a; but in this case the computations become really tough, because the curve x = b · sin y then intersects the line x = a.
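The experiment in Remark 5.2 is easy to repeat on a computer. A minimal sketch (illustrative parameters a = 1, 2b = 1, so b = 1/2 < a), using the intersection condition X ≤ b sin Y derived above:

```python
import math
import random

random.seed(9)
a, b = 1.0, 0.5
N = 400_000
hits = sum(1 for _ in range(N)
           if random.uniform(0, a) <= b * math.sin(random.uniform(0, math.pi)))
f = hits / N                       # should approach (2b/a)*(1/pi) = 1/pi
pi_estimate = (2 * b / a) / f      # Buffon's estimate of pi
```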
6 Inequalities between two random variables
Example 6.1 Two persons A and B have the intention of meeting between 8 AM and 9 AM. Each of them arrives at a random time between 8 AM and 9 AM, the two arrival times being independent and rectangularly distributed. Furthermore, they have agreed that none of them will wait for more than 10 minutes.
1) Find the probability that they meet.
2) If instead A and B have agreed that A will wait 15 minutes for B, while B will wait 5 minutes for A, what is then the probability that they meet?
Hint. Let X be the arrival time of A, and let Y be the arrival time of B.
Figure 28: The domain where the simultaneous frequency is 1.
Figure 29: The domain C is the diagonal strip.
1) Measuring time in hours from 8 AM, the event that they meet is |X − Y| < 1/6. The probability is equal to the area of C, hence
P{|X − Y| < 1/6} = 1 − 2 · (1/2) · (5/6)² = 11/36 ≈ 0.306.
Figure 30: The domain D is the translated diagonal strip.
2) The event now corresponds to −1/12 < Y − X < 1/4. The probability is equal to the area of D on the figure,
1 − (1/2)(3/4)² − (1/2)(11/12)² = 1 − 101/144 = 43/144 ≈ 0.30.
Alternatively, (1) is solved in the following way:
P{meeting} = P{A arrives between 8:00 and 8:50, and B arrives at most 10 min. later}
+ P{B arrives between 8:00 and 8:50, and A arrives at most 10 min. later}
+ P{A and B both arrive between 8:50 and 9:00}
= (5/6)(1/6) + (5/6)(1/6) + (1/6)(1/6) = 11/36.
Since f(x, y) = 1_{[0,1]²}(x, y), the frequency of Z = X − Y is
h(z) = ∫_{−∞}^{∞} f(x, x − z) dx = ∫_0^1 1_{[0,1]}(x − z) dx.
The integrand is only ≠ 0 if x ∈ ]0, 1[ and x − z ∈ ]0, 1[, i.e. x ∈ ]z, z + 1[, thus for z ∈ ]−1, 1[.
(i) For z ∈ ]−1, 0[ we get h(z) = ∫_0^{z+1} dx = z + 1.
(ii) For z ∈ ]0, 1[ we get h(z) = ∫_z^1 dx = 1 − z.
(iii) If z ∉ ]−1, 1[, then h(z) = 0.
Then the task can be treated in the following way:
1) P{|X − Y| < 1/6} = P{−1/6 < Z < 1/6}
= ∫_{−1/6}^{0} (z + 1) dz + ∫_{0}^{1/6} (1 − z) dz
= [(1/2)(z + 1)²]_{−1/6}^{0} + [−(1/2)(1 − z)²]_{0}^{1/6}
= 1 − (5/6)² = 11/36.
2) P{−1/12 < Z < 1/4}
= ∫_{−1/12}^{0} (z + 1) dz + ∫_{0}^{1/4} (1 − z) dz
= [(1/2)(z + 1)²]_{−1/12}^{0} + [−(1/2)(1 − z)²]_{0}^{1/4}
= 1 − (1/2)(11/12)² − (1/2)(3/4)² = 1 − 101/144 = 43/144.
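Both waiting rules of Example 6.1 are easy to check by simulation (a sketch, not from the book; times in hours after 8 AM):

```python
import random

random.seed(10)
N = 400_000
both_10 = asym = 0
for _ in range(N):
    x, y = random.random(), random.random()   # arrival times of A and B
    if abs(x - y) < 1 / 6:                    # each waits 10 minutes
        both_10 += 1
    if -1 / 12 < y - x < 1 / 4:               # A waits 15 min, B waits 5 min
        asym += 1
p1 = both_10 / N    # expect 11/36 ≈ 0.3056
p2 = asym / N       # expect 43/144 ≈ 0.2986
```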
Example 6.2 Henry and John have agreed to meet between 8 AM and 9 AM. Both Henry and John arrive at randomly chosen times between 8 AM and 9 AM, independently of each other.
1) Find the probability that John arrives after Henry.
2) Find the probability that John arrives more than 10 minutes after Henry.
3) Find the probability that Henry and John arrive within 5 minutes of each other.
Hint. Let X denote the arrival time of Henry, and Y the arrival time of John.
Figure 32: The domain where the simultaneous frequency is 1.
Since X and Y are independent and rectangularly distributed over e.g. ]0, 1[, the simultaneous frequency is
f(x, y) = 1 for (x, y) ∈ ]0, 1[ × ]0, 1[, and f(x, y) = 0 otherwise,
and
∫∫_A f(x, y) dx dy = area(A) for A ⊆ ]0, 1[ × ]0, 1[.
1) P{John arrives after Henry} = P{X < Y} = area of the upper triangle = 1/2.
2) P{John arrives more than 10 min. after Henry} = P{Y > X + 1/6} = area of the triangle above the line y = x + 1/6 = (1/2)(5/6)² = 25/72.
Figure 33: The domain given by X < Y is the upper triangle.
Figure 35: The domain where the difference is at most 5 minutes is represented by the domain around the diagonal.
3) P{Henry and John arrive within 5 min. of each other} = P{|X − Y| < 1/12} = area of the diagonal strip = 1 − 2 · (1/2) · (11/12)² = 23/144.
Example 6.3 Two persons A and B arrive at a meeting point between 7 AM and 8 AM. (We adjust the time at 7 AM.) The arrival times X of A and Y of B are independent with the frequencies
f(x) = 2x for 0 < x < 1, and g(y) = 2y for 0 < y < 1,
both 0 otherwise. A is willing to wait 1/3 of an hour for B, while B does not wait for A. Find the probability that the two persons meet, i.e. P{X ≤ Y ≤ X + 1/3}.
Since X and Y are independent, the frequency of the 2-dimensional random variable (X, Y) is given by
h(x, y) = f(x) g(y) = 4xy for 0 < x, y < 1, and h(x, y) = 0 otherwise.
Since A waits at most 1/3 hour, the task is to find P{X ≤ Y ≤ X + 1/3}, i.e. the probability of the diagonal strip on the figure. We first integrate vertically (the inner integral, so x is kept fixed),
P{X ≤ Y ≤ X + 1/3}
= ∫_0^{2/3} (∫_x^{x+1/3} 4xy dy) dx + ∫_{2/3}^{1} (∫_x^{1} 4xy dy) dx
= ∫_0^{2/3} ((4/3)x² + (2/9)x) dx + ∫_{2/3}^{1} (2x − 2x³) dx
= [(4/9)x³ + (1/9)x²]_0^{2/3} + [x² − (1/2)x⁴]_{2/3}^{1}
= 32/243 + 4/81 + (1/2 − 28/81)
= 44/243 + 1/2 − 84/243 = 1/2 − 40/243 = (243 − 80)/486 = 163/486 ≈ 0.335.
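A quick simulation sketch of this result (illustrative, not from the book): since F(x) = x² on ]0, 1[, the arrival times are sampled as X = √U by inversion.

```python
import random

random.seed(11)
N = 400_000
hits = 0
for _ in range(N):
    x = random.random() ** 0.5   # frequency 2x by inversion
    y = random.random() ** 0.5   # frequency 2y by inversion
    if x <= y <= x + 1 / 3:
        hits += 1
p = hits / N   # expect 163/486 ≈ 0.3354
```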
Example 6.4 According to their schedules, 2 trains A and B shall arrive at a station at the same time, each on its own line. Train A stays 5 minutes at the station, and train B stays 4 minutes. However, the trains are very often delayed up to 20 minutes, so we assume that the arrival time X of train A is rectangularly distributed over [0, 20] (measured in minutes), and the arrival time Y of train B is also rectangularly distributed over [0, 20], the two delays being independent.
1) Find the probability that train A arrives before train B.
2) Find the probability that the two trains are at some moment simultaneously at the station.
3) Find the probability that train A arrives before train B and departs after train B.
1) It follows by an area consideration of weight 1/400 that P{X < Y} = 1/2.
2) The trains are simultaneously at the station if and only if X − 4 < Y < X + 5. Then by an area consideration,
P{X − 4 < Y < X + 5} = (1/400)(400 − (1/2) · 15² − (1/2) · 16²) = 1 − 481/800 = 319/800.
Figure 38: The event X − 4 < Y < X + 5 is represented by the diagonal strip.
3) If A arrives before B, i.e. X < Y, and departs after B, i.e. X + 5 > Y + 4, then X < Y < X + 1. Hence by an area consideration,
P{X < Y < X + 1} = (1/400)((1/2) · 20² − (1/2) · 19²) = (1/2)(1 − (19/20)²) = 39/800.
Example 6.5 Henry and Peter each throw a die. Every minute (t = 1, 2, 3, ...) they both make a throw; the probability of getting a six in a single throw is 1/6. We define the random variables X and Y by
X = k, if Henry obtains his first six in throw number k,
Y = k, if Peter obtains his first six in throw number k.
1. Find P{X = k}, k ∈ N, and find the mean E{X}.
2. Find for every k ∈ N the probability P{X = k ∧ Y = k}, and then find P{X = Y}.
3. Find P{X < Y}.
4. Let Z denote the number of the throw in which Henry obtains his second six. Find P{Z = k}, k = 2, 3, 4, ....
5. Find for k = 2, 3, 4, ... the probability P{Z = k ∧ Y > k}, and then find P{Z < Y}.
1) Since X (and also Y) is geometrically distributed with p = 1/6, we get
P{X = k} = (1/6)(5/6)^{k−1}, k ∈ N,
and E{X} = 6.
2) For every k ∈ N,
P{X = k ∧ Y = k} = P{X = k} · P{Y = k} = (1/36)(25/36)^{k−1}.
Then by a summation,
P{X = Y} = Σ_{k=1}^{∞} P{X = k ∧ Y = k} = (1/36) · 1/(1 − 25/36) = 1/11.
3) Clearly,
P{X < Y} + P{Y < X} + P{X = Y} = 1,
and by symmetry P{X < Y} = P{Y < X}, hence
P{X < Y} = (1/2)(1 − 1/11) = 5/11.
4) The random variable Z can be written Z = X1 + X2, where X1 and X2 are independent and both of the same distribution as X. Hence for k = 2, 3, 4, ...,
P{Z = k} = (k − 1)(1/6)²(5/6)^{k−2}.
5) Here,
P{Z = k ∧ Y > k} = P{Z = k} · P{Y > k} = (k − 1)(1/36)(5/6)^{k−2} · (5/6)^{k} = (k − 1)(1/36)(25/36)^{k−1},
hence
P{Z < Y} = Σ_{k=2}^{∞} P{Z = k ∧ Y > k} = (1/36) Σ_{k=2}^{∞} (k − 1)(25/36)^{k−1} = (1/36) · (25/36)/(11/36)² = 25/121.
7 Functions Y = f(X) of random variables
Example 7.1 Let X be rectangularly distributed over ]0, a[, where a > 0, and put Y = X² + X. Find the frequency of Y.
The frequency of X is
f(x) = 1/a for 0 < x < a, and f(x) = 0 otherwise.
The map τ(x) = x² + x is increasing on ]0, a[ with the range ]0, a² + a[. The inverse map is
x = τ⁻¹(y) = −1/2 + √(1/4 + y).
Then the frequency of Y is
g(y) = (1/a) · (τ⁻¹)′(y) = (1/a) · 1/(2√(1/4 + y)) for y ∈ ]0, a² + a[,
and g(y) = 0 otherwise.
Alternatively, we get for y ∈ ]0, a² + a[,
G(y) = P{Y ≤ y} = P{X ≤ −1/2 + √(1/4 + y)} = (1/a)(−1/2 + √(1/4 + y)),
hence in the same interval,
g(y) = G′(y) = (1/a) · 1/(2√(1/4 + y)).
Example 7.2 A line segment of length 1 is randomly divided into two pieces of the lengths X and 1 − X, where we assume that X is rectangularly distributed over the interval ]0, 1[. The two pieces are used as the sides of a rectangle. Find the probability that the area of this rectangle is bigger than 1/8.
The area is Y = X(1 − X), so we shall find the probability that this expression is bigger than 1/8. Now X(1 − X) > 1/8 if and only if x² − x + 1/8 < 0, i.e. if and only if
1/2 − √2/4 < X < 1/2 + √2/4.
Since X is rectangularly distributed, we get
P{X(1 − X) > 1/8} = (1/2 + √2/4) − (1/2 − √2/4) = √2/2.
Remark 7.1 It is possible in general to find the distribution function of
Y = f(X) = X(1 − X).
Note that the probability of Y = X(1 − X) being bigger than y is
P{Y > y} = 1 − FY(y) = 1 for y ≤ 0, √(1 − 4y) for 0 < y < 1/4, and 0 for y ≥ 1/4.
♦
Example 7.3 Let X be rectangularly distributed over ]−π/2, π/2[. Find the distribution functions and the frequencies of the random variables
Y = sin X, Z = cos X, U = tan X.
1) Y = sin X. The map x ↦ sin x is an increasing bijection of ]−π/2, π/2[ onto ]−1, 1[. We first find the frequency fY(y) of Y: for y ∈ ]−1, 1[, x = Arcsin y and dx/dy = 1/√(1 − y²), hence
fY(y) = (1/π) · 1/√(1 − y²) for −1 < y < 1, and fY(y) = 0 otherwise.
Figure 40: The graph of z = cos x for x ∈ ]−π/2, π/2[.
If −1 < y < 1, we get the distribution function
FY(y) = P{Y ≤ y} = ∫_{−1}^{y} (1/π) · 1/√(1 − t²) dt = [(1/π) Arcsin t]_{−1}^{y} = (1/π) Arcsin y + 1/2.
2) Z = cos X. In this case, only z ∈ ]0, 1[ is of interest. It follows from the symmetry of the cosine – cf. the figure – that
P{Z ≤ z} = 2 P{Arccos z ≤ X < π/2} = 2 · (1/π)(π/2 − Arccos z) = 1 − (2/π) Arccos z = (2/π) Arcsin z,
hence fZ(z) = (2/π) · 1/√(1 − z²) for 0 < z < 1, and fZ(z) = 0 otherwise.
Since z = cos x is not monotonous in ]−π/2, π/2[, we cannot apply the usual transformation theorem directly.
3) U = tan X. If u ∈ R, then
P{U ≤ u} = P{X ≤ Arctan u} = (1/π)(Arctan u + π/2) = (1/π) Arctan u + 1/2,
hence
FU(u) = (1/π) Arctan u + 1/2 and fU(u) = F′U(u) = 1/(π(1 + u²)), u ∈ R,
i.e. U is Cauchy distributed. Alternatively, x = Arctan u and dx/du = 1/(1 + u²); then apply the standard formula.
Example 7.4 Let X be rectangularly distributed over ]0, π[. Find the distribution functions and the frequencies of the random variables
Y = 1/X, Z = cos X, U = sin X.
1) Y = 1/X. If x ∈ ]0, π[, then y = 1/x runs through ]1/π, ∞[. For y > 1/π,
FY(y) = P{1/X ≤ y} = P{X ≥ 1/y} = 1 − 1/(πy),
and FY(y) = 0 for y ≤ 1/π. Hence
fY(y) = 1/(πy²) for y > 1/π, and fY(y) = 0 otherwise.
2) Z = cos X. If z ∈ ]−1, 1[, then we get the distribution function
FZ(z) = P{Z ≤ z} = P{cos X ≤ z} = P{X ≥ Arccos z} = 1 − P{X < Arccos z} = 1 − (1/π) Arccos z,
hence
FZ(z) = 0 for z ≤ −1, 1 − (1/π) Arccos z for −1 < z < 1, and 1 for z ≥ 1,
and
fZ(z) = (1/π) · 1/√(1 − z²) for −1 < z < 1, and fZ(z) = 0 otherwise.
Figure 43: The graph of u = sin x for x ∈ ]0, π[.
3) U = sin X. If u ∈ ]0, 1[, then we get the distribution function
FU(u) = P{U ≤ u} = P{sin X ≤ u} = P{X ≤ Arcsin u} + P{X ≥ π − Arcsin u}
= P{X ≤ Arcsin u} + 1 − P{X < π − Arcsin u}
= (1/π) Arcsin u + 1 − (1/π)(π − Arcsin u) = (2/π) Arcsin u,
hence
FU(u) = 0 for u ≤ 0, (2/π) Arcsin u for 0 < u < 1, and 1 for u ≥ 1,
and
fU(u) = (2/π) · 1/√(1 − u²) for 0 < u < 1, and fU(u) = 0 otherwise.
Alternatively: if t(x) is a bijective transformation, and x = x(t) is the inverse, then we have in the form of differentials,
fT(t) dt = fX(x(t)) |dx/dt| dt.
1) If x ∈ ]0, π[, then y = 1/x ∈ ]1/π, ∞[ bijectively, and x = 1/y, |dx/dy| = 1/y², hence fY(y) = (1/π) · 1/y² for y > 1/π.
2) If x ∈ ]0, π[, then z = cos x ∈ ]−1, 1[ bijectively, with x = Arccos z and
dx/dz = −1/√(1 − z²).
If z ∈ ]−1, 1[, then
fZ(z) = fX(Arccos z) · |−1/√(1 − z²)| = (1/π) · 1/√(1 − z²),
so we get by an integration that
FZ(z) = ∫_{−1}^{z} (1/π) · 1/√(1 − ζ²) dζ = (1/π)[−Arccos ζ]_{−1}^{z} = (1/π){π − Arccos z} = 1 − (1/π) Arccos z.
Example 7.5 A random variable X has the frequency
f(x) = 3/(2π) for 0 < x ≤ π/2, 1/(2π) for π/2 < x < π, and 0 otherwise.
1) Find the distribution function of X.
2) Find the distribution function and the frequency of Y = 1/X.
3) Find the distribution function and the frequency of Z = sin X.
1) If x ≤ 0, then F(x) = 0. If 0 < x ≤ π/2, then F(x) = 3x/(2π). In particular, F(π/2) = 3/4, hence for π/2 < x < π,
F(x) = F(π/2) + ∫_{π/2}^{x} 1/(2π) dt = 3/4 + (1/(2π))(x − π/2) = 1/2 + x/(2π).
Summing up we get the distribution function
F(x) = 0 for x ≤ 0, 3x/(2π) for 0 < x ≤ π/2, 1/2 + x/(2π) for π/2 < x < π, and 1 for x ≥ π.
2) 1st variant. If y ≥ 1/π, then
FY(y) = P{1/X ≤ y} = P{X ≥ 1/y} = 1 − F(1/y).
The frequency is obtained by a differentiation,
fY(y) = 3/(2πy²) for y > 2/π, 1/(2πy²) for 1/π < y < 2/π, and 0 otherwise.
2nd variant. Since x = 1/y and dx/dy = −1/y², it follows that
fY(y) = f(1/y) · 1/y² = 3/(2πy²) for y ≥ 2/π, 1/(2πy²) for 1/π < y < 2/π, and 0 otherwise.
If y ≤ 1/π, then FY(y) = 0. We get for 1/π ≤ y ≤ 2/π that
FY(y) = ∫_{1/π}^{y} 1/(2πη²) dη = [−1/(2πη)]_{1/π}^{y} = 1/2 − 1/(2πy).
If y > 2/π, then
FY(y) = 1/4 + ∫_{2/π}^{y} 3/(2πη²) dη = 1/4 + 3/4 − 3/(2πy) = 1 − 3/(2πy).
3) The function z = sin x is not bijective on ]0, π[, so we cannot apply the usual theorem. For 0 < z < 1,
FZ(z) = P{sin X ≤ z} = P{X ≤ Arcsin z} + P{X ≥ π − Arcsin z}.
Since Arcsin z ∈ ]0, π/2] and π − Arcsin z ∈ [π/2, π[, we get from the distribution function F(x) = P{X ≤ x} found in (1) that
FZ(z) = (3/(2π)) Arcsin z + 1 − {1/2 + (1/(2π))(π − Arcsin z)} = (2/π) Arcsin z,
thus
FZ(z) = 0 for z ≤ 0, (2/π) Arcsin z for 0 < z < 1, and 1 for z ≥ 1,
and hence
fZ(z) = (2/π) · 1/√(1 − z²) for 0 < z < 1, and fZ(z) = 0 otherwise.
Example 7.6 Let X be exponentially distributed with the frequency f(x) = e^{−x} for x > 0, and f(x) = 0 otherwise. Find the frequencies of the random variables
Y = sinh X, Z = cosh X.
1) Y = sinh X. Here x = τ⁻¹(y) = ln(y + √(1 + y²)) for y > 0, and (τ⁻¹)′(y) = 1/√(1 + y²).
It follows from the usual theorem that the frequency is given for y > 0 by
g(y) = f(τ⁻¹(y)) · (τ⁻¹)′(y) = e^{−ln(y+√(1+y²))} · 1/√(1 + y²) = 1/(y + √(1 + y²)) · 1/√(1 + y²),
thus
g(y) = 1/(y + √(1 + y²)) · 1/√(1 + y²) for y > 0, and g(y) = 0 otherwise.
2) Z = cosh X. In this case, z = τ(x) = cosh x, x ∈ R+, thus
x = τ⁻¹(z) = Arcosh z = ln(z + √(z² − 1)), z > 1,
and
dx/dz = (τ⁻¹)′(z) = 1/√(z² − 1).
Applying the theorem we get for z > 1 the frequency
h(z) = f(τ⁻¹(z)) · (τ⁻¹)′(z) = e^{−ln(z+√(z²−1))} · 1/√(z² − 1) = 1/(z + √(z² − 1)) · 1/√(z² − 1),
hence
h(z) = 1/(z + √(z² − 1)) · 1/√(z² − 1) for z > 1, and h(z) = 0 otherwise.
8 Functions of two random variables, f(X, Y)
Example 8.1 Let X and Y be independent Cauchy distributed random variables with the frequencies
fX(x) = (1/π) · 1/(1 + x²) and fY(y) = (1/π) · k/(k² + y²), where k > 0.
1. Prove that X + Y has the frequency
g(x) = (k + 1)/(π{(k + 1)² + x²}), x ∈ R.
2. Find by using the result of (1) the frequency of X1 + X2, where Xi is Cauchy distributed with parameter ai.
3. Find by using the result of (2) the frequency of Y1 + Y2, where Yi is Cauchy distributed with parameter ai and shifted by bi.
1) The frequency g(x) of X + Y is given by the convolution
g(x) = (1/π²) ∫_{−∞}^{∞} 1/(1 + t²) · k/(k² + (x − t)²) dt.
The integrand is decomposed into partial fractions with the denominators 1 + t² and k² + (x − t)². The integral is clearly convergent, so the logarithmic terms disappear by taking the limit.
Identification of the coefficients gives a system of linear equations in the constants a, b and c of the decomposition; e.g. the constant term gives
(a/k)(1 + x²) + kx · b + k · c = 1.
When the third column of the determinant of this system is replaced by the sum of the first and the third column, the determinant reduces to
(1 + x² − k²)² + 4k²x² = x⁴ + x²(2 + 2k²) + (k² − 1)² = (x² + (k − 1)²)(x² + (k + 1)²).
Solving the system we find
a/k + c = (k + 1)/(x² + (k + 1)²).
Thus, the frequency is given by
g(x) = (1/π)(a/k + c) = (1/π) · (k + 1)/(x² + (k + 1)²),
as required.
2) By iterating (1) – rescaling so that X1 and X2 are Cauchy distributed with the parameters a1 and a2 – the frequency of X1 + X2 becomes
g(x) = (1/π) · (a1 + a2)/((a1 + a2)² + x²).
3) In this case we get the frequency
g(y) = (a1 a2/π²) ∫_{−∞}^{∞} 1/(a1² + (t − b1)²) · 1/(a2² + (y − t − b2)²) dt
= (a1 a2/π²) ∫_{−∞}^{∞} 1/(a1² + u²) · 1/(a2² + ({y − b1 − b2} − u)²) du   (u = t − b1)
= (1/π) · (a1 + a2)/((a1 + a2)² + (y − {b1 + b2})²),
where we have applied (2).
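Result (2) can be spot-checked by simulation (a sketch, not from the book): for a1 = a2 = 1 the sum of two standard Cauchy variables is Cauchy with parameter 2, so P{X1 + X2 ≤ 2} = 1/2 + Arctan(2/2)/π = 3/4. Standard Cauchy variables are sampled as tan(π(U − 1/2)).

```python
import math
import random

random.seed(14)

def cauchy():
    # standard Cauchy by inversion of F(x) = 1/2 + arctan(x)/pi
    return math.tan(math.pi * (random.random() - 0.5))

N = 400_000
p = sum(1 for _ in range(N) if cauchy() + cauchy() <= 2.0) / N
# p should approach 1/2 + arctan(1)/pi = 3/4
```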
Example 8.2 Let X and Y be independent Cauchy distributed random variables, both of the frequency
g(x) = 1/(π(1 + x²)).
Prove that the random variable Z = XY has the frequency
fZ(z) = (2/π²) · ln|z|/(z² − 1), z ∈ R
(suitably modified for z = −1, 0, 1).
If z ≠ −1, 0, 1, then the frequency of Z = XY is given by
fZ(z) = ∫_{−∞}^{∞} g(x) g(z/x) (1/|x|) dx = (1/π²) ∫_{−∞}^{∞} 1/(1 + x²) · 1/(1 + (z/x)²) · (1/|x|) dx
= (2/π²) ∫_0^∞ 1/(1 + x²) · x/(z² + x²) dx   (symmetry; then u = x²)
= (1/π²) ∫_0^∞ 1/(1 + u) · 1/(z² + u) du
= (1/π²) · 1/(z² − 1) ∫_0^∞ {1/(1 + u) − 1/(z² + u)} du
= (1/π²) · 1/(z² − 1) · lim_{A→∞} [ln((u + 1)/(u + z²))]_0^A = (1/π²) · (2 ln|z|)/(z² − 1).
Note that
lim fZ (z) = ∞,
z→0
and that it follows by l’Hospital’s rule that
1
2
2 ln |z|
z = 2 lim 1 = 1
...
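The closed form lends itself to a numerical sanity check. The following Python sketch (an illustration added here; NumPy, the grid and the seed are assumptions, not part of the original solution) verifies that the density puts mass 1/4 on ]0, 1[ (so mass 1/2 on ]−1, 1[ by symmetry) and checks P{|Z| ≤ 1} by simulation:

```python
import numpy as np

def f_Z(z):
    # frequency of Z = XY for X, Y independent standard Cauchy
    z = np.asarray(z, dtype=float)
    return 2.0 / np.pi**2 * np.log(np.abs(z)) / (z * z - 1.0)

# The substitution z -> 1/z shows that each of the four intervals
# ]-inf,-1[, ]-1,0[, ]0,1[, ]1,inf[ carries the probability 1/4.
mid = (np.arange(1_000_000) + 0.5) / 1_000_000   # midpoints of ]0, 1[
quarter = f_Z(mid).mean()                        # midpoint rule for the integral

# Monte Carlo: P{|Z| <= 1} should be 1/2.
rng = np.random.default_rng(0)
z = rng.standard_cauchy(1_000_000) * rng.standard_cauchy(1_000_000)
p_half = np.mean(np.abs(z) <= 1.0)
```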
Example 8. Let X and Y be independent random variables, both rectangularly distributed over ]0, 1[.
1) Find the frequency of the random variable XY.
2) Find the frequency of the random variable X/Y.
3) Find P{Y > 2X}.
Figure 45: The graph of the frequency g(s) of XY.
2) Since the values of X/Y lie in ]0, ∞[, it follows by an application of a formula for s ∈ ]0, ∞[ that the frequency is given by
h(s) = ∫_0^∞ f(s x) f(x) x dx.
The integrand is only nonzero when both 0 < x < 1 and 0 < x < 1/s, so we must split the investigation:
Figure 46: The graph of the frequency h(s) of X/Y.

a) If 0 < s ≤ 1, then 1 ≤ 1/s, hence
h(s) = ∫_0^1 x dx = 1/2.

b) If 1 < s < ∞, then instead
h(s) = ∫_0^{1/s} x dx = 1/(2s²).
Summing up we get
h(s) = 1/2 for 0 < s ≤ 1,
h(s) = 1/(2s²) for 1 < s < ∞,
h(s) = 0 for s ≤ 0.
3) 1st variant. The domain A = {(x, y) ∈ ]0, 1[² | y > 2x} is a triangle of base 1/2 and height 1, hence
P{Y > 2X} = area(A) = (1/2) · (1/2) · 1 = 1/4.
2nd variant. Alternatively,
P{Y > 2X} = P{X/Y < 1/2} = ∫_0^{1/2} h(s) ds = 1/4.
An alternative solution is the following:
1) Since XY has its values lying in ]0, 1[, it follows from the figure that if s ∈ ]0, 1[, then
P{XY ≤ s} = s + ∫_s^1 (s/x) dx = s − s ln s,
hence the frequency of XY is g(s) = −ln s for s ∈ ]0, 1[, and g(s) = 0 otherwise.

Figure 48: The curve xy = s defines the domain A.

Figure 49: The domain A lies above the line x = s y, 0 < s < 1.
Figure 50: The domain B lies above the line x = s y, s > 1.
2) It follows that the values of X/Y lie in ]0, ∞[. If 0 < s ≤ 1, then it follows from the first figure that
P{X/Y ≤ s} = area(A) = s/2.
If s > 1, then it follows from the second figure that
P{X/Y ≤ s} = area(B) = 1 − 1/(2s),   s ≥ 1.
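All three answers can be checked by simulation. The following Python sketch (added for illustration; NumPy and the seed are assumptions, not part of the original solution) compares empirical frequencies with the derived distribution functions:

```python
import numpy as np

# X, Y independent, uniform on ]0, 1[
rng = np.random.default_rng(1)
n = 1_000_000
x, y = rng.random(n), rng.random(n)

p_prod = np.mean(x * y <= 0.5)     # P{XY <= s} = s - s*ln(s) at s = 1/2
p_ratio = np.mean(x / y <= 1.0)    # H(1) = 1/2
p_ratio3 = np.mean(x / y <= 3.0)   # 1 - 1/(2*3) = 5/6
p_y2x = np.mean(y > 2 * x)         # 1/4
```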
Example 8. Two dice, possibly loaded, show the numbers of eyes X and Y. Assume that
fi = P{X = i},   gi = P{Y = i},   i = 1, 2, 3, 4, 5, 6.
Prove that the probabilities P{X + Y = k}, k = 2, ..., 12, are not all the same.

Assume on the contrary that P{X + Y = k} = 1/11 for every k = 2, ..., 12. If k = 2, then
1/11 = f1 g1,
hence f1 > 0 and g1 > 0. If k = 12, then
1/11 = f6 g6,
hence f6 > 0 and g6 > 0.
By subtracting the equation for k = 2 from the equation for k = 7, it follows by a rearrangement that
(f1 − f6) g1 = {f5 g2 + f4 g3 + f3 g4 + f2 g5} + f1 g6 > 0.
Since g1 > 0, we must have f1 > f6. By instead subtracting the equation for k = 12 from the equation for k = 7, we get
(f6 − f1) g6 = {f5 g2 + f4 g3 + f3 g4 + f2 g5} + f6 g1 > 0
for similar reasons, so f6 > f1. These two claims cannot be simultaneously fulfilled, so the assumption must be wrong.
Alternatively we assume that we can choose the fi and the gj in such a way that the probabilities are equal, i.e.
P{X + Y = k} = 1/11,   k = 2, ..., 12.
Then in particular,
P{X + Y = 2} = f1 g1 = 1/11   and   P{X + Y = 12} = f6 g6 = 1/11,
hence f1 g1 = f6 g6 with f1, g1, f6, g6 > 0, and thus f1/f6 = g6/g1 =: x. From
1/11 = P{X + Y = 7} ≥ f1 g6 + f6 g1
and f1 g1 = 1/11 we get f1 g6 + f6 g1 ≤ f1 g1; dividing by f6 g1 yields x² + 1 ≤ x, i.e.
x + 1/x ≤ 1.
Since x + 1/x > 1 (actually x + 1/x ≥ 2, when x > 0), this is not possible, and we have obtained a contradiction, and the claim follows.
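The decisive step is the elementary inequality t + 1/t ≥ 2 for t > 0. The short Python sketch below (added for illustration; the seed is an assumption) searches over admissible loadings with f1 g1 = f6 g6 = 1/11 and confirms that f1 g6 + f6 g1 never drops below 2/11, which is strictly larger than the required 1/11:

```python
import numpy as np

# Under f1*g1 = f6*g6 = 1/11 one always has
# f1*g6 + f6*g1 = (1/11)*(t + 1/t) >= 2/11, where t = f1/f6.
rng = np.random.default_rng(2)
target = 1.0 / 11.0
best = np.inf
for _ in range(10_000):
    f1 = rng.uniform(target, 1.0)   # f1 >= 1/11 keeps g1 = target/f1 <= 1
    f6 = rng.uniform(target, 1.0)
    g1, g6 = target / f1, target / f6
    best = min(best, f1 * g6 + f6 * g1)
# best never falls below 2/11 > 1/11: the contradiction of the proof.
```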
Example 8.5 Let the 2-dimensional random variable (X1, X2) have its frequency h(x1, x2) given by
h(x1, x2) = 1/(πr²) for x1² + x2² < r², and h(x1, x2) = 0 otherwise
(a uniform distribution over the disc x1² + x2² < r²), and let (Y1, Y2) be the polar coordinates of (X1, X2), i.e. X1 = Y1 cos Y2 and X2 = Y1 sin Y2.
Find the frequency of the 2-dimensional random variable (Y1, Y2), and find the marginal frequencies.

This clearly corresponds to the transformation between rectangular and polar coordinates over a fixed disc, the Jacobian of which is y1. Hence,
k(y1, y2) = y1/(πr²) for (y1, y2) ∈ [0, r] × [0, 2π[, and k(y1, y2) = 0 otherwise.
The marginal frequencies are
fY1(y1) = ∫_0^{2π} y1/(πr²) dy2 = 2y1/r² for y1 ∈ [0, r],
and
fY2(y2) = ∫_0^r y1/(πr²) dy1 = 1/(2π) for y2 ∈ [0, 2π[.
It follows immediately that
k(y1, y2) = fY1(y1) fY2(y2),
hence Y1 and Y2 are stochastically independent.
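The factorization can be illustrated by simulation. The sketch below (added; NumPy, the seed and r = 1 are assumptions) samples uniform points in the unit disc by rejection and checks E{Y1} = ∫_0^1 y1 · 2y1 dy1 = 2/3, the uniformity of the angle, and that radius and angle are uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
pts = rng.uniform(-1.0, 1.0, size=(2 * n, 2))
pts = pts[pts[:, 0]**2 + pts[:, 1]**2 < 1.0][:n]   # uniform in the unit disc
y1 = np.hypot(pts[:, 0], pts[:, 1])                # radius, frequency 2*y1
y2 = np.mod(np.arctan2(pts[:, 1], pts[:, 0]), 2.0 * np.pi)  # angle
mean_radius = y1.mean()              # 2/3
mean_angle = y2.mean()               # pi (uniform on [0, 2*pi[)
corr = np.corrcoef(y1, y2)[0, 1]     # ~ 0, consistent with independence
```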
Example 8.6 Let the 2-dimensional random variable (X1, X2) have the frequency
h(x1, x2) = x1 + x2 for 0 < x1 < 1 and 0 < x2 < 1, and h(x1, x2) = 0 otherwise,
and let (Y1, Y2) = τ(X1, X2) be given by
Y1 = X1 + X2,   Y2 = X2.
2) Find the frequency k(y1, y2) of (Y1, Y2).

Figure 51: The domain D.

Since τ is a regular linear map, it follows that the inverse map τ⁻¹ exists,
τ⁻¹(y1, y2) = (y1 − y2, y2),
thus
x1 = y1 − y2,   x2 = y2.
The Jacobian is then given by
Jτ(y1, y2) = ∂(x1, x2)/∂(y1, y2) = det( 1  −1 ; 0  1 ) = 1.
Hence the frequency of (Y1, Y2) is
k(y1, y2) = h(y1 − y2, y2) · |Jτ(y1, y2)| = y1 on the transformed domain, and k(y1, y2) = 0 otherwise.

Figure 52: The graph of FY1(y1).
Marginal frequencies. 1) For Y1 we integrate vertically.
a) If 0 < y1 ≤ 1, then
fY1(y1) = ∫_{y2=0}^{y1} y1 dy2 = y1².
b) If 1 < y1 < 2, then
fY1(y1) = ∫_{y2=y1−1}^{1} k(y1, y2) dy2 = ∫_{y2=y1−1}^{1} y1 dy2 = y1(2 − y1),
hence summing up,
fY1(y1) = y1² for 0 < y1 ≤ 1,
fY1(y1) = y1(2 − y1) = 1 − (y1 − 1)² for 1 < y1 < 2,
fY1(y1) = 0 otherwise.
2) For Y2 it follows by a horizontal integration for 0 < y2 < 1 that
fY2(y2) = ∫_{y1=y2}^{y2+1} k(y1, y2) dy1 = ∫_{y1=y2}^{y2+1} y1 dy1 = [y1²/2]_{y1=y2}^{y2+1}
        = (1/2){(y2 + 1)² − y2²} = (1/2)(2y2 + 1) = y2 + 1/2,
thus summing up
fY2(y2) = y2 + 1/2 for 0 < y2 < 1, and fY2(y2) = 0 otherwise.
Example 8. Let the 2-dimensional random variable (X1, X2) have the frequency
h(x1, x2) = 2 e^{−(x1+x2)} for (x1, x2) ∈ D, and h(x1, x2) = 0 otherwise,
where
D = {(x1, x2) ∈ R² | 0 < x2 < x1 < ∞},
and let (Y1, Y2) = τ(X1, X2) be given by
Y1 = X1 + X2,   Y2 = (X1 − X2)².
1) Prove that τ is a bijection of D onto its range.
2) Find the frequency k(y1, y2) of (Y1, Y2).
3) Find the marginal frequencies.
4) Are Y1 and Y2 independent random variables?

Figure 54: The domain D lies between the X1 axis and the line x2 = x1.

1) From y1 = x1 + x2 and y2 = (x1 − x2)², where x1 − x2 > 0, we get x1 − x2 = √y2, hence
x1 = (y1 + √y2)/2   and   x2 = (y1 − √y2)/2.
Since (x1, x2) is uniquely determined by (y1, y2), we conclude that τ is bijective. The boundary curve x2 = 0 is mapped into y1 = √y2, i.e. y2 = y1², y1 ≥ 0.

2) We next compute the Jacobian,
∂(x1, x2)/∂(y1, y2) = (∂x1/∂y1)(∂x2/∂y2) − (∂x1/∂y2)(∂x2/∂y1) = (1/2)(−1/(4√y2)) − (1/(4√y2))(1/2) = −1/(4√y2) < 0.
Hence
k(y1, y2) = h(τ⁻¹(y1, y2)) · 1/(4√y2) = e^{−y1}/(2√y2) for 0 < y2 < y1², and k(y1, y2) = 0 otherwise.

3) The marginal frequency of Y1 is obtained by a vertical integration,
fY1(y1) = ∫_0^{y1²} e^{−y1}/(2√y2) dy2 = e^{−y1} [√y2]_0^{y1²} = y1 e^{−y1} for y1 > 0,
and fY1(y1) = 0 for y1 ≤ 0. Analogously, a horizontal integration gives
fY2(y2) = ∫_{√y2}^∞ e^{−y1}/(2√y2) dy1 = e^{−√y2}/(2√y2) for y2 > 0,
and fY2(y2) = 0 for y2 ≤ 0.

4) The support {(y1, y2) | 0 < y2 < y1²} is not a product set, and
fY1(y1) · fY2(y2) ≠ k(y1, y2),
so Y1 and Y2 are not independent.
Example 8. Let X1 and X2 be independent random variables, both of frequency
f(x) = x e^{−x} for x > 0, and f(x) = 0 for x ≤ 0.
1. Find the means E{X1} and E{1/X1}.
2. Find the frequency of X1/X2.
Define the random variables Y1 and Y2 by
Y1 = X1 + X2,   Y2 = X1/X2.
3. Find the simultaneous frequency of (Y1, Y2).
4. Find the marginal frequencies of Y1 and Y2. (This question can be answered both with and without an application of the answer of question 3.)
5. Check if Y1 and Y2 are independent.
6. Compute the mean E{Y2}.
7. Find, e.g. by an application of question 2, the median of Y2.
8. Compare the mean and the median of Y2.
1) The means are
E{X1} = ∫_0^∞ x² e^{−x} dx = 2
and
E{1/X1} = ∫_0^∞ (1/x) · x e^{−x} dx = ∫_0^∞ e^{−x} dx = 1.

2) The frequency of Y2 = X1/X2 is zero for y2 ≤ 0, and when y2 > 0, then
fY2(y2) = ∫_{−∞}^∞ f(y2 x) f(x) · |x| dx = ∫_0^∞ y2 x e^{−y2 x} · x e^{−x} · x dx
        = y2 ∫_0^∞ x³ e^{−(1+y2)x} dx = 6y2/(1 + y2)⁴.
3) It follows from y1 = x1 + x2 and y2 = x1/x2 that
x1 = y2 x2   and   y1 = x1 + x2 = (y2 + 1) x2,
hence
x1 = y1 y2/(y2 + 1) = y1 − y1/(y2 + 1)   and   x2 = y1/(y2 + 1),
with the Jacobian
∂(x1, x2)/∂(y1, y2) = −y1/(y2 + 1)².
The simultaneous frequency of (X1, X2) is
g(x1, x2) = x1 x2 e^{−(x1+x2)} for x1 > 0 and x2 > 0, and g(x1, x2) = 0 otherwise,
hence the simultaneous frequency of (Y1, Y2) is 0 for y1 ≤ 0 or y2 ≤ 0, and
k(y1, y2) = (y1 y2/(y2 + 1)) · (y1/(y2 + 1)) · e^{−y1} · y1/(y2 + 1)² = y1³ y2 e^{−y1}/(y2 + 1)⁴ for y1 > 0 and y2 > 0,
which also can be written
k(y1, y2) = (1/6) y1³ e^{−y1} · 6y2/(y2 + 1)⁴ for y1 > 0 and y2 > 0, and k(y1, y2) = 0 otherwise.
4) The marginal frequency of Y1 is
kY1(y1) = (1/6) y1³ e^{−y1} for y1 > 0, and kY1(y1) = 0 for y1 ≤ 0,
and the marginal frequency of Y2 is (possibly by the second variant of 2))
kY2(y2) = 6y2/(y2 + 1)⁴ for y2 > 0, and kY2(y2) = 0 for y2 ≤ 0.

5) Since k(y1, y2) = kY1(y1) · kY2(y2) for all (y1, y2), the random variables Y1 and Y2 are independent.
6) Since X1 and X2 are independent, the mean is
E{Y2} = E{X1/X2} = E{X1} · E{1/X2} = 2 · 1 = 2.
7) Alternatively, Y2 has the distribution function, for y2 > 0,
KY2(y2) = ∫_0^{y2} 6t/(t + 1)⁴ dt = ∫_1^{y2+1} 6(u − 1)/u⁴ du = [−3/u² + 2/u³]_1^{y2+1}
        = 1 − 3/(y2 + 1)² + 2/(y2 + 1)³ = 1 − (3y2 + 1)/(y2 + 1)³.
If we put KY2(y2) = 1/2, then (3y2 + 1)/(y2 + 1)³ = 1/2, i.e.
6y2 + 2 = (y2 + 1)³,
or
y2³ + 3y2² − 3y2 − 1 = (y2 − 1)(y2² + 4y2 + 1) = 0.
The only positive root is y2 = 1, hence the median of Y2 is 1.
8) The mass of probability is divided into two equal parts by the median 1, while the long right tail of the distribution pulls the mean away from it. Thus, the mean E{Y2} = 2 must lie to the right of the median 1.
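Both the mean and the median of Y2 can be confirmed by simulation. A brief sketch (added; NumPy and the seed are assumptions; note that Y2 has a heavy tail, so the mean converges slowly):

```python
import numpy as np

# X1, X2 have frequency x*exp(-x), i.e. a Gamma(2, 1) distribution.
rng = np.random.default_rng(5)
n = 1_000_000
y2 = rng.gamma(2.0, 1.0, n) / rng.gamma(2.0, 1.0, n)
mean_y2 = y2.mean()        # E{Y2} = E{X1} * E{1/X2} = 2 * 1 = 2
median_y2 = np.median(y2)  # median 1
```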
9 Means and moments of higher order

Example 9.1 Let X be a random variable of values in N0. Prove that
E{X} = Σ_{k=0}^∞ k P{X = k} = Σ_{n=1}^∞ P{X ≥ n}.
It is often easier to compute means by an application of this formula.

First assume that Σ_{n=1}^∞ P{X ≥ n} is convergent. By interchanging the order of summation (all terms are non-negative),
Σ_{n=1}^∞ P{X ≥ n} = Σ_{n=1}^∞ Σ_{k=n}^∞ P{X = k} = Σ_{k=1}^∞ k P{X = k} = E{X}.
Since the terms are non-negative, this is equivalent with
Σ_{n=1}^∞ n P{X = n} is (just) convergent.
If conversely E{X} exists, we just repeat the computations above in the reverse order.
Example 9.2 Two persons A and B play the following game: They each throw two coins. The winner is the one who obtains the larger number of heads; the game is a draw, if they obtain an equal number of heads.
1. What is the probability pA that A wins a single game?
2. What is the probability q that the game is a draw?
3. The game is repeated, until one of the two players wins; one stops the first time one of the two players wins. What is the probability that A wins in game number k?
4. Find the mean number of games played.
B\A   TT   TH   HT   HH
TT     0   −1   −1   −1
TH     1    0    0   −1
HT     1    0    0   −1
HH     1    1    1    0

Table 1: If A wins, we write 1; if B wins, we write −1; in case of a draw we write 0.

1)–2) Since the 16 possibilities all have the same probability, we get by simply counting
q = P{a draw} = 6/16 = 3/8   and   pA = pB = 5/16.
Download free eBooks at bookboon
...
Means and moments of higher order
Random variables I
3) If A wins in game number k, then the first k − 1 games must all have been draws, hence
P{A wins in the k-th game} = q^{k−1} · pA = (3/8)^{k−1} · (5/16).

4) Let X denote the number of the deciding game. Then
P{X = k} = P{A wins in game number k} + P{B wins in game number k} = (5/8) · (3/8)^{k−1},
hence, by the formula Σ_{k=1}^∞ k x^{k−1} = 1/(1 − x)²,
E{X} = (5/8) · Σ_{k=1}^∞ k (3/8)^{k−1} = (5/8) · 1/(1 − 3/8)² = (5/8) · (8/5)² = 8/5.
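The counting argument and the mean are small enough to verify exhaustively. An added Python sketch:

```python
from itertools import product
from math import isclose

# Enumerate all 16 equally likely outcomes of (A's coins, B's coins).
heads = [sum(t) for t in product((0, 1), repeat=2)]   # head counts: 0,1,1,2
draws = sum(1 for a in heads for b in heads if a == b)
wins_A = sum(1 for a in heads for b in heads if a > b)
q = draws / 16          # 6/16 = 3/8
pA = wins_A / 16        # 5/16
# Mean number of games, via the truncated series for E{X}.
mean_games = sum(k * (5 / 8) * (3 / 8)**(k - 1) for k in range(1, 200))
```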
Example 9.3 A box contains N balls with the numbers from 1 to N. We select at random n balls with replacement. Let Xn denote the random variable which indicates the largest selected number. Find the distribution of Xn and its mean, and examine E{Xn} for N → ∞.

Let X denote the number of a single selected ball. Since all numbers have the same probability, the distribution function is given by
FX(k) = P{X ≤ k} = k/N,   k = 1, 2, ..., N.
Thus we derive the distribution function of Xn,
FXn(k) = P{Xn ≤ k} = (P{X ≤ k})^n = (k/N)^n,   k = 1, 2, ..., N,
hence
pk = P{Xn = k} = P{Xn ≤ k} − P{Xn ≤ k − 1} = {k^n − (k − 1)^n}/N^n.
The mean is
E{Xn} = Σ_{k=1}^N k pk = (1/N^n) {N^{n+1} − Σ_{k=1}^{N−1} k^n} = N {1 − (1/N) Σ_{k=1}^{N−1} (k/N)^n}.
Then notice that
(1/N) Σ_{k=1}^{N−1} (k/N)^n
can be interpreted as an approximating sum of the integral ∫_0^1 x^n dx = 1/(n+1), hence
(1/N) Σ_{k=1}^{N−1} (k/N)^n → ∫_0^1 x^n dx = 1/(n+1)   for N → ∞,
and thus
E{Xn} ≈ N {1 − 1/(n+1)} = N · n/(n+1)   for large N.
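For moderate N the exact mean is already close to the asymptotic value. An added sketch (N = 1000 and n = 3 are illustrative choices):

```python
# Exact mean of the largest of n draws with replacement from {1, ..., N},
# compared with the asymptotic value N*n/(n+1).
N, n = 1000, 3
pk = [(k**n - (k - 1)**n) / N**n for k in range(1, N + 1)]
total = sum(pk)                                   # must be 1
mean_exact = sum(k * w for k, w in zip(range(1, N + 1), pk))
approx = N * n / (n + 1)                          # 750.0 here
```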
Example 9.4 Let X be a random variable of distribution function F(x) and frequency f(x), for which the mean μ exists; it is given that ∫_{−∞}^∞ |x| f(x) dx < ∞. Prove that
(6) μ = ∫_0^∞ {1 − F(x)} dx − ∫_{−∞}^0 F(x) dx.

Let A > 0. Then by a partial integration,
∫_{−A}^0 x f(x) dx = [x F(x)]_{−A}^0 − ∫_{−A}^0 F(x) dx = A F(−A) − ∫_{−A}^0 F(x) dx.
Since
0 ≤ A F(−A) = A ∫_{−∞}^{−A} f(x) dx ≤ ∫_{−∞}^{−A} |x| f(x) dx → 0 for A → ∞,
we conclude that A F(−A) → 0 for A → ∞, hence
∫_{−∞}^0 x f(x) dx = −∫_{−∞}^0 F(x) dx.
Analogously,
0 ≤ A{1 − F(A)} = A ∫_A^∞ f(x) dx ≤ ∫_A^∞ |x| f(x) dx → 0 for A → ∞,
so when A → ∞, we conclude in the same way that
∫_0^∞ x f(x) dx = lim_{A→∞} (−A{1 − F(A)} + ∫_0^A {1 − F(x)} dx) = ∫_0^∞ {1 − F(x)} dx,
where the integrals are even absolutely convergent. Adding the two results gives (6).
Alternatively, a more streamlined, though also more sophisticated method is the following. Interchanging the order of integration,
∫_0^∞ {1 − F(x)} dx − ∫_{−∞}^0 F(x) dx = ∫_0^∞ ∫_x^∞ f(y) dy dx − ∫_{−∞}^0 ∫_{−∞}^x f(y) dy dx
= ∫_0^∞ y f(y) dy + ∫_{−∞}^0 y f(y) dy = ∫_{−∞}^∞ y f(y) dy = μ.
However, we have assumed that the mean exists, which implies that all the integrals above are absolutely convergent, so the formal calculation is also real.
Example 9.5 Let X be a non-negative random variable of distribution function F(x) and frequency f(x), and let k ∈ N. Prove that
(7) E{X^k} = k ∫_0^∞ x^{k−1} {1 − F(x)} dx.
(If the k-th moment does not exist, then both the right hand side and the left hand side of (7) are equal to ∞.)

Remark. ...

1) Assume that X is non-negative and that E{X^k} exists, i.e.
0 ≤ E{X^k} = ∫_0^∞ x^k f(x) dx < ∞.
Then by a partial integration,
∫_0^A x^k f(x) dx = [x^k {F(x) − 1}]_0^A + k ∫_0^A x^{k−1} {1 − F(x)} dx
                 = −A^k {1 − F(A)} + k ∫_0^A x^{k−1} {1 − F(x)} dx.
Now
0 ≤ A^k {1 − F(A)} = A^k ∫_A^∞ f(x) dx ≤ ∫_A^∞ x^k f(x) dx → 0 for A → ∞.
Then by taking the limit A → ∞,
E{X^k} = ∫_0^∞ x^k f(x) dx = k ∫_0^∞ x^{k−1} {1 − F(x)} dx.
2) Then assume that X is non-positive and that E{X^k} exists, i.e.
E{|X|^k} = ∫_{−∞}^0 |x|^k f(x) dx < ∞.
By a partial integration,
∫_{−A}^0 x^k f(x) dx = [x^k F(x)]_{−A}^0 − k ∫_{−A}^0 x^{k−1} F(x) dx = −(−A)^k F(−A) − k ∫_{−A}^0 x^{k−1} F(x) dx.
Since
0 ≤ A^k F(−A) = A^k ∫_{−∞}^{−A} f(x) dx ≤ ∫_{−∞}^{−A} |x|^k f(x) dx → 0 for A → ∞,
it follows by taking the limit that
(8) E{X^k} = ∫_{−∞}^0 x^k f(x) dx = −k ∫_{−∞}^0 x^{k−1} F(x) dx.
Therefore, if E{X^k} does not exist, both sides of (8) are infinite in absolute value, so the formula holds in this case as well.
Alternatively and more streamlined (and also more sophisticated), because one at first does not care so much about the convergence of the integrals (this should of course be checked at last), we have the following proof:
When k ∈ N, and X is non-negative of the distribution function F(x) and the frequency f(x), then
k ∫_0^∞ x^{k−1} {1 − F(x)} dx = k ∫_{x=0}^∞ x^{k−1} ∫_{y=x}^∞ f(y) dy dx
= ∫_{y=0}^∞ f(y) ∫_{x=0}^y k x^{k−1} dx dy = ∫_0^∞ y^k f(y) dy = E{X^k}.
Since the integrand is non-negative, we can interchange the order of integration.
Then let X ≤ 0 have the distribution function F(x) and the frequency f(x). An analogous computation gives in this case the formula
E{X^k} = ∫_{−∞}^0 x^k f(x) dx = −k ∫_{−∞}^0 x^{k−1} F(x) dx.
If the k-th moment exists, then all integrals are absolutely convergent.
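Formula (7) is easily checked numerically for a concrete non-negative variable. An added sketch for X Exp(1), where 1 − F(x) = e^{−x} and E{X^k} = k! (the grid is an illustrative choice):

```python
import numpy as np

# E{X^k} = k * int_0^inf x**(k-1) * (1 - F(x)) dx for X Exp(1).
k = 3
h = 1e-5
x = (np.arange(2_000_000) + 0.5) * h          # midpoints on ]0, 20[
lhs = np.sum(x**k * np.exp(-x)) * h           # E{X^3} = 3! = 6
rhs = k * np.sum(x**(k - 1) * np.exp(-x)) * h # right hand side of (7)
```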
Example 9.6 Let X and Y be non-negative random variables of distribution functions FX and FY, means E{X} and E{Y}, and variances V{X} and V{Y}. Assume that FX(x) ≤ FY(x) for every x.
1) If so, prove that E{Y} ≤ E{X}.
2) Does it also follow that V{Y} ≤ V{X}?

1) It follows from Example 9.4 (or from Example 9.5 with k = 1) that
E{X} = ∫_0^∞ {1 − FX(x)} dx ≥ ∫_0^∞ {1 − FY(x)} dx = E{Y}.

2) The answer is "no"! We construct a counterexample. Choose a ∈ ]1/2, 1[, and let X be rectangularly distributed over ]a, 1[ and Y rectangularly distributed over ]0, a[. Then clearly X and Y are non-negative, and FX(x) ≤ FY(x), cf. the figure.

Figure 56: Illustration of FX(x) ≤ FY(x) in (2).

For the variances, however, we get
V{Y} = a²/12 > (1 − a)²/12 = V{X},
because a > 1 − a > 0 for 1/2 < a < 1.
Example 9.7 Let X be a random variable satisfying
E{X} = E{X²} = 1.
What can be said about X? Since
V{X} = E{X²} − (E{X})² = 1 − 1 = 0,
it follows that X is a constant, and since E{X} = 1, we get X ≡ 1.
Example 9.8 Let X be a random variable, for which E{|X − E{X}|^k} exists for some k ∈ N; in particular, E{X} exists. Prove the following generalization of Chebyshev's inequality: For every a ∈ R+,
P{|X − E{X}| ≥ a} ≤ (1/a^k) E{|X − E{X}|^k}.
The inequality follows from the next example by choosing the even, increasing function g(x) = |x|^k and applying it to the variable X − E{X}.
Example 9.9
1) Let g: R → R be an even, non-negative function, which is increasing on [0, ∞[. Prove that for every a > 0 with g(a) > 0,
P{|X| ≥ a} ≤ E{g(X)}/g(a).
2) Let g: R → R be a non-negative, increasing function. Prove that for every a with g(a) > 0,
P{X ≥ a} ≤ E{g(X)}/g(a).
We always assume that E{g(X)} exists. We carry out the proof in the case where X has a frequency f(x), i.e.
E{g(X)} = ∫_{−∞}^∞ g(x) f(x) dx.
The similar proofs when X is either of discrete type or of mixed type are obtained by simple modifications of the main proof.

1) According to the assumptions, g(x) ≥ 0 and
g(a) ≤ g(x) for x ≥ a, and g(−a) = g(a) ≤ g(x) for x ≤ −a,
hence
g(a) P{|X| ≥ a} = g(a) ∫_a^∞ f(x) dx + g(−a) ∫_{−∞}^{−a} f(x) dx
≤ ∫_a^∞ g(x) f(x) dx + ∫_{−∞}^{−a} g(x) f(x) dx ≤ ∫_{−∞}^∞ g(x) f(x) dx = E{g(X)}.

2) Similarly,
g(a) P{X ≥ a} = g(a) ∫_a^∞ f(x) dx ≤ ∫_a^∞ g(x) f(x) dx ≤ ∫_{−∞}^∞ g(x) f(x) dx = E{g(X)}.
Example 9.10 Let X be a random variable of mean μ and variance σ². Prove that if a is a median of X, then
|a − μ| ≤ √2 · σ.
Apply Chebyshev's inequality.

If a = μ, there is of course nothing to prove, so assume a ≠ μ. One of the sets {x ≤ a} and {x ≥ a} must necessarily be contained in the set {|x − μ| ≥ |a − μ|}, and since P{X ≤ a} ≥ 1/2 and P{X ≥ a} ≥ 1/2, we get by Chebyshev's inequality
1/2 ≤ P{|X − μ| ≥ |a − μ|} ≤ σ²/(a − μ)²,   for a ≠ μ,
hence (a − μ)² ≤ 2σ², and the claim follows.
Example 9.11 Let X be a random variable, for which all moments mk = E{X^k}, and for k ∈ N the k-th decreasing (factorial) moments, exist.
1) Find for n = 1, 2, 3, the n-th decreasing moment as a linear combination of the k-th moments, k ≤ n.
A. ...
B. ...
C. ...
Example 9.12 ...
1) Express for n = 2, 3, 4, the n-th central moment by the k-th moments for k ≤ n.
A. ...
B. ...
C. ...
Example 9.13 Let X be a random variable, which only has values in the open interval I, and let φ: I → R be convex. Prove that
φ(E{X}) ≤ E{φ(X)}   (Jensen's inequality).

1) Let us first check where the assumption of convexity can be applied. Since φ is convex on the open interval I, there is for every a ∈ I a constant c such that
φ(x) ≥ φ(a) + c(x − a) for all x ∈ I.
Replacing x by X and taking means, we get
E{φ(X)} ≥ φ(a) + c(E{X} − a).
2) Since E{X} ∈ I, we get by choosing a = E{X},
E{φ(X)} − φ(E{X}) ≥ 0, i.e. E{φ(X)} ≥ φ(E{X}).
10 Mean and variance in special cases

Example 10.1 A random variable X has the frequency
f(x) = a e^{−ax} for x > 0, and f(x) = 0 for x ≤ 0,
where a is a positive constant. Find the mean and the variance of Y = X².
The distribution function G(y) of Y = X² is 0 for y ≤ 0, and for y > 0,
G(y) = P{X² ≤ y} = P{X ≤ √y} = 1 − e^{−a√y}.
The mean of Y is
E{Y} = E{X²} = ∫_0^∞ x² a e^{−ax} dx = 2/a².
The variance of Y is
V{Y} = E{Y²} − (E{Y})² = E{X⁴} − (2/a²)² = ∫_0^∞ x⁴ a e^{−ax} dx − 4/a⁴
     = (1/a⁴) ∫_0^∞ t⁴ e^{−t} dt − 4/a⁴ = (4! − 4)/a⁴ = (24 − 4)/a⁴ = 20/a⁴.
Example 10.2 Let X be rectangularly distributed over ]−h, h[. Find E{|X|^k} and V{|X|^k}.
We first notice that
E{|X|^k} = (1/2h) ∫_{−h}^h |x|^k dx = (1/h) ∫_0^h x^k dx = h^k/(k + 1).
Hence
V{|X|^k} = E{|X|^{2k}} − (E{|X|^k})² = h^{2k}/(2k + 1) − h^{2k}/(k + 1)².
Example 10.3 A line through the point (a, b), where b > 0, forms the angle Θ = v with the perpendicular from (a, b) onto the x axis. The line intersects the x axis at a point of the abscissa X, so that
X = a + b · tan Θ.
Assuming that Θ(= v) is rectangularly distributed over ]−π/2, π/2[, find the frequency of X. The distribution of X is called a Cauchy distribution. Prove that X does not have a mean.

The map x = τ(θ) = a + b tan θ is a bijection of ]−π/2, π/2[ onto R, with τ⁻¹(x) = Arctan((x − a)/b). Since Θ has the frequency
f(θ) = 1/π for −π/2 < θ < π/2, and f(θ) = 0 otherwise,
and τ⁻¹(x) ∈ ]−π/2, π/2[ for every x ∈ R, we derive that the frequency of X is
g(x) = f(τ⁻¹(x)) · |dθ/dx| = (1/π) · 1/{1 + ((x − a)/b)²} · (1/b) = b/(π{b² + (x − a)²}).

The mean of X does not exist, because x g(x) is not absolutely integrable; indeed
|x| · g(x) ∼ k · 1/|x| for large |x|,
hence
∫_{−∞}^∞ |x| g(x) dx = ∞.
Example 10.4 A line segment of length 1 is divided randomly into two parts of lengths X and 1 − X, where we assume that X is rectangularly distributed over ]0, 1[. Put
Y = min{X, 1 − X}   and   Z = max{X, 1 − X}.
1) Find the distribution of Y and the distribution of Z.
2) Find the means E{Y} and E{Z}.

1) The distribution function of X is
FX(x) = 0 for x ≤ 0,  FX(x) = x for 0 < x < 1,  FX(x) = 1 for x ≥ 1.
The values of Y lie in ]0, 1/2[. If y ∈ ]0, 1/2[, then
FY(y) = P{X ≤ y or 1 − X ≤ y} = P{0 < X ≤ y} + P{1 − y ≤ X < 1} = 2y,
hence
FY(y) = 1 for y ≥ 1/2,  FY(y) = 2y for 0 < y < 1/2,  FY(y) = 0 for y ≤ 0,
and
fY(y) = 2 for 0 < y < 1/2, fY(y) = 0 otherwise.
The values of Z lie in ]1/2, 1[. If z ∈ ]1/2, 1[, then
FZ(z) = P{X ≤ z and 1 − X ≤ z} = P{1 − z ≤ X ≤ z} = 2z − 1,
hence
FZ(z) = 1 for z ≥ 1,  FZ(z) = 2z − 1 for 1/2 < z < 1,  FZ(z) = 0 for z ≤ 1/2,
and
fZ(z) = 2 for 1/2 < z < 1, fZ(z) = 0 otherwise.

2) The means are
E{Y} = ∫_0^{1/2} 2y dy = 1/4,
and, since Y + Z = 1,
E{Z} = 1 − E{Y} = 3/4.
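A quick simulation confirms both distributions and means. An added sketch (NumPy and the seed are assumptions):

```python
import numpy as np

# Y = min(X, 1-X) is uniform on ]0, 1/2[, Z = max(X, 1-X) on ]1/2, 1[.
rng = np.random.default_rng(6)
x = rng.random(1_000_000)
y = np.minimum(x, 1.0 - x)
z = np.maximum(x, 1.0 - x)
mean_y = y.mean()            # 1/4
mean_z = z.mean()            # 3/4
p_y = np.mean(y <= 0.2)      # F_Y(0.2) = 2*0.2 = 0.4
```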
Example 10.5 A point A is chosen at random in the unit square. This means that if X and Y denote the abscissa and the ordinate, resp., of A, then X and Y are independent and both rectangularly distributed over ]0, 1[.
1) Find the probability of the event that the distance from A to a given edge of the square is ≤ t.
2) Let U denote the distance from A to the nearest edge. Find the distribution function and the frequency of U.

Figure: The unit square with the chosen edge.

1) Obviously, the probability that the distance from A to e.g. the edge ]0, 1[ on the x axis is < t equals t for 0 < t < 1; hence, the corresponding random variable is rectangularly distributed over ]0, 1[.
2) The values of U lie in ]0, 1/2[. If u ∈ ]0, 1/2[, then U ≤ u, if and only if A lies in the union of the domains between a dotted line and the closest parallel edge, so by considering an area,
FU(u) = 1 − (1 − 2u)² = 4u − 4u²,   0 < u < 1/2.
We conclude that the frequency is
fU(u) = FU′(u) = 4 − 8u,   0 < u < 1/2,
and fU(u) = 0 otherwise.
Figure: The inner square of side 1 − 2u, in which the distance from every edge exceeds u.

The mean is
E{U} = ∫_0^{1/2} u(4 − 8u) du = [2u² − (8/3)u³]_0^{1/2} = 1/2 − 1/3 = 1/6.
Then compute
E{U²} = ∫_0^{1/2} u²(4 − 8u) du = ∫_0^{1/2} (4u² − 8u³) du = [(4/3)u³ − 2u⁴]_0^{1/2} = 1/6 − 1/8 = 1/24,
so the variance is
V{U} = E{U²} − (E{U})² = 1/24 − 1/36 = 1/72.
Example 10.6 The function f is for 0 < x < 1 given by
f(x) = 1/(π √(x(1 − x))),
while the function is equal to 0 for any other value of x.
1) Prove that f can be considered as the frequency of a random variable X.
2) Find the mean and the variance of the random variable X.
3) Find the frequency of the random variable Y = √X.
4) Find the mean of the random variable Y.

1) Obviously, f(x) ≥ 0 for every x ∈ R, and the substitution x = sin²θ shows that ∫_0^1 f(x) dx = 1, so f is a frequency.
2) Since f(x) = 0 outside a bounded interval, all moments exist. By the symmetry of f about x = 1/2,
E{X} = 1/2.
Furthermore,
E{X(X − 1)} = (1/π) ∫_0^1 x(x − 1)/√(x(1 − x)) dx = −(1/π) ∫_0^1 √(x(1 − x)) dx.
Since the graph of the integrand √(x(1 − x)) is a half-circle of radius 1/2, the integral equals π/8, so we have
E{X(X − 1)} = −1/8,
hence
E{X²} = E{X} − 1/8 = 3/8   and   V{X} = E{X²} − (E{X})² = 3/8 − 1/4 = 1/8.
3) Since y = ψ(x) = √x is a bijective map ψ: ]0, 1[ → ]0, 1[, with the inverse x = φ(y) = y², where dx/dy = 2y > 0, we conclude that the frequency of Y = √X is
g(y) = f(φ(y)) · φ′(y) = (1/π) · 1/√(y²(1 − y²)) · 2y = (2/π) · 1/√(1 − y²)   for y ∈ ]0, 1[,
and g(y) = 0 otherwise, corresponding to the distribution function
G(y) = 1 for y ≥ 1,  G(y) = (2/π) Arcsin y for y ∈ ]0, 1[,  G(y) = 0 for y ≤ 0.
4) The mean of Y is
E{Y} = ∫_0^1 y · (2/π) · 1/√(1 − y²) dy = (2/π) [−√(1 − y²)]_0^1 = 2/π.
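The distribution function G makes the distribution easy to sample by inversion. An added sketch (NumPy and the seed are assumptions) checks E{X} = 1/2, V{X} = 1/8 and E{Y} = 2/π:

```python
import numpy as np

# Inversion: X = sin(pi*U/2)**2 with U uniform on ]0, 1[ has distribution
# function (2/pi)*Arcsin(sqrt(x)), i.e. exactly the frequency f above.
rng = np.random.default_rng(7)
u = rng.random(1_000_000)
x = np.sin(np.pi * u / 2.0)**2
mean_x = x.mean()            # 1/2
var_x = x.var()              # 1/8
mean_y = np.sqrt(x).mean()   # 2/pi
```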
Example 10.7
1) Prove that the function
f(x) = (ab/(a − b)) (e^{−bx} − e^{−ax}) for x ≥ 0, and f(x) = 0 for x < 0,
where a and b denote positive constants, a ≠ b, can be considered as the frequency of a random variable X.
2) Find the distribution function F(x) of X.
3) Find E{X} and V{X}, expressed by a and b.
4) Prove for every fixed x that
lim_{b→a} f(x) = g(x),
where
g(x) = a² x e^{−ax} for x ≥ 0, and g(x) = 0 for x < 0.
5) Prove that g(x) is the frequency of a random variable Y, and find E{Y}.
6) Finally, prove that E{X} → E{Y} for b → a.
1) The expression is symmetric in a and b, so we may assume a > b > 0. Then f(x) ≥ 0 for x ∈ R, and
∫_{−∞}^∞ f(x) dx = (ab/(a − b)) ∫_0^∞ (e^{−bx} − e^{−ax}) dx = (ab/(a − b)) (1/b − 1/a) = (ab/(a − b)) · (a − b)/(ab) = 1,
thus f(x) can be considered as a frequency.

2) Clearly, F(x) = 0 for x ≤ 0. If x > 0, then
F(x) = (ab/(a − b)) ∫_0^x (e^{−bt} − e^{−at}) dt = (ab/(a − b)) [−(1/b) e^{−bt} + (1/a) e^{−at}]_0^x
     = (1/(a − b)) [−a e^{−bt} + b e^{−at}]_0^x = 1 + (1/(a − b)) {b e^{−ax} − a e^{−bx}}.
3) The mean is
E{X} = (ab/(a − b)) ∫_0^∞ x (e^{−bx} − e^{−ax}) dx = (ab/(a − b)) (1/b² − 1/a²) = (a + b)/(ab) = 1/a + 1/b,
and since
E{X²} = (ab/(a − b)) (2/b³ − 2/a³) = 2(a² + ab + b²)/(a²b²),
the variance is
V{X} = E{X²} − (E{X})² = {2(a² + ab + b²) − (a + b)²}/(a²b²) = (a² + b²)/(a²b²) = 1/a² + 1/b².

4) Let x > 0 and a > 0 be fixed. Then by l'Hospital's rule (with b as the variable),
lim_{b→a} f(x) = a² lim_{b→a} (e^{−bx} − e^{−ax})/(a − b) = a² lim_{b→a} (−x e^{−bx})/(−1) = a² x e^{−ax} = g(x).
For x ≤ 0 we get of course 0, thus
lim_{b→a} f(x) = g(x).

5) Since
∫_{−∞}^∞ g(x) dx = a² ∫_0^∞ x e^{−ax} dx = ∫_0^∞ t e^{−t} dt = 1,
it follows that g(x) is the frequency of a random variable Y. Its mean is
E{Y} = a² ∫_0^∞ x² e^{−ax} dx = 2/a.

6) It follows from 3) that
lim_{b→a} E{X} = lim_{b→a} (1/a + 1/b) = 2/a = E{Y}. ♦

Remark. It is possible to give a simpler solution, because f is the frequency of a sum of two independent random variables of the frequencies b e^{−bx} and a e^{−ax} for x ≥ 0 (and 0 for x < 0), from which E{X} = 1/a + 1/b and V{X} = 1/a² + 1/b² follow immediately.
Example 10.8 Let X and Y be independent random variables, both geometrically distributed of
P{X = k} = P{Y = k} = p q^k,   k ∈ N0,
where 0 < p < 1 and q = 1 − p.
1) Find the means E{X} and E{Y}.
2) Find the variances V{X} and V{Y}.
3) Find P{X + Y = k}, k ∈ N0.

1) Here we have used that by a partial differentiation with respect to q ∈ ]0, 1[ we obtain the important expressions
1/(1 − q) = Σ_{k=0}^∞ q^k   and   d/dq (1/(1 − q)) = 1/(1 − q)² = Σ_{k=1}^∞ k q^{k−1}.
Hence
E{X} = E{Y} = Σ_{k=0}^∞ k · p q^k = pq · 1/(1 − q)² = pq/p² = q/p.

2) Notice that it is easier to compute E{X(X − 1)} than
E{X²} = Σ_{k=1}^∞ k² P{X = k} = Σ_{k=1}^∞ k² p q^k.
Differentiating once more,
2/(1 − q)³ = Σ_{k=2}^∞ k(k − 1) q^{k−2},
i.e.
E{X(X − 1)} = E{Y(Y − 1)} = Σ_{k=2}^∞ k(k − 1) p q^k = 2pq²/(1 − q)³ = 2q²/p²,
hence
V{X} = V{Y} = E{X(X − 1)} + E{X} − (E{X})² = 2q²/p² + q/p − q²/p² = q²/p² + q/p = q/p².

3) By the convolution formula,
P{X + Y = k} = Σ_{i=0}^k P{X = i} P{Y = k − i} = Σ_{i=0}^k p q^i · p q^{k−i} = (k + 1) p² q^k,   k ∈ N0.

Remark. ... ♦
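All three formulas can be checked by truncated sums. An added sketch (the value p = 0.4 and the truncation length are illustrative choices):

```python
from math import isclose

# Geometric distribution P{X = k} = p*q**k, k = 0, 1, 2, ...
p = 0.4
q = 1.0 - p
K = 500
pk = [p * q**k for k in range(K)]
mean = sum(k * w for k, w in enumerate(pk))                  # q/p
var = sum(k * k * w for k, w in enumerate(pk)) - mean**2     # q/p**2
conv7 = sum(pk[i] * pk[7 - i] for i in range(8))             # P{X + Y = 7}
```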
Example 10.9 There are given two components in an instrument. Their lifetimes T1 and T2 are assumed to be independent, both exponentially distributed of frequency a e^{−at} for t > 0, where a > 0. We introduce the random variables X1, X2 and Y2 by
X1 = min{T1, T2},   X2 = max{T1, T2},   Y2 = X2 − X1.
1. Find the distribution and the mean of X1.
2. Find the distribution and the mean of X2.
3. Find the mean E{Y2}.
Let it be given without proof that (X1, X2) has the simultaneous frequency
h(x1, x2) = 2a² e^{−a(x1+x2)} for 0 < x1 < x2, and h(x1, x2) = 0 otherwise.
4. Find the simultaneous frequency of the 2-dimensional random variable (X1, Y2).
5. Find the frequency of Y2.
6. Check if the random variables X1 and Y2 are independent.

1) For X1 we get
P{X1 > x1} = P{T1 > x1} · P{T2 > x1} = e^{−2ax1},   x1 > 0,
so X1 is exponentially distributed of frequency 2a e^{−2ax1} for x1 > 0, and
E{X1} = 1/(2a).

2) For X2 we get
P{X2 ≤ x2} = P{T1 ≤ x2 ∧ T2 ≤ x2} = P{T1 ≤ x2} · P{T2 ≤ x2} = (1 − e^{−ax2})²,   x2 > 0,
so X2 has the frequency
fX2(x2) = 2a e^{−ax2} (1 − e^{−ax2}) = 2a e^{−ax2} − 2a e^{−2ax2} for x2 > 0,
and fX2(x2) = 0 for x2 ≤ 0, and the mean
E{X2} = 2/a − 1/(2a) = 3/(2a).

3) This is trivial, because
E{Y2} = E{X2} − E{X1} = 3/(2a) − 1/(2a) = 1/a.

4) The map (x1, x2) → (y1, y2) = (x1, x2 − x1) has the inverse (x1, x2) = (y1, y1 + y2) and Jacobian 1, hence
k(y1, y2) = h(y1, y1 + y2) = 2a² e^{−a(2y1+y2)} for y1 > 0 and y2 > 0.
This is also written
k(y1, y2) = 2a e^{−2ay1} · a e^{−ay2} for y1 > 0 and y2 > 0, and k(y1, y2) = 0 otherwise.

5) It follows that Y2 has the frequency a e^{−ay2} for y2 > 0, i.e. Y2 is exponentially distributed.

6) It follows immediately from 4. that
k(y1, y2) = fX1(y1) · fY2(y2),
hence X1 and Y2 are independent.
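The means and the independence claim can be checked by simulation. An added sketch (NumPy, the seed and the value a = 2 are assumptions):

```python
import numpy as np

# T1, T2 independent Exp(a); X1 = min, X2 = max, Y2 = X2 - X1.
a = 2.0
rng = np.random.default_rng(8)
t = rng.exponential(1.0 / a, size=(1_000_000, 2))
x1 = t.min(axis=1)
x2 = t.max(axis=1)
y2 = x2 - x1
mean_x1 = x1.mean()     # 1/(2a) = 0.25
mean_x2 = x2.mean()     # 3/(2a) = 0.75
mean_y2 = y2.mean()     # 1/a   = 0.5
corr = np.corrcoef(x1, y2)[0, 1]   # ~ 0, consistent with independence
```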