3 Multidimensional Random Variables
3.1 What Is a Multidimensional Random Variable?
Consider the sample space S, as shown in Fig. 3.1, and a two-dimensional space where x1 and x2 are the real axes, -\infty < x_1 < \infty, -\infty < x_2 < \infty. Mapping the outcome si to a point (x1, x2) in the two-dimensional space leads to a two-dimensional random variable (X1, X2).
If the sample space is discrete, the resulting two-dimensional random variable will also be discrete. However, as in the one-variable case, a continuous sample space may result in a continuous, discrete, or mixed two-dimensional random variable. The range of a discrete two-dimensional random variable is made of points in a two-dimensional space, while the range of a continuous two-dimensional variable is a continuous area in a two-dimensional space. Similarly, a mixed two-dimensional random variable has, aside from a continuous area, additional
discrete points. Figure 3.2 shows examples of discrete, continuous, and mixed
ranges.
The following example illustrates this concept.
Example 3.1.1 Two messages have been sent through a communication system. Each of them, independently of the other, can be transmitted with or without error. Define the two-dimensional random variable for this case.
Solution The sample space has four outcomes, shown in Fig. 3.3.
We use the following notation:
s1 = {The first and second messages are transmitted correctly};
s2 = {The first message is transmitted correctly, while the second one incurs an error};
s3 = {The second message is transmitted correctly, while the first message incurs an error};
s4 = {Both messages incur an error in transmission}.   (3.1)
Fig. 3.2 Examples of discrete, continuous and mixed ranges. (a) Discrete. (b) Continuous. (c) Mixed
Example 3.1.2 Consider the experiment of tossing two coins. There are four
outcomes in the sample space S:
Adopting the notation from Example 1.1.1, we rewrite the outcomes (3.3) in the following form:
s1 = {H, H};
s2 = {H, T};
s3 = {T, H};
s4 = {T, T}.   (3.4)
Let X1 and X2 indicate the occurrence of heads or tails for the first and second coins, respectively.
The values of the random variable X1 for the first coin toss are x1 = 1 if heads occurs, and x1 = 0 if tails occurs.
Similarly, for the variable X2, the value x2 = 1 indicates the occurrence of heads, and x2 = 0 indicates tails in the second coin toss.
The mapping from the sample space S to the (x1, x2) space is shown in Fig. 3.4.
Note that the values of the random variables are the same as in Example 3.1.1. Therefore, one can obtain the same two-dimensional random variable from different random experiments.
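As a quick numerical illustration (a minimal MATLAB sketch, not part of the original example), the mapping of Example 3.1.2 can be simulated directly; for fair coins each estimated probability should be close to 1/4:

% Minimal MATLAB sketch (not from the book): simulate the mapping of
% Example 3.1.2, where the value 1 denotes heads and 0 denotes tails.
N  = 10000;                      % number of repetitions of the experiment
X1 = double(rand(1, N) > 0.5);   % first coin:  1 = heads, 0 = tails
X2 = double(rand(1, N) > 0.5);   % second coin: 1 = heads, 0 = tails
for x1 = 0:1
    for x2 = 0:1
        p = mean(X1 == x1 & X2 == x2);   % estimate P{X1 = x1, X2 = x2}
        fprintf('P{X1 = %d, X2 = %d} is approximately %.3f\n', x1, x2, p);
    end
end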
The next example illustrates that, as in the one-variable case, one can obtain different two-dimensional random variables from the same experiment.
Example 3.1.3 Consider the same outcomes as in Example 3.1.2 for the following mapping:
Therefore, the values of the random variable X1 are x1 = 1 if at least one heads occurs, and x1 = 0 if no heads occurs. Similarly, the values of X2 are x2 = 1 if at least one tails occurs, and x2 = 0 if no tails occurs. This mapping is shown in Fig. 3.5.
In a similar way, one can define an N-dimensional random variable by mapping the outcomes of the space S to an N-dimensional space, thus obtaining the N-dimensional random variable

(X_1, X_2, \ldots, X_N)   (3.6)

with the range

(x_1, x_2, \ldots, x_N).   (3.7)
Fig. 3.4 Mapping the sample space S to (x1, x2) space in Example 3.1.2
Fig. 3.5 Mapping from the space S to the space (x1, x2) in Example 3.1.3
The cumulative distribution function, joint distribution function or, shortly, joint distribution of a pair of random variables X1 and X2 is defined as the probability of the joint event

\{X_1 \le x_1, X_2 \le x_2\},   (3.8)

where x1 and x2 are values within the two-dimensional space, as shown in (3.9):

F_{X_1 X_2}(x_1, x_2) = P\{X_1 \le x_1, X_2 \le x_2\}.   (3.9)
joint distribution is also called two-dimensional distribution, or second distribution.
In this context, distribution of one random variable is also called one-dimensional
distribution, or first distribution.
From the properties of a one-dimensional distribution, we easily find the follow-
ing properties for a two-dimensional distribution:
P.1   0 \le F_{X_1 X_2}(x_1, x_2) \le 1.   (3.10)
P\{x_{11} < X_1 \le x_{12},\ x_{21} < X_2 \le x_{22}\} = P\{x_{11} < X_1 \le x_{12},\ X_2 \le x_{22}\} - P\{x_{11} < X_1 \le x_{12},\ X_2 \le x_{21}\}
= P\{X_1 \le x_{12},\ X_2 \le x_{22}\} - P\{X_1 \le x_{11},\ X_2 \le x_{22}\} - [P\{X_1 \le x_{12},\ X_2 \le x_{21}\} - P\{X_1 \le x_{11},\ X_2 \le x_{21}\}]
= F_{X_1 X_2}(x_{12}, x_{22}) - F_{X_1 X_2}(x_{11}, x_{22}) - F_{X_1 X_2}(x_{12}, x_{21}) + F_{X_1 X_2}(x_{11}, x_{21}).   (3.15)
Example 3.2.1 Find the joint distribution function from Example 3.1.1, considering that the probabilities that the first and second messages are transmitted correctly are p1 and p2, respectively, and that the messages are independent.
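The worked solution is not reproduced in this extract; the following is a minimal sketch, assuming that the value 1 denotes correct transmission and 0 denotes an error. By independence, the joint probabilities of the four points of the range are

P\{X_1 = 1, X_2 = 1\} = p_1 p_2,
P\{X_1 = 1, X_2 = 0\} = p_1 (1 - p_2),
P\{X_1 = 0, X_2 = 1\} = (1 - p_1) p_2,
P\{X_1 = 0, X_2 = 0\} = (1 - p_1)(1 - p_2),

and the joint distribution F_{X_1 X_2}(x_1, x_2) is the staircase function obtained by summing these probabilities over all points with x_{1i} \le x_1 and x_{2j} \le x_2.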
Example 3.2.4 Find the probability that a random point (x1, x2) will fall in the area
A, as shown in Fig. 3.6.
Solution Area A is defined as:
For N random variables (X_1, X_2, \ldots, X_N), the joint distribution, denoted as F_{X_1,\ldots,X_N}(x_1, \ldots, x_N), is then defined as:
The joint probability density function (PDF), joint density function, two-dimensional density function or, shortly, joint PDF for a pair of random variables X1 and X2 is denoted as f_{X_1 X_2}(x_1, x_2) and defined as:

f_{X_1 X_2}(x_1, x_2) = \frac{\partial^2 F_{X_1 X_2}(x_1, x_2)}{\partial x_1\, \partial x_2}.   (3.27)
For discrete variables, the derivatives are not defined at the step discontinuities, implying the introduction of delta functions at the pairs of discrete points (x1i, x2j). Therefore, the joint PDF is equal to (see [PEE93, pp. 358–359], [HEL91, p. 147]):

f_{X_1 X_2}(x_1, x_2) = \sum_i \sum_j P\{X_1 = x_{1i}, X_2 = x_{2j}\}\, \delta(x_1 - x_{1i})\, \delta(x_2 - x_{2j}).   (3.28)
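As an illustration (a sketch using the coin mapping of Example 3.1.2 and assuming fair coins, so that each of the four points has probability 1/4), (3.28) gives

f_{X_1 X_2}(x_1, x_2) = \frac{1}{4}\left[\delta(x_1)\delta(x_2) + \delta(x_1)\delta(x_2 - 1) + \delta(x_1 - 1)\delta(x_2) + \delta(x_1 - 1)\delta(x_2 - 1)\right].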
Example 3.2.5 We can find the joint density functions for Examples 3.2.1, 3.2.2,
and 3.2.3. Using the distribution (3.18), and (3.27), we have:
From here, considering that dx is an infinitesimal interval, the PDF in the interval
[x, x + dx] is constant, resulting in:
Similarly, the volume in Fig. 3.7b presents the probability that the random
variables X1 and X2 are in the intervals [x1, x1 + dx1] and [x2, x2 + dx2],
respectively.
The equivalent probability
corresponds to the elemental volume V, with a base of (dx1 dx2) and height of
fX 1 X2 ðx1 ; x2 Þ:
P.2   \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2 = 1.   (3.37)

P.3   F_{X_1 X_2}(x_1, x_2) = \int_{-\infty}^{x_2}\int_{-\infty}^{x_1} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2.   (3.38)

P.4   F_{X_1}(x_1) = \int_{-\infty}^{\infty}\int_{-\infty}^{x_1} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2,   (3.39)

      F_{X_2}(x_2) = \int_{-\infty}^{x_2}\int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2.   (3.40)

P.5   P\{x_{11} < X_1 \le x_{12},\ x_{21} < X_2 \le x_{22}\} = \int_{x_{21}}^{x_{22}}\int_{x_{11}}^{x_{12}} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2.   (3.41)

P.6   f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_2, \qquad f_{X_2}(x_2) = \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1.   (3.42a, 3.42b)
F_{X_1}(x_1 \mid x_2 < X_2 \le x_2 + dx_2) = \frac{\int_{-\infty}^{x_1}\int_{x_2}^{x_2 + dx_2} f_{X_1 X_2}(x_1, x_2)\, dx_2\, dx_1}{\int_{x_2}^{x_2 + dx_2} f_{X_2}(x_2)\, dx_2}
= \frac{\int_{-\infty}^{x_1} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2}{f_{X_2}(x_2)\, dx_2} = \frac{\int_{-\infty}^{x_1} f_{X_1 X_2}(x_1, x_2)\, dx_1}{f_{X_2}(x_2)}.   (3.47)
If dx2 approaches zero, then for each x2 for which f_{X_2}(x_2) \neq 0, we finally obtain:
F_{X_1}(x_1 \mid X_2 = x_2) = \frac{\int_{-\infty}^{x_1} f_{X_1 X_2}(x_1, x_2)\, dx_1}{f_{X_2}(x_2)}.   (3.48)
Similarly, we have:
F_{X_2}(x_2 \mid X_1 = x_1) = \frac{\int_{-\infty}^{x_2} f_{X_1 X_2}(x_1, x_2)\, dx_2}{f_{X_1}(x_1)}.   (3.49)
From (3.48) and (3.49), using (3.27), we obtain the corresponding PDFs:
f_{X_1}(x_1 \mid X_2 = x_2) = \frac{f_{X_1 X_2}(x_1, x_2)}{f_{X_2}(x_2)},   (3.50)

f_{X_2}(x_2 \mid X_1 = x_1) = \frac{f_{X_1 X_2}(x_1, x_2)}{f_{X_1}(x_1)}.   (3.51)
Consider now that the condition for event B is defined as an event in which the
other variable X2 lies in the given interval [x21, x22], resulting in:
F_{X_1}(x_1 \mid x_{21} < X_2 \le x_{22}) = \frac{\int_{-\infty}^{x_1}\int_{x_{21}}^{x_{22}} f_{X_1 X_2}(x_1, x_2)\, dx_2\, dx_1}{\int_{x_{21}}^{x_{22}} f_{X_2}(x_2)\, dx_2}
= \frac{F_{X_1 X_2}(x_1, x_{22}) - F_{X_1 X_2}(x_1, x_{21})}{F_{X_2}(x_{22}) - F_{X_2}(x_{21})},   (3.52)
f_{X_1}(x_1 \mid x_{21} < X_2 \le x_{22}) = \frac{\int_{x_{21}}^{x_{22}} f_{X_1 X_2}(x_1, x_2)\, dx_2}{\int_{x_{21}}^{x_{22}}\int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2}.   (3.54)
f_{X_2}(x_2) = \int_{0}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1 = \int_{0}^{\infty} \lambda x_1\, e^{-x_1(\lambda + x_2)}\, dx_1.   (3.59)
f_{X_1}(x_1 \mid x_2) = \frac{f_{X_1 X_2}(x_1, x_2)}{f_{X_2}(x_2)}.   (3.61)
The two random variables, X1 and X2, are independent if the events
Therefore, if the random variables X1 and X2 are independent, then their joint
distributions and joint PDFs are equal to the products of the marginal distributions
and densities, respectively.
Example 3.2.8 Determine whether or not the random variables X1 and X2 are
independent, if the joint density is given as:
f_{X_1 X_2}(x_1, x_2) = \begin{cases} 1/2 & \text{for } 0 < x_1 < 2,\ 0 < x_2 < 1, \\ 0 & \text{otherwise,} \end{cases}   (3.66)
where
f_{X_1}(x_1) = \begin{cases} 1/2 & \text{for } 0 < x_1 < 2, \\ 0 & \text{otherwise,} \end{cases}   (3.68)
f_{X_2}(x_2) = \begin{cases} 1 & \text{for } 0 < x_2 < 1, \\ 0 & \text{otherwise.} \end{cases}   (3.69)
From (3.65) and (3.67)–(3.69), we can conclude that the variables are
independent.
The result (3.65) can be generalized to N jointly independent random variables:
F_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = \prod_{i=1}^{N} F_{X_i}(x_i).   (3.70)

f_{X_1,\ldots,X_N}(x_1, \ldots, x_N) = \prod_{i=1}^{N} f_{X_i}(x_i).   (3.71)
In order to find the mean value of two joint random variables X1 and X2, we will apply a procedure similar to the one used in the case of one random variable (see Sect. 2.7), starting with a random experiment.
Consider two discrete random variables X1 and X2 with the possible values x1i
and x2j, respectively.
The experiment is performed N times under the same conditions, where

N = \sum_{i=1}^{N_1} \sum_{j=1}^{N_2} N_{ij},   (3.72)

and N_{ij} is the number of times the pair (x_{1i}, x_{2j}) occurred.
As indicated in Sect. 2.7.2, we now have a finite set of values x1i and x2j, and we
can calculate the arithmetic mean value of the products, also called the empirical
mean value, since it is obtained from the experiment,
\overline{X_1 X_2}_{\mathrm{emp}} = \frac{\sum_{i=1}^{N_1} \sum_{j=1}^{N_2} N_{ij}\, x_{1i} x_{2j}}{N}.   (3.74)
For a high enough value N, the ratio Nij/N becomes a good approximation of the
probability,
\frac{N_{ij}}{N} \to P\{X_1 = x_{1i}, X_2 = x_{2j}\},   (3.75)
and the empirical mean value becomes independent of the experiment and approaches the mean value of the joint random variables X1 and X2,

\overline{X_1 X_2} = \sum_i \sum_j x_{1i} x_{2j}\, P\{X_1 = x_{1i}, X_2 = x_{2j}\},   (3.76)
\overline{X_1 X_2} = \sum_{i=1}^{\infty} \sum_{j=1}^{\infty} x_{1i} x_{2j}\, P\{X_1 = x_{1i}, X_2 = x_{2j}\}.   (3.77)
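The sum (3.77) is easily evaluated numerically from a table of joint probabilities. The following minimal MATLAB sketch uses assumed example values and probabilities (they are not from the text):

% Minimal MATLAB sketch (assumed example data): E{X1*X2} from a joint PMF table.
x1 = [-1 1];                      % values of X1 (assumed)
x2 = [-1 1];                      % values of X2 (assumed)
P  = [0.10 0.30;                  % P(i,j) = P{X1 = x1(i), X2 = x2(j)} (assumed)
      0.25 0.35];
[XX2, XX1] = meshgrid(x2, x1);    % XX1(i,j) = x1(i), XX2(i,j) = x2(j)
meanX1X2 = sum(sum(XX1 .* XX2 .* P))   % the double sum in (3.77)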
This is generalized to N random variables

X_1, \ldots, X_N,   (3.78)

for which
\overline{X_1 \cdots X_N} = \sum_{i=1}^{\infty} \cdots \sum_{j=1}^{\infty} x_{1i} \cdots x_{Nj}\, P\{X_1 = x_{1i}, \ldots, X_N = x_{Nj}\}.   (3.79)
Similarly,
\overline{g(X_1, \ldots, X_N)} = \sum_{i=1}^{\infty} \cdots \sum_{j=1}^{\infty} g(x_{1i}, \ldots, x_{Nj})\, P\{X_1 = x_{1i}, \ldots, X_N = x_{Nj}\}.   (3.80)
Example 3.3.1 The discrete random variables X1 and X2 both take the discrete values -1 and 1, with the following probabilities:
Solution
(a) From (3.76), we have:
Using a similar approach to that taken in Sect. 2.7.3, we can express the mean
value of the joint continuous random variables X1 and X2, using the joint density
function fX1 X2 ðx1 ; x2 Þ:
\overline{X_1 X_2} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2.   (3.86)
The mean value for the two random variables can be generalized for N continu-
ous random variables:
X_1, \ldots, X_N.   (3.88)
\overline{X_1 \cdots X_N} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} x_1 \cdots x_N\, f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\, dx_1 \cdots dx_N.   (3.89)
Similarly, the expression (3.87) can be generalized for N random variables (the
mean value of a function of N variables),
g(X_1, \ldots, X_N),   (3.90)
\overline{g(X_1, \ldots, X_N)} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} g(x_1, \ldots, x_N)\, f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\, dx_1 \cdots dx_N.   (3.91)
Consider the sum of two random variables X1 and X2. We can write:
g(X_1, X_2) = X_1 + X_2,   (3.92)
\overline{X_1 + X_2} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x_1 + x_2)\, f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2,   (3.93)

where

\int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_2 = f_{X_1}(x_1), \qquad \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1 = f_{X_2}(x_2).   (3.94)
\overline{X_1 + X_2} = \int_{-\infty}^{\infty} x_1 f_{X_1}(x_1)\, dx_1 + \int_{-\infty}^{\infty} x_2 f_{X_2}(x_2)\, dx_2 = \overline{X_1} + \overline{X_2}.   (3.95)
\overline{\sum_{k=1}^{N} X_k} = \sum_{k=1}^{N} \overline{X_k}.   (3.96)
Example 3.3.2 Verify the relation (3.95) for the discrete random variables from
Example 3.3.1.
Solution From (3.80), we have:
\overline{X_1 + X_2} = \sum_{i=1}^{2}\sum_{j=1}^{2} (x_{1i} + x_{2j})\, P\{X_1 = x_{1i}, X_2 = x_{2j}\} = 3/4 - 1/4 = 1/2.   (3.97)
To verify the result (3.97), we found the following probabilities for random
variables X1 and X2, using (1.67):
\overline{X_1} = \sum_{i=1}^{2} x_{1i}\, P\{X_1 = x_{1i}\} = -1 \cdot 3/8 + 1 \cdot 5/8 = 1/4.   (3.102)
r = n + k.   (3.106)
Equation (3.105) presents the expected value of the function g(X1, X2) of the
random variables X1 and X2, and thus can be obtained using (3.87):
m_{nk} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1^n\, x_2^k\, f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2.   (3.107)
R_{X_1 X_2} = E\{X_1 X_2\} = 0.   (3.112)
Note that if the variables are uncorrelated and one or both variables have a zero
mean value, it follows that they are also orthogonal.
Example 3.3.3 The random variables X1 and X2 are related in the following form:
X_2 = -2X_1 + 5.   (3.113)
Determine whether or not the variables are correlated and orthogonal, if the random variable X1 has mean and mean squared values equal to 2 and 5, respectively.
Solution In order to determine if the variables are correlated and orthogonal, we
first have to find the correlation:
Since
The joint central moment of order r, of two random variables X1 and X2, with
corresponding mean values E{X1} and E{X2}, is defined as:
m_{nk} = E\left\{ (X_1 - E\{X_1\})^n\, (X_2 - E\{X_2\})^k \right\},   (3.117)

r = n + k.   (3.118)
Using the expression for the mean value of the function of two random variables
(3.87), we arrive at:
m_{nk} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x_1 - \overline{X_1})^n\, (x_2 - \overline{X_2})^k\, f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2.   (3.119)
m_{n_1,\ldots,n_N} = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} (x_1 - \overline{X_1})^{n_1} \cdots (x_N - \overline{X_N})^{n_N}\, f_{X_1,\ldots,X_N}(x_1, \ldots, x_N)\, dx_1 \cdots dx_N.   (3.120)
Let us first relate the covariance to the independent variables, where the joint
PDF is equal to the product of the marginal PDFs, resulting in the following
covariance:
C_{X_1 X_2} = m_{11} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x_1 - \overline{X_1})(x_2 - \overline{X_2})\, f_{X_1}(x_1) f_{X_2}(x_2)\, dx_1\, dx_2
= \left[\int_{-\infty}^{\infty} x_1 f_{X_1}(x_1)\, dx_1 - \overline{X_1}\int_{-\infty}^{\infty} f_{X_1}(x_1)\, dx_1\right]\left[\int_{-\infty}^{\infty} x_2 f_{X_2}(x_2)\, dx_2 - \overline{X_2}\int_{-\infty}^{\infty} f_{X_2}(x_2)\, dx_2\right]
= (\overline{X_1} - \overline{X_1})(\overline{X_2} - \overline{X_2}) = 0.   (3.122)
From (3.122) it follows that the covariance is equal to zero for the independent
variables.
Using (3.95), (3.121) can be simplified as:
C_{X_1 X_2} = \overline{X_1 X_2} - \overline{X_1}\, \overline{X_2}.   (3.123)
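A small numerical check of (3.123) (a minimal MATLAB sketch with assumed independent uniform samples, not from the text): the estimate obtained from the definition (3.121) and the one obtained from (3.123) coincide, and both are close to zero for independent variables.

% Minimal MATLAB sketch: numerical check of C = E{X1 X2} - E{X1}E{X2}.
N  = 1e5;
X1 = rand(1, N);                                      % independent samples (assumed)
X2 = rand(1, N);
C_def   = mean((X1 - mean(X1)) .* (X2 - mean(X2)));   % definition (3.121)
C_short = mean(X1 .* X2) - mean(X1) * mean(X2);       % formula (3.123)
% Both values coincide and are close to zero, since X1 and X2 are independent.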
Let us now relate covariance with the correlated and orthogonal random
variables.
From (3.123) and (3.111), it follows that the covariance is equal to zero if the
random variables are uncorrelated.
Therefore, summarizing the above statements, it follows that the covariance equals zero if the random variables are either independent, or dependent but uncorrelated.
Additionally, from (3.112) it follows that if the random variables are orthogo-
nal, then the covariance is equal to the negative product of their mean values.
Consider the sum of two random variables X1 and X2, which is itself a random
variable X:
X = X_1 + X_2.   (3.125)
By applying the definition (2.333) of the variance of the random variable X and
using (3.88) and (3.125), we get:
\sigma_X^2 = \overline{(X - \overline{X})^2} = \overline{\left(X_1 + X_2 - \overline{X_1} - \overline{X_2}\right)^2}
= \overline{(X_1 - \overline{X_1})^2} + \overline{(X_2 - \overline{X_2})^2} + 2\,\overline{(X_1 - \overline{X_1})(X_2 - \overline{X_2})}.   (3.126)
The first two terms in (3.126) are the corresponding variances of the variables X1
and X2, respectively, while the third averaged product is the covariance.
Therefore, (3.126) reduces to:
Equation (3.127) states that the variance of the sum of the variables X1 and X2 is
equal to the sum of the corresponding variances if their covariance is equal to zero
(i.e., the variables are either independent or uncorrelated).
Therefore, if the random variables X1 and X2 are either independent or uncorrelated (C_{X_1 X_2} = 0), then
Random variables are independent if one variable does not have any influence on the values of the other variable, and vice versa. For nonrandom (deterministic) variables, dependency means that, knowing one variable, we can find the exact value of the other variable.
However, the dependence between random variables can have different degrees.
If we can express the relation between random variables X1 and X2 with an exact mathematical expression, then this dependency is called functional dependency, as seen in the example (X2 = 2X1 + 2). In the opposite case, we do not have an exact mathematical expression but only a tendency; for example, if X1 is the height and X2 is the weight of people in a population, in the majority of cases higher values of X1 correspond to higher values of X2, but there is no mathematical relation to express this dependence.
To this end, the dependence between variables is expressed using certain characteristics, like the covariance. However, the covariance contains not only information about the dependency of the random variables, but also information about the dissipation of the variables around their mean values. If, for example, the dissipations of the random variables X1 and X2 around their mean values were very small, then the covariance C_{X_1 X_2} would be small for any degree of dependency of the variables.
This problem is solved by introducing the correlation coefficient (\rho_{X_1 X_2}):

\rho_{X_1 X_2} = \frac{C_{X_1 X_2}}{\sigma_{X_1}\sigma_{X_2}} = \frac{\overline{(X_1 - \overline{X_1})(X_2 - \overline{X_2})}}{\sigma_{X_1}\sigma_{X_2}} = \frac{\overline{X_1 X_2} - \overline{X_1}\,\overline{X_2}}{\sigma_{X_1}\sigma_{X_2}}.   (3.130)
What are the values that the correlation coefficient can take?
(a) The variables are equal
Consider a case in which the random variables are equal, X2 = X1, and therefore there is a maximum degree of dependency between them. From (3.130), and using the definition of variance (2.333), we have:
\rho_{X_1 X_2} = \frac{C_{X_1 X_2}}{\sigma_{X_1}\sigma_{X_2}} = \frac{\overline{(X_1 - \overline{X_1})(X_1 - \overline{X_1})}}{\sigma_{X_1}\sigma_{X_1}} = \frac{\overline{(X_1 - \overline{X_1})^2}}{\sigma_{X_1}^2} = \frac{\sigma_{X_1}^2}{\sigma_{X_1}^2} = 1.   (3.131)
X_2 = aX_1 + b,   (3.132)
where a and b are deterministic constants. Figure 3.9 shows this dependency for
a > 0 and a < 0.
Finally, using (3.133) and (3.134), we can obtain the correlation coefficient:
\rho_{X_1 X_2} = \frac{C_{X_1 X_2}}{\sigma_{X_1}\sigma_{X_2}} = \frac{a\,\sigma_{X_1}^2}{\sigma_{X_1}\, |a|\, \sigma_{X_1}} = \frac{a}{|a|} = \begin{cases} 1 & \text{for } a > 0, \\ -1 & \text{for } a < 0. \end{cases}   (3.135)
then there is a positive correlation, as shown in Fig. 3.10a, in contrast to the case in which

-1 \le \rho_{X_1 X_2} \le 1.   (3.138)
Y = X^2.   (3.139)
Determine whether or not the random variables X and Y are correlated and find
the coefficient of correlation for the following two cases:
(a) The random variable X is uniform in the interval [-1, 1].
(b) The random variable X is uniform in the interval [0, 2].
Solution
(a) The expected values for the random variables X and Y are:
\overline{X} = 0, \qquad \overline{Y} = \overline{X^2} = \int_{-1}^{1} \frac{1}{2}\, x^2\, dx = \frac{1}{3}.   (3.140)
The obtained result shows that the covariance is equal to zero, and thus the
coefficient of correlation (3.130) is also zero. As such, the random variables are
uncorrelated.
Therefore, variables X and Y are dependent and uncorrelated.
(b) In this case, the random variables are also dependent.
The corresponding mean values are:
\overline{X} = 1, \qquad \overline{Y} = \overline{X^2} = \int_{0}^{2} \frac{1}{2}\, x^2\, dx = \frac{4}{3}.   (3.142)
To calculate (3.143), we first need the third moment of the random variable X:
\overline{X^3} = \int_{0}^{2} \frac{1}{2}\, x^3\, dx = 2.   (3.144)
This result indicates that the random variables are correlated. To find the
coefficient of correlation we need the corresponding variances.
From (3.142), we have:
\sigma_X^2 = \overline{X^2} - \overline{X}^2 = 4/3 - 1 = 1/3,   (3.146)

\sigma_Y^2 = \overline{Y^2} - \overline{Y}^2 = \overline{X^4} - \left(\overline{X^2}\right)^2,   (3.147)
\overline{X^4} = \int_{0}^{2} \frac{1}{2}\, x^4\, dx = 16/5.   (3.148)
Placing (3.148) and (3.142) into (3.147), we can obtain the variance of the
random variable Y:
\sigma_Y = \sqrt{\sigma_Y^2} = \sqrt{1.422} = 1.1925.   (3.151)
Using values for the covariance (3.145) and standard deviations (3.150) and
(3.151), we calculate the coefficient of correlation using (3.130):
\rho_{XY} = \frac{C_{XY}}{\sigma_X \sigma_Y} = \frac{2/3}{0.5774 \cdot 1.1925} = 0.9682.   (3.152)
As opposed to case (a), in case (b), the variables are dependent and correlated.
Based on the previous discussion, we can conclude that the dependence is a
stronger condition than correlation. That is, if the variables are independent, they
are also uncorrelated. However, if the random variables are dependent they can be
either correlated or uncorrelated, as summarized in Fig. 3.12.
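Example 3.3.4 can also be checked by simulation (a minimal MATLAB sketch, not part of the original solution):

% Minimal MATLAB sketch: numerical check of Example 3.3.4 (Y = X^2).
N  = 1e6;
Xa = -1 + 2 * rand(1, N);         % (a) X uniform in [-1, 1]
Ra = corrcoef(Xa, Xa.^2);         % Ra(1,2) is close to 0: uncorrelated
Xb = 2 * rand(1, N);              % (b) X uniform in [0, 2]
Rb = corrcoef(Xb, Xb.^2);         % Rb(1,2) is close to 0.9682, cf. (3.152)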
[Figure: positive correlation, \rho_{XY} > 0; negative correlation, \rho_{XY} < 0]
Consider two random variables X1 and X2 with a known joint PDF f_{X_1 X_2}(x_1, x_2). As a result of the transformation

Y_1 = g_1(X_1, X_2),
Y_2 = g_2(X_1, X_2),   (3.153)
Fig. 3.14 Mapping from (x1, x2) space onto (y1, y2) space
the elementary area (dx1 dx2) is mapped one-to-one onto a corresponding infinites-
imal area (dy1 dy2). As a result, the corresponding probabilities are equal:
The infinitesimal areas (dy1 dy2) and (dx1 dx2) are related as,
dx_1\, dx_2 = \frac{dy_1\, dy_2}{J(x_1, x_2)},   (3.157)
where J(x1, x2) is the Jacobian of the transformation (3.153) [PAP65, p. 201]:
J(x_1, x_2) = \begin{vmatrix} \dfrac{\partial g_1}{\partial x_1} & \dfrac{\partial g_1}{\partial x_2} \\[4pt] \dfrac{\partial g_2}{\partial x_1} & \dfrac{\partial g_2}{\partial x_2} \end{vmatrix}.   (3.158)
where g_i^{-1}, i = 1, 2, is the inverse transformation of (3.153),

x_1 = g_1^{-1}(y_1, y_2),
x_2 = g_2^{-1}(y_1, y_2).   (3.160)
Note that in (3.159) the absolute value of the Jacobian must be used because the joint density cannot be negative, as opposed to the Jacobian, which can be either positive or negative.
If for certain values y1, y2, there is no real solution (3.160), then
Example 3.4.1 The joint PDF of the random variables X1 and X2 is

f_{X_1 X_2}(x_1, x_2) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x_1^2 + x_2^2}{2\sigma^2}}, \qquad -\infty < x_1 < \infty,\ -\infty < x_2 < \infty.   (3.162)
Find the joint density function of the random variables Y1 and Y2 if,
X_1 = Y_1 \cos Y_2,
X_2 = Y_1 \sin Y_2.   (3.163)
y_2 = \tan^{-1}\frac{x_2}{x_1} = g_2(x_1, x_2), \qquad y_1 = \sqrt{x_1^2 + x_2^2} = g_1(x_1, x_2).   (3.164)
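The intermediate steps are not reproduced in this extract; the following is a short sketch using the form of (3.159) written with the Jacobian of the inverse transformation (3.163):

\left|\frac{\partial(x_1, x_2)}{\partial(y_1, y_2)}\right| = \begin{vmatrix} \cos y_2 & -y_1 \sin y_2 \\ \sin y_2 & y_1 \cos y_2 \end{vmatrix} = y_1,

so that, with x_1^2 + x_2^2 = y_1^2,

f_{Y_1 Y_2}(y_1, y_2) = f_{X_1 X_2}(y_1 \cos y_2,\ y_1 \sin y_2)\, y_1 = \frac{y_1}{2\pi\sigma^2}\, e^{-\frac{y_1^2}{2\sigma^2}}, \qquad y_1 \ge 0,\ 0 \le y_2 \le 2\pi,

which agrees with the result (3.335) quoted in Exercise M.3.7.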
Y = g(X_1, X_2).   (3.167)

Z = X_1 \quad \text{or} \quad Z = X_2.   (3.168)
Y = g(X_1, X_2),
Z = X_1,   (3.169)

or

Y = g(X_1, X_2),
Z = X_2.   (3.170)
The joint PDF f_{YZ}(y, z) is obtained from (3.159). Finally, the required PDF f_Y(y) is obtained from f_{YZ}(y, z), using (3.42a, 3.42b):
f_Y(y) = \int_{-\infty}^{\infty} f_{YZ}(y, z)\, dz.   (3.171)
Y = X_1 X_2,   (3.172)
Z = X_1,   (3.173)

that is,

y = x_1 x_2,
z = x_1.   (3.174)
From here,
x_1 = z, \qquad x_2 = y/z.   (3.175)
and
f_Y(y) = \int_{-\infty}^{\infty} \frac{1}{|z|}\, f_{X_1 X_2}(z, y/z)\, dz.   (3.178)
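As an illustration of (3.178), consider the assumed special case (not from the text) of X1 and X2 independent and uniform on (0, 1). Then f_{X_1 X_2}(z, y/z) = 1 only for y < z < 1, so f_Y(y) = \int_y^1 dz/z = -\ln y for 0 < y < 1, which is easily checked by simulation:

% Minimal MATLAB sketch (assumed case: X1, X2 independent, uniform on (0,1)).
N  = 1e6;
Y  = rand(1, N) .* rand(1, N);        % samples of the product Y = X1*X2
[counts, centers] = hist(Y, 50);      % histogram-based PDF estimate
binWidth = centers(2) - centers(1);
fY_est   = counts / (N * binWidth);
plot(centers, fY_est, 'o', centers, -log(centers), '-');
legend('estimated PDF', '-log(y)');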
In this case, the infinitesimal area (dy1 dy2) corresponds to two or more infinitesi-
mal areas (dx1 dx2), resulting in:
f_{Y_1 Y_2}(y_1, y_2) = \sum_i \left. \frac{f_{X_1 X_2}(x_1^i, x_2^i)}{\left|J(x_1^i, x_2^i)\right|} \right|_{\,x_1^i = g_1^{-1}(y_1, y_2),\ x_2^i = g_2^{-1}(y_1, y_2)}.   (3.179)
Example 3.4.3 Find the joint PDF of the random variables Y1 and Y2,
Y_1 = \sqrt{X_1^2 + X_2^2}, \qquad Y_2 = \frac{X_1}{X_2},   (3.180)
if the given joint PDF of the random variables X1 and X2 is given as:
f_{X_1 X_2}(x_1, x_2) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{x_1^2 + x_2^2}{2\sigma^2}}.   (3.181)
J(y_1, y_2) = \frac{y_2^2 + 1}{y_1}.   (3.184)
x_1^1 = \frac{y_1 y_2}{\sqrt{1 + y_2^2}}, \qquad x_2^1 = \frac{y_1}{\sqrt{1 + y_2^2}};
x_1^2 = -\frac{y_1 y_2}{\sqrt{1 + y_2^2}}, \qquad x_2^2 = -\frac{y_1}{\sqrt{1 + y_2^2}}.   (3.185)
Finally, we have:
f_{Y_1 Y_2}(y_1, y_2) = \begin{cases} \dfrac{y_1}{1 + y_2^2}\, \dfrac{1}{\pi\sigma^2}\, e^{-\frac{y_1^2}{2\sigma^2}} & \text{for } y_1 \ge 0, \\[4pt] 0 & \text{for } y_1 < 0. \end{cases}   (3.187)
X_1, X_2, \ldots, X_N,   (3.188)

with the joint density function f_{X_1 X_2 \ldots X_N}(x_1, x_2, \ldots, x_N), are transformed into new random variables

Y_1, Y_2, \ldots, Y_N,   (3.189)
Y_1 = g_1(X_1, \ldots, X_N),
Y_2 = g_2(X_1, \ldots, X_N),
\ \vdots
Y_N = g_N(X_1, \ldots, X_N),   (3.190)
dx_1 \cdots dx_N = \frac{dy_1 \cdots dy_N}{J(x_1, \ldots, x_N)},   (3.191)
3.5.1 Definition
The joint characteristic function of two random variables X1 and X2, denoted as \phi_{X_1 X_2}(\omega_1, \omega_2), is defined as the expected value of the complex function e^{j(\omega_1 X_1 + \omega_2 X_2)}:

\phi_{X_1 X_2}(\omega_1, \omega_2) = E\left\{ e^{j(\omega_1 X_1 + \omega_2 X_2)} \right\}.   (3.194)
f_{X_1 X_2}(x_1, x_2). This means that, using the inverse two-dimensional Fourier transform, one can obtain the joint PDF from its joint characteristic function, as shown in the following expression:
f_{X_1 X_2}(x_1, x_2) = \frac{1}{(2\pi)^2} \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \phi_{X_1 X_2}(\omega_1, \omega_2)\, e^{-j(\omega_1 x_1 + \omega_2 x_2)}\, d\omega_1\, d\omega_2.   (3.196)
If random variables X1 and X2 are independent, then their joint density is equal to
the product of the marginal densities. Thus the joint characteristic function (3.195)
becomes:
\phi_{X_1 X_2}(\omega_1, \omega_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{j\omega_1 x_1}\, e^{j\omega_2 x_2}\, f_{X_1}(x_1) f_{X_2}(x_2)\, dx_1\, dx_2
= \int_{-\infty}^{\infty} e^{j\omega_1 x_1} f_{X_1}(x_1)\, dx_1 \int_{-\infty}^{\infty} e^{j\omega_2 x_2} f_{X_2}(x_2)\, dx_2 = \phi_{X_1}(\omega_1)\, \phi_{X_2}(\omega_2).   (3.197)
Therefore, for the independent random variables, the joint characteristic function
is equal to the product of the marginal characteristic functions. The reverse is also
true, that is if the joint characteristic function is equal to the product of marginal
characteristic functions, then the corresponding random variables are independent.
The marginal characteristic functions are obtained by making o1 or o2 equal to
zero in the joint characteristic function, as demonstrated in (3.198):
Y = X_1 + X_2.   (3.199)
If the random variables X1 and X2 are independent, then the characteristic function of their sum is equal to the product of the marginal characteristic functions,
This result can be applied for the sum of N independent random variables Xi,
Y = \sum_{i=1}^{N} X_i.   (3.202)
\phi_Y(\omega) = \prod_{i=1}^{N} \phi_{X_i}(\omega).   (3.203)
Example 3.5.1 Find the characteristic function of the random variable Y, where
Example 3.5.2 The random variables X1 and X2 are independent. Find the joint
characteristic function of the variables X and Y, as given in the following equations:
X = X_1 + 2X_2,
Y = 2X_1 + X_2.   (3.206)
\phi_{XY}(\omega_1, \omega_2) = E\{ e^{j(\omega_1 X + \omega_2 Y)} \} = E\{ e^{j\omega_1 (X_1 + 2X_2) + j\omega_2 (2X_1 + X_2)} \} = E\{ e^{jX_1(\omega_1 + 2\omega_2) + jX_2(2\omega_1 + \omega_2)} \}.   (3.207)
Knowing that the random variables X1 and X2 are independent, from (3.207) and (3.201), we arrive at:
Using the moment theorem for one random variable, as an analogy, we arrive at
the moment theorem that finds joint moments mnk from the joint characteristic
function, as
m_{nk} = (-j)^{n+k} \left. \frac{\partial^{n+k}\, \phi_{X_1 X_2}(\omega_1, \omega_2)}{\partial \omega_1^n\, \partial \omega_2^k} \right|_{\omega_1 = 0,\ \omega_2 = 0}.   (3.209)
The random variable Y is equal to the sum of the independent random variables
X1 and X2,
Y = X_1 + X_2.   (3.210)
f_Y(y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_Y(\omega)\, e^{-j\omega y}\, d\omega.   (3.211)
f_Y(y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{X_1}(\omega)\, \phi_{X_2}(\omega)\, e^{-j\omega y}\, d\omega.   (3.212)
By applying the relation (2.386) to the characteristic function \phi_{X_1}(\omega) in (3.212), and interchanging the order of the integrations, we arrive at:
f_Y(y) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{X_2}(\omega)\, e^{-j\omega y}\, d\omega \int_{-\infty}^{\infty} f_{X_1}(x_1)\, e^{j\omega x_1}\, dx_1
= \int_{-\infty}^{\infty} f_{X_1}(x_1)\, dx_1\, \frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{X_2}(\omega)\, e^{-j\omega(y - x_1)}\, d\omega.   (3.213)
\frac{1}{2\pi} \int_{-\infty}^{\infty} \phi_{X_2}(\omega)\, e^{-j\omega(y - x_1)}\, d\omega = f_{X_2}(y - x_1),   (3.214)
f_Y(y) = \int_{-\infty}^{\infty} f_{X_1}(x_1)\, f_{X_2}(y - x_1)\, dx_1,   (3.215)

f_Y(y) = \int_{-\infty}^{\infty} f_{X_2}(x_2)\, f_{X_1}(y - x_2)\, dx_2.   (3.216)
The expressions (3.215) and (3.216) represent the convolution of the PDFs of the random variables X1 and X2, and can be written as:

f_Y(y) = f_{X_1}(x_1) * f_{X_2}(x_2) = f_{X_2}(x_2) * f_{X_1}(x_1),   (3.217)
Y = \sum_{i=1}^{N} X_i,   (3.218)
Example 3.5.3 Two resistors R1 and R2 are connected in series (Fig. 3.16a). Each of them randomly changes its value in a uniform way within ±10% of its nominal value of 1,000 Ω. Find the PDF of the equivalent resistance R,

R = R_1 + R_2.   (3.220)
X = X_1 + X_2.   (3.221)
From (3.217), the PDF of the random variable X is equal to the convolution of
the PDFs of the random variables X1 and X2. In this case, it is convenient to present
the convolution graphically, as shown in Fig. 3.16.
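A minimal MATLAB sketch of this convolution (assuming, per the ±10% tolerance, that each resistance is uniform on [900, 1100] Ω):

% Minimal MATLAB sketch: PDF of R = R1 + R2 as the convolution of two
% uniform PDFs on [900, 1100] ohms (Example 3.5.3).
dr = 0.5;                         % resistance step
r  = 900:dr:1100;                 % range of each resistance
f1 = ones(size(r)) / 200;         % uniform PDF of height 1/(1100 - 900)
f2 = f1;
fR = conv(f1, f2) * dr;           % numerical convolution, scaled by the step
rR = 1800:dr:2200;                % support of the sum
plot(rR, fR);                     % triangular PDF centered at 2,000 ohms
xlabel('r [ohm]'); ylabel('f_R(r)');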
Exercise 3.1 The joint random variables X1 and X2 are defined in a circle of radius r = 2, as shown in Fig. 3.17. Their joint PDF is constant inside the circle. Find and plot the joint PDF and the marginal PDFs. Determine whether or not the random variables X1 and X2 are independent.
Answer The area A in Fig. 3.17 is:
A = r^2 \pi = 4\pi.   (3.222)
The volume below the joint density, the cylinder shown in Fig. 3.18, must be unity according to (3.37).
The joint density is:
f_{X_1 X_2}(x_1, x_2) = \begin{cases} 1/(4\pi) & \text{for } x_1^2 + x_2^2 \le 4, \\ 0 & \text{otherwise.} \end{cases}   (3.223)
f_{X_2}(x_2) = \int_{x_1^{(1)}}^{x_1^{(2)}} f_{X_1 X_2}(x_1, x_2)\, dx_1 = \frac{1}{4\pi}\left(x_1^{(2)} - x_1^{(1)}\right) = \begin{cases} \dfrac{\sqrt{4 - x_2^2}}{2\pi} & \text{for } |x_2| \le 2, \\[4pt] 0 & \text{otherwise,} \end{cases}   (3.227)

where x_1^{(1)} = -\sqrt{4 - x_2^2} and x_1^{(2)} = \sqrt{4 - x_2^2} are the limits of x1 inside the circle.
P\{0 < X_1 < 2,\ 0 < X_2 < 1\} = \int_0^2\!\int_0^1 \frac{1}{6}\, dx_2\, dx_1 = 1/3.   (3.232)
The random variables are independent because the joint PDF can be presented as
a product of the corresponding marginal PDFs.
Exercise 3.3 Find the conditional density f_{X_1}(x_1 \mid x_2) for the joint variables from Exercise 3.1 and determine whether the variables are dependent.
Answer From (3.50), (3.223), and (3.227) we have:
f_{X_1}(x_1 \mid X_2 = x_2) = \frac{f_{X_1 X_2}(x_1, x_2)}{f_{X_2}(x_2)} = \begin{cases} \dfrac{1}{2\sqrt{4 - x_2^2}} & \text{for } |x_1| < \sqrt{4 - x_2^2},\ |x_2| < 2, \\[4pt] 0 & \text{otherwise.} \end{cases}   (3.233)
The conditional density (3.233) is different from f_{X_1}(x_1), given in (3.226), thus confirming that the variables are dependent.
Exercise 3.4 Find the conditional density f_{X_1}(x_1 \mid X_2 = x_2) for the joint density from Exercise 3.2 and determine whether the variables are independent.
Answer From (3.50), (3.223), and (3.227), we have:
f_{X_1}(x_1 \mid X_2 = x_2) = \frac{f_{X_1 X_2}(x_1, x_2)}{f_{X_2}(x_2)} = \frac{1/6}{1/3} = \begin{cases} 1/2 & \text{for } 0 < x_1 < 2, \\ 0 & \text{otherwise.} \end{cases}   (3.234)
P\{X_1 = x_{11}\} = \sum_{j=1}^{3} P\{X_1 = x_{11}, X_2 = x_{2j}\} = 0.1 + 0.15 + 0.2 = 0.45,

P\{X_1 = x_{12}\} = \sum_{j=1}^{3} P\{X_1 = x_{12}, X_2 = x_{2j}\} = 0.15 + 0.25 + 0.15 = 0.55.   (3.239)
Similarly,
P\{X_2 = x_{21}\} = \sum_{j=1}^{2} P\{X_1 = x_{1j}, X_2 = x_{21}\} = 0.1 + 0.15 = 0.25,

P\{X_2 = x_{22}\} = \sum_{j=1}^{2} P\{X_1 = x_{1j}, X_2 = x_{22}\} = 0.15 + 0.25 = 0.4,

P\{X_2 = x_{23}\} = \sum_{j=1}^{2} P\{X_1 = x_{1j}, X_2 = x_{23}\} = 0.2 + 0.15 = 0.35,   (3.242)
F_{X_1}(x_1 \mid X_2 = x_{21} = 0) \quad \text{and} \quad F_{X_2}(x_2 \mid X_1 = x_{11} = 1),   (3.245)
F_{X_1}(x_1 \mid X_2 = x_{21} = 0) = \frac{\sum_{i=1}^{2} P\{X_1 = x_{1i}, X_2 = x_{21} = 0\}\, u(x_1 - x_{1i})}{P\{X_2 = x_{21} = 0\}}.   (3.246)
Similarly, we have:
Exercise 3.7 The two-dimensional random variable (X1, X2) has a uniform joint PDF in the area A, as shown in Fig. 3.23. Find the marginal PDFs.
Answer Knowing that the area A = 1/2, the corresponding joint PDF is:

f_{X_1 X_2}(x_1, x_2) = \begin{cases} 2 & \text{for } (x_1, x_2) \in A, \\ 0 & \text{otherwise.} \end{cases}   (3.251)
From Fig. 3.23, we see that x2 changes from 0 to x1, where x1 is in the interval
[0, 1]:
f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_2 = \int_0^{x_1} 2\, dx_2 = 2x_1.   (3.253)
Therefore,
f_{X_1}(x_1) = \begin{cases} 2x_1 & \text{for } 0 < x_1 < 1, \\ 0 & \text{otherwise.} \end{cases}   (3.254)
Similarly, we have:
f_{X_2}(x_2) = \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1 = \int_{x_2}^{1} 2\, dx_1 = 2(1 - x_2),   (3.255)
and

f_{X_2}(x_2) = \begin{cases} 2(1 - x_2) & \text{for } 0 < x_2 < 1, \\ 0 & \text{otherwise.} \end{cases}   (3.256)
X_2 = -2X_1 + 5.   (3.257)
Determine whether or not the random variables are orthogonal and correlated if,
E\{X_1 X_2\} = R_{X_1 X_2} = 0.   (3.259)
and
E\{X_1^2\} = 5.   (3.262)
Finally,
R_{X_1 X_2} = -2 \cdot 5 + 5 \cdot 2 = 0.   (3.263)
Since X1 is multiplied by the negative coefficient -2, the correlation is negative and the correlation coefficient is equal to -1. This is confirmed in the following numerical calculation.
From (3.130), the correlation coefficient is:
\rho_{X_1 X_2} = \frac{C_{X_1 X_2}}{\sigma_{X_1}\sigma_{X_2}} = \frac{\overline{(X_1 - 2)(X_2 - \overline{X_2})}}{\sigma_{X_1}\sigma_{X_2}}
= \frac{\overline{X_1 X_2} - 2\overline{X_2} - \overline{X_1}\,\overline{X_2} + 2\overline{X_2}}{\sigma_{X_1}\sigma_{X_2}} = \frac{\overline{X_1 X_2} - \overline{X_1}\,\overline{X_2}}{\sigma_{X_1}\sigma_{X_2}}.   (3.264)
\overline{X_1 X_2} = 0.   (3.265)
From (3.257), we find the mean value and the variance of the variable X2:
\overline{X_2} = -2\overline{X_1} + 5 = 1,
\sigma_{X_2}^2 = 4\sigma_{X_1}^2 = 4.   (3.266)
\rho_{X_1 X_2} = \frac{C_{X_1 X_2}}{\sigma_{X_1}\sigma_{X_2}} = \frac{-2}{1 \cdot 2} = -1.   (3.267)
Exercise 3.9 The random variable X is uniform in the interval [0, 2p]. Show that
the random variables Y and Z are dependent and uncorrelated, if
This indicates that the variables are dependent. We also notice that the dependence is quadratic; that is, it does not contain any linear component. Therefore, the correlation is zero and the random variables are uncorrelated. This is confirmed in the following calculation.
The mean values and the autocorrelation are:
\overline{Y} = \frac{1}{2\pi}\int_0^{2\pi} \sin x\, dx = 0, \qquad \overline{Z} = \frac{1}{2\pi}\int_0^{2\pi} \cos x\, dx = 0,   (3.270)

\overline{YZ} = \frac{1}{2\pi}\int_0^{2\pi} \sin x \cos x\, dx = \frac{1}{4\pi}\int_0^{2\pi} \sin 2x\, dx = 0.
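The same conclusion follows from a short simulation (a minimal MATLAB sketch, with Y = sin X and Z = cos X as used in (3.270)):

% Minimal MATLAB sketch: Y = sin(X) and Z = cos(X) are uncorrelated
% when X is uniform in [0, 2*pi].
N = 1e6;
X = 2 * pi * rand(1, N);
Y = sin(X);  Z = cos(X);
R = corrcoef(Y, Z);          % R(1,2) is close to 0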
Answer In order to find the mean values for X1 and X2, we need the corresponding
probabilities P(X_{1i}) and P(X_{2j}), i = 1, \ldots, 3; j = 1, \ldots, 3.
From (1.67), we have:
Exercise 3.11 The discrete random variable X1 has 0 and 1 as its discrete values, whereas X2 has -1, 0, and 1 as its discrete values. Find the mean value and the
variance of the random variable X, if
\sigma_X^2 = \overline{X^2} - \overline{X}^2 = \overline{(2X_1 + X_2^2)^2} - \left(\overline{2X_1 + X_2^2}\right)^2.   (3.283)
\overline{X} = \overline{2X_1 + X_2^2} = \sum_{i=1}^{2}\sum_{j=1}^{3} (2x_{1i} + x_{2j}^2)\, P\{X_1 = x_{1i}, X_2 = x_{2j}\},   (3.284)
\overline{X^2} = \overline{(2X_1 + X_2^2)^2} = \sum_{i=1}^{2}\sum_{j=1}^{3} (2x_{1i} + x_{2j}^2)^2\, P\{X_1 = x_{1i}, X_2 = x_{2j}\},   (3.285)

\sigma_X^2 = \overline{X^2} - \overline{X}^2 = 4.9 - 1.9^2 = 1.29.   (3.286)
Exercise 3.12 The random variables X1 and X2 are independent and have the
density functions
Show that the random variables

Y_1 = X_1 + X_2   (3.288)

and

Y_2 = \frac{X_1}{X_1 + X_2}   (3.289)

are independent.
Answer From (3.288) and (3.289), we write:

y_1 = x_1 + x_2, \qquad y_2 = \frac{x_1}{x_1 + x_2}.   (3.290)
The random variables X1 and X2 are independent, and their joint PDF from
(3.287) is:
f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) = \begin{cases} e^{-(x_1 + x_2)} & \text{for } x_1 \ge 0,\ x_2 \ge 0, \\ 0 & \text{otherwise.} \end{cases}   (3.291)
Using (3.289), we can present the joint density (3.291) in the following form:
f_{X_1 X_2}(y_1, y_2) = \begin{cases} e^{-y_1} & \text{for } y_1 \ge 0,\ 0 \le y_2 \le 1, \\ 0 & \text{otherwise.} \end{cases}   (3.292)
The joint density of Y1 and Y2 is obtained from (3.159), (3.291), and (3.293) as:
where
Therefore, from (3.295) it follows that the random variables Y1 and Y2 are also
independent.
Exercise 3.13 The random variables X1 and X2 are independent and have the
following density functions:
f_{X_1}(x_1) = \begin{cases} e^{-x_1} & \text{for } x_1 > 0, \\ 0 & \text{otherwise,} \end{cases} \qquad f_{X_2}(x_2) = \begin{cases} e^{-x_2} & \text{for } x_2 > 0, \\ 0 & \text{otherwise.} \end{cases}   (3.297)
Find the PDF of the ratio

X = \frac{X_1}{X_2}.   (3.298)
Solution We have two input random variables and one output variable. In order to
apply the expression (3.159) we must first introduce the auxiliary variable Y,
Y = X_1.   (3.299)
x = \frac{x_1}{x_2}, \qquad y = x_1.   (3.300)
f_{XY}(x, y) = \frac{f_{X_1 X_2}(y, y/x)}{|J(x, y)|} = \frac{y}{x^2}\, e^{-y}\, e^{-y/x}, \qquad y > 0,\ x > 0.   (3.303)
f_X(x) = \int_{-\infty}^{\infty} f_{XY}(x, y)\, dy = \int_0^{\infty} \frac{y}{x^2}\, e^{-y(1 + 1/x)}\, dy = \frac{1}{x^2}\int_0^{\infty} y\, e^{-ay}\, dy,   (3.304)
where
a = 1 + 1/x.   (3.305)
Using integral 1 from Appendix A and from (3.304) and (3.305), we get:
f_X(x) = \frac{1}{x^2}\cdot\frac{1}{a^2} = \frac{1}{x^2}\cdot\frac{x^2}{(x + 1)^2} = \frac{1}{(x + 1)^2}, \qquad x \ge 0.   (3.306)
X = X_1 + a \cos X_2,   (3.307)
From here,
Exercise 3.15 The random variables X1 and X2 are independent. Find the PDF of
the random variable X, if
X = \frac{X_1}{X_2}.   (3.312)
f_{X_1}(x_1) = \frac{x_1}{a^2}\, e^{-\frac{x_1^2}{2a^2}}\, u(x_1), \qquad f_{X_2}(x_2) = \frac{x_2}{b^2}\, e^{-\frac{x_2^2}{2b^2}}\, u(x_2).   (3.313)
x = x_1/x_2, \qquad y = x_2,   (3.314)
f_{XY}(x, y) = \frac{f_{X_1 X_2}(x_1, x_2)}{|J(x, y)|} = \frac{y\, x_1 x_2}{a^2 b^2}\, e^{-\frac{1}{2}\left(\frac{x_1^2}{a^2} + \frac{x_2^2}{b^2}\right)} = \frac{x\, y^3}{a^2 b^2}\, e^{-\frac{y^2}{2}\left(\frac{x^2}{a^2} + \frac{1}{b^2}\right)}.   (3.316)
Denoting
\alpha = \frac{1}{2}\left(\frac{x^2}{a^2} + \frac{1}{b^2}\right),   (3.317)
f_X(x) = \frac{x}{a^2 b^2}\cdot\frac{1}{2\alpha^2} = \frac{2a^2 x}{b^2\left(x^2 + \dfrac{a^2}{b^2}\right)^2}, \qquad x \ge 0.   (3.319)
Exercise 3.16 The random variables X1 and X2 are independent and uniform in the
intervals [1, 2] and [3, 5], respectively (as shown in Fig. 3.24). Find the density of
their sum:
X = X_1 + X_2.   (3.320)
Answer The density of the sum of the independent random variables is equal to the
convolution of their densities. This result can be easily obtained graphically, as
shown in Fig. 3.25.
Exercise 3.17 The random variable X1 uniformly takes values around its nominal value 100 in the interval [100 ± 10%]. Similarly, the variable X2 changes uniformly in the interval [200 ± 10%]. Find the probability that the sum of the variables X = X1 + X2 is less than 310, if the variables X1 and X2 are independent.
Answer The density of the sum is the convolution of the corresponding densities and is obtained graphically, as shown in Fig. 3.26. The desired probability corresponds to the shaded area in Fig. 3.26.
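The probability can also be estimated by a Monte Carlo simulation (a minimal MATLAB sketch, not part of the original answer):

% Minimal MATLAB sketch: estimate of P{X1 + X2 < 310} with X1 uniform on
% [90, 110] and X2 uniform on [180, 220] (the +/-10% intervals).
N  = 1e6;
X1 = 90  + 20 * rand(1, N);
X2 = 180 + 40 * rand(1, N);
p  = mean(X1 + X2 < 310)     % analytically, this probability equals 0.75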
Exercise 3.18 The random variables X1 and X2 are independent. Find the density of
their sum if the corresponding densities are given as:
For x < 0,
f_X(x) = 0.   (3.325)
For x > 0,
Exercise 3.19 Find the characteristic function of the variable X with the density
function shown in Fig. 3.27 in terms of the characteristic function of the variables
X1 and X2. The variable X is the sum of X1 and X2, and the variables X1 and X2 are
independent.
where
\phi_{X_1}(\omega) = \int_{x_1} e^{j\omega x_1} f_{X_1}(x_1)\, dx_1, \qquad \phi_{X_2}(\omega) = \int_{x_2} e^{j\omega x_2} f_{X_2}(x_2)\, dx_2.   (3.328)
The PDF from Fig. 3.27 is obtained by the convolution of the densities of the
variables X1 and X2, as shown in Fig. 3.28.
The characteristic functions are:
\phi_{X_1}(\omega) = \int_{-1}^{1} \frac{1}{2}\, e^{j\omega x_1}\, dx_1 = \phi_{X_2}(\omega) = \int_{-1}^{1} \frac{1}{2}\, e^{j\omega x_2}\, dx_2 = \frac{\sin\omega}{\omega}.   (3.329)
\phi_X(\omega) = \left(\frac{\sin\omega}{\omega}\right)^2.   (3.330)
Solution The plot in Fig. 3.31a indicates that the variables are independent. Their
sum is shown in Fig. 3.31b.
The estimated and the mathematical PDFs are shown in Fig. 3.32.
Exercise M.3.4 (MATLAB file exercise_M_3_4.m) Generate the uniform random variables X1 and X2 in the intervals [1, 6] and [-2, 2], respectively.
(a) Determine whether or not X1 and X2 are independent, observing the plot of X2 vs. X1.
(b) Plot the sum

Y = X_1 + X_2   (3.333)

and estimate the PDF of the sum.
Fig. 3.31 (a) Plot of X2 vs. X1; (b) the sum X1 + X2
Fig. 3.32 Estimated PDF of X1 + X2 (top) and the convolution of the corresponding PDFs (bottom)
(c) Find the mathematical PDF of Y as the convolution of the corresponding PDFs,
Solution The plot in Fig. 3.33a indicates that the variables are independent. Their sum is shown in Fig. 3.33b.
The estimated and the mathematical PDFs are shown in Fig. 3.34.
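The book's file exercise_M_3_4.m is not reproduced here; a minimal sketch along the same lines is:

% Minimal MATLAB sketch in the spirit of exercise_M_3_4.m:
% X1 uniform on [1, 6], X2 uniform on [-2, 2].
N  = 1000;
X1 = 1 + 5 * rand(1, N);
X2 = -2 + 4 * rand(1, N);
subplot(2, 2, 1), plot(X1, X2, '.'), title('X2 vs. X1');    % (a) independence
Y = X1 + X2;
subplot(2, 2, 2), plot(Y), title('X1 + X2');                % (b) the sum
subplot(2, 2, 3), histogram(Y, 20, 'Normalization', 'pdf'), title('PDF estimation');
% (c) mathematical PDF of Y as the convolution of the two uniform PDFs
dx = 0.01;
x1 = 1:dx:6;   f1 = ones(size(x1)) / 5;
x2 = -2:dx:2;  f2 = ones(size(x2)) / 4;
fY = conv(f1, f2) * dx;
y  = (x1(1) + x2(1)):dx:(x1(end) + x2(end));
subplot(2, 2, 4), plot(y, fY), title('CONVOLUTION OF PDFs');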
Fig. 3.33 (a) Plot of X2 vs. X1; (b) the sum X1 + X2
Fig. 3.34 Estimated PDF of X1 + X2 (top) and the convolution of the corresponding PDFs (bottom)
Exercise M.3.5 (MATLAB file exercise_M_3_5.m) Plot the joint density for
independent uniform random variables X1 and X2:
(a) X1 and X2 are in the intervals [1, 2] and [1, 2], respectively.
(b) X1 and X2 are in the intervals [2, 4] and [1, 6], respectively.
Solution The joint densities are shown in Fig. 3.35.
Exercise M.3.6 (MATLAB file exercise_M_3_6.m) Plot the joint density for
independent normal random variables X1 and X2:
(a) X1 = N(0, 1); X2 = N(0, 1).
(b) X1 = N(4, 4); X2 = N(3, 9).
Solution The joint densities are shown in Fig. 3.36.
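A minimal MATLAB sketch of such a plot (not the book's file; case (a) is shown, with the joint PDF formed as the product of the marginal PDFs of the independent variables):

% Minimal MATLAB sketch: joint PDF of independent N(0,1) variables.
x1 = -4:0.1:4;
x2 = -4:0.1:4;
[XX1, XX2] = meshgrid(x1, x2);
f1 = exp(-XX1.^2 / 2) / sqrt(2 * pi);
f2 = exp(-XX2.^2 / 2) / sqrt(2 * pi);
surf(XX1, XX2, f1 .* f2);            % joint PDF = product of marginals
xlabel('x1'); ylabel('x2'); zlabel('JOINT PDF');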
Fig. 3.35 Joint PDFs for cases (a) and (b) of Exercise M.3.5
Fig. 3.36 Joint PDFs (a) X1 = N(0, 1); X2 = N(0, 1) (b) X1 = N(4, 4); X2 = N(3, 9)
Exercise M.3.7 (MATLAB file exercise_M_3_7.m) Plot the joint density for the
variables Y1 and Y2 from Example 3.4.1:
f_{Y_1 Y_2}(y_1, y_2) = \begin{cases} \dfrac{y_1}{2\pi\sigma^2}\, e^{-\frac{y_1^2}{2\sigma^2}} & \text{for } y_1 \ge 0,\ 0 \le y_2 \le 2\pi, \\[4pt] 0 & \text{otherwise.} \end{cases}   (3.335)
[Surface plot of the joint PDF f_{Y_1 Y_2}(y_1, y_2) over the (y1, y2) plane]
3.8 Questions
Q.3.1. Can the same marginal density functions possibly result in different joint
density functions?
Q.3.2. In general, is a knowledge of the marginal PDFs sufficient to specify the
joint PDF?
Q.3.3. The random variables X1 and X2 are independent. Does it mean that the transformed variables Y1 = g1(X1) and Y2 = g2(X2) are also independent?
Q.3.4. Under which condition is the conditional distribution equal to the marginal distribution, F_{X_1}(x_1 \mid X_2 \le x_2) = F_{X_1}(x_1)?
Q.3.5. Is the necessary condition f_{X_1 X_2}(x_1, x_2) = f_{X_1}(x_1) f_{X_2}(x_2) for the independence of two random variables X1 and X2 also a sufficient condition?
Q.3.6. Is the following true?

P\{x_1 < X_1 \le x_1 + dx_1,\ x_2 < X_2 \le x_2 + dx_2\} = f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2.   (3.337)
3.9 Answers
A.3.1. Yes. As illustrated in the following example, two discrete random variables
X1 and X2 have the following possible values:
Note that the joint PDFs (3.344) and (3.345) are different.
Next we find the marginal densities.
Note that the marginal densities (3.346) and (3.347) are equal in both
cases, but the joint densities are different.
Therefore, the same marginal density functions may result in different
joint density functions.
A.3.2. In the previous example, we concluded that knowledge of the marginal
density functions does not provide all of the information about the relations
of the random variables.
In case (a),
f_{X_1}(x_1) f_{X_2}(x_2)\, dx_1\, dx_2 = f_{Y_1 Y_2}(y_1, y_2)\, dy_1\, dy_2.   (3.352)
F_{Y_1 Y_2}(y_1, y_2) = \int_{-\infty}^{y_2}\int_{-\infty}^{y_1} f_{Y_1 Y_2}(y_1, y_2)\, dy_1\, dy_2 = \int_{-\infty}^{g(x_2)}\int_{-\infty}^{g(x_1)} f_{X_1 X_2}(x_1, x_2)\, dx_1\, dx_2
= \int_{-\infty}^{g(x_1)} f_{X_1}(x_1)\, dx_1 \int_{-\infty}^{g(x_2)} f_{X_2}(x_2)\, dx_2 = F_{Y_1}(y_1)\, F_{Y_2}(y_2).   (3.353)
resulting in:
A.3.5. The joint density function for the independent random variables X1 and X2
can be written as (see Q.3.3. and [THO71, p. 71]):
f_{X_1}(x_1) = \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_2 = g_1(x_1)\int_{-\infty}^{\infty} g_2(x_2)\, dx_2 = g_1(x_1) K_1, \qquad \int_{-\infty}^{\infty} g_2(x_2)\, dx_2 = K_1.   (3.358)
f_{X_2}(x_2) = \int_{-\infty}^{\infty} f_{X_1 X_2}(x_1, x_2)\, dx_1 = g_2(x_2)\int_{-\infty}^{\infty} g_1(x_1)\, dx_1 = g_2(x_2) K_2, \qquad \int_{-\infty}^{\infty} g_1(x_1)\, dx_1 = K_2.   (3.359)
Since the joint density must integrate to unity over the whole plane, it follows that

K_1 K_2 = 1.   (3.363)
which confirms that the condition is also a sufficient condition for the
independence.
A.3.6. Yes. It is true, as explained in the following:
The event
P\{a < X_1 \le b,\ c < X_2 \le d\} = F_{X_1 X_2}(b, d) - F_{X_1 X_2}(a, d) - [F_{X_1 X_2}(b, c) - F_{X_1 X_2}(a, c)].   (3.369)

P\{a < X_1 \le b,\ c < X_2 \le d\} = F_{X_1 X_2}(b, d) - F_{X_1 X_2}(a, d) - F_{X_1 X_2}(b, c) + F_{X_1 X_2}(a, c).   (3.370)
Placing (3.372) and (3.373) into (3.371), and using the definition (3.27),
we get:
A.3.8. A set of N random variables is statistically independent "if any joint PDF of M variables, M ≤ N, is factored into the product of the corresponding marginal PDFs" [MIL04, p. 209]. According to this statement, the conditions (3.338)–(3.340) do not assure that all three variables are
P\{(X = U) \cap (Y \ge 0)\}
P\{(X = -U) \cap (Y \ge 0)\}.   (3.376)
The input random variable X is a discrete random variable, while the output random variable Y is a continuous random variable. The conditional variables are also continuous: Y1 = Y|U and Y2 = Y|-U.
The PDFs of the noise and the variables Y1 and Y2 are shown in
Fig. 3.40.
From Fig. 3.40, we get:
Fig. 3.40 PDFs of (a) noise (b) r.v. Y1, and (c) r.v. Y2
From (3.380), it follows that the covariance and the correlation are equal if the mean value of either X1 or X2 (or of both) is zero.
A.3.11. It is not necessarily zero. It depends on the range of the variable X. See Example 3.3.4.
A.3.12. Let us recall of the definition of covariance (3.121):
When variables X1 and X2 are correlated, they have either the same tendency (positive correlation) or the opposite tendency (negative correlation). Therefore, if there is a positive correlation between them, the dissipation of the random variable X1 around its mean value will have the same sign (in the majority of cases) as the dissipation of the random variable X2 around its mean value. As a consequence, the expected value of the product of (X1 − E{X1}) and (X2 − E{X2}) cannot equal zero, and the covariance is not zero.
Similarly, if the variables X1 and X2 have a negative correlation, the signs of the dissipations of the values of the random variables will be opposite. As a consequence, the expected value of the product of (X1 − E{X1}) and (X2 − E{X2}) cannot equal zero, and the covariance is not zero.