Independent Random Variables
Just as we defined the concept of independence between two events A and B, we shall now define independent random variables. Intuitively, we intend to say that X and Y are independent random variables if the outcome of X, say, in no way influences the outcome of Y. This is an extremely important notion, and there are many situations in which such an assumption is justified.
EXAMPLE 1
Consider two sources of radioactive material at some distance from each other which are emitting α-particles. Suppose that these two sources are observed for a period of two hours and the number of particles emitted is recorded. Assume that the following random variables are of interest: X1 and X2, the number of particles emitted from the first source during the first and second hour, respectively; and Y1 and Y2, the number of particles emitted from the second source during the first and second hour, respectively. It seems intuitively obvious that (X1 and Y1), or (X1 and Y2), or (X2 and Y1), or (X2 and Y2) are all pairs of independent random variables, for the X's depend only on the characteristics of source 1 while the Y's depend on the characteristics of source 2, and there is presumably no reason to assume that the two sources influence each other's behavior in any way. When we consider the possible independence of X1 and X2, however, the matter is not so clear-cut. Is the number of particles emitted during the second hour influenced by the number that was emitted during the first hour? To answer this question we would have to obtain additional information about the mechanism of emission. We could certainly not assume, a priori, that X1 and X2 are independent.
Let us now make the above intuitive notion of independence more precise.
Definition. (a) Let (X, Y) be a two-dimensional discrete random variable. We say that X and Y are independent random variables if and only if p(xi, yj) = p(xi)q(yj) for all i and j. That is, P(X = xi, Y = yj) = P(X = xi)P(Y = yj) for all i and j.
(b) Let (X, Y) be a two-dimensional continuous random variable. We say that X and Y are independent random variables if and only if f(x, y) = g(x)h(y) for all (x, y), where f is the joint pdf, and g and h are the marginal pdf's of X and Y, respectively.
Note: If we compare the above definition with that given for independent events, the similarity is apparent: we are essentially requiring that the joint probability (or joint pdf) can be factored. The following theorem indicates that the above definition is equivalent to another approach we might have taken.
Theorem 1.
(a) Let (X, Y) be a two-dimensional discrete random variable. Then X and Y are independent if and only if p(xi | yj) = p(xi) for all i and j (or equivalently, if and only if q(yj | xi) = q(yj) for all i and j).
(b) Let (X, Y) be a two-dimensional continuous random variable. Then X and Y are independent if and only if g(x | y) = g(x), or equivalently, if and only if h(y | x) = h(y), for all (x, y).
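In the discrete case, for instance, one direction of the argument is immediate (a brief verification, not in the original text): if p(xi, yj) = p(xi)q(yj), then whenever q(yj) > 0,
\[p\left( x_i \mid y_j \right)=\frac{p\left( x_i ,y_j \right)}{q\left( y_j \right)}=\frac{p\left( x_i \right)q\left( y_j \right)}{q\left( y_j \right)}=p\left( x_i \right),\]
and conversely, p(xi, yj) = p(xi | yj)q(yj) = p(xi)q(yj).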
EXAMPLE 2
Suppose that a machine is used for a particular task in the morning and for a different task in the afternoon. Let X and Y represent the number of times the machine breaks down in the morning and in the afternoon, respectively.
Table 1 gives the joint probability distribution of (X, Y).
An easy computation reveals that for all the entries in Table 1 we have
p(xi, yj) = p(xi)q(yj).
Thus X and Y are independent random variables.
X \ Y | 0 | 1 | 2 | p(xi) |
---|---|---|---|---|
0 | 0.1 | 0.2 | 0.2 | 0.5 |
1 | 0.04 | 0.08 | 0.08 | 0.2 |
2 | 0.06 | 0.12 | 0.12 | 0.3 |
q(yj) | 0.2 | 0.4 | 0.4 | 1.0 |
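As a quick computational check of this factorization (not part of the original text), the following minimal sketch transcribes Table 1 and verifies that every entry equals the product of the corresponding marginals; NumPy is assumed to be available.

```python
import numpy as np

# Joint distribution of (X, Y) from Table 1 (rows: x = 0, 1, 2; columns: y = 0, 1, 2).
joint = np.array([
    [0.10, 0.20, 0.20],
    [0.04, 0.08, 0.08],
    [0.06, 0.12, 0.12],
])

p_x = joint.sum(axis=1)  # marginal of X (row sums):    [0.5, 0.2, 0.3]
q_y = joint.sum(axis=0)  # marginal of Y (column sums): [0.2, 0.4, 0.4]

# Independence holds iff every entry equals p(x_i) * q(y_j).
print(np.allclose(joint, np.outer(p_x, q_y)))  # True
```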
EXAMPLE 3
Let X and Y be the life lengths of two electronic devices. Suppose that their joint pdf is given by
\[f(x, y) = e^{-\left( x+y \right)}, \qquad x \ge 0,\ y \ge 0.\]
Since we can factor f(x, y) = e^{-x} e^{-y}, the independence of X and Y is established.
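Indeed, integrating out each variable gives the marginal pdf's directly:
\[g(x)=\int_{0}^{\infty }{{{e}^{-\left( x+y \right)}}\,dy}={{e}^{-x}},\quad x\ge 0;\qquad h(y)=\int_{0}^{\infty }{{{e}^{-\left( x+y \right)}}\,dx}={{e}^{-y}},\quad y\ge 0,\]
so that f(x, y) = g(x)h(y) for all (x, y).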
EXAMPLE 4
Suppose that f(x, y) = 8xy, 0 ≤ x ≤ y ≤ 1. Although f is already written in factored form, X and Y are not independent, since the domain of definition
{(x, y) | 0 ≤ x ≤ y ≤ 1}
is such that for a given x, y may assume only values greater than that given x and less than 1. Hence X and Y are not independent.
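To see why, compute the marginal pdf's explicitly:
\[g(x)=\int_{x}^{1}{8xy\,dy}=4x\left( 1-{{x}^{2}} \right),\quad 0\le x\le 1;\qquad h(y)=\int_{0}^{y}{8xy\,dx}=4{{y}^{3}},\quad 0\le y\le 1.\]
Since g(x)h(y) = 16xy³(1 − x²) ≠ 8xy = f(x, y), the joint pdf is not the product of its marginals.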
Note: From the definition of the marginal probability distribution (in either the discrete or the continuous case) it is clear that the joint probability distribution determines, uniquely, the marginal probability distributions. That is, from a knowledge of the joint pdf f, we can obtain the marginal pdf's g and h. However, the converse is not true! That is, in general, a knowledge of the marginal pdf's g and h does not determine the joint pdf f. Only when X and Y are independent is this true, for in this case we have f(x, y) = g(x)h(y).
The following result indicates that our definition of independent random variables is consistent with our previous definition of independent events.
Theorem 2
Let (X, Y) be a two-dimensional random variable. Let A and B be events whose occurrence (or nonoccurrence) depends only on X and Y, respectively. (That is, A is a subset of RX, the range space of X, while B is a subset of RY, the range space of Y.) Then, if X and Y are independent random variables, we have P(A ∩ B) = P(A)P(B).
Proof (continuous case only):
\[P\left( A\cap B \right)=\iint\limits_{A\cap B}{f\left( x,y \right)\,dx\,dy}=\iint\limits_{A\cap B}{g\left( x \right)h\left( y \right)\,dx\,dy}=\int_{A}{g\left( x \right)\,dx}\int_{B}{h\left( y \right)\,dy}=P\left( A \right)P\left( B \right).\]
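As an illustration only (not part of the proof), the following minimal simulation sketch checks the conclusion numerically; the choice of exponential distributions and of the events A = {X > 1} and B = {Y ≤ 0.5} is made purely for the demonstration, and NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Assumed example: X and Y independent, each exponential with mean 1.
x = rng.exponential(scale=1.0, size=n)
y = rng.exponential(scale=1.0, size=n)

a = x > 1.0   # event A depends only on X
b = y <= 0.5  # event B depends only on Y

# P(A ∩ B) versus P(A)P(B): the two estimates agree up to sampling error.
print((a & b).mean(), a.mean() * b.mean())
```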
Distribution of Product and Quotient of Independent Random Variables
Among the most important functions of X and Y which we wish to consider are the sum S = X + Y, the product W = XY, and the quotient Z = X/Y. We can use the method of this section to obtain the pdf of each of these random variables under very general conditions. We shall investigate the sum of random variables in much greater detail in Chapter 11, and hence we defer discussion of the probability distribution of X + Y until then. We shall, however, consider the product and quotient in the following two theorems.
Theorem 3
Let (X, Y) be a continuous two-dimensional random variable and assume that X and Y are independent. Hence the pdf f may be written as f(x, y) = g(x)h(y). Let W = XY.
Then the pdf of W, say p, is given by \[p(w)=\int\limits_{-\infty }^{+\infty }{g\left( u \right)h\left( \frac{w}{u} \right)}\left| \frac{1}{u} \right|du\]
Proof: Let w = xy and u = x. Thus x = u and y = w/u. The Jacobian is
\[J=\left| \begin{matrix} 1 & 0 \\ \frac{-w}{{{u}^{2}}} & \frac{1}{u} \\ \end{matrix} \right|=\frac{1}{u}\]
Hence the joint pdf of W = XY and U = X is
s(w, u) = g(u)h(w/u) |1/u|.
The marginal pdf of W is obtained by integrating s(w, u) with respect to u, yielding the required result. The values of w for which p(w) > 0 would depend on the values of (x, y) for which f(x, y) > 0.
Note: In evaluating the above integral we may use the fact that
\[\int_{-\infty }^{+\infty }{g\left( u \right)h\left( \frac{w}{u} \right)\left| \frac{1}{u} \right|\,du}=\int_{-\infty }^{+\infty }{g\left( \frac{w}{u} \right)h\left( u \right)\left| \frac{1}{u} \right|\,du},\]
which is obtained by setting u = y instead of u = x in the proof above.
EXAMPLE 5
Suppose that we have a circuit in which both the current I and the resistance R vary in some random way. Specifically, assume that I and R are independent continuous random variables with the following pdf's:
I: g(i) = 2i, 0 ≤ i ≤ 1, and 0 elsewhere;
R: h(r) = r²/9, 0 ≤ r ≤ 3, and 0 elsewhere.
Of interest is the random variable E = IR (the voltage in the circuit). Let p be the pdf of E. By Theorem 3 we have
\[p(e)=\int_{-\infty }^{+\infty }{g\left( i \right)h\left( \frac{e}{i} \right)}\left| \frac{1}{i} \right|di\]
Some care must be taken in evaluating this integral. First, we note that the variable of integration cannot assume negative values. Second, we note that in order for the integrand to be positive, both the pdf's appearing in the integrand must be positive. Noting the values for which g and h are not equal to zero, we find that the following conditions must be satisfied:
0 ≤ i ≤ 1 and 0 ≤ e/i ≤ 3.
These two inequalities are, in turn, equivalent to e/3 ≤ i ≤ 1. Hence the above integral becomes
\[p(e)=\int_{e/3}^{1}{2i\,\frac{{{e}^{2}}}{9{{i}^{2}}}\,\frac{1}{i}}\,di=-\frac{2}{9}{{e}^{2}}\,\frac{1}{i}\Big|_{e/3}^{1}=\frac{2}{9}e\left( 3-e \right),\qquad 0\le e\le 3.\]
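As an optional sanity check of this result (not in the original development), the sketch below samples I and R by the inverse-CDF method, since F_I(i) = i² and F_R(r) = r³/27 under the pdf's above, and compares a histogram of E = IR against p(e) = (2/9)e(3 − e); NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample I with pdf 2i on [0, 1]:  F_I(i) = i^2, so I = sqrt(U).
i = np.sqrt(rng.uniform(size=n))
# Sample R with pdf r^2/9 on [0, 3]:  F_R(r) = r^3/27, so R = 3 * U**(1/3).
r = 3.0 * rng.uniform(size=n) ** (1.0 / 3.0)

e = i * r  # voltage E = IR

# Compare a histogram of E with the derived density p(e) = (2/9) e (3 - e).
hist, edges = np.histogram(e, bins=30, range=(0.0, 3.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
theory = (2.0 / 9.0) * centers * (3.0 - centers)
print(np.max(np.abs(hist - theory)))  # close to 0 up to sampling error
```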
Theorem 4
Let (X, Y) be a continuous two-dimensional random variable and assume that X and Y are independent. [Hence the pdf of (X, Y) may be written as f(x, y) = g(x)h(y).] Let Z = X/Y. Then the pdf of Z, say q, is given by
\[q(z)=\int\limits_{-\infty }^{+\infty }{g(vz)h(v)}\left| v \right|dv\]
Proof:
Let z = x/y and let v = y. Hence x = vz and y = v. The Jacobian is
\[J=\left| \begin{matrix} v & z \\ 0 & 1 \\ \end{matrix} \right|=v.\]
Hence the joint pdf of Z = X/Y and V = Y equals
s(z, v) = g(vz)h(v) |v|.
Integrating this joint pdf with respect to v yields the required marginal pdf of Z.
EXAMPLE 6
Let X and Y represent the life lengths of two light bulbs manufactured by different processes. Assume that X and Y are independent random variables with the pdf's f and g, respectively, where
f(x) = e^{-x}, x ≥ 0, and 0 elsewhere;
g(y) = 2e^{-2y}, y ≥ 0, and 0 elsewhere.
Of interest might be the random variable Z = X/Y, representing the ratio of the two life lengths. Let q be the pdf of Z.
By Theorem 4 we have
\[q(z)=\int_{-\infty }^{+\infty }{f\left( vz \right)g\left( v \right)\left| v \right|\,dv}\]
Since X and Y can assume only nonnegative values, the above integration need only be carried out over the positive values of the variable of integration. In addition, the integrand will be positive only when both the pdf's appearing in it are positive. This implies that we must have v ≥ 0 and vz ≥ 0. Since z > 0, these inequalities imply that v ≥ 0.
Thus the above becomes
\[q(z)=\int_{0}^{\infty }{{{e}^{-vz}}\,2{{e}^{-2v}}\,v\,dv}=2\int_{0}^{\infty }{v\,{{e}^{-v\left( z+2 \right)}}}\,dv\]
An easy integration by parts yields
q(z) = 2/(z + 2)², z ≥ 0.
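Again as an optional numerical check (not part of the example), the sketch below simulates Z = X/Y and compares its empirical cdf with the cdf implied by q, namely Q(z) = ∫₀ᶻ 2/(t + 2)² dt = z/(z + 2); NumPy is assumed and the evaluation points are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# X has pdf e^{-x} (exponential, mean 1); Y has pdf 2 e^{-2y} (exponential, mean 1/2).
x = rng.exponential(scale=1.0, size=n)
y = rng.exponential(scale=0.5, size=n)
z = x / y

# The cdf corresponding to q(z) = 2/(z + 2)^2 is Q(z) = z / (z + 2).
for t in (0.5, 1.0, 2.0, 5.0):
    print(t, (z <= t).mean(), t / (t + 2.0))  # empirical vs. analytic cdf
```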