Independent Random Variables - NayiPathshala


Independent Random Variables


Just as we defined the concept of independence between two events A and B, we shall now define independent random variables. Intuitively, we intend to say that X and Y are independent random variables if the outcome of X, say, in no way influences the outcome of Y. This is an extremely important notion and there are many situations in which such an assumption is justified.

EXAMPLE 1

Consider two sources of radioactive material at some distance from each other which are emitting α-particles. Suppose that these two sources are observed for a period of two hours and the number of particles emitted is recorded. Assume that the following random variables are of interest: X1 and X2, the number of particles emitted from the first source during the first and second hour, respectively; and Y1 and Y2, the number of particles emitted from the second source during the first and second hour, respectively. It seems intuitively obvious that (X1 and Y1), (X1 and Y2), (X2 and Y1), and (X2 and Y2) are all pairs of independent random variables, for the X's depend only on the characteristics of source 1 while the Y's depend on the characteristics of source 2, and there is presumably no reason to assume that the two sources influence each other's behavior in any way. When we consider the possible independence of X1 and X2, however, the matter is not so clear-cut. Is the number of particles emitted during the second hour influenced by the number that was emitted during the first hour? To answer this question we would have to obtain additional information about the mechanism of emission. We could certainly not assume, a priori, that X1 and X2 are independent.
Let us now make the above intuitive notion of independence more precise.

Definition. 

(a) Let (X, Y) be a two-dimensional discrete random variable. We say that X and Y are independent random variables if and only if p(x_i, y_j) = p(x_i)q(y_j) for all i and j. That is, P(X = x_i, Y = y_j) = P(X = x_i)P(Y = y_j) for all i and j.
(b) Let (X, Y) be a two-dimensional continuous random variable. We say that X and Y are independent random variables if and only if f(x, y) = g(x)h(y) for all (x, y), where f is the joint pdf, and g and h are the marginal pdf's of X and Y, respectively.
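In the discrete case the factorization condition can be tested mechanically. The following is a minimal Python sketch (the helper name is_independent and the numerical tolerance are our own, not from the text): form both marginals from the joint table and compare every cell against the product p(x_i)q(y_j).

import numpy as np

def is_independent(joint, tol=1e-9):
    # joint[i][j] = P(X = x_i, Y = y_j)
    joint = np.asarray(joint, dtype=float)
    p = joint.sum(axis=1)                     # marginal of X: row sums
    q = joint.sum(axis=0)                     # marginal of Y: column sums
    # independence holds iff every cell factors as p(x_i) * q(y_j)
    return np.allclose(joint, np.outer(p, q), atol=tol)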

Note:
 If we compare the above definition with that given for independent events, the similarity is apparent: we are essentially requiring that the joint probability (or joint pdf) can be factored. The following theorem indicates that the above definition is equivalent to another approach we might have taken.

Theorem 1 

(a) Let (X, Y) be a two-dimensional discrete random variable. Then X and Y are independent if and only if p(x_i | y_j) = p(x_i) for all i and j (or, equivalently, if and only if q(y_j | x_i) = q(y_j) for all i and j).

(b) Let (X, Y) be a two-dimensional continuous random variable. Then X and Y are independent if and only if g(x | y) = g(x), or equivalently, if and only if h(y | x) = h(y), for all (x, y).

Proof" See Problem 2


EXAMPLE 2

Suppose that a machine is used for a particular task in the morning and for a different task in the afternoon. Let X and Y represent the number of times the machine breaks down in the morning and in the afternoon, respectively. Table 1 gives the joint probability distribution of (X, Y). An easy computation reveals that for all the entries in Table 1 we have

p(x_i, y_j) = p(x_i)q(y_j)

Thus X and Y are independent random variables.

Table 1

X\Y      0      1      2      p(x_i)
0        0.10   0.20   0.20   0.50
1        0.04   0.08   0.08   0.20
2        0.06   0.12   0.12   0.30
q(y_j)   0.20   0.40   0.40   1.00
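Running the is_independent sketch given earlier on the Table 1 entries confirms this (the snippet assumes that helper is already defined):

table1 = [[0.10, 0.20, 0.20],
          [0.04, 0.08, 0.08],
          [0.06, 0.12, 0.12]]
print(is_independent(table1))   # True: p(x_i) = (0.5, 0.2, 0.3), q(y_j) = (0.2, 0.4, 0.4)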

EXAMPLE 3

 Let X and Y be the life lengths of two electronic devices. Suppose that their joint pdf is given by


f(x, y) = e^-(x+y),      x ≥ 0, y ≥ 0

Since we can factor f(x, y) = e^-x · e^-y, the independence of X and Y is established.
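As a check, one can also recover the marginals by integration and confirm that their product reproduces the joint pdf; a small sympy sketch (the symbol names are illustrative):

import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.exp(-(x + y))                 # joint pdf for x >= 0, y >= 0

g = sp.integrate(f, (y, 0, sp.oo))   # marginal of X: exp(-x)
h = sp.integrate(f, (x, 0, sp.oo))   # marginal of Y: exp(-y)
print(g, h)                          # exp(-x) exp(-y)
print(sp.simplify(g*h - f))          # 0, i.e. f(x, y) = g(x)h(y)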

EXAMPLE 4

Suppose that f(x, y) = 8xy, 0 ≤ x ≤ y ≤ 1. Although f is (already) written in factored form, X and Y are not independent, since the domain of definition {(x, y) | 0 ≤ x ≤ y ≤ 1} is such that for given x, y may assume only values greater than that given x and less than 1. Hence X and Y are not independent.
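The same marginal computation exposes the dependence here; a short sympy sketch (again with illustrative symbol names), integrating over the triangular domain:

import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)
f = 8*x*y                          # joint pdf on 0 <= x <= y <= 1

g = sp.integrate(f, (y, x, 1))     # marginal of X: 4x(1 - x^2)
h = sp.integrate(f, (x, 0, y))     # marginal of Y: 4y^3
print(sp.expand(g), h)             # -4*x**3 + 4*x, 4*y**3
print(sp.simplify(g*h - f))        # nonzero: g(x)h(y) != f(x, y), so X and Y are dependent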

Note: From the definition of the marginal probability distribution (in either the discrete or the continuous case) it is clear that the joint probability distribution determines, uniquely, the marginal probability distributions. That is, from a knowledge of the joint pdf f, we can obtain the marginal pdf's g and h. However, the converse is not true! That is, in general, a knowledge of the marginal pdf's g and h does not determine the joint pdf f. Only when X and Y are independent is this true, for in this case we have f(x, y) = g(x)h(y). The following result indicates that our definition of independent random variables is consistent with our previous definition of independent events.


Theorem 2.

Let (X, Y) be a two-dimensional random variable. Let A and B be events whose occurrence (or nonoccurrence) depends only on X and Y, respectively. (That is, A is a subset of R_X, the range space of X, while B is a subset of R_Y, the range space of Y.) Then, if X and Y are independent random variables, we have

P(A ∩ B) = P(A)P(B).

Proof (continuous case only):

P(A ∩ B) = ∫_A ∫_B f(x, y) dy dx = ∫_A ∫_B g(x)h(y) dy dx
         = ∫_A g(x) dx · ∫_B h(y) dy = P(A)P(B).
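A simulation illustrates the theorem numerically; a minimal sketch using the pdf of Example 3 with the illustrative events A = {X ≤ 1} and B = {Y ≤ 1} (our own choices, not from the text):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.exponential(1.0, n)       # X with pdf e^-x, x >= 0
y = rng.exponential(1.0, n)       # Y with pdf e^-y, drawn independently of X

A = x <= 1.0                      # event depending only on X
B = y <= 1.0                      # event depending only on Y
print((A & B).mean())             # estimates P(A ∩ B)
print(A.mean() * B.mean())        # estimates P(A)P(B); both ≈ (1 - e^-1)^2 ≈ 0.3996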
