
1/06/2018

Independent Events


We have considered events A and B which could not occur simultaneously, that is, A ∩ B = ∅. Such events were called mutually exclusive. We have noted previously that if A and B are mutually exclusive, then P(A | B) = 0, for the given occurrence of B precludes the occurrence of A. At the other extreme we have the situation, also discussed above, in which B ⊃ A and hence P(B | A) = 1. In each of the above situations, knowing that B has occurred gave us some very definite information concerning the probability of the occurrence of A. However, there are many situations in which knowing that some event B did occur has no bearing whatsoever on the occurrence or nonoccurrence of A.
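
Both facts drop out of the definition of conditional probability in one line (a short check added here for clarity; in the second case A ∩ B = A because B ⊃ A):

\[P(A\mid B)=\frac{P(A\cap B)}{P(B)}=\frac{P(\emptyset)}{P(B)}=0,\qquad P(B\mid A)=\frac{P(B\cap A)}{P(A)}=\frac{P(A)}{P(A)}=1.\]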

EXAMPLE 
Suppose that a fair die is tossed twice. Define the events A and B as follows:

 A = {the first die shows an even number},
B = {the second die shows a 5 or a 6}.

It is intuitively clear that events A and B are totally unrelated: knowing that B did occur yields no information about the occurrence of A. In fact, the following computation bears this out. Taking as our sample space the 36 equally likely outcomes considered in the earlier example, we find that P(A) = 18/36 = 1/2, P(B) = 12/36 = 1/3, while P(A ∩ B) = 6/36 = 1/6. Hence

\[P(A\mid B)=\frac{P(A\cap B)}{P(B)}=\frac{1/6}{1/3}=\frac{1}{2}.\]

Thus we find, as we might have expected, that the unconditional probability P(A) is equal to the conditional probability P(A | B). Similarly,

\[P(B\mid A)=\frac{P(B\cap A)}{P(A)}=\frac{1/6}{1/2}=\frac{1}{3}.\]
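
As a quick check, these numbers can be reproduced by enumerating the 36 outcomes directly. The following Python sketch (our own illustration, not part of the original text) uses exact fractions:

```python
from fractions import Fraction

# All 36 equally likely outcomes of two tosses of a fair die.
outcomes = {(i, j) for i in range(1, 7) for j in range(1, 7)}

A = {(i, j) for (i, j) in outcomes if i % 2 == 0}   # first toss shows an even number
B = {(i, j) for (i, j) in outcomes if j >= 5}       # second toss shows a 5 or a 6

def P(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(outcomes))

print(P(A), P(B), P(A & B))   # 1/2 1/3 1/6
print(P(A & B) / P(B))        # P(A|B) = 1/2 = P(A)
print(P(A & B) / P(A))        # P(B|A) = 1/3 = P(B)
```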

Hence we might be tempted to say that A and B are independent if and only if P(A | B) = P(A) and P(B | A) = P(B). Although this would be essentially appropriate, there is another approach which gets around the difficulty encountered here, namely that both P(A) and P(B) must be nonzero before the above equalities are meaningful. Consider P(A ∩ B), assuming that the above conditional probabilities are equal to the corresponding unconditional probabilities. We have

\[\begin{align}
  & P(A\cap B)=P(A\mid B)\,P(B)=P(A)\,P(B), \\
  & P(A\cap B)=P(B\mid A)\,P(A)=P(B)\,P(A).
\end{align}\]

Thus we find that, provided neither P(A) nor P(B) equals zero, the unconditional probabilities are equal to the conditional probabilities if and only if P(A ∩ B) = P(A)P(B). Hence we make the following formal definition. [If either P(A) or P(B) equals zero, this definition is still valid.]
Definition. A and B are independent events if and only if 

\[P(A\cap B)=P(A)\,P(B). \tag{3.6}\]
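
Computationally, Definition (3.6) is a single product check. Here is a minimal sketch in Python (our own illustration; the helper names P and independent are not from the text), reusing the dice sample space and adding a dependent pair for contrast:

```python
from fractions import Fraction

# Sample space: 36 equally likely outcomes of two tosses of a fair die.
outcomes = {(i, j) for i in range(1, 7) for j in range(1, 7)}

def P(event):
    return Fraction(len(event), len(outcomes))

def independent(A, B):
    """Eq. (3.6): A and B are independent iff P(A ∩ B) = P(A) P(B)."""
    return P(A & B) == P(A) * P(B)

A = {(i, j) for (i, j) in outcomes if i % 2 == 0}    # first toss even
B = {(i, j) for (i, j) in outcomes if j >= 5}        # second toss is 5 or 6
C = {(i, j) for (i, j) in outcomes if i + j >= 10}   # sum is at least 10

print(independent(A, B))       # True:  1/6 == (1/2)(1/3)
print(independent(A, C))       # False: 1/9 != (1/2)(1/6)
print(independent(set(), B))   # True: the definition still holds when P(A) = 0
```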

Note: This definition is essentially equivalent to the one suggested above, namely, that A and B are independent if P(B | A) = P(B) and P(A | B) = P(A). This latter form is slightly more intuitive, for it says precisely what we have been trying to say before: that A and B are independent if knowledge of the occurrence of A in no way influences the probability of the occurrence of B. That the formal definition adopted also has a certain intuitive appeal may be seen by considering the following example.
