Discrete Random Variables - NayiPathshala


1/08/2018

Discrete Random Variables


Definition. Let X be a random variable. If the number of possible values of X (that is, Rx, the range space) is finite or countably infinite, we call X a discrete random variable. That is, the possible values of X may be listed as x1, x2, ..., xn, ... In the finite case the list terminates, and in the countably infinite case the list continues indefinitely.
EXAMPLE 1.
A radioactive source is emitting α-particles. The emission of these particles is observed on a counting device during a specified period of time. The following random variable is of interest: X = number of particles observed. What are the possible values of X? We shall assume that these values consist of all nonnegative integers. That is, Rx = {0, 1, 2, ..., n, ...}. An objection which we confronted once before may again be raised at this point. It could be argued that during a specified (finite) time interval it is impossible to observe more than, say, N particles, where N may be a very large positive integer. Hence the possible values of X should really be 0, 1, 2, ..., N. However, it turns out to be mathematically simpler to consider the idealized description given above. In fact, whenever we assume that the possible values of a random variable X are countably infinite, we are actually considering an idealized representation of X.

In view of our previous discussions of the probabilistic description of events with a finite or countably infinite number of members, the probabilistic description of a discrete random variable will not cause any difficulty. We proceed as follows.

Definition. Let X be a discrete random variable. Hence Rx, the range space of X, consists of at most a countably infinite number of values x1, x2, ... With each possible outcome xi we associate a number p(xi) = P(X = xi), called the probability of xi. The numbers p(xi), i = 1, 2, ..., must satisfy the following conditions:

(a) p(xi) ≥ 0 for all i,
(b) \[\sum\limits_{i=1}^{\infty }{p(x_i)}\] = 1.

The function p defined above is called the probability function (or point probability function) of the random variable X. The collection of pairs (xi, p(xi)), i = 1, 2, ..., is sometimes called the probability distribution of X.
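The two conditions above are easy to check mechanically. Here is a minimal sketch (the representation of a distribution as a Python dict, and the function name, are illustrative assumptions, not from the text):

```python
import math

def is_probability_function(p):
    """Check condition (a): p(x_i) >= 0 for all i,
    and condition (b): the p(x_i) sum to 1."""
    nonnegative = all(prob >= 0 for prob in p.values())
    sums_to_one = math.isclose(sum(p.values()), 1.0)
    return nonnegative and sums_to_one

# Example: a fair six-sided die, p(x_i) = 1/6 for x_i = 1, ..., 6.
die = {x: 1/6 for x in range(1, 7)}
print(is_probability_function(die))  # True
```

Any dict of numbers passing this check may serve as the probability distribution of some discrete random variable, in line with Note (a) below.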

Notes: (a) The particular choice of the numbers p(xi) is presumably determined from the probability function associated with events in the sample space S on which X is defined. That is, p(xi) = P[s | X(s) = xi]. (See Eqs. 1 and 2.) However, since we are interested only in the values of X, that is Rx, and the probabilities associated with these values, we are again suppressing the functional nature of X. (See Fig. 3.) Although in most cases the numbers will in fact be determined from the probability distribution in some underlying sample space S, any set of numbers p(xi) satisfying Eq. (3) may serve as a proper probabilistic description of a discrete random variable.
(b) If X assumes only a finite number of values, say x1, . .. , xN, then p(xi) = 0 for i > N, and hence the infinite series in Eq. (3) becomes a finite sum.
(c) We may again note an analogy to mechanics by considering a total mass of one unit distributed over the real line, with the entire mass located at the points x1, x2, ... The numbers p(xi) represent the amount of mass located at xi.
(d) The geometric interpretation of a probability distribution is often useful.

Let B be an event associated with the random variable X. That is, B ⊂ Rx. Specifically, suppose that B = {xi1, xi2, ...}. Hence

P(B) = P[s | X(s) ∈ B]   (since these events are equivalent)
     = P[s | X(s) = xij, j = 1, 2, ...] = \[\sum\limits_{j=1}^{\infty }{p(x_{i_j})}\]

In words: The probability of an event B equals the sum of the probabilities of the individual outcomes associated with B.
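This rule translates directly into code. A sketch, reusing the dict representation of a distribution (the function name is an assumption for illustration):

```python
import math

def prob_event(p, B):
    """P(B) = sum of p(x_i) over the outcomes x_i belonging to B.
    Outcomes in B that are not possible values of X contribute 0."""
    return sum(p[x] for x in B if x in p)

# Fair die: the event B = "the roll is even" = {2, 4, 6}.
die = {x: 1/6 for x in range(1, 7)}
print(math.isclose(prob_event(die, {2, 4, 6}), 0.5))  # True
```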


Notes: (a) Suppose that the discrete random variable X may assume only a finite number of values, say x1, ..., xN. If each outcome is equally probable, then we obviously have p(x1) = ... = p(xN) = 1/N.
(b) If X assumes a countably infinite number of values, then it is impossible to have all outcomes equally probable. For we cannot possibly satisfy the condition \[\sum\limits_{i=1}^{\infty }{p(x_i)}\] = 1 if we must have p(xi) = c for all i.
(c) In every finite interval there will be at most a finite number of the possible values of X. If some such interval contains none of these possible values, we assign probability zero to it. That is, if Rx = {x1, x2, ..., xn, ...} and if no xi ∈ [a, b], then P[a ≤ X ≤ b] = 0.
EXAMPLE 2.
Suppose that a radio tube is inserted into a socket and tested. Assume that the probability that it tests positive equals 3/4; hence the probability that it tests negative is 1/4. Assume furthermore that we are testing a large supply of such tubes. The testing continues until the first positive tube appears. Define the random variable X as follows: X is the number of tests required to terminate the experiment. The sample space associated with this experiment is S = {+, -+, --+, ---+, ...}. To determine the probability distribution of X we reason as follows. The possible values of X are 1, 2, ..., n, ... (we are obviously dealing with the idealized sample space). And X = n if and only if the first (n - 1) tubes are negative and the nth tube is positive. If we suppose that the condition of one tube does not affect the condition of another, we may write

p(n) = P(X = n) = (1/4)^{n-1}(3/4),   n = 1, 2, ...
To check that these values of p(n) satisfy Eq. (3) we note that

\[\sum\limits_{n=1}^{\infty }{p(n)}\] = (3/4)(1 + 1/4 + 1/16 + ···)
= (3/4) · 1/(1 - 1/4) = 1.

Note: We are using here the result that the geometric series 1 + r + r² + ··· converges to 1/(1 - r) whenever |r| < 1. This is a result to which we shall refer repeatedly. Suppose that we want to evaluate P(A), where A is defined as {the experiment ends after an even number of repetitions}. Using Eq. (4),
we have

P(A) = \[\sum\limits_{n=1}^{\infty }{p(2n)}\] = 3/16 + 3/256 + ···
= (3/16)(1 + 1/16 + ···)
= (3/16) · 1/(1 - 1/16) = 1/5.
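Both sums in Example 2 can be confirmed numerically by truncating the infinite series (a sketch, assuming the distribution p(n) = (3/4)(1/4)^(n-1) derived above):

```python
import math

def p(n):
    """Probability that the first positive tube appears on test n."""
    return (3/4) * (1/4) ** (n - 1)

# Truncated versions of the two infinite sums; the terms decay
# geometrically, so a few dozen terms are more than enough.
total = sum(p(n) for n in range(1, 60))        # should be ~ 1
p_even = sum(p(2*n) for n in range(1, 30))     # should be ~ 1/5

print(math.isclose(total, 1.0))    # True
print(math.isclose(p_even, 1/5))   # True
```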



Case 1.
X is a discrete random variable. If X is a discrete random variable and Y = H(X), then it follows immediately that Y is also a discrete random variable. For suppose that the possible values of X may be enumerated as x1, x2, ..., xn, ... Then certainly the possible values of Y may be enumerated as y1 = H(x1), y2 = H(x2), ... (Some of the above Y-values may be the same, but this certainly does not detract from the fact that these values may be enumerated.)

EXAMPLE 3.
Suppose that the random variable X assumes the three values -1, 0, and 1 with probabilities 1/3, 1/2, and 1/6, respectively. Let Y = 3X + 1. Then the possible values of Y are -2, 1, and 4, assumed with probabilities 1/3, 1/2, and 1/6.
This example suggests the following general procedure: If x1, ..., xn, ... are the possible values of X, p(xi) = P(X = xi), and H is a function such that to each value y there corresponds exactly one value x, then the probability distribution of Y is obtained as follows.

Possible values of Y:    yi = H(xi), i = 1, 2, ..., n, ...;
Probabilities of Y:      q(yi) = P(Y = yi) = p(xi).
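In the one-to-one case the transformation just relabels the outcomes, carrying each probability along unchanged. A sketch using the data of Example 3 (the function name is illustrative):

```python
import math

def transform_one_to_one(p_x, H):
    """Distribution of Y = H(X) when H sends distinct x to distinct y:
    each pair (x_i, p(x_i)) becomes (H(x_i), p(x_i))."""
    return {H(x): p for x, p in p_x.items()}

# Example 3: X takes -1, 0, 1 with probabilities 1/3, 1/2, 1/6.
p_x = {-1: 1/3, 0: 1/2, 1: 1/6}
p_y = transform_one_to_one(p_x, lambda x: 3*x + 1)
print(p_y)  # Y takes -2, 1, 4 with the same probabilities
```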

Quite often the function H does not possess the above characteristic, and it may
happen that several values of X lead to the same value of Y, as the following
example illustrates.
EXAMPLE 4.
Suppose that we consider the same random variable X as in Example 3 above. However, we introduce Y = X². Hence the possible values of Y are zero and one, assumed with probabilities 1/2 and 1/2. For Y = 1 if and only if X = -1 or X = 1, and the probability of this latter event is 1/3 + 1/6 = 1/2. In terms of our previous terminology the events B: {X = ±1} and C: {Y = 1} are equivalent events and hence by Eq. (2) have equal probabilities.

The general procedure for situations as described in the above example is as follows: Let xi1, xi2, ..., xik, ..., represent the X-values having the property H(xij) = yi for all j. Then

q(yi) = P(Y = yi) = p(xi1) + p(xi2) + ···

In words: To evaluate the probability of the event {Y = yi}, find the equivalent event in terms of X (in the range space Rx) and then add all the corresponding probabilities.
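The many-to-one case generalizes the previous sketch: when several x-values map to the same y, their probabilities are accumulated. Applied to Example 4 (again, the function name is illustrative):

```python
import math

def transform(p_x, H):
    """Distribution of Y = H(X) for any H: probabilities of all
    x-values mapping to the same y are added together."""
    p_y = {}
    for x, p in p_x.items():
        y = H(x)
        p_y[y] = p_y.get(y, 0.0) + p
    return p_y

# Example 4: same X as Example 3, but Y = X^2.
p_x = {-1: 1/3, 0: 1/2, 1: 1/6}
p_y = transform(p_x, lambda x: x**2)
print(p_y)  # Y takes 0 and 1, each with probability 1/2
```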

EXAMPLE 5.
Let X have possible values 1, 2, ..., n, ... and suppose that P(X = n) = (1/2)^n. Let

          Y = 1     if X is even,
          Y = -1    if X is odd.

Hence Y assumes the two values -1 and +1. Since Y = 1 if and only if X = 2, or X = 4, or X = 6, or ..., applying the above procedure yields

                    P(Y = 1) = 1/4 + 1/16 + 1/64 + ··· = 1/3.
Hence
                    P(Y = -1) = 1 - P(Y = 1) = 2/3.
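Example 5 can be checked the same way, by truncating the infinite sum over the even values of X:

```python
import math

# P(X = n) = (1/2)**n; Y = 1 iff X is even, so
# P(Y = 1) = sum over n = 2, 4, 6, ... of (1/2)**n.
p_y1 = sum((1/2) ** n for n in range(2, 100, 2))

print(math.isclose(p_y1, 1/3))       # True
print(math.isclose(1 - p_y1, 2/3))   # True
```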
Case 2.
X is a continuous random variable. It may happen that X is a continuous random variable while Y is discrete. For example, suppose that X may assume all real values while Y is defined to be +1 if X ≥ 0 and -1 if X < 0. In order to obtain the probability distribution of Y, simply determine the equivalent event (in the range space Rx) corresponding to the different values of Y. In the above case, Y = 1 if and only if X ≥ 0, while Y = -1 if and only if X < 0. Hence P(Y = 1) = P(X ≥ 0) while P(Y = -1) = P(X < 0). If the pdf of X is known, these probabilities may be evaluated. In the general case, if {Y = yi} is equivalent to an event, say A, in the range space of X, then

\[q(y_i) = P(Y = y_i) = \int_{A}{f(x)\,dx}.\]
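A sketch of Case 2, assuming (purely for illustration, the text does not fix a pdf) that X has the standard normal density; by symmetry P(X ≥ 0) = 1/2, which a crude numerical integration recovers:

```python
import math

def f(x):
    """Standard normal pdf (an assumed choice of f for illustration)."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def integrate(f, a, b, n=100_000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# q(1) = P(Y = 1) = P(X >= 0): integrate the pdf over the event A = [0, inf);
# the tail beyond 10 is negligible for the normal density.
p_y_pos = integrate(f, 0, 10)
print(round(p_y_pos, 4))  # 0.5
```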
