Sums of independent random variables


This lecture discusses how to derive the distribution of the sum of two independent random variables. We first explain how to derive the distribution function of the sum, and then how to derive its probability mass function (if the summands are discrete) or its probability density function (if the summands are continuous).
Distribution function of a sum
The following proposition characterizes the distribution function of the sum in terms of the distribution functions of the two summands.
Proposition Let $X$ and $Y$ be two independent random variables and denote by $F_X(x)$ and $F_Y(y)$ their distribution functions. Let
$$Z = X + Y$$
and denote the distribution function of $Z$ by $F_Z(z)$. The following holds:
$$F_Z(z) = E[F_X(z - Y)]$$
or
$$F_Z(z) = E[F_Y(z - X)]$$
Proof. Conditioning on $Y$ and using independence,
$$F_Z(z) = P(X + Y \le z) = E[P(X + Y \le z \mid Y)] = E[F_X(z - Y)],$$
since, given $Y = y$, we have $P(X \le z - y) = F_X(z - y)$. The second formula follows by exchanging the roles of $X$ and $Y$.
Example Let $X$ be a uniform random variable with support $R_X = [0, 1]$ and probability density function
$$f_X(x) = \begin{cases} 1 & \text{if } x \in R_X \\ 0 & \text{otherwise} \end{cases}$$
and let $Y$ be another uniform random variable, independent of $X$, with support $R_Y = [0, 1]$ and probability density function
$$f_Y(y) = \begin{cases} 1 & \text{if } y \in R_Y \\ 0 & \text{otherwise} \end{cases}$$
The distribution function of $X$ is
$$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \begin{cases} 0 & \text{if } x \le 0 \\ x & \text{if } 0 < x \le 1 \\ 1 & \text{if } x > 1 \end{cases}$$

The distribution function of $Z = X + Y$ is
$$F_Z(z) = E[F_X(z - Y)]$$
$$= \int_{-\infty}^{\infty} F_X(z - y) f_Y(y)\,dy$$
$$= \int_{0}^{1} F_X(z - y)\,dy$$
$$= -\int_{z}^{z-1} F_X(t)\,dt \quad \text{(by the change of variable } t = z - y \text{, so } dy = -dt\text{)}$$
$$= \int_{z-1}^{z} F_X(t)\,dt \quad \text{(exchanging the bounds of integration)}$$
There are four cases to consider:
If $z \le 0$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{z} 0\,dt = 0$$
If $0 < z \le 1$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{0} F_X(t)\,dt + \int_{0}^{z} F_X(t)\,dt$$
$$= \int_{z-1}^{0} 0\,dt + \int_{0}^{z} t\,dt = 0 + \left[\tfrac{1}{2}t^2\right]_0^z = \tfrac{1}{2}z^2$$
If $1 < z \le 2$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{1} F_X(t)\,dt + \int_{1}^{z} F_X(t)\,dt$$
$$= \int_{z-1}^{1} t\,dt + \int_{1}^{z} 1\,dt = \tfrac{1}{2} - \tfrac{1}{2}(z-1)^2 + z - 1$$
$$= \tfrac{1}{2} - \tfrac{1}{2}z^2 + z - \tfrac{1}{2} + z - 1 = -\tfrac{1}{2}z^2 + 2z - 1$$
If $z > 2$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{z} 1\,dt = z - (z - 1) = 1$$
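The piecewise formula can be sanity-checked numerically. The following Python sketch (an illustration, not part of the original lecture; the name `F_Z` and the Monte Carlo setup are ours) implements the four-case distribution function derived above and compares it with a simulation-based estimate of $E[F_X(z - Y)]$:

```python
import numpy as np

def F_Z(z):
    """Piecewise CDF of Z = X + Y for independent Uniform(0, 1)
    summands, as derived in the four cases above."""
    if z <= 0:
        return 0.0
    elif z <= 1:
        return 0.5 * z ** 2
    elif z <= 2:
        return -0.5 * z ** 2 + 2 * z - 1
    return 1.0

# Monte Carlo check of F_Z(z) = E[F_X(z - Y)]: draw many samples of Y
# and average F_X(z - y), where F_X(t) is t clipped to [0, 1] (the
# Uniform(0, 1) CDF).
rng = np.random.default_rng(0)
y = rng.uniform(0.0, 1.0, size=1_000_000)
for z in (0.5, 1.0, 1.5):
    mc = np.clip(z - y, 0.0, 1.0).mean()
    print(f"z={z}: exact={F_Z(z):.4f}, simulated={mc:.4f}")
```

The exact and simulated values should agree to a few decimal places, which is a quick way to catch sign errors in the case analysis.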

Probability Mass Function (PMF)

If $X$ is a discrete random variable, then its range $R_X$ is a countable set, so we can list the elements of $R_X$. In other words, we can write
$$R_X = \{x_1, x_2, x_3, ...\}.$$
Note that here $x_1, x_2, x_3, ...$ are possible values of the random variable $X$. While random variables are usually denoted by capital letters, to represent the numbers in the range we usually use lowercase letters such as $x$, $x_1$, $y$, $z$, etc. For a discrete random variable $X$, we are interested in knowing the probabilities of $X = x_k$. Note that here, the event $A = \{X = x_k\}$ is defined as the set of outcomes $s$ in the sample space $S$ for which the corresponding value of $X$ is equal to $x_k$. In particular,
$$A = \{s \in S \mid X(s) = x_k\}.$$
The probabilities of the events $\{X = x_k\}$ are formally given by the probability mass function (PMF) of $X$.
Definition
Let $X$ be a discrete random variable with range $R_X = \{x_1, x_2, x_3, ...\}$ (finite or countably infinite). The function
$$P_X(x_k) = P(X = x_k), \quad \text{for } k = 1, 2, 3, ...,$$
is called the probability mass function (PMF) of $X$.
Thus, the PMF is a probability measure that gives us the probabilities of the possible values of a random variable. While the above notation is the standard notation for the PMF of $X$, it might look confusing at first. The subscript $X$ here indicates that this is the PMF of the random variable $X$. Thus, for example, $P_X(1)$ shows the probability that $X = 1$. To better understand all of the above concepts, let's look at some examples.


Example
I toss a fair coin twice, and let $X$ be defined as the number of heads I observe. Find the range of $X$, $R_X$, as well as its probability mass function $P_X$.
Solution: 
Here, our sample space is given by
$$S = \{HH, HT, TH, TT\}.$$
The number of heads will be 0, 1, or 2. Thus
$$R_X = \{0, 1, 2\}.$$
Since this is a finite (and thus countable) set, the random variable $X$ is a discrete random variable. Next, we need to find the PMF of $X$. The PMF is defined as
$$P_X(k) = P(X = k) \quad \text{for } k = 0, 1, 2.$$
We have
$$P_X(0) = P(X = 0) = P(TT) = \tfrac{1}{4},$$
$$P_X(1) = P(X = 1) = P(\{HT, TH\}) = \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{1}{2},$$
$$P_X(2) = P(X = 2) = P(HH) = \tfrac{1}{4}.$$
Although the PMF is usually defined for values in the range, it is sometimes convenient to extend the PMF of $X$ to all real numbers. If $x \notin R_X$, we can simply write $P_X(x) = P(X = x) = 0$. Thus, in general we can write
$$P_X(x) = \begin{cases} P(X = x) & \text{if } x \in R_X \\ 0 & \text{otherwise} \end{cases}$$
To better visualize the PMF, we can plot it. A plot of the PMF of the above random variable $X$ shows that it can take three possible values: 0, 1, and 2. The plot also clearly indicates that the event $X = 1$ is twice as likely as the other two possible values. It can be interpreted in the following way: if we repeat the random experiment (tossing a coin twice) a large number of times, then about half of the time we observe $X = 1$, about a quarter of the time we observe $X = 0$, and about a quarter of the time we observe $X = 2$.
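Since the sample space here is tiny, the PMF can also be computed by brute-force enumeration. Below is a small Python sketch (our own illustration; the names `outcomes` and `pmf` are arbitrary) that lists all four outcomes, tallies the number of heads in each, and divides by the size of the sample space:

```python
from itertools import product
from fractions import Fraction
from collections import Counter

# Enumerate S = {HH, HT, TH, TT}, count heads in each outcome,
# and divide by |S| to reproduce P_X(0), P_X(1), P_X(2).
outcomes = ["".join(t) for t in product("HT", repeat=2)]
counts = Counter(s.count("H") for s in outcomes)
pmf = {k: Fraction(n, len(outcomes)) for k, n in sorted(counts.items())}
print(pmf)  # {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}
```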
Example 
I have an unfair coin for which $P(H) = p$, where $0 < p < 1$. I toss the coin repeatedly until I observe heads for the first time. Let $Y$ be the total number of coin tosses. Find the distribution of $Y$.
    • First, we note that the random variable $Y$ can potentially be any positive integer, so we have $R_Y = \mathbb{N} = \{1, 2, 3, ...\}$. To find the distribution of $Y$, we need to find $P_Y(k) = P(Y = k)$ for $k = 1, 2, 3, ....$ We have
      $$P_Y(1) = P(Y = 1) = P(H) = p,$$
      $$P_Y(2) = P(Y = 2) = P(TH) = (1 - p)p,$$
      $$P_Y(3) = P(Y = 3) = P(TTH) = (1 - p)^2 p,$$
      $$\vdots$$
      $$P_Y(k) = P(Y = k) = P(TT...TH) = (1 - p)^{k-1} p.$$
      Thus, we can write the PMF of $Y$ in the following way:
      $$P_Y(y) = \begin{cases} (1 - p)^{y-1} p & \text{for } y = 1, 2, 3, ... \\ 0 & \text{otherwise} \end{cases}$$
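This is the PMF of a geometric distribution. As a quick check, here is a Python sketch (again our own illustration, not from the original text) that simulates the experiment many times with $p = \tfrac{1}{2}$ and compares the empirical frequencies with $(1 - p)^{k-1} p$:

```python
import random
from collections import Counter

def tosses_until_heads(p=0.5):
    """Simulate one run of the experiment: toss a coin with P(H) = p
    repeatedly and return the number of tosses up to the first heads."""
    n = 1
    while random.random() >= p:  # tails occurs with probability 1 - p
        n += 1
    return n

random.seed(1)
trials = 100_000
freq = Counter(tosses_until_heads() for _ in range(trials))
for k in range(1, 6):
    # empirical frequency vs theoretical P_Y(k) = (1 - p)^(k-1) * p
    print(k, freq[k] / trials, (1 - 0.5) ** (k - 1) * 0.5)
```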

Consider a discrete random variable $X$ with range $R_X$. Note that by definition the PMF is a probability measure, so it satisfies all properties of a probability measure. In particular, we have
  • $0 \le P_X(x) \le 1$ for all $x$, and
  • $\sum_{x \in R_X} P_X(x) = 1$.
Also note that for any set $A \subseteq R_X$, we can find the probability that $X \in A$ using the PMF:
$$P(X \in A) = \sum_{x \in A} P_X(x).$$
Properties of PMF:
  • $0 \le P_X(x) \le 1$ for all $x$;
  • $\sum_{x \in R_X} P_X(x) = 1$;
  • for any set $A \subseteq R_X$, $P(X \in A) = \sum_{x \in A} P_X(x)$.



Example
For the random variable $Y$ in the previous example,
  1. Check that $\sum_{y \in R_Y} P_Y(y) = 1$.
  2. If $p = \tfrac{1}{2}$, find $P(2 \le Y < 5)$.
In the previous example, we obtained
$$P_Y(k) = P(Y = k) = (1 - p)^{k-1} p, \quad \text{for } k = 1, 2, 3, ...$$
Thus,
  1. to check that $\sum_{y \in R_Y} P_Y(y) = 1$, we have
$$\sum_{y \in R_Y} P_Y(y) = \sum_{k=1}^{\infty} (1 - p)^{k-1} p$$
$$= p \sum_{j=0}^{\infty} (1 - p)^j$$
$$= p \cdot \frac{1}{1 - (1 - p)} \quad \text{(geometric sum)}$$
$$= 1;$$
  2. if $p = \tfrac{1}{2}$, to find $P(2 \le Y < 5)$, we can write
$$P(2 \le Y < 5) = \sum_{k=2}^{4} P_Y(k)$$
$$= \sum_{k=2}^{4} (1 - p)^{k-1} p$$
$$= \tfrac{1}{2}\left(\tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8}\right)$$
$$= \tfrac{7}{16}.$$
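Both parts can be verified with exact arithmetic. The short Python sketch below (an illustrative check, not from the original text) uses `Fraction` so that the finite sum in part 2 comes out exactly as $\tfrac{7}{16}$:

```python
from fractions import Fraction

p = Fraction(1, 2)
pmf = lambda k: (1 - p) ** (k - 1) * p  # P_Y(k) = (1 - p)^(k-1) * p

# Part 1: partial sums of the geometric series approach 1.
print(sum(pmf(k) for k in range(1, 51)))  # 1 - (1/2)**50, essentially 1

# Part 2: exact finite sum gives P(2 <= Y < 5).
print(sum(pmf(k) for k in range(2, 5)))   # Fraction(7, 16)
```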
