Sums of independent random variables
This lecture discusses how to derive the distribution of the sum of two independent random variables. We first explain how to derive the distribution function of the sum, and then how to derive its probability mass function (if the summands are discrete) or its probability density function (if the summands are continuous).
Distribution function of a sum
The following proposition characterizes the distribution function of the sum in terms of the distribution functions of the two summands.
Proposition Let $X$ and $Y$ be two independent random variables and denote by $F_X(x)$ and $F_Y(y)$ their distribution functions. Let
$$Z = X + Y$$
and denote the distribution function of $Z$ by $F_Z(z)$. The following holds:
$$F_Z(z) = \operatorname{E}\!\left[ F_X(z - Y) \right]$$
or
$$F_Z(z) = \operatorname{E}\!\left[ F_Y(z - X) \right]$$
Proof. The first formula is obtained by conditioning on $Y$: by the law of iterated expectations and the independence of $X$ and $Y$,
$$F_Z(z) = P(X + Y \le z) = \operatorname{E}\!\left[ P(X \le z - Y \mid Y) \right] = \operatorname{E}\!\left[ F_X(z - Y) \right].$$
The second formula follows by exchanging the roles of $X$ and $Y$.
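As a numerical sanity check (an addition, not part of the original lecture), the following Python sketch estimates both sides of the first formula by Monte Carlo, assuming $X$ and $Y$ are independent Uniform(0,1) random variables, which is the example treated below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: X, Y independent Uniform(0, 1), so F_X(t) = min(max(t, 0), 1).
def F_X(t):
    return np.clip(t, 0.0, 1.0)

n = 100_000
x = rng.uniform(size=n)
y = rng.uniform(size=n)

z = 0.7  # an arbitrary evaluation point

# Left-hand side: empirical distribution function of Z = X + Y at z.
lhs = np.mean(x + y <= z)

# Right-hand side: Monte Carlo estimate of E[F_X(z - Y)].
rhs = np.mean(F_X(z - y))

print(f"P(Z <= {z}) ~ {lhs:.4f}, E[F_X(z - Y)] ~ {rhs:.4f}")
# Both estimates should be close to the exact value z**2 / 2 = 0.245.
```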
Example Let $X$ be a uniform random variable with support $R_X = [0, 1]$ and probability density function
$$f_X(x) = \begin{cases} 1 & \text{if } x \in R_X \\ 0 & \text{otherwise} \end{cases}$$
and let $Y$ be another uniform random variable, independent of $X$, with support $R_Y = [0, 1]$ and probability density function
$$f_Y(y) = \begin{cases} 1 & \text{if } y \in R_Y \\ 0 & \text{otherwise} \end{cases}$$
The distribution function of $X$ is
$$F_X(x) = \int_{-\infty}^{x} f_X(t)\,dt = \begin{cases} 0 & \text{if } x \le 0 \\ x & \text{if } 0 < x \le 1 \\ 1 & \text{if } x > 1 \end{cases}$$
The distribution function of $Z = X + Y$ is
$$F_Z(z) = \operatorname{E}\!\left[ F_X(z - Y) \right] = \int_{-\infty}^{\infty} F_X(z - y) f_Y(y)\,dy = \int_{0}^{1} F_X(z - y)\,dy$$
$$= -\int_{z}^{z-1} F_X(t)\,dt \quad \text{(by the change of variable } t = z - y\text{)}$$
$$= \int_{z-1}^{z} F_X(t)\,dt \quad \text{(exchanging the bounds of integration)}$$
There are four cases to consider:
- If $z \le 0$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{z} 0\,dt = 0$$
- If $0 < z \le 1$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{0} F_X(t)\,dt + \int_{0}^{z} F_X(t)\,dt = \int_{z-1}^{0} 0\,dt + \int_{0}^{z} t\,dt = 0 + \left[ \tfrac{1}{2} t^2 \right]_{0}^{z} = \tfrac{1}{2} z^2$$
- If $1 < z \le 2$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{1} F_X(t)\,dt + \int_{1}^{z} F_X(t)\,dt = \int_{z-1}^{1} t\,dt + \int_{1}^{z} 1\,dt$$
$$= \tfrac{1}{2} - \tfrac{1}{2}(z-1)^2 + z - 1 = \tfrac{1}{2} - \tfrac{1}{2} z^2 + z - \tfrac{1}{2} + z - 1 = -\tfrac{1}{2} z^2 + 2z - 1$$
- If $z > 2$, then
$$F_Z(z) = \int_{z-1}^{z} F_X(t)\,dt = \int_{z-1}^{z} 1\,dt = z - (z - 1) = 1$$
Putting the four cases together,
$$F_Z(z) = \begin{cases} 0 & \text{if } z \le 0 \\ \tfrac{1}{2} z^2 & \text{if } 0 < z \le 1 \\ -\tfrac{1}{2} z^2 + 2z - 1 & \text{if } 1 < z \le 2 \\ 1 & \text{if } z > 2 \end{cases}$$
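The piecewise formula can be checked against a simulation. Here is a minimal Python sketch (an addition to the original text) comparing the derived CDF with the empirical CDF of a large sample of sums of two independent Uniform(0,1) draws.

```python
import numpy as np

def F_Z(z):
    """Piecewise CDF of Z = X + Y derived above."""
    if z <= 0:
        return 0.0
    elif z <= 1:
        return 0.5 * z ** 2
    elif z <= 2:
        return -0.5 * z ** 2 + 2 * z - 1
    else:
        return 1.0

rng = np.random.default_rng(0)
n = 200_000
samples = rng.uniform(size=n) + rng.uniform(size=n)

for z in [0.25, 0.5, 1.0, 1.5, 1.9]:
    empirical = np.mean(samples <= z)
    print(f"z = {z}: formula {F_Z(z):.4f}, simulation {empirical:.4f}")
```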
Probability Mass Function (PMF)
If $X$ is a discrete random variable, then its range $R_X$ is a countable set, so we can list the elements in $R_X$. In other words, we can write
$$R_X = \{x_1, x_2, x_3, \ldots\}$$
Note that here $x_1, x_2, x_3, \ldots$ are the possible values of the random variable $X$. While random variables are usually denoted by capital letters, to represent the numbers in the range we usually use lowercase letters such as $x$, $x_1$, $x_2$, etc. For a discrete random variable $X$, we are interested in knowing the probabilities of $X = x_k$. Note that here, the event $A = \{X = x_k\}$ is defined as the set of outcomes $s$ in the sample space $S$ for which the corresponding value of $X$ is equal to $x_k$. In particular,
$$P(A) = P(\{s \in S \mid X(s) = x_k\})$$
The probabilities of the events $\{X = x_k\}$ are formally given by the probability mass function (pmf) of $X$.
Definition
Let $X$ be a discrete random variable with range $R_X = \{x_1, x_2, x_3, \ldots\}$ (finite or countably infinite). The function
$$P_X(x_k) = P(X = x_k), \quad \text{for } k = 1, 2, 3, \ldots$$
is called the probability mass function (PMF) of $X$.
Thus, the PMF is a probability measure that gives us the probabilities of the possible values of a random variable. While the above notation is the standard notation for the PMF of $X$, it might look confusing at first. The subscript $X$ here indicates that this is the PMF of the random variable $X$. Thus, for example, $P_X(1)$ shows the probability that $X = 1$. To better understand all of the above concepts, let's look at some examples.
Example
I toss a fair coin twice, and let $X$ be defined as the number of heads I observe. Find the range of $X$, $R_X$, as well as its probability mass function $P_X$.
Solution:
Here, our sample space is given by
$$S = \{HH, HT, TH, TT\}$$
The number of heads will be $0$, $1$, or $2$. Thus
$$R_X = \{0, 1, 2\}$$
Since this is a finite (and thus countable) set, the random variable $X$ is a discrete random variable. Next, we need to find the PMF of $X$. The PMF is defined as
$$P_X(k) = P(X = k), \quad \text{for } k = 0, 1, 2$$
We have
$$P_X(0) = P(X = 0) = P(TT) = \tfrac{1}{4},$$
$$P_X(1) = P(X = 1) = P(\{HT, TH\}) = \tfrac{1}{4} + \tfrac{1}{4} = \tfrac{1}{2},$$
$$P_X(2) = P(X = 2) = P(HH) = \tfrac{1}{4}.$$
To better visualize the PMF, we can plot it. As we see, the random variable $X$ can take three possible values, $0$, $1$, and $2$. The plot also clearly indicates that the event $X = 1$ is twice as likely as each of the other two possible values. It can be interpreted in the following way: if we repeat the random experiment (tossing a coin twice) a large number of times, then about half of the time we observe $X = 1$, about a quarter of the time we observe $X = 0$, and about a quarter of the time we observe $X = 2$.
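Since the sample space here is tiny, the PMF can also be computed by brute-force enumeration. The following Python sketch (a supplement, not from the original text) lists the four equally likely outcomes and counts the heads in each.

```python
from itertools import product

# Sample space for two tosses of a fair coin; each outcome has probability 1/4.
sample_space = list(product("HT", repeat=2))

# X(s) = number of heads in outcome s; accumulate P_X(k) over all outcomes.
pmf = {}
for outcome in sample_space:
    k = outcome.count("H")
    pmf[k] = pmf.get(k, 0) + 1 / len(sample_space)

print(pmf)  # {2: 0.25, 1: 0.5, 0: 0.25}
```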
Example
I have an unfair coin for which $P(H) = p$, where $0 < p < 1$. I toss the coin repeatedly until I observe a heads for the first time. Let $X$ be the total number of coin tosses. Find the distribution of $X$.
- First, we note that the random variable $X$ can potentially take any positive integer, so we have $R_X = \{1, 2, 3, \ldots\}$. To find the distribution of $X$, we need to find $P_X(k) = P(X = k)$ for $k = 1, 2, 3, \ldots$. The first $k - 1$ tosses must be tails and the $k$-th toss must be heads, and the tosses are independent, so we have
$$P_X(k) = P(X = k) = (1 - p)^{k-1} p, \quad \text{for } k = 1, 2, 3, \ldots$$
This is the geometric distribution with parameter $p$; a simulation check is sketched below.
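As referenced above, here is a short Python simulation sketch (an addition to the original text; the value $p = 0.3$ is an arbitrary choice for the demonstration) comparing the empirical distribution of the number of tosses until the first heads with the formula $(1-p)^{k-1} p$.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3  # assumed value of P(H), chosen only for the demonstration

def tosses_until_first_heads():
    k = 1
    while rng.uniform() >= p:  # tails occurs with probability 1 - p
        k += 1
    return k

n = 100_000
counts = np.bincount([tosses_until_first_heads() for _ in range(n)])

for k in range(1, 6):
    empirical = counts[k] / n if k < len(counts) else 0.0
    exact = (1 - p) ** (k - 1) * p
    print(f"P_X({k}): simulation {empirical:.4f}, formula {exact:.4f}")
```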
Consider a discrete random variable $X$ with range $R_X = \{x_1, x_2, x_3, \ldots\}$. Note that by definition the PMF is a probability measure, so it satisfies all properties of a probability measure. In particular, we have
- $0 \le P_X(x_k) \le 1$ for all $x_k$, and
- $\sum_{x_k \in R_X} P_X(x_k) = 1$.
Properties of PMF:
- $0 \le P_X(x) \le 1$ for all $x$;
- $\sum_{x \in R_X} P_X(x) = 1$;
- for any set $A \subseteq R_X$, $P(X \in A) = \sum_{x \in A} P_X(x)$.
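These properties are easy to verify numerically. The sketch below (not part of the original text) checks all three for the PMF of the two-coin-toss example above.

```python
# PMF of the two-coin-toss example above: X = number of heads.
pmf = {0: 0.25, 1: 0.5, 2: 0.25}

# Property 1: every value lies in [0, 1].
assert all(0 <= v <= 1 for v in pmf.values())

# Property 2: the values sum to 1.
assert sum(pmf.values()) == 1

# Property 3: P(X in A) is the sum of the PMF over A, e.g. A = {1, 2}.
A = {1, 2}
print(sum(pmf[k] for k in A))  # P(X >= 1) = 0.75
```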
Example
For the random variable in the previous example (the number of tosses until the first heads),
- check that $\sum_{k=1}^{\infty} P_X(k) = 1$;
- if $p = \tfrac{1}{2}$, find $P(2 \le X < 5)$.
In the previous example, we obtained
$$P_X(k) = (1 - p)^{k-1} p, \quad \text{for } k = 1, 2, 3, \ldots$$
Thus,
- to check that $\sum_{k=1}^{\infty} P_X(k) = 1$, we can use the geometric series formula:
$$\sum_{k=1}^{\infty} (1 - p)^{k-1} p = p \sum_{j=0}^{\infty} (1 - p)^{j} = p \cdot \frac{1}{1 - (1 - p)} = 1;$$
- if $p = \tfrac{1}{2}$, to find $P(2 \le X < 5)$, we can write
$$P(2 \le X < 5) = P_X(2) + P_X(3) + P_X(4) = \tfrac{1}{4} + \tfrac{1}{8} + \tfrac{1}{16} = \tfrac{7}{16}.$$
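Both computations can be confirmed with exact rational arithmetic. The following Python sketch (an addition to the original text) evaluates a long partial sum of the series and the probability $P(2 \le X < 5)$ for $p = \tfrac{1}{2}$.

```python
from fractions import Fraction

p = Fraction(1, 2)

def pmf(k):
    # P_X(k) = (1 - p)**(k - 1) * p, the geometric PMF derived above.
    return (1 - p) ** (k - 1) * p

# Partial sums of the series approach 1 (here the sum equals 1 - (1/2)**50).
partial = sum(pmf(k) for k in range(1, 51))
print(float(partial))  # ~ 1.0

# P(2 <= X < 5) = P_X(2) + P_X(3) + P_X(4).
print(sum(pmf(k) for k in range(2, 5)))  # 7/16
```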