Functions of a Random Variable
In defining a random variable X we pointed out, quite emphatically, that X is a function defined from the sample space S to the real numbers. In defining a two-dimensional random variable (X, Y) we were concerned with a pair of functions X = X(s), Y = Y(s), each of which is defined on the sample space of some experiment and each of which assigns a real number to every s ∈ S, thus yielding the two-dimensional vector [X(s), Y(s)]. Let us now consider Z = H1(X, Y), a function of the two random variables X and Y. It should be clear that Z = Z(s) is again a random variable. Consider the following sequence of steps:
(a) Perform the experiment e and obtain the outcome s.
(b) Evaluate the numbers X(s) and Y(s).
(c) Evaluate the number Z = H1 [X(s), Y(s)].
The value of Z clearly depends on s, the original outcome of the experiment. That is, Z = Z(s) is a function assigning to every outcome s ∈ S a real number, Z(s). Hence Z is a random variable. Some of the important random variables we shall be interested in are X + Y, XY, X/Y, min(X, Y), max(X, Y), etc.
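As a concrete illustration of steps (a) through (c), the following minimal sketch (a hypothetical example, not taken from the text) uses the experiment of tossing two fair dice, with X and Y the two face values and Z = max(X, Y):

import random

# Hypothetical experiment: toss two fair dice; the outcome s is the ordered pair.
def run_experiment(rng):
    return (rng.randint(1, 6), rng.randint(1, 6))

X = lambda s: s[0]   # X(s): value shown by the first die
Y = lambda s: s[1]   # Y(s): value shown by the second die
H1 = max             # here Z = H1(X, Y) = max(X, Y)

rng = random.Random(0)
s = run_experiment(rng)        # step (a): perform the experiment, obtain s
x, y = X(s), Y(s)              # step (b): evaluate X(s) and Y(s)
z = H1(x, y)                   # step (c): evaluate Z = H1[X(s), Y(s)]
print(s, x, y, z)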
The problem we solved in the previous chapter for the one-dimensional random variable arises again: given the joint probability distribution of (X, Y), what is the probability distribution of Z = H1(X, Y)? (It should be clear from the numerous previous discussions on this point that a probability distribution is induced on RZ, the sample space of Z.)
If (X, Y) is a discrete random variable, this problem is quite easily solved. Suppose, for example, that X and Y denote the numbers of items produced by two production lines, where X takes the values 0, 1, ..., 5 and Y takes the values 0, 1, 2, 3, and that the joint probability distribution of (X, Y) is given. The following (one-dimensional) random variables might be of interest:
U = min (X, Y) = least number of items produced by the two lines;
V = max (X, Y) = greatest number of items produced by the two lines;
W = X + Y = total number of items produced by the two lines.
To obtain the probability distribution of U, say, we proceed as follows. The possible values of U are 0, 1, 2, and 3. To evaluate P(U = 0) we argue that U = 0 if and only if one of the following occurs: X = 0, Y = 0; X = 0, Y = 1; X = 0, Y = 2; X = 0, Y = 3; X = 1, Y = 0; X = 2, Y = 0; X = 3, Y = 0; X = 4, Y = 0; or X = 5, Y = 0. Hence P(U = 0) = 0.28. The rest of the probabilities associated with U may be obtained in a similar way. Hence the probability distribution of U may be summarized as follows:

u: 0, 1, 2, 3
P(U = u): 0.28, 0.30, 0.25, 0.17

The probability distributions of the random variables V and W defined above may be obtained in a similar way.
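To make the bookkeeping explicit, here is a minimal computational sketch of the same procedure: the distribution of any Z = h(X, Y) is obtained by summing P(X = x, Y = y) over all pairs (x, y) with h(x, y) = z. The entries of joint_pmf below are hypothetical placeholders standing in for the joint probability table assumed above.

from collections import defaultdict

# Hypothetical joint pmf P(X = x, Y = y) for the two production lines;
# the actual table is assumed to be given (a complete table sums to 1).
joint_pmf = {
    (0, 0): 0.02, (0, 1): 0.03, (0, 2): 0.04,
    (1, 0): 0.05, (1, 1): 0.06, (2, 3): 0.08,
    # ... remaining (x, y) pairs ...
}

def distribution_of(h, pmf):
    """pmf of Z = h(X, Y): sum P(X = x, Y = y) over all (x, y) with h(x, y) = z."""
    result = defaultdict(float)
    for (x, y), p in pmf.items():
        result[h(x, y)] += p
    return dict(result)

U = distribution_of(min, joint_pmf)                   # min(X, Y)
V = distribution_of(max, joint_pmf)                   # max(X, Y)
W = distribution_of(lambda x, y: x + y, joint_pmf)    # X + Y
print(U, V, W, sep="\n")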
If (X, Y) is a continuous two-dimensional random variable and if Z = H1(X, Y) is a continuous function of (X, Y), then Z will be a continuous (one-dimensional) random variable, and the problem of finding its pdf is somewhat more involved. In order to solve this problem we shall need a theorem which we state and discuss below. Before doing this, let us briefly outline the basic idea. In finding the pdf of Z = H1(X, Y) it is often simplest to introduce a second random variable, say W = H2(X, Y), and first obtain the joint pdf of Z and W, say k(z, w). From a knowledge of k(z, w) we can then obtain the desired pdf of Z, say g(z), by simply integrating k(z, w) with respect to w. That is,
\[g\left( z \right)=\int_{-\infty }^{+\infty }{k\left( z,w \right)}dw\]
The remaining problems are
(1) how to find the joint pdf of Z and W, and
(2) how to choose the appropriate random variable W = H2(X, Y). To resolve the latter problem, let us simply state that we usually make the simplest possible choice for W. In the present context, W plays only an intermediate role, and we are not really interested in it for its own sake. In order to find the joint pdf of Z and W we need Theorem 1.
Theorem 1.
Suppose that (X, Y) is a two-dimensional continuous random variable with joint pdf f. Let Z = H1(X, Y) and W = H2(X, Y), and assume that the functions H1 and H2 satisfy the following conditions:
(a) The equations z = H1(x, y) and w = H2(x, y) may be uniquely solved for x and y in terms of z and w, say x = G1(z, w) and y = G2(z, w).
(b) The partial derivatives ∂x/∂z, ∂x/∂w, ∂y/∂z, and ∂y/∂w exist and are continuous.

Then the joint pdf of (Z, W), say k(z, w), is given by the following expression: k(z, w) = f[G1(z, w), G2(z, w)] |J(z, w)|, where J(z, w) is the following 2 × 2 determinant:

\[J\left( z,w \right)=\left| \begin{matrix} \frac{\partial x}{\partial z} & \frac{\partial x}{\partial w} \\ \frac{\partial y}{\partial z} & \frac{\partial y}{\partial w} \\ \end{matrix} \right|\]
This determinant is called the Jacobian of the transformation (x, y) → (z, w) and is sometimes denoted by ∂(x, y)/∂(z, w). We note that k(z, w) will be nonzero for those values of (z, w) corresponding to values of (x, y) for which f(x, y) is nonzero.
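As an illustration of the theorem (and of the "simplest possible choice" of W mentioned earlier), consider Z = X + Y with the auxiliary variable W = X. Solving z = x + y, w = x gives x = w and y = z - w, so

\[J\left( z,w \right)=\left| \begin{matrix} 0 & 1 \\ 1 & -1 \\ \end{matrix} \right|=-1,\]

and hence k(z, w) = f(w, z - w)|-1| = f(w, z - w). Integrating out w then gives

\[g\left( z \right)=\int_{-\infty }^{+\infty }{f\left( w,z-w \right)}\,dw,\]

the familiar convolution formula for the pdf of a sum.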
Notes: (a) Although we shall not prove this theorem, we will at least indicate what needs to be shown and where the difficulties lie. Consider the joint cdf of the two-dimensional random variable (Z, W), say

\[K\left( z,w \right)=P\left( Z\le z,W\le w \right)=\int_{-\infty }^{w}{\int_{-\infty }^{z}{k\left( s,t \right)}\,ds\,dt},\]
where k is the sought pdf. Since the transformation (x, y) → (z, w) is assumed to be one to one [see assumption (a) above], we may find the event equivalent to {Z ≤ z, W ≤ w} in terms of X and Y. Suppose that this event is denoted by C; that is, {(X, Y) ∈ C} if and only if {Z ≤ z, W ≤ w}. Hence
\[\int_{-\infty }^{w}{\int_{-\infty }^{z}{k\left( s,t \right)}\,ds\,dt}=\iint\limits_{C}{f\left( x,y \right)\,dx\,dy}.\]
Since f is assumed to be known, the integral on the right-hand side can be evaluated.
Differentiating it with respect to z and w will yield the required pdf. In most texts on
advanced calculus it is shown that these techniques lead to the result as stated in the
above theorem.
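In symbols, the differentiation referred to above amounts to

\[k\left( z,w \right)=\frac{{{\partial }^{2}}K\left( z,w \right)}{\partial z\,\partial w}.\]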
(b) Note the striking similarity between the above result and the result obtained in
the one-dimensional case treated in the previous chapter. The
monotonicity requirement for the function y = H(x) is replaced by the assumption that
the correspondence between (x, y) and (z, w) is one to one. The differentiability condition
is replaced by certain assumptions about the partial derivatives involved. The
final solution obtained is also very similar to the one obtained in the one-dimensional
case: the variables x and y are simply replaced by their equivalent expressions in terms
of z and w, and the absolute value of dx/dy is replaced by the absolute value of the
Jacobian.
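For comparison, the one-dimensional result alluded to here may be recalled as follows: if y = H(x) is monotone with inverse x = H⁻¹(y), then the pdf of Y = H(X) is

\[g\left( y \right)=f\left( {{H}^{-1}}\left( y \right) \right)\left| \frac{dx}{dy} \right|.\]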
EXAMPLE 1.
Suppose that we are aiming at a circular target of radius one which has been placed so that its center is at the origin of a rectangular coordinate system. Suppose that the coordinates (X, Y) of the point of impact are uniformly distributed over the circle. That is,
f(x, y) = 1/π if (x, y) lies inside (or on) the circle,
        = 0, elsewhere.
Suppose that we are interested in the random variable R representing the distance from the origin, that is, R = (X² + Y²)^{1/2}. We shall find the pdf of R, say h, as follows. Let Φ = tan⁻¹(Y/X). Solving for X and Y in terms of R and Φ gives X = G1(R, Φ) and Y = G2(R, Φ), where x = G1(r, ϕ) = r cos ϕ and y = G2(r, ϕ) = r sin ϕ. (We are simply introducing polar coordinates.)
The Jacobian is
\[J=\left| \begin{matrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \phi } \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \phi } \\ \end{matrix} \right|=\left| \begin{matrix} \cos \phi & -r\sin \phi \\ \sin \phi & r\cos \phi \\ \end{matrix} \right|=r{{\cos }^{2}}\phi +r{{\sin }^{2}}\phi =r\]
Under the above transformation the unit circle in the xy-plane is mapped onto the rectangle {(ϕ, r): 0 ≤ ϕ < 2π, 0 ≤ r ≤ 1} in the ϕr-plane (Fig. 6.11). Hence the joint pdf of (Φ, R) is given by

g(ϕ, r) = r/π,  0 ≤ r ≤ 1, 0 ≤ ϕ < 2π.
Thus the required pdf of R, say h, is given by
\[h\left( r \right)=\int_{0}^{2\pi }{g\left( \phi ,r \right)}\,d\phi =2r,\qquad 0\le r\le 1.\]
Note: This example points out the importance of obtaining a precise representation
of the region of possible values for the new random variables introduced.
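As a quick numerical sanity check of this example, the following minimal Monte Carlo sketch samples (X, Y) uniformly on the unit disk (by rejection from the enclosing square) and compares the empirical distribution of R with the cdf implied by h(r) = 2r, namely P(R ≤ r) = r²:

import math
import random

def sample_r(n, seed=0):
    """Distances R = sqrt(X^2 + Y^2) for n points uniform on the unit disk."""
    rng = random.Random(seed)
    rs = []
    while len(rs) < n:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y <= 1.0:              # keep only points inside the disk
            rs.append(math.hypot(x, y))
    return rs

rs = sample_r(100_000)
for r in (0.25, 0.5, 0.75):
    empirical = sum(v <= r for v in rs) / len(rs)
    print(f"P(R <= {r}): empirical {empirical:.3f}, theoretical {r * r:.3f}")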