More on bivariate normal distribution

1 01 2019

This post is a continuation of the preceding post on the bivariate normal distribution. The preceding post gives one characterization of the bivariate normal distribution. This post gives another characterization and other properties, providing further insight into the bivariate normal distribution.

Practice problems to reinforce these concepts are available here.

Recap

First we summarize the preceding post. Consider a pair of random variables X and Y with the following probability density function (pdf):

.
(1)……..\displaystyle f(x,y)=\frac{1}{2 \pi \ \sigma_X \ \sigma_Y \ \sqrt{1-\rho^2}} \ e^{-\frac{1}{2} \ W}

……..
where
……..
\displaystyle W=\frac{1}{1-\rho^2} \ \biggl[\biggl(\frac{x-\mu_X}{\sigma_X} \biggr)^2-2 \rho \biggl(\frac{x-\mu_X}{\sigma_X} \biggr) \biggl(\frac{y-\mu_Y}{\sigma_Y} \biggr) +\biggl(\frac{y-\mu_Y}{\sigma_Y} \biggr)^2\biggr]
.

for all -\infty<x<\infty and -\infty<y<\infty. Any random variables X and Y that are jointly distributed according to the pdf in (1) are said to have a bivariate normal distribution with parameters \mu_X, \sigma_X, \mu_Y, \sigma_Y and \rho. This definition of bivariate normal does not give much insight. The following characterization gives further insight.

Theorem 1
The following two conditions (Condition 1 and Condition 2) are equivalent.

Condition 1. The joint pdf of the random variables X and Y is the same as (1) above.

Condition 2. The random variables X and Y satisfy these four conditions: (a) The conditional distribution of Y, given any X=x, is a normal distribution. (b) The mean of the conditional distribution of Y given x, E[Y \lvert x], is a linear function of x. (c) The variance of the conditional distribution of Y given x, Var[Y \lvert x], is a constant, that is, it is not a function of x. (d) The marginal distribution of X is a normal distribution.

Condition 1 is the definition of bivariate normal stated at the beginning. Condition 1 implying Condition 2 is shown in Theorem 1 in the preceding post. Condition 2 shows that the conditional distributions are normal with linear mean and constant variance and that the marginal distributions are normal. Condition 2 implying Condition 1 is shown in Theorem 3 in the preceding post. Thus bivariate normality can be defined by either condition. The following theorem gives the specific form of the linear conditional mean and constant variance.

Theorem 2
Whenever the random variables X and Y have a bivariate normal distribution with parameters \mu_X, \sigma_X, \mu_Y, \sigma_Y and \rho (by satisfying Condition 1 or Condition 2), the conditional mean E[Y \lvert X=x] and the conditional variance Var[Y \lvert X=x] are of the form:

.
(2)……..\displaystyle E[Y \lvert X=x]=\mu_Y+\rho \frac{\sigma_Y}{\sigma_X} (x-\mu_X)

.
(3)……..Var[Y \lvert X=x]=\sigma_Y^2 (1-\rho^2)
.
The equation in (2) is also called the least squares regression line. Whenever the conditional mean E[Y \lvert X=x] is a linear function, it must be of the exact same form as in (2) (this fact is Theorem 2 in this previous post). Given that the linear form of E[Y \lvert X=x] is part of the definition of bivariate normal, equation (2) is not surprising.
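
As a quick numerical illustration, the formulas in (2) and (3) are easy to evaluate. The following is a minimal Python sketch; the parameter values are hypothetical, chosen only for illustration.

```python
def conditional_mean_var(x, mu_X, sigma_X, mu_Y, sigma_Y, rho):
    """Conditional mean and variance of Y given X = x for a
    bivariate normal, per equations (2) and (3)."""
    mean = mu_Y + rho * (sigma_Y / sigma_X) * (x - mu_X)
    var = sigma_Y ** 2 * (1 - rho ** 2)
    return mean, var

# Hypothetical parameter values, for illustration only
m, v = conditional_mean_var(x=71, mu_X=69, sigma_X=2, mu_Y=66, sigma_Y=2.5, rho=0.6)
print(m, v)   # 67.5 and 4.0
```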

How to Generate a Bivariate Normal Distribution

There is a way to generate a bivariate normal distribution. This process is interesting in its own right. It also points toward another characterization of bivariate normal. Let U and V be a pair of independent standard normal random variables (standard normal is a normal distribution with mean 0 and standard deviation 1). We generate a bivariate normal distribution as follows:

.

How to generate a bivariate normal distribution. Let U and V be a pair of independent standard normal random variables. For any set of 5 parameters \mu_X, \sigma_X, \mu_Y, \sigma_Y and \rho, let X=\sigma_X \ U+\mu_X and Y=\sigma_Y [\rho \ U+\sqrt{1-\rho^2} \ V]+\mu_Y. Then X and Y have a bivariate normal distribution with parameters \mu_X, \sigma_X, \mu_Y, \sigma_Y and \rho. These parameters have the usual meaning as indicated in Theorem 1 and Theorem 2 above.

.

The mu-parameters (\mu_X and \mu_Y) can be any real numbers. The sigma-parameters (\sigma_X and \sigma_Y) can be any positive real numbers. The parameter \rho must satisfy -1<\rho<1.

Because U and V are independent standard normal, it can be readily verified that E[X]=\mu_X, Var[X]=\sigma_X^2, E[Y]=\mu_Y, Var[Y]=\sigma_Y^2. Because both X and Y are linear combinations of U and V, both X and Y have normal distributions. So the marginal distribution of X is normal with mean \mu_X and standard deviation \sigma_X. Likewise the marginal distribution of Y is normal with mean \mu_Y and standard deviation \sigma_Y.

Next, confirm the role of the parameter \rho. First, evaluate the expectation E[X Y].

.
\displaystyle \begin{aligned} E[X Y]&=E\biggl[ \biggl(\sigma_X \ U+\mu_X \biggr) \biggl(\sigma_Y [\rho \ U+\sqrt{1-\rho^2} \ V]+\mu_Y \biggr) \biggr]\\&=E\biggl[\sigma_X \ \sigma_Y \ \rho \ U^2+\sigma_X \ \sigma_Y \sqrt{1-\rho^2} \ U \ V+(\sigma_X \ \mu_Y+\mu_X \ \sigma_Y \ \rho) \ U+\mu_X \ \sigma_Y \sqrt{1-\rho^2} \ V+\mu_X \ \mu_Y  \biggr] \\&=\sigma_X \ \sigma_Y \ \rho \ E[U^2]+\sigma_X \ \sigma_Y \sqrt{1-\rho^2} \ E[U \ V]+(\sigma_X \ \mu_Y+\mu_X \ \sigma_Y \ \rho) \ E[U]+\mu_X \ \sigma_Y \sqrt{1-\rho^2} \ E[V] +\mu_X \ \mu_Y\\&=\sigma_X \ \sigma_Y \ \rho+\mu_X \ \mu_Y \end{aligned}
.

In the above derivation, note that E[U V]=E[U] \ E[V]=0 since U and V are independent. Furthermore, E[U^2]=1 and E[U]=E[V]=0 since U and V are standard normal. All the middle terms thus vanish.

The covariance \text{Cov}(X,Y) is E[X Y]-\mu_X \mu_Y, which becomes \sigma_X \sigma_Y \rho. This means that \rho=\text{Cov}(X,Y) / (\sigma_X \sigma_Y). This confirms that \rho is the correlation coefficient of X and Y.
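
The generation recipe and the role of \rho can also be checked by simulation. The following is a minimal sketch in Python, assuming numpy is available; the five parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_X, sigma_X, mu_Y, sigma_Y, rho = 69.0, 2.0, 66.0, 2.5, 0.6  # illustrative values

# Generate independent standard normals and apply the recipe
U = rng.standard_normal(1_000_000)
V = rng.standard_normal(1_000_000)
X = sigma_X * U + mu_X
Y = sigma_Y * (rho * U + np.sqrt(1 - rho**2) * V) + mu_Y

print(X.mean(), X.std())          # close to 69 and 2
print(Y.mean(), Y.std())          # close to 66 and 2.5
print(np.corrcoef(X, Y)[0, 1])    # close to 0.6
```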

With the parameters squared away, we next obtain the joint pdf of X and Y. To this end, we use a Jacobian transformation argument. Note that the two equations x=\sigma_X \ u+\mu_X and y=\sigma_Y [\rho \ u+\sqrt{1-\rho^2} \ v]+\mu_Y define a one-to-one transformation from (u,v) to (x,y). We know the pdf w(u,v) of U and V because U and V are independent standard normal. Through the transformation, we want to express the pdf k(x,y) of X and Y in terms of w(u,v). The following is the inverse of the transformation (it expresses U and V in terms of X and Y).

.
(4)……..\displaystyle U=\frac{X-\mu_X}{\sigma_X}

(5)……..\displaystyle V=\frac{1}{\sqrt{1-\rho^2}} \ \biggl[\frac{Y-\mu_Y}{\sigma_Y}- \rho \ \frac{X-\mu_X}{\sigma_X}  \biggr]
.

Consider the following two functions q_1(x,y) and q_2(x,y) that derive from (4) and (5).

.
……..\displaystyle u=q_1(x,y)=\frac{x-\mu_X}{\sigma_X}

……..\displaystyle v=q_2(x,y)=\frac{1}{\sqrt{1-\rho^2}} \ \biggl[\frac{y-\mu_Y}{\sigma_Y}- \rho \ \frac{x-\mu_X}{\sigma_X}  \biggr]
.

Now calculate the Jacobian.

.
……..\displaystyle \begin{aligned} J&=\text{det} \left[\begin{array}{cc}      \displaystyle \frac{\partial q_1}{\partial x} & \displaystyle \frac{\partial q_1}{\partial y}   \\      \text{ } & \text{ }   \\      \displaystyle \frac{\partial q_2}{\partial x} & \displaystyle \frac{\partial q_2}{\partial y}           \end{array}\right] =\text{det} \left[\begin{array}{cc}      \displaystyle \frac{1}{\sigma_X} & 0   \\      \text{ } & \text{ }   \\      \displaystyle \frac{- \rho}{\sigma_X \sqrt{1-\rho^2}} & \displaystyle \frac{1}{\sigma_Y \sqrt{1-\rho^2}}           \end{array}\right] \\&=\frac{1}{\sigma_X \sigma_Y \sqrt{1-\rho^2}}  \end{aligned}
.

The pdf of X and Y is then k(x,y)=w(q_1(x,y),q_2(x,y)) \ \lvert J \rvert=w(u,v) \ \lvert J \rvert where w(u,v) is the joint pdf of U and V. The pdf w(u,v) is known since U and V are independent standard normal.

.
……..\displaystyle \begin{aligned} k(x,y)&=w(u,v) \ \lvert J \rvert \\&=\frac{1}{2 \pi} \ e^{-\frac{1}{2} (u^2+v^2) } \ \frac{1}{\sigma_X \sigma_Y \sqrt{1-\rho^2}} \\&=\frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \ e^{-\frac{1}{2} (u^2+v^2) } \\&=\frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \ e^{-\frac{1}{2} \biggl[ \biggl(\frac{x-\mu_X}{\sigma_X} \biggr)^2+\frac{1}{1-\rho^2} \biggl(\frac{y-\mu_Y}{\sigma_Y}-\rho \ \frac{x-\mu_X}{\sigma_X} \biggr)^2 \biggr] } \\& \ \ \ \ \ \ \ \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \ \vdots \\&=\frac{1}{2 \pi \sigma_X \sigma_Y \sqrt{1-\rho^2}} \ e^{-\frac{1}{2} W} \end{aligned}
.

where W is the following quantity.

.
……..\displaystyle W=\frac{1}{1-\rho^2} \ \biggl[\biggl(\frac{x-\mu_X}{\sigma_X} \biggr)^2-2 \rho \biggl(\frac{x-\mu_X}{\sigma_X} \biggr) \biggl(\frac{y-\mu_Y}{\sigma_Y} \biggr) +\biggl(\frac{y-\mu_Y}{\sigma_Y} \biggr)^2\biggr]
.

Note that the pdf k(x,y) is identical to the one in (1). Thus the approach of starting with a pair of independent standard normal random variables does generate a bivariate normal distribution.
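
For readers who want to double-check the algebra behind the exponent, the key identity u^2+v^2=W can be verified symbolically. The following is a sketch assuming the sympy library is available.

```python
import sympy as sp

x, y, mx, my, r = sp.symbols('x y mu_X mu_Y rho', real=True)
sx, sy = sp.symbols('sigma_X sigma_Y', positive=True)

# u and v from (4) and (5), and W from (1)
u = (x - mx) / sx
v = ((y - my) / sy - r * u) / sp.sqrt(1 - r**2)
W = ((x - mx)**2 / sx**2
     - 2*r*(x - mx)*(y - my) / (sx*sy)
     + (y - my)**2 / sy**2) / (1 - r**2)

# The exponents agree: u^2 + v^2 reduces to W; the constant factor
# w(u,v)|J| = 1/(2 pi sigma_X sigma_Y sqrt(1-rho^2)) matches (1) by inspection.
print(sp.simplify(u**2 + v**2 - W))   # prints 0
```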

Another Characterization of Bivariate Normal

The reverse of the preceding section is also true. Starting with any bivariate normal X and Y, the U and V described in (4) and (5) are a pair of independent standard normal random variables. We have the following theorem.

Theorem 3
Let X and Y be continuous random variables. Then X and Y have a bivariate normal distribution with parameters \mu_X, \sigma_X, \mu_Y, \sigma_Y and \rho if and only if the following condition holds.

Condition 3. There exists a pair of independent standard normal random variables U and V such that X=\sigma_X \ U+\mu_X and Y=\sigma_Y \ [ \rho \ U+\sqrt{1-\rho^2} \ V]+\mu_Y.

One direction of Theorem 3 is already established in the preceding section – Condition 3 implying that X and Y are bivariate normal. We now sketch out a proof of the other direction – the fact that X and Y are bivariate normal implies Condition 3.

Suppose that X and Y are bivariate normal with parameters \mu_X, \sigma_X, \mu_Y, \sigma_Y and \rho. Consider the U and V as defined in (4) and (5) above. It is straightforward to show that E[U]=0 and Var[U]=1 and E[V]=0 and Var[V]=1. More importantly, it can be shown that the joint pdf g(u,v) of U and V is a product of two standard normal density functions, one in terms of u and the other in terms of v. If this is done, then we know U and V are independent and standard normal random variables.

Equations (4) and (5) above define a one-to-one transformation from (x,y) to (u,v). The inverse of that transformation is given by the following two equations.

.
……..\displaystyle x=h_1(u,v)=\sigma_X \ u+\mu_X

……..\displaystyle y=h_2(u,v)=\sigma_Y \ [ \rho \ u+\sqrt{1-\rho^2} \ v]+\mu_Y
.

Now calculate the Jacobian.

.
……..\displaystyle \begin{aligned} J&=\text{det} \left[\begin{array}{cc}      \displaystyle \frac{\partial h_1}{\partial u} & \displaystyle \frac{\partial h_1}{\partial v}   \\      \text{ } & \text{ }   \\      \displaystyle \frac{\partial h_2}{\partial u} & \displaystyle \frac{\partial h_2}{\partial v}           \end{array}\right] =\text{det} \left[\begin{array}{cc}      \displaystyle \sigma_X & 0   \\      \text{ } & \text{ }   \\      \displaystyle \sigma_Y \ \rho & \displaystyle \sigma_Y \sqrt{1-\rho^2}           \end{array}\right] \\&=\sigma_X \sigma_Y \sqrt{1-\rho^2}  \end{aligned}
.

Recall that f(x,y) is the bivariate normal pdf as described in (1). The following derivation gives the pdf g(u,v) of U and V.

.
……..\displaystyle \begin{aligned} g(u,v)&=f(h_1(u,v),h_2(u,v)) \ \lvert J \rvert \\&=f(x,y) \ \lvert J \rvert \\&=\frac{1}{2 \pi \ \sigma_X \ \sigma_Y \ \sqrt{1-\rho^2}} \ e^{-\frac{1}{2} \ W} \ \sigma_X \sigma_Y \sqrt{1-\rho^2} \\&=\frac{1}{2 \pi} \ e^{-\frac{1}{2} \ W}  \end{aligned}
.

where W is the expression found in (1) above. The derivation continues.

.
……..\displaystyle \begin{aligned} g(u,v)&=\frac{1}{2 \pi} \ e^{-\frac{1}{2} \ \frac{1}{1-\rho^2} \ \biggl[\biggl(\frac{x-\mu_X}{\sigma_X} \biggr)^2-2 \rho \biggl(\frac{x-\mu_X}{\sigma_X} \biggr) \biggl(\frac{y-\mu_Y}{\sigma_Y} \biggr) +\biggl(\frac{y-\mu_Y}{\sigma_Y} \biggr)^2\biggr]} \\&=\frac{1}{2 \pi} \ e^{-\frac{1}{2} \ \frac{1}{1-\rho^2} \ \biggl[u^2-2 \ \rho \ u \ (\rho \ u+\sqrt{1-\rho^2} \ v) +(\rho \ u+\sqrt{1-\rho^2} \ v)^2\biggr]} \\&\ \ \ \ \ \ \ \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \ \vdots \\&=\frac{1}{2 \pi} \ e^{-\frac{1}{2} \ (u^2+v^2) } \\&=\frac{1}{\sqrt{2 \pi}} \ e^{-\frac{1}{2} \ u^2 } \ \frac{1}{\sqrt{2 \pi}} \ e^{-\frac{1}{2} \ v^2 } \end{aligned}
.

The last result shows that the pdf g(u,v) is a product of two standard normal pdfs (one in terms of u and the other in terms of v). This means that U and V are independent standard normal random variables. This concludes the proof of Theorem 3.
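
This direction of Theorem 3 can also be checked empirically: starting from bivariate normal draws, the U and V recovered via (4) and (5) should behave like independent standard normals. A minimal numpy sketch follows, with hypothetical parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_X, sigma_X, mu_Y, sigma_Y, rho = 69.0, 2.0, 66.0, 2.5, 0.6  # illustrative values
cov = [[sigma_X**2, rho*sigma_X*sigma_Y],
       [rho*sigma_X*sigma_Y, sigma_Y**2]]

X, Y = rng.multivariate_normal([mu_X, mu_Y], cov, size=1_000_000).T

# Recover U and V via (4) and (5)
U = (X - mu_X) / sigma_X
V = ((Y - mu_Y) / sigma_Y - rho * U) / np.sqrt(1 - rho**2)

print(U.mean(), U.std(), V.mean(), V.std())  # close to 0, 1, 0, 1
print(np.corrcoef(U, V)[0, 1])               # close to 0
```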

An Application

We have discussed three characterizations of bivariate normal distribution. One is the basic definition using the pdf described in (1). The second one is Condition 2 in terms of the conditional distribution of Y \lvert X=x and the conditional mean E[Y \lvert X=x]. The third one is Condition 3, which essentially says that any bivariate normal X and Y can be generated by a pair of independent standard normal random variables.

Whenever we say X and Y have a bivariate normal distribution, we have any of the three conditions at our disposal. We now use Condition 3 to prove the following theorem.

Theorem 4
Suppose that the random variables X and Y have a bivariate normal distribution. Then any linear combination of X and Y has a normal distribution. More specifically, Z=a X+b Y is a normal random variable for any real constants a and b.

One comment. When a=b=0, Z=0, which clearly is not a normal distribution. One way to get around this is to declare a single point as a normal distribution with zero variance. This is the approach some authors take. Another approach is to exclude the case a=b=0. If the second approach is used, the theorem should say Z=a X+b Y is a normal random variable for any real constants a and b not both zero. In any case, the scenario a=b=0 is not a very interesting one.

Based on Condition 3, X and Y can be expressed in terms of a pair of independent random variables U and V, both having standard normal distributions.

.
……..X=\sigma_X U+\mu_X

……..Y=\sigma_Y \ [ \rho \ U+\sqrt{1-\rho^2} \ V]+\mu_Y
.

Express Z=a X+b Y in terms of U and V.

.
……..\displaystyle \begin{aligned}Z&=a X + b Y \\&=a \biggl(\sigma_X U+\mu_X \biggr)+b \biggl(\sigma_Y \ [ \rho \ U+\sqrt{1-\rho^2} \ V]+\mu_Y \biggr) \\&\ \ \ \ \ \ \ \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \ \vdots \ \ \ \ \ \ \ \ \ \ \ \vdots \\&=\biggl(a \ \sigma_X+b \ \sigma_Y \ \rho \biggr) \ U+ \biggl(b \ \sigma_Y \sqrt{1-\rho^2}\biggr) \ V+a \ \mu_X+ b \ \mu_Y  \end{aligned}
.

The last expression is a linear combination of U and V plus a constant. A linear combination of independent normal random variables is normal, and adding a constant to a normal random variable yields another normal random variable. Thus the last expression, and hence a X + b Y, has a normal distribution. Note that the mean of a X + b Y is what it should be: a \ \mu_X+ b \ \mu_Y. The variance of a X + b Y is also what it should be: a^2 \ \sigma_X^2+b^2 \ \sigma_Y^2+2 \ a b  \ \text{Cov}(X,Y). This completes the proof of Theorem 4.
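
Theorem 4 lends itself to a simulation check. The sketch below (assuming numpy and scipy, with arbitrary illustrative constants a and b) compares a X + b Y against the normal distribution with the mean and variance just derived, using a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
mu_X, sigma_X, mu_Y, sigma_Y, rho = 69.0, 2.0, 66.0, 2.5, 0.6  # illustrative values
a, b = 1.5, -2.0   # arbitrary constants, not both zero

# Generate bivariate normal (X, Y) via Condition 3
U = rng.standard_normal(100_000)
V = rng.standard_normal(100_000)
X = sigma_X * U + mu_X
Y = sigma_Y * (rho * U + np.sqrt(1 - rho**2) * V) + mu_Y
Z = a * X + b * Y

# Theoretical mean and variance of Z from the proof above
mean = a * mu_X + b * mu_Y
var = a**2 * sigma_X**2 + b**2 * sigma_Y**2 + 2*a*b*rho*sigma_X*sigma_Y

# A large p-value is consistent with Z being normal with these parameters
print(stats.kstest(Z, 'norm', args=(mean, np.sqrt(var))).pvalue)
```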

The Moment Generating Function

Theorem 4 can also be established using the moment generating function of the bivariate normal distribution. The following theorem is stated without proof.

Theorem 5
Suppose that the random variables X and Y have a bivariate normal distribution with parameters \mu_X, \sigma_X, \mu_Y, \sigma_Y and \rho. Then its moment generating function (mgf) M(s,t) is given by:

.
……..\displaystyle M(s,t)=e^{\displaystyle (\mu_X \ s+\mu_Y \ t) +\frac{1}{2}(\sigma_X^2 \ s^2+\sigma_Y^2 \ t^2+2 \ \rho \ \sigma_X \ \sigma_Y \ s \ t) }
.

where -\infty<s<\infty and -\infty<t<\infty. Many properties can be derived from the mgf. For example, it can be used to establish Theorem 4, by showing that the mgf of a X+b Y is a normal mgf. The following is another application of the bivariate normal mgf.

Theorem 6
Suppose that the random variables X and Y have a bivariate normal distribution. Then any pair of linear combinations of X and Y also have a bivariate normal distribution. More specifically, L=a X+b Y and M=c X+d Y also have a bivariate normal distribution for any constants a,b,c,d.

One way to prove Theorem 6 is to show that the mgf of L and M is of the same form in Theorem 5.
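
As a sanity check on the closed form in Theorem 5, the mgf can be compared with a simulated estimate of E[e^{s X+t Y}]. The following is a minimal sketch assuming numpy; the parameter values and the arguments s and t are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)
mu_X, sigma_X, mu_Y, sigma_Y, rho = 1.0, 2.0, -1.0, 1.5, 0.5  # illustrative values
s, t = 0.2, 0.1   # arbitrary mgf arguments

# Generate bivariate normal (X, Y) via Condition 3
U = rng.standard_normal(2_000_000)
V = rng.standard_normal(2_000_000)
X = sigma_X * U + mu_X
Y = sigma_Y * (rho * U + np.sqrt(1 - rho**2) * V) + mu_Y

empirical = np.exp(s * X + t * Y).mean()
closed_form = np.exp(mu_X*s + mu_Y*t
                     + 0.5*(sigma_X**2*s**2 + sigma_Y**2*t**2
                            + 2*rho*sigma_X*sigma_Y*s*t))
print(empirical, closed_form)   # the two values should be close
```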

Remarks

One prominent characteristic of the bivariate normal distribution is that normality is found in many directions. If X and Y are bivariate normal, then the marginal distributions of both X and Y are normal. Furthermore, the conditional distributions of Y \lvert X=x and X \lvert Y=y are normal. On top of that, any linear combination of X and Y is normal. What makes the last result possible is that X and Y can be expressed by a pair of independent standard normal random variables. Theorem 6 indicates that X and Y transformed linearly also have a bivariate normal distribution. Bivariate normality is thus a versatile and rich mathematical property.

Based on the discussion in the preceding paragraph, whenever X and Y are bivariate normal, their sum X+Y and differences X-Y and Y-X are also normal. In general, the sum of two normal distributions is not necessarily normal. On the other hand, though bivariate normality of X and Y implies the normality of the marginal distributions, in general the normality of the marginals does not mean the joint distribution is bivariate normal. The following example shows why.

Example 1
Let X be a standard normal random variable. Let T be a random sign such that P[T=1]=P[T=-1]=0.5. Furthermore, T and X are assumed to be independent. Let Y=T X, which can also be defined explicitly as follows:

.
……..\displaystyle  Y = \left\{ \begin{array}{ll}           \displaystyle  X &\ \ \ \ \ \ T=1 \\            \text{ } & \text{ } \\          \displaystyle  -X &\ \ \ \ \ \ T=-1           \end{array} \right.
.

The random variable Y is also standard normal. The following shows that its CDF is identical to the standard normal CDF.

.
……..\displaystyle \begin{aligned} P[Y \le y]&=P[Y \le y \lvert T=1] \ P[T=1]\\& \ \ \ \ +P[Y \le y \lvert T=-1] \ P[T=-1] \\&=0.5 \ P[X \le y]+0.5 \ P[-X \le y]  \\&=0.5 \ P[X \le y]+0.5 \ P[X \ge -y] \\&=0.5 \ P[X \le y]+0.5 \ P[X \le y] \\&=P[X \le y] \end{aligned}
.

However, X+Y is the following random variable, which is not normal.

.
……..\displaystyle  X+Y = \left\{ \begin{array}{ll}           \displaystyle  2 X &\ \ \ \ \ \ T=1 \\            \text{ } & \text{ } \\          \displaystyle  0 &\ \ \ \ \ \ T=-1           \end{array} \right.
.

The sum of independent normal random variables is normal. But in general the sum of two normal random variables does not have to be normal, as this example demonstrates. The example also shows that when the marginal distributions are normal, the joint distribution does not have to be bivariate normal. If X and Y were bivariate normal, then X+Y would have to be normal. Thus the X and Y in this example cannot be bivariate normal.

Another way to show that X and Y in this example are not bivariate normal is through the covariance. Note that \text{Cov}(X,Y)=0 since E[X Y]=E[T X^2]=E[T] \ E[X^2]=0 and E[X]=E[Y]=0. Thus \rho=0. For bivariate normal X and Y, zero correlation means independence. However, X and Y are obviously dependent.
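
A short simulation makes this counterexample vivid: X+Y has a point mass at 0 (so it cannot be normal), while the sample correlation of X and Y is near 0 despite their dependence. A minimal sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal(100_000)
T = rng.choice([-1.0, 1.0], size=100_000)   # random sign, independent of X
Y = T * X

S = X + Y
print(np.mean(S == 0))          # about 0.5: a point mass at 0, so S is not normal
print(np.corrcoef(X, Y)[0, 1])  # close to 0 even though X and Y are dependent
```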

We conclude by presenting two calculation examples.

Example 2
Suppose that the height X (husband) and the height Y (wife) of a married couple are modeled by a bivariate normal distribution with parameters \mu_X=69 inches, \sigma_X=2 inches, \mu_Y=66 inches, \sigma_Y=2.5 inches, and \rho=0.6. For a randomly selected married couple, determine the probability that the wife is taller than the husband.

Since this is a bivariate normal model, the difference Y-X has a normal distribution with mean \mu_Y-\mu_X=-3 inches and standard deviation \sqrt{4.25} inches, since the variance is:

.
……..\displaystyle \begin{aligned} Var[Y-X]&=Var[Y]+Var[X]+2 (1) (-1) \text{Cov}(X,Y) \\&=2.5^2+2^2-2 \ \rho \ \sigma_X \ \sigma_Y \\&=6.25+4-2 \cdot 0.6 \cdot 2 \cdot 2.5=4.25  \end{aligned}
.

The following calculates the probability.

.
……..\displaystyle \begin{aligned} P[Y-X>0]&=1- P[Y-X \le 0] \\&=1-\Phi \biggl(\frac{0-(-3)}{\sqrt{4.25}} \biggr) \\&=1-\Phi(1.46) \\&=1-0.9279 \\&=0.0721 \end{aligned}
.

In this bivariate normal model, there is about a 7% chance that the female member of a couple is taller than the male member of the couple.
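
For convenience, the same probability can be computed directly, for instance with scipy (a sketch; the small difference from 0.0721 comes from rounding z to 1.46 above).

```python
from math import sqrt
from scipy.stats import norm

# P[Y - X > 0] where Y - X is normal with mean -3 and variance 4.25
print(norm.sf(0, loc=-3, scale=sqrt(4.25)))   # about 0.0728
```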

Example 3
For the same bivariate normal X and Y discussed in Example 2, any pair of linear combinations of X and Y is also bivariate normal according to Theorem 6. In particular, L=Y and M=Y-X are bivariate normal. This fact can be used to answer this question: for a randomly selected married couple, if the wife is three inches taller than the husband, what is the probability that she is taller than 5 feet 10 inches (70 inches)?

The probability to be calculated is:

.
……..P[L>70 \lvert M=3]=P[Y>70 \lvert Y-X=3]
.

The first step is to determine the 5 parameters of the bivariate normal pair L and M.

.
……..\displaystyle \mu_L=66
……..\displaystyle \sigma_L=2.5
……..\displaystyle \mu_M=66-69=-3
……..\displaystyle \sigma_M^2=4.25 \ \ \ \ \sigma_M=\sqrt{4.25}
.
……..\displaystyle \begin{aligned} \text{Cov}(L,M)&=\text{Cov}(Y,Y-X)=\text{Cov}(Y,Y)-\text{Cov}(Y,X)\\&=\sigma_Y^2-\rho \ \sigma_X \ \sigma_Y \\&=2.5^2-0.6 \cdot 2 \cdot 2.5=3.25  \end{aligned}
.
……..\displaystyle \rho_{L,M}=\frac{3.25}{2.5 \ \sqrt{4.25}}=\frac{1.3}{\sqrt{4.25}}
.

Next, find the mean and variance of L \lvert M=3.

.
……..\displaystyle E[L \lvert M=3]=66+ \frac{1.3}{\sqrt{4.25}} \ \frac{2.5}{\sqrt{4.25}}\ (3-(-3))=\frac{300}{4.25}
.
……..\displaystyle Var[L \lvert M=3]=\sigma_L^2 \ (1-\rho_{L,M}^2)=2.5^2 \ \biggl(1-\frac{1.69}{4.25} \biggr)=\frac{16}{4.25}
.

Finally, the desired probability:

.
……..\displaystyle \begin{aligned} P[L>70 \lvert M=3]&=1-P[L \le 70 \lvert M=3] \\&=1-\Phi \biggl(\frac{70-\frac{300}{4.25}}{\sqrt{\frac{16}{4.25}} } \biggr) \\&=1-\Phi(-0.303) \\&=1-(1-0.6179)=0.6179  \end{aligned}
.

For those couples whose wives are three inches taller, there is roughly a 62% chance that the wife is over 70 inches tall.
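
Again, the conditional probability can be computed directly, for instance with scipy (a sketch; the table lookup above rounds z to -0.30, hence 0.6179).

```python
from math import sqrt
from scipy.stats import norm

cond_mean = 300 / 4.25     # E[L | M = 3]
cond_sd = sqrt(16 / 4.25)  # square root of Var[L | M = 3]
print(norm.sf(70, loc=cond_mean, scale=cond_sd))   # about 0.619
```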

Practice problems are available here.


© 2019 – Dan Ma
