One important property of the bivariate normal distribution is that its two marginal distributions are univariate normal distributions.
Marginal normal distributions
If \((X,Y) \sim \NormalDistn(\mu_X, \sigma_X^2, \mu_Y, \sigma_Y^2, \rho)\) then the marginal distributions of \(X\) and \(Y\) are univariate normal,
\[ \begin{align} X \;\;&\sim\;\; \NormalDistn(\mu_X, \sigma_X^2) \\ Y \;\;&\sim\;\; \NormalDistn(\mu_Y, \sigma_Y^2) \end{align} \]We first prove the result for the standard bivariate normal distribution (where \(\mu_X = \mu_Y = 0\) and \(\sigma_X^2 = \sigma_Y^2 = 1\)). The joint pdf of this distribution is
\[ \begin{align} f(x,y) \;\;&=\;\; \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)} \left(x^2 + y^2 - 2\rho x y\right)\right) \\ &=\;\; \frac{1}{2\pi\sqrt{1 - \rho^2}} \exp\left(-\frac{1}{2(1-\rho^2)} \left((y-\rho x)^2 + (1 - \rho^2)x^2\right)\right) \\ &=\;\; \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}2\right) \times \frac{1}{\sqrt{2\pi}\sqrt{1 - \rho^2}} \exp\left(-\frac{(y-\rho x)^2}{2(1-\rho^2)}\right) \end{align} \]The marginal pdf of \(X\) is found by integrating the joint pdf over \(Y\),
\[ \begin{align} f_X(x) \;\;&=\;\; \int_{-\infty}^{\infty} f(x,y) \;dy \\ &=\;\; \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}2\right) \times \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\sqrt{1 - \rho^2}} \exp\left(-\frac{(y-\rho x)^2}{2(1-\rho^2)}\right) \;dy \end{align} \]The integrand on the right is the pdf of a normal distribution with mean \(\rho x\) and variance \((1-\rho^2)\), so the integral equals one. Therefore
\[ f_X(x) \;\;=\;\; \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}2\right) \]which is the pdf of a \(\NormalDistn(0,1)\) distribution.
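The integration step above can also be checked numerically. The sketch below (a minimal check; the integration limits, step size, and test values of \(x\) and \(\rho\) are arbitrary choices, not from the text) approximates the marginal pdf of \(X\) by a Riemann sum of the joint pdf over \(y\) and compares it with the \(\NormalDistn(0,1)\) pdf.

```python
import math

def joint_pdf(x, y, rho):
    """Joint pdf of the standard bivariate normal with correlation rho."""
    c = 1.0 / (2.0 * math.pi * math.sqrt(1.0 - rho**2))
    return c * math.exp(-(x**2 + y**2 - 2.0*rho*x*y) / (2.0 * (1.0 - rho**2)))

def std_normal_pdf(x):
    """Pdf of the N(0, 1) distribution."""
    return math.exp(-x**2 / 2.0) / math.sqrt(2.0 * math.pi)

def marginal_pdf_x(x, rho, lo=-12.0, hi=12.0, n=24_000):
    """Approximate the marginal pdf of X by integrating the joint pdf over y."""
    h = (hi - lo) / n
    return h * sum(joint_pdf(x, lo + k*h, rho) for k in range(n))

# The numerically integrated marginal matches the standard normal pdf,
# whatever the value of rho.
for rho in (0.0, 0.5, -0.8):
    for x in (-1.5, 0.0, 0.7):
        assert abs(marginal_pdf_x(x, rho) - std_normal_pdf(x)) < 1e-6
```

Note that \(\rho\) drops out of the marginal entirely, exactly as the algebra above predicts.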
Since a general bivariate normal distribution can be obtained by linearly transforming a pair \((Z_X, Z_Y)\) with a standard bivariate normal distribution,
\[ X = \mu_X + \sigma_X Z_X \spaced{and} Y = \mu_Y + \sigma_Y Z_Y \]the marginal distributions of \(X\) and \(Y\) must also be normal, with means and variances given by this linear transformation. This proves the general result.
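As a sanity check on the transformation argument, the sketch below simulates it. The device \(Z_Y = \rho Z_X + \sqrt{1-\rho^2}\,\varepsilon\) for generating a correlated standard normal pair is not part of the derivation above; it is a standard simulation trick assumed here, as are the particular parameter values.

```python
import math
import random

random.seed(1)

# illustrative parameter values (assumed, not from the text)
mu_x, sigma_x = 2.0, 1.5
mu_y, sigma_y = 0.0, 1.0
rho = 0.6
n = 200_000

xs, ys = [], []
for _ in range(n):
    z_x = random.gauss(0.0, 1.0)
    # standard device for a correlated standard normal pair
    z_y = rho * z_x + math.sqrt(1.0 - rho**2) * random.gauss(0.0, 1.0)
    xs.append(mu_x + sigma_x * z_x)
    ys.append(mu_y + sigma_y * z_y)

def mean_var(v):
    """Sample mean and (uncorrected) sample variance."""
    m = sum(v) / len(v)
    return m, sum((u - m)**2 for u in v) / len(v)

mx, vx = mean_var(xs)
my, vy = mean_var(ys)
# marginals are roughly N(mu_x, sigma_x^2) and N(mu_y, sigma_y^2)
assert abs(mx - mu_x) < 0.02 and abs(vx - sigma_x**2) < 0.05
assert abs(my - mu_y) < 0.02 and abs(vy - sigma_y**2) < 0.05
```

The sample means and variances land close to \(\mu_X, \sigma_X^2\) and \(\mu_Y, \sigma_Y^2\) regardless of \(\rho\), which is the content of the result stated next.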
From the marginal distributions of \(X\) and \(Y\), we can find the means and variances of the two variables.
Means and variances
If \((X,Y) \sim \NormalDistn(\mu_X, \sigma_X^2, \mu_Y, \sigma_Y^2, \rho)\) then
\[ \begin{align} E[X] \;&=\; \mu_X, \qquad &\Var(X) \;=\; \sigma_X^2 \\ E[Y] \;&=\; \mu_Y, \qquad &\Var(Y) \;=\; \sigma_Y^2 \end{align} \]The diagram below helps to explain these marginal distributions.
Illustration
The diagram below shows the probability density function of two variables with joint distribution
\[ (X,Y) \;\;\sim\;\; \NormalDistn(\mu_X=2, \sigma_X^2=1, \mu_Y=0, \sigma_Y^2=1, \rho) \]The slider under the diagram can adjust the value of the parameter \(\rho\).
Use the pop-up menu to set Slice to Y. This shows a slice through the joint pdf, initially at \(Y = -1.00\).
The marginal pdf of \(Y\) at \(y=-1\), \(f_Y(-1)\), is the area of this slice. Use the slider to show the slice at other values of \(Y\). Although the diagram does not directly show the shape of this marginal distribution, it should be clear that the slice area, and hence \(f_Y(y)\), is greatest near the mean of \(Y\), \(y = 0\).
Use the pop-up menu to slice by \(X\). The marginal pdf of \(X\) is given by the cross-sectional areas in the opposite direction. Again observe that it is highest at the mean, \(x = 2\).