We now give an example showing how the maximum likelihood estimates of two parameters can be found.

Example: Normal distribution

Consider a random sample, \(\{X_1, \dots, X_n\}\), from a \(\NormalDistn(\mu, \sigma^2)\) distribution. The distribution's probability density function is

\[ f(x) \;\;=\;\; \frac 1{\sqrt{2\pi}\;\sigma} e^{- \frac{\Large (x-\mu)^2}{\Large 2 \sigma^2}} \]

and its logarithm is

\[ \log f(x) \;\;=\;\; -\frac 1 2 \log(\sigma^2) - \frac{(x-\mu)^2}{2 \sigma^2} - \frac 1 2 \log(2\pi) \]

The log-likelihood function is therefore

\[ \ell(\mu, \sigma^2) \;\;=\;\; \sum_{i=1}^n {\log f(x_i)} \;\;=\;\; -\frac n 2 \log(\sigma^2) - \frac{\sum_{i=1}^n {(x_i-\mu)^2}}{2 \sigma^2} - \frac n 2 \log(2\pi) \]
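The log-likelihood is straightforward to evaluate numerically. The following sketch (assuming NumPy and SciPy are available, with a small made-up sample chosen purely for illustration) computes \(\ell(\mu, \sigma^2)\) directly from the formula above and checks it against the sum of log-densities reported by scipy.stats.norm.

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(mu, sigma2, x):
    """Normal log-likelihood l(mu, sigma^2) for data x, using the formula above."""
    n = len(x)
    return (-n / 2 * np.log(sigma2)
            - np.sum((x - mu) ** 2) / (2 * sigma2)
            - n / 2 * np.log(2 * np.pi))

# Illustrative (made-up) sample
x = np.array([4.2, 5.1, 3.8, 6.0, 5.5])

# The direct formula and scipy's log-density sum agree
print(log_likelihood(5.0, 1.5, x))
print(np.sum(norm.logpdf(x, loc=5.0, scale=np.sqrt(1.5))))
```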

To get the maximum likelihood estimates, we therefore solve

\[ \frac{\partial \ell(\mu, \sigma^2)}{\partial \mu} \;\;=\;\; \frac{\sum{(x_i - \mu)}}{\sigma^2} \;\;=\;\; 0 \]

and

\[ \frac{\partial \ell(\mu, \sigma^2)}{\partial \sigma^2} \;\;=\;\; -\frac n {2 \sigma^2} + \frac{\sum_{i=1}^n {(x_i-\mu)^2}}{2 \sigma^4} \;\;=\;\; 0 \]

Solving the first of these equations gives

\[ \sum{(x_i - \mu)} \;=\; \sum{x_i} - n\mu \;=\; 0 \qquad\text{so}\qquad \hat{\mu} \;=\; \overline{x} \]

Substituting this into the second equation gives

\[ -\frac n 2 + \frac{\sum{(x_i-\overline{x})^2}}{2 \sigma^2} \;=\; 0 \qquad\text{so}\qquad \hat{\sigma}^2 \;=\; \frac {\sum{(x_i-\overline{x})^2}} n \]
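These closed-form solutions can be checked numerically. The sketch below (again assuming NumPy and SciPy, with the same made-up sample) minimizes the negative log-likelihood over \((\mu, \sigma^2)\) and compares the result with \(\overline{x}\) and \(\sum{(x_i-\overline{x})^2}/n\).

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, x):
    """Negative normal log-likelihood; theta = (mu, sigma^2)."""
    mu, sigma2 = theta
    n = len(x)
    return (n / 2 * np.log(sigma2)
            + np.sum((x - mu) ** 2) / (2 * sigma2)
            + n / 2 * np.log(2 * np.pi))

x = np.array([4.2, 5.1, 3.8, 6.0, 5.5])   # illustrative data

# Numerical maximum likelihood estimates of (mu, sigma^2)
res = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(x,),
               bounds=[(None, None), (1e-8, None)])
print(res.x)

# Closed-form estimates derived above
print(x.mean(), np.sum((x - x.mean()) ** 2) / len(x))
```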

Note that the maximum likelihood estimator of \(\sigma^2\) is biased. We have already shown that the sample variance, \(S^2\), is unbiased,

\[ E\left[S^2\right] \;=\; E\left[\sum_{i=1}^n {\frac {(X_i - \overline{X})^2} {n-1}}\right] \;=\; \sigma^2 \]

Since \(\hat{\sigma}^2 = \frac{n-1}{n} S^2\), its expected value is \(E\left[\hat{\sigma}^2\right] = \frac{n-1}{n}\sigma^2\), which is slightly less than \(\sigma^2\). Although it is not the maximum likelihood estimator, the sample variance is usually preferred to the maximum likelihood estimator of \(\sigma^2\).
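The bias is easy to see in a small simulation. The sketch below (assumptions: NumPy available, a true \(\sigma^2\) of 4, sample size \(n = 10\), and 100,000 replications chosen purely for illustration) compares the average of the maximum likelihood estimator with the average of \(S^2\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma2 = 10, 0.0, 4.0        # illustrative true values
reps = 100_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
xbar = samples.mean(axis=1, keepdims=True)
ss = np.sum((samples - xbar) ** 2, axis=1)

print((ss / n).mean())        # ML estimator: close to (n-1)/n * sigma2 = 3.6
print((ss / (n - 1)).mean())  # sample variance S^2: close to sigma2 = 4.0
```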