Maximum likelihood estimators have the same asymptotic properties whether the underlying distribution is continuous or discrete. We restate these properties here in a slightly abbreviated form that is not mathematically rigorous.

Bias

The maximum likelihood estimator, \(\hat {\theta} \), of a parameter, \(\theta\), that is based on a random sample of size \(n\) is asymptotically unbiased,

\[ E[\hat {\theta}] \;\; \xrightarrow[n \rightarrow \infty]{} \;\; \theta \]
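As a minimal simulation sketch of this property (the exponential example, rate \(\lambda = 2\), and all names below are illustrative assumptions, not from the text): the MLE of an exponential rate is \(\hat{\lambda} = 1/\bar{x}\), which is biased for small \(n\) but approaches \(\lambda\) as \(n\) grows.

```python
import random
import statistics

# Illustrative sketch: asymptotic unbiasedness of the exponential-rate
# MLE, lambda_hat = 1 / xbar, for the true rate lambda = 2.
random.seed(1)

def mle_mean(n, reps=5000, rate=2.0):
    """Average of lambda_hat over many simulated samples of size n."""
    estimates = []
    for _ in range(reps):
        sample = [random.expovariate(rate) for _ in range(n)]
        estimates.append(1.0 / statistics.fmean(sample))
    return statistics.fmean(estimates)

for n in (10, 100, 1000):
    # The average estimate moves toward the true rate, 2, as n increases.
    print(n, round(mle_mean(n), 3))
```

For the exponential distribution the bias can be found exactly, \(E[\hat{\lambda}] = n\lambda/(n-1)\), so the simulated averages should shrink toward \(\lambda\) at rate \(1/n\).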

Asymptotic normality

The maximum likelihood estimator, \(\hat {\theta} \), of a parameter, \(\theta\), that is based on a random sample of size \(n\) asymptotically has a normal distribution,

\[ \hat {\theta} \;\; \xrightarrow[n \rightarrow \infty]{} \;\; \text{a normal distribution} \]
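A quick simulation check of approximate normality (again an illustrative sketch using the assumed exponential-rate example, not part of the text): for a normal distribution, about 68.3% of values fall within one standard deviation of the mean, and replicated MLEs should come close to this for large \(n\).

```python
import random
import statistics

# Illustrative sketch: approximate normality of the exponential-rate MLE.
# Generate many estimates, then compare the proportion within one
# standard deviation of their mean to the normal value, 0.683.
random.seed(2)

def within_one_sd(n, reps=5000, rate=2.0):
    estimates = [1.0 / statistics.fmean(random.expovariate(rate) for _ in range(n))
                 for _ in range(reps)]
    m = statistics.fmean(estimates)
    s = statistics.stdev(estimates)
    return sum(abs(e - m) < s for e in estimates) / reps

print(within_one_sd(200))   # should be close to 0.683
```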

Approximate standard error

If \(\hat {\theta} \) is the maximum likelihood estimator of a parameter \(\theta\) based on a large random sample, its standard error can be approximated from the second derivative of the log-likelihood, \(\ell(\theta)\), evaluated at the estimate:

\[ \operatorname{se}(\hat {\theta}) \;\;\approx\;\; \sqrt {- \frac 1 {\ell''(\hat {\theta})}} \]

From these, we can find the approximate bias (zero) and standard error of most maximum likelihood estimators based on large random samples.
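The standard error formula can be checked numerically. The sketch below (an illustration under assumed names, using the exponential-rate example) approximates \(\ell''(\hat{\theta})\) by a finite difference; for an exponential sample, \(\ell(\lambda) = n\log\lambda - \lambda\sum x_i\) gives \(\ell''(\lambda) = -n/\lambda^2\), so the formula reduces to the exact value \(\hat{\lambda}/\sqrt{n}\) for comparison.

```python
import math
import random
import statistics

# Illustrative sketch: approximate se(lambda_hat) from the curvature of
# the log-likelihood for an exponential sample, where the closed form
# is lambda_hat / sqrt(n) because ell''(lambda) = -n / lambda**2.
random.seed(3)
n = 100
data = [random.expovariate(2.0) for _ in range(n)]
lam_hat = 1.0 / statistics.fmean(data)   # MLE of the exponential rate

def loglik(lam):
    """Exponential log-likelihood (additive constants omitted)."""
    return n * math.log(lam) - lam * sum(data)

# Second derivative at the MLE via a central finite difference.
h = 1e-4
curvature = (loglik(lam_hat + h) - 2 * loglik(lam_hat) + loglik(lam_hat - h)) / h**2

se_numeric = math.sqrt(-1.0 / curvature)
se_exact = lam_hat / math.sqrt(n)
print(se_numeric, se_exact)   # the two agree closely
```

The same finite-difference recipe works when \(\ell''\) has no convenient closed form, which is the usual situation in practice.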