Consistency

When an estimator is based on a random sample of \(n\) values, \(\hat{\theta}(X_1, X_2, \dots, X_n)\), we usually expect it to get closer to the parameter being estimated, \(\theta\), as the sample size increases.

Definition

An estimator \(\hat{\theta}(X_1, X_2, \dots, X_n)\) that is based on a random sample of \(n\) values is said to be a consistent estimator of \(\theta\) if

\[ \hat{\theta}(X_1, X_2, \dots, X_n) \;\; \xrightarrow[n \rightarrow \infty]{} \;\; \theta \]

A precise definition of consistency spells out exactly what this limit means, but it is relatively complex. The underlying idea is that the distribution of \(\hat{\theta}\) becomes more and more concentrated around \(\theta\) as \(n\) increases.
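A small simulation sketch (not part of the original text) can illustrate this concentration. Here the estimator is assumed to be the sample mean of normally distributed values, and the chosen population parameters and sample sizes are arbitrary; the spread of the estimates shrinks as \(n\) grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# True parameter: the mean of a Normal(mu = 5, sigma = 2) population (assumed values).
mu, sigma = 5.0, 2.0

# For each sample size, compute the sample mean in 5,000 simulated samples
# and report the spread of these estimates around the true value.
for n in [10, 100, 1000, 10000]:
    estimates = rng.normal(mu, sigma, size=(5000, n)).mean(axis=1)
    print(f"n = {n:5d}: standard deviation of estimates = {estimates.std():.4f}")
```

As \(n\) increases, the printed standard deviations shrink towards zero, showing the sampling distribution of \(\hat{\theta}\) piling up around \(\theta\).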

Consistency can usually be proved using the following theorem, which is stated here without proof.

Consistent estimators

An estimator \(\hat{\theta}(X_1, X_2, \dots, X_n)\) is a consistent estimator of \(\theta\) if the following two conditions hold:

\[ \begin{align} \Var(\hat{\theta}) \;\; &\xrightarrow[n \rightarrow \infty]{} \;\; 0 \\[0.5em] \text{Bias}(\hat{\theta}) \;\; &\xrightarrow[n \rightarrow \infty]{} \;\; 0 \end{align} \]
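
For example, consider estimating a population mean \(\mu\) with the sample mean \(\bar{X}\) of a random sample from a distribution with finite variance \(\sigma^2\):

\[ \begin{align} \text{Bias}(\bar{X}) \;&=\; E(\bar{X}) - \mu \;=\; 0 \\[0.5em] \Var(\bar{X}) \;&=\; \frac{\sigma^2}{n} \;\; \xrightarrow[n \rightarrow \infty]{} \;\; 0 \end{align} \]

Both conditions hold, so \(\bar{X}\) is a consistent estimator of \(\mu\).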