Many continuous distributions have one or more parameters whose values are unknown. An unknown parameter, \(\theta\), is often estimated from a random sample of \(n\) values from the distribution,
\[ \hat{\theta} \;\; =\;\; \hat{\theta}(X_1, X_2, \dots, X_n) \]
As when estimating the parameters of discrete distributions, the concepts of bias and standard error are important for distinguishing a good estimator from a bad one. The definitions of these quantities are the same for discrete and continuous distributions; we repeat them here.
Bias
The bias of an estimator \(\hat{\theta}\) of a parameter \(\theta\) is
\[ \Bias(\hat{\theta}) \;=\; E\big[\hat{\theta}\big] - \theta \]
If its bias is zero, \(\hat{\theta}\) is called an unbiased estimator of \(\theta\).
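To make the definition concrete, the following minimal sketch (our own illustration, assuming NumPy is available; the Normal model and all parameter values are chosen purely for the example) approximates by simulation the bias of the maximum-likelihood variance estimator \(\hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2\), whose exact bias is \(-\sigma^2/n\).

```python
import numpy as np

rng = np.random.default_rng(seed=1)
mu, sigma, n, reps = 0.0, 2.0, 10, 100_000   # illustrative values only

# Draw `reps` independent samples of size n and apply the estimator to each.
samples = rng.normal(mu, sigma, size=(reps, n))
sigma2_hat = samples.var(axis=1)   # np.var divides by n, i.e. the ML estimator

# Bias = E[theta_hat] - theta, approximated by averaging over replications.
print(f"simulated bias: {sigma2_hat.mean() - sigma**2:+.4f}")  # near -sigma^2/n = -0.4
```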
Standard error
The standard error of an estimator \(\hat{\theta}\) is its standard deviation.
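A similar sketch (again an illustration of ours, not part of the definition) estimates the standard error of the sample mean of \(n\) values from an exponential distribution with rate \(\lambda\); theory gives \(\frac{1}{\lambda\sqrt{n}}\), since the exponential standard deviation is \(1/\lambda\).

```python
import numpy as np

rng = np.random.default_rng(seed=2)
lam, n, reps = 0.5, 25, 100_000   # illustrative values only

# Sample mean of n Exponential(rate = lam) values, replicated many times;
# the spread of these replicated means estimates the standard error.
means = rng.exponential(scale=1/lam, size=(reps, n)).mean(axis=1)

print(f"simulated standard error: {means.std(ddof=1):.4f}")
print(f"theoretical sd/sqrt(n):   {1/(lam*np.sqrt(n)):.4f}")   # 1/(0.5*5) = 0.4
```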
Bias and standard error can again be combined into a single value.
Mean squared error
The mean squared error of an estimator \(\hat{\theta}\) of a parameter \(\theta\) is
\[ \MSE(\hat{\theta})\; =\; E\left[ (\hat{\theta} - \theta)^2 \right] \;=\; \Var(\hat{\theta}) + \Bias(\hat{\theta})^2 \]
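The decomposition of the mean squared error into variance and squared bias can be checked numerically. The sketch below (reusing the illustrative Normal variance example from above, assuming NumPy) computes both sides from the same set of simulated samples.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
mu, sigma, n, reps = 0.0, 2.0, 10, 200_000   # illustrative values only
theta = sigma**2                             # the parameter being estimated

sigma2_hat = rng.normal(mu, sigma, size=(reps, n)).var(axis=1)

mse_direct = np.mean((sigma2_hat - theta) ** 2)   # E[(theta_hat - theta)^2]
var_plus_bias2 = sigma2_hat.var() + (sigma2_hat.mean() - theta) ** 2

print(f"direct MSE:   {mse_direct:.4f}")
print(f"Var + Bias^2: {var_plus_bias2:.4f}")  # the two printed values agree
```

A further characteristic of estimators also applies to continuous distributions.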
Consistency
An estimator \(\hat{\theta}(X_1, X_2, \dots, X_n)\) is a consistent estimator of \(\theta\) if
\[ \begin{align} \Var(\hat{\theta}) \;\; &\xrightarrow[n \rightarrow \infty]{} \;\; 0 \\[0.5em] \Bias(\hat{\theta}) \;\; &\xrightarrow[n \rightarrow \infty]{} \;\; 0 \end{align} \]
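As a final illustration (ours, with a Uniform(0, 1) model chosen only for the example), the sample mean satisfies both conditions: its bias is zero and its variance \(\frac{1}{12n}\) shrinks to zero as \(n\) grows, so it is a consistent estimator of the distribution's mean.

```python
import numpy as np

rng = np.random.default_rng(seed=4)
theta, reps = 0.5, 10_000   # true mean of Uniform(0, 1); illustrative values

# As n grows, both the bias and the variance of the sample mean shrink,
# matching the two conditions in the definition of consistency.
for n in (10, 100, 1000):
    means = rng.uniform(0.0, 1.0, size=(reps, n)).mean(axis=1)
    print(f"n={n:5d}  bias ~ {means.mean() - theta:+.5f}  variance ~ {means.var():.6f}")
```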