MLE of a rectangular distribution's maximum

Consider a random sample of \(n\) values from a rectangular distribution whose maximum is unknown,

\[ X \;\;\sim\;\; \RectDistn(0, \;\beta) \]

We showed earlier that the maximum likelihood estimate of \(\beta\) is the maximum of the values in the sample,

\[ \hat{\beta} \;\;=\;\; \max(x_1, x_2, \dots, x_n) \]
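For example (a minimal NumPy sketch; the sample size \(n = 20\), true maximum \(\beta = 5\) and random seed are arbitrary illustration choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
beta_true = 5.0    # the true (unknown) maximum -- arbitrary illustration value
n = 20             # sample size -- arbitrary illustration value

# draw X_1, ..., X_n from Rect(0, beta) and take the largest observation
sample = rng.uniform(0.0, beta_true, size=n)
beta_mle = sample.max()

print(f"MLE of beta: {beta_mle:.3f}")   # slightly below beta_true, on average
```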

Distribution of estimator

Writing \(Y = \max(X_1, X_2, \dots, X_n)\) and noting that the sample values are independent, the cumulative distribution function of \(Y\) is

\[ \begin{align} F_Y(y) \;\;&=\;\; P(Y \le y) \\[0.4em] &=\;\; P(X_1 \le y \textbf{ and } X_2 \le y \textbf{ and } \cdots \textbf{ and } X_n \le y) \\[0.4em] &=\;\; P(X_1 \le y) \times P(X_2 \le y) \times \cdots \times P(X_n \le y) \\[0.2em] &=\;\; \left(\frac y{\beta} \right)^n \qquad\text{for } 0 \le y \le \beta \end{align} \]

since the CDF of the rectangular distribution is \(F(x) = \dfrac x{\beta}\) for \(0 \le x \le \beta\). The probability density function of \(Y\) is therefore

\[ f_Y(y) \;=\; F_Y'(y) \;=\; \frac {n\;y^{n-1}}{\beta^n} \qquad\text{for } 0 \le y \le \beta \]
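This distribution can be checked by simulation. The sketch below (again NumPy, with arbitrary choices \(n = 10\), \(\beta = 1\) and 100,000 replicates) compares the empirical CDF of simulated maxima with \((y/\beta)^n\) at a few points:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
beta, n, reps = 1.0, 10, 100_000

# one maximum per replicate: Y = max(X_1, ..., X_n)
maxima = rng.uniform(0.0, beta, size=(reps, n)).max(axis=1)

for y in (0.5, 0.8, 0.95):
    empirical = (maxima <= y).mean()    # P(Y <= y), estimated by simulation
    theory = (y / beta) ** n            # F_Y(y) = (y / beta)^n
    print(f"y = {y:.2f}:  simulated {empirical:.4f},  theory {theory:.4f}")
```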

Distribution of MLE

The diagram below shows the probability density function of the maximum likelihood estimator of \(\beta\).

The distribution becomes more concentrated around the true parameter value as the sample size \(n\) increases.


We showed earlier that the method of moments estimator of \(\beta\) is twice the sample mean. Its distribution is approximately normal (from the Central Limit Theorem), and it is shown in red on the diagram.

Observe that although the maximum likelihood estimator is biased (its mean is less than \(\beta\)), its variance is much lower than that of the moments estimator. As the sample size, \(n\), increases, the advantage of the MLE becomes more pronounced.
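This comparison can also be made by simulation. A minimal sketch (NumPy; the values \(n = 10\), \(\beta = 1\), the seed and the replicate count are arbitrary) estimates the mean and standard deviation of both estimators:

```python
import numpy as np

rng = np.random.default_rng(seed=3)
beta, n, reps = 1.0, 10, 100_000

samples = rng.uniform(0.0, beta, size=(reps, n))
mle = samples.max(axis=1)               # maximum likelihood estimator, max(x_i)
moments = 2.0 * samples.mean(axis=1)    # method of moments estimator, 2 * xbar

for name, est in (("MLE", mle), ("moments", moments)):
    print(f"{name:8s}  mean = {est.mean():.4f},  sd = {est.std():.4f}")

# The MLE's mean falls below beta (it is biased), but its standard
# deviation is far smaller than that of the moments estimator.
```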

Mean and variance

The mean of \(Y\) is

\[ \begin{align} E[Y] \;&=\; \int_0^{\beta} y \times \frac {n\;y^{n-1}}{\beta^n} \;dy \\ &=\; \int_0^{\beta} \frac n{\beta^n}\; y^n \;dy \\ &=\; \frac n{\beta^n} \left[\frac{y^{n+1}}{n+1} \right]_0^{\beta} \\ &=\; \frac {n\;\beta}{n+1} \end{align} \]
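A quick Monte Carlo check of this formula (NumPy; the values \(n = 10\), \(\beta = 1\), seed and replicate count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(seed=4)
beta, n, reps = 1.0, 10, 200_000

maxima = rng.uniform(0.0, beta, size=(reps, n)).max(axis=1)
print(f"simulated E[Y]  : {maxima.mean():.4f}")
print(f"n*beta/(n+1)    : {n * beta / (n + 1):.4f}")   # 0.9091 for n = 10
```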

Its variance can be found from

\[ \begin{align} E[Y^2] \;&=\; \int_0^{\beta} y^2 \times \frac {n\;y^{n-1}}{\beta^n} \;dy \\ &=\; \int_0^{\beta} \frac n{\beta^n}\; y^{n+1} \;dy \\ &=\; \frac n{\beta^n} \left[\frac{y^{n+2}}{n+2} \right]_0^{\beta} \\ &=\; \frac {n\;\beta^2}{n+2} \end{align} \]

Therefore

\[ \begin{align} \Var(Y) \;=\; E[Y^2] - \left(E[Y]\right)^2 \;&=\; \frac {n\;\beta^2}{n+2} - \frac {n^2\;\beta^2}{(n+1)^2} \\ &=\; n\;\beta^2 \frac {(n+1)^2 - n(n+2)}{(n+1)^2(n+2)} \\ &=\; \frac{n\;\beta^2}{(n+1)^2(n+2)} \end{align} \]
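Both moments and the variance can also be verified symbolically. A sketch using SymPy (an assumption; any computer algebra system would do):

```python
import sympy as sp

y, beta = sp.symbols("y beta", positive=True)
n = sp.Symbol("n", positive=True, integer=True)

pdf = n * y**(n - 1) / beta**n                  # density of Y on [0, beta]
EY  = sp.integrate(y * pdf, (y, 0, beta))       # expect n*beta/(n + 1)
EY2 = sp.integrate(y**2 * pdf, (y, 0, beta))    # expect n*beta**2/(n + 2)
var = sp.simplify(EY2 - EY**2)

print(EY)    # beta*n/(n + 1)
print(var)   # expect beta**2*n/((n + 1)**2*(n + 2))
```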

Bias and standard error

From these, we can find the bias of the estimator

\[ \Bias(\hat{\beta}) \;=\; E[\hat{\beta}] - \beta \;=\; -\frac {\beta}{n+1} \]

Its standard error is

\[ \se(\hat{\beta}) \;=\; \sqrt {\Var(\hat{\beta})} \;=\; \beta \sqrt{\frac{n}{(n+1)^2(n+2)}} \]
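As a concrete illustration (the values \(n = 10\), \(\beta = 1\) are arbitrary), both quantities are easy to evaluate:

```python
import math

beta, n = 1.0, 10    # arbitrary illustration values

bias = -beta / (n + 1)
se = beta * math.sqrt(n / ((n + 1)**2 * (n + 2)))

print(f"bias = {bias:.4f}")   # -0.0909
print(f"se   = {se:.4f}")     #  0.0830
```

With \(n = 10\), the downward bias and the standard error are of similar magnitude, and both shrink at rate roughly \(1/n\) as the sample size grows.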