From a random sample of \(n\) values from a rectangular distribution,
\[ X \;\;\sim\;\; \RectDistn(0, \;\beta) \]the maximum likelihood estimate of \(\beta\) is the maximum of the values in the sample,
\[ \hat{\beta} \;\;=\;\; \max(x_1, x_2, \dots, x_n) \]
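As a concrete illustration, here is a minimal sketch assuming NumPy; the true \(\beta = 10\) and sample size \(n = 20\) are arbitrary choices, known here only because the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 10.0                          # true value, known only because we simulate
x = rng.uniform(0.0, beta, size=20)  # a random sample of n = 20 values

beta_hat = x.max()                   # maximum likelihood estimate: the sample maximum
print(beta_hat)                      # always a little below the true beta
```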
Distribution of estimator
Writing \(Y = \max(X_1, X_2, \dots, X_n)\), its CDF is
\[ \begin{align} F_Y(y) \;\;&=\;\; P(Y \le y) \\[0.4em] &=\;\; P(X_1 \le y \textbf{ and } X_2 \le y \textbf{ and } \cdots \textbf{ and } X_n \le y) \\[0.4em] &=\;\; P(X_1 \le y) \times P(X_2 \le y) \times \cdots \times P(X_n \le y) \\[0.2em] &=\;\; \left(\frac y{\beta} \right)^n \qquad\text{for } 0 \le y \le \beta \end{align} \]The third line follows from the independence of the \(X_i\), and the last line follows since each \(X_i\) has the rectangular CDF, \(P(X_i \le y) = \frac y{\beta}\).
Its pdf is therefore
\[ f_Y(y) \;=\; F_Y'(y) \;=\; \frac {n\;y^{n-1}}{\beta^n} \qquad\text{for } 0 \le y \le \beta \]
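The algebra can be spot-checked by simulation. The sketch below assumes NumPy, with arbitrary \(\beta = 10\), \(n = 5\) and check points; the empirical CDF of simulated maxima should match \((y/\beta)^n\).

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n, reps = 10.0, 5, 100_000

# Each row is one sample of size n; the row maximum is one draw of Y
y = rng.uniform(0.0, beta, size=(reps, n)).max(axis=1)

for q in (4.0, 6.0, 8.0):
    print((y <= q).mean(), (q / beta) ** n)  # empirical vs theoretical CDF
```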
The pdf of this estimator is shown below, together with the pdf of the method of moments estimator, which is twice the sample mean.
Note that the method of moments estimator is unbiased, whereas the maximum likelihood estimator is biased: it is always less than \(\beta\). However, the method of moments estimator is far more variable; its standard error is much higher.
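Both observations are easy to reproduce by simulation. Again this is a sketch assuming NumPy, with arbitrary \(\beta = 10\) and \(n = 20\).

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n, reps = 10.0, 20, 100_000

samples = rng.uniform(0.0, beta, size=(reps, n))
mle = samples.max(axis=1)         # maximum likelihood estimator
mom = 2.0 * samples.mean(axis=1)  # method of moments estimator

print(mle.mean(), mom.mean())  # MoM centres on beta; MLE sits below it
print(mle.std(), mom.std())    # MoM spread is markedly larger
```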
Mean, variance, bias and standard error
It can be shown that the mean of \(Y\) is
\[ E[Y] \;=\; \frac {n\;\beta}{n+1} \]
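The "can be shown" step is a single integral of \(y\) against the pdf derived above:
\[ E[Y] \;=\; \int_0^\beta y \;\frac {n\;y^{n-1}}{\beta^n} \,dy \;=\; \frac n{\beta^n} \int_0^\beta y^n \,dy \;=\; \frac n{\beta^n} \cdot \frac {\beta^{n+1}}{n+1} \;=\; \frac {n\;\beta}{n+1} \]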
The maximum likelihood estimator is therefore biased,
\[ \Bias(\hat{\beta}) \;=\; E[\hat{\beta}] - \beta \;=\; -\frac {\beta}{n+1} \]though the bias tends to zero as \(n \to \infty\).
The second moment of \(Y\) is
\[ E[Y^2] \;=\; \frac {n\;\beta^2}{n+2} \]
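This comes from the same kind of integral:
\[ E[Y^2] \;=\; \int_0^\beta y^2 \;\frac {n\;y^{n-1}}{\beta^n} \,dy \;=\; \frac n{\beta^n} \cdot \frac {\beta^{n+2}}{n+2} \;=\; \frac {n\;\beta^2}{n+2} \]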
so its variance is
\[ \Var(Y) \;=\; E[Y^2] - \left(E[Y]\right)^2 \;=\; \frac{n\;\beta^2}{(n+1)^2(n+2)} \]
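Written out, the subtraction simplifies because \((n+1)^2 - n(n+2) = 1\):
\[ \Var(Y) \;=\; \frac {n\;\beta^2}{n+2} - \left(\frac {n\;\beta}{n+1}\right)^2 \;=\; n\;\beta^2 \; \frac{(n+1)^2 - n(n+2)}{(n+1)^2(n+2)} \;=\; \frac{n\;\beta^2}{(n+1)^2(n+2)} \]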
The standard error of the maximum likelihood estimator is therefore
\[ \se(\hat{\beta}) \;=\; \sqrt {\Var(\hat{\beta})} \;=\; \beta \sqrt{\frac{n}{(n+1)^2(n+2)}} \]
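As a final sanity check, the same kind of hypothetical NumPy simulation (again with arbitrary \(\beta = 10\) and \(n = 20\)) should reproduce both the bias and the standard error formulas closely.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, n, reps = 10.0, 20, 100_000

# Sampling distribution of the MLE: the maximum of each simulated sample
beta_hat = rng.uniform(0.0, beta, size=(reps, n)).max(axis=1)

print(beta_hat.mean() - beta, -beta / (n + 1))  # simulated vs theoretical bias
print(beta_hat.std(), beta * np.sqrt(n / ((n + 1) ** 2 * (n + 2))))  # simulated vs theoretical s.e.
```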