Maximum likelihood estimates can usually be found as turning points of the likelihood function (or equivalently the log-likelihood function) — i.e. by solving \(\ell'(\theta) = 0\).

However, there are a few distributions for which this method does not work. Estimation of the maximum of a \(\RectDistn(0, \beta)\) distribution (the "German Tank Problem") is one example.

Rectangular distribution

The following six values,

0.12   0.32   0.36   0.51   0.63   0.69

are a random sample from a rectangular distribution whose minimum is zero but whose maximum is unknown.

\[ X \;\; \sim \; \; \RectDistn(0, \;\beta) \]

This distribution has probability density function

\[ f(x\;|\; \beta) = \begin{cases}\dfrac 1 {\beta} &\text{for } 0 \le x \le \beta \\[0.4em] 0 &\text{otherwise} \end{cases} \]

Its likelihood is

\[ L(\beta) \;\;=\;\; \prod_{i=1}^6 {f(x_i \;|\; \beta)} \;\;=\;\; \begin{cases} \left(\dfrac 1 {\beta}\right)^6 &\text{for } \beta \ge \max(x_1, \dots, x_6) \\[0.4em] 0 &\text{otherwise} \end{cases} \]

(Since it is impossible to get data values greater than \(\beta\), any such data values have zero probability, making the likelihood zero too.)
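This piecewise likelihood is easy to evaluate directly. The sketch below (in Python, using the six data values from the text) computes \(L(\beta)\) and shows that it is zero below the largest observation and greatest exactly at it:

```python
# Likelihood of beta for a sample from a RectDistn(0, beta) distribution.
# L(beta) = (1/beta)^n when beta >= max(data), and 0 otherwise.
data = [0.12, 0.32, 0.36, 0.51, 0.63, 0.69]

def likelihood(beta, data):
    """Likelihood of the rectangular (uniform) distribution on [0, beta]."""
    if beta < max(data):
        return 0.0          # a data value above beta would be impossible
    return (1.0 / beta) ** len(data)

# The likelihood jumps from 0 to its maximum at beta = max(data) = 0.69,
# then decreases steadily as beta grows.
print(likelihood(0.68, data))   # 0.0 (beta is below the largest observation)
print(likelihood(0.69, data))   # (1/0.69)^6, the maximum
print(likelihood(1.00, data))   # positive, but smaller than at beta = 0.69
```

Because the function jumps at \(\max(x_1, \dots, x_6)\) and decreases thereafter, no calculus is needed to locate its maximum.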

The diagram below illustrates this. The bottom half shows the rectangular pdf with a slider that can be used to adjust the value of the parameter \(\beta\). The six data values are marked by vertical red lines.

The heights of the red bars are \( f(x \;|\; \beta) = \frac 1 {\beta}\) at the data values and the likelihood is the product of these heights, as shown in the top half of the diagram. Drag the slider to see how the likelihood depends on the heights of these bars. Observe that the likelihood has a discontinuity at its maximum, which arises when \(\beta\) is the maximum of the data values. (When \(\beta\) is less than any of the data values, they have zero probability, making the likelihood zero.)

The maximum likelihood estimator is therefore

\[ \hat{\beta} \;\; = \;\; \max(x_1, \dots, x_6) \]

Observe that the maximum likelihood estimate is at a discontinuity in the likelihood function, not at a turning point. In this example, the maximum likelihood estimator cannot therefore be found by solving \(\ell'(\beta) = 0\).

Bias and standard error

Since the maximum likelihood estimate of \(\beta\) is at a discontinuity of the likelihood function, the second derivative of the log-likelihood function is undefined there and cannot be used to obtain an approximate value for the estimator's standard error.

It is however possible to find the distribution of the sample maximum of a rectangular distribution from first principles. We will derive this distribution later, but simply give formulae for its mean and standard error here, based on a sample of size \(n\).

\[ E\left[\hat{\beta}\right] \;=\; \frac n {n+1} \beta \spaced{and} \se\left(\hat{\beta}\right) \;=\; \sqrt {\frac n {(n+1)^2(n+2)}}\times \beta \]

The estimator is therefore biased, with

\[ \Bias(\hat{\beta}) \;=\; E\left[\hat{\beta}\right] - \beta \;=\; \frac n {n+1} \beta - \beta \;=\; -\frac 1 {n+1} \beta \]

It is however consistent since its bias and standard error both tend to zero as \(n \to \infty\).
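The mean and standard error formulae can be checked by simulation. The sketch below (assuming \(\beta = 1\) and \(n = 6\), matching the sample size above) repeatedly draws samples, records the sample maximum, and compares its average and standard deviation with the theoretical values:

```python
# Monte Carlo check of the mean and standard error formulae for the sample
# maximum of a RectDistn(0, beta) distribution, assuming beta = 1 and n = 6.
import math
import random

random.seed(1)
beta, n, reps = 1.0, 6, 200_000

# Simulate the maximum likelihood estimator (the sample maximum) many times.
estimates = [max(random.uniform(0, beta) for _ in range(n)) for _ in range(reps)]

mean_hat = sum(estimates) / reps
se_hat = math.sqrt(sum((e - mean_hat) ** 2 for e in estimates) / reps)

# Theoretical values from the formulae above.
mean_theory = n / (n + 1) * beta                             # 6/7, about 0.857
se_theory = math.sqrt(n / ((n + 1) ** 2 * (n + 2))) * beta   # about 0.124

print(round(mean_hat, 3), round(mean_theory, 3))
print(round(se_hat, 3), round(se_theory, 3))
```

Increasing `n` in the simulation shows both the bias and the standard error shrinking towards zero, illustrating the estimator's consistency.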