We now give two further examples of estimators found by the method of moments.
Example: Sex ratio of Siberian tigers
The probability of a newborn tiger being male is an unknown parameter, \(\pi\). Assuming that the sexes of the cubs in a litter are determined independently, the number of males in a litter of size three is
\[ X \;\;\sim\;\; \BinomDistn(3, \pi) \]
A researcher recorded the numbers of males from a sample of \(n = 207\) such litters, as summarised in the following frequency table.
| Number of males | 0 | 1 | 2 | 3 |
|---|---|---|---|---|
| Frequency | 33 | 66 | 80 | 28 |
What is the method of moments estimate of \(\pi\)?
We will denote the number of tiger cubs in a litter (the number of Bernoulli trials for each recorded value) by \(k\) here to distinguish it from the sample size, \(n\), so \(k = 3\) and \(n = 207\). The mean of the binomial distribution with parameters \(k\) and \(\pi\) is
\[ \mu(\pi) \;=\; k\pi \;=\; 3\pi \]
The sample mean is the average of the 207 counts summarised in the table,
\[ \overline{x} \;=\; \frac{0 \times 33 + 1 \times 66 + 2 \times 80 + 3 \times 28}{207} \;=\; \frac{310}{207} \;=\; 1.498 \]
The method of moments estimate of \(\pi\) is therefore found by solving
\[ \mu(\pi) \;=\; \overline{x} \spaced{or} 3\pi \;=\; 1.498 \]
so the estimate is
\[ \hat{\pi} \;=\; \frac{1.498}{3} \;=\; 0.499 \]
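This calculation is easy to reproduce in code. The following sketch (Python is our choice here; the text itself uses no code) recomputes \(\overline{x}\) and \(\hat{\pi}\) from the frequency table:

```python
# Frequency table from the text: number of males in each of 207 litters.
males = [0, 1, 2, 3]
freqs = [33, 66, 80, 28]

n = sum(freqs)                                        # 207 litters
xbar = sum(x * f for x, f in zip(males, freqs)) / n   # sample mean
pi_hat = xbar / 3                                     # solve 3*pi = xbar

print(f"xbar = {xbar:.3f}, pi_hat = {pi_hat:.3f}")    # xbar = 1.498, pi_hat = 0.499
```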
The method of moments estimator of \(\pi\) based on a random sample from a \(\BinomDistn(k, \pi)\) distribution is an unbiased estimator, since
\[ E[\hat{\pi}] \;=\; E\left[\frac{\overline{X}}{k}\right] \;=\; \frac{E[\overline{X}]}{k} \;=\; \frac{k\pi}{k} \;=\; \pi \]
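A quick Monte Carlo check makes the unbiasedness concrete. In this sketch the true \(\pi\), the litter size, the number of litters and the number of replications are arbitrary illustration values, not quantities from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
pi_true, k, n, reps = 0.5, 3, 207, 100_000   # arbitrary illustration values

# reps independent samples, each of n binomial(k, pi) counts
counts = rng.binomial(k, pi_true, size=(reps, n))
pi_hats = counts.mean(axis=1) / k            # method of moments estimate per sample

print(np.mean(pi_hats))   # very close to 0.5, as E[pi_hat] = pi predicts
```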
Method of moments estimators are not, however, always unbiased, as the following example illustrates.

Example: Sample from a geometric distribution
If \(\{X_1, X_2, \dots, X_n\}\) is a random sample from a geometric distribution with probability function
\[ p(x) \;=\; \pi (1-\pi)^{x-1} \qquad \text{for } x = 1, 2, \dots \]
what is the method of moments estimator of \(\pi\)?
The mean of the distribution is \(\mu(\pi) = \displaystyle \frac 1 {\pi}\), so the method of moments estimator is found by solving
\[ \mu(\pi) \;=\; \frac 1 {\pi} \;=\; \overline{X} \]
and the estimator is \(\hat{\pi} = \dfrac 1 {\overline{X}}\).
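As a concrete illustration (a sketch only; the true \(\pi\), the sample size and the seed are arbitrary choices), the estimator can be applied to a simulated geometric sample. NumPy's geometric generator uses the same support \(x = 1, 2, \dots\) as the probability function above:

```python
import numpy as np

rng = np.random.default_rng(1)
pi_true, n = 0.3, 50                    # arbitrary illustration values

x = rng.geometric(pi_true, size=n)      # sample with p(x) = pi * (1 - pi)**(x - 1)
pi_hat = 1 / x.mean()                   # method of moments estimate

print(pi_hat)                           # an estimate of pi_true = 0.3
```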
This estimator is biased: although
\[ \frac 1 {E[\overline{X}]} \;=\; \pi \]
it can be shown that
\[ E\left[\frac 1 {\overline{X}} \right] \;\ne\; \frac 1 {E[\overline{X}]} \]
(We will not prove the inequality here, but it is easy to see why the two sides differ: the average of the values 1 and 3 is 2, yet the average of \(\frac 1 1\) and \(\frac 1 3\) is \(\frac 2 3\), not \(\frac 1 2\).)
This estimator is, however, consistent: its bias and standard error both tend to zero as the sample size increases.
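Both properties can be seen in a small simulation. This sketch (the true \(\pi\) and the simulation sizes are again arbitrary choices) estimates \(E\left[1/\overline{X}\right]\) for several sample sizes; the mean of the estimates sits above \(\pi\) for small \(n\) and approaches it as \(n\) grows:

```python
import numpy as np

rng = np.random.default_rng(2)
pi_true, reps = 0.3, 100_000            # arbitrary illustration values

for n in (5, 20, 100, 1000):
    # reps geometric samples of size n; one estimate 1/xbar per sample
    xbars = rng.geometric(pi_true, size=(reps, n)).mean(axis=1)
    print(n, np.mean(1 / xbars))        # bias above 0.3 shrinks as n grows
```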