We now concentrate on a random sample from a \(\NormalDistn(\mu, \sigma^2)\) distribution and develop hypothesis tests about the distribution's two parameters. Initially we assume that \(\sigma^2\) is a known value and consider how to perform tests about \(\mu\). The test may be one-tailed, such as
\[ H_0: \mu = \mu_0 \qquad \text{against} \qquad H_A: \mu > \mu_0 \]
or two-tailed,
\[ H_0: \mu = \mu_0 \qquad \text{against} \qquad H_A: \mu \ne \mu_0 \]
where \(\mu_0\) is a known constant.

Test statistic

The sample mean, \(\overline{X}\), could be used as a test statistic but, in practice, it is easier to use its standardised version as the test statistic,

\[ Z \;\;=\;\; \frac{\overline{X} - \mu_0}{\diagfrac{\sigma}{\sqrt{n}}} \]

This has a standard normal distribution, \(Z \sim \NormalDistn(0,1)\), if \(H_0\) is true.
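As a concrete illustration, the test statistic can be evaluated directly from the summary values. The numbers below are hypothetical, chosen only to show the arithmetic, not taken from the text:

```python
import math

def z_statistic(xbar, mu0, sigma, n):
    """Standardised test statistic (xbar - mu0) / (sigma / sqrt(n))
    for a test about a normal mean when sigma is known."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Hypothetical data: a sample of n = 25 values with mean 103,
# testing H0: mu = 100 when sigma = 10 is known.
z = z_statistic(xbar=103, mu0=100, sigma=10, n=25)
print(z)  # 1.5
```

Here the standard error is \(\sigma/\sqrt{n} = 10/5 = 2\), so the observed mean lies 1.5 standard errors above \(\mu_0\).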

P-value and interpretation

The p-value for the test is found by comparing the value of the test statistic (evaluated from the data set) to the standard normal distribution. For a one-tailed test, this is one tail area of the distribution, but for a two-tailed test, it is double the smaller tail area since values of \(\overline{X}\) below \(\mu_0\) give the same evidence against \(H_0\) as values above it.
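A minimal sketch of this tail-area calculation, using only the standard library's error function to evaluate the standard normal CDF (the statistic value \(z = 1.5\) is hypothetical):

```python
import math

def phi(z):
    """Standard normal CDF, via the complementary error function:
    Phi(z) = 0.5 * erfc(-z / sqrt(2))."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def p_value(z, alternative="two-sided"):
    """P-value for the Z test; alternative is 'greater', 'less' or 'two-sided'."""
    if alternative == "greater":
        return 1 - phi(z)        # upper tail area
    if alternative == "less":
        return phi(z)            # lower tail area
    # two-tailed: double the smaller tail area
    return 2 * min(phi(z), 1 - phi(z))

print(round(p_value(1.5, "greater"), 4))  # one-tailed, about 0.0668
print(round(p_value(1.5), 4))             # two-tailed, about 0.1336
```

Note that the two-tailed p-value is exactly twice the one-tailed value when, as here, the statistic's null distribution is symmetric about zero.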

The p-value is interpreted in the same way as for all other hypothesis tests. A small value means that a sample mean as far from \(\mu_0\) as the one observed would be unlikely if the null hypothesis were true, and this provides evidence that the alternative hypothesis is true. The diagram below illustrates this for a two-tailed test.