Decisions from tests

Hypothesis tests often result in some action by the researchers that depends on whether we conclude that H0 or HA is true. This decision depends on the data.

Decision    Action
accept H0    some action (often the status quo)   
reject H0    a different action (often a change to a process)   

There are two types of error that can be made, shown in the table below:

                                               Decision
                                        accept H0        reject H0
True state       H0 is true             correct          Type I error
of nature        HA (H0 is false)       Type II error    correct

A good decision rule should have small probabilities for both kinds of error.

Saturated fat content of cooking oil

The clinician who tested the saturated fat content of soybean cooking oil was interested in the following hypotheses.

H0 :   \(\mu = 15\%\)
HA :   \(\mu \gt 15\%\)

If H0 is rejected, the clinician intends to report the high saturated fat content to the media. The two possible errors that could be made are described below.

                                              Decision
                                       accept H0 (do nothing)                reject H0 (contact media)
Truth    H0: μ is really 15%           correct                               wrongly accuses manufacturers
         HA: μ is really over 15%      fails to detect high saturated fat    correct

Ideally the decision should be made in a way that keeps both probabilities low.

Decision rule and significance level

A decision rule's probability of a Type I error is called its significance level. Fixing the significance level at, say, 5% therefore determines the decision rule so that

\[ P(\text{reject }H_0 \mid H_0 \text{ is true}) \;\;=\;\; 0.05 \]

This does not, however, tell you the probability of a Type II error.
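
As a rough sketch of this idea (in Python with scipy, which is not part of the original page), fixing the significance level pins down the critical value of a standardised test statistic, but the Type II error probability cannot be found from the significance level alone:

```python
# Sketch only: a one-sided z-test with significance level 5%.
from scipy.stats import norm

alpha = 0.05
z_crit = norm.isf(alpha)                   # critical value, about 1.645
print(f"critical z-value: {z_crit:.3f}")

# The Type I error probability equals alpha by construction ...
print(f"P(reject H0 | H0 true) = {norm.sf(z_crit):.3f}")
# ... but the Type II error probability depends on the unknown true
# parameter value, so it cannot be computed from alpha alone.
```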

Illustration

Consider a test about the mean of a normal distribution with \(\sigma = 4\), based on a random sample of \(n = 16\) values:

H0 :   \(\mu = 10\)
HA :   \(\mu \gt 10\)

The sample mean will be used as a test statistic since its distribution is known when the null hypothesis holds,

\[ \overline{X} \;\;\sim\;\; \NormalDistn\left(\mu_0, \frac{\sigma}{\sqrt{n}}\right) \;\;=\;\; \NormalDistn(10, 1) \]
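
A quick way to check this distribution is by simulation. The sketch below (Python with numpy, assuming the parameters given in the text) draws many samples of \(n = 16\) values from a normal distribution with mean 10 and standard deviation 4, and confirms that the sample means have mean close to 10 and standard deviation close to \(\sigma/\sqrt{n} = 1\).

```python
# Simulation check of the null distribution of the sample mean
# (parameters from the text: mu0 = 10, sigma = 4, n = 16).
import numpy as np

rng = np.random.default_rng(1)
mu0, sigma, n = 10, 4, 16

sample_means = rng.normal(mu0, sigma, size=(100_000, n)).mean(axis=1)
print(f"mean of sample means: {sample_means.mean():.3f}")  # close to mu0 = 10
print(f"sd of sample means:   {sample_means.std():.3f}")   # close to sigma/sqrt(n) = 1
```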

Large values of \(\overline{X}\) would usually be associated with the alternative hypothesis, so we will consider decision rules of the form

Data                         Decision
\(\overline{x} \lt k\)       accept H0
\(\overline{x} \ge k\)       reject H0

for some value of \(k\), the critical value for the test.
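
For a 5% significance level, \(k\) must leave a tail area of 0.05 above it under the \(\NormalDistn(10, 1)\) null distribution, giving \(k \approx 11.645\). A minimal sketch of this calculation (Python with scipy, assumed rather than taken from the page):

```python
# Find the critical value k for a 5% significance level, assuming
# X-bar ~ Normal(mean 10, sd 1) when H0 is true.
from scipy.stats import norm

mu0, se = 10, 1
k = norm.isf(0.05, loc=mu0, scale=se)      # upper 5% point, about 11.645
print(f"critical value k = {k:.3f}")
print(f"P(X-bar >= k | H0) = {norm.sf(k, loc=mu0, scale=se):.3f}")   # 0.05
```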

The diagram below illustrates the probabilities of Type I and Type II errors for different decision rules — these are the red areas in the upper and lower parts of each pair of normal distributions.

Note how reducing the probability of a Type I error increases the probability of a Type II error; it is impossible to simultaneously make both probabilities small with only \(n = 16\) observations.
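
The same trade-off can be seen numerically. The sketch below (Python with scipy, using the alternative mean \(\mu = 13\) assumed in the diagram) tabulates both error probabilities for a few critical values:

```python
# Trade-off between the two error probabilities for several critical
# values k, with X-bar ~ Normal(10, 1) under H0 and Normal(13, 1)
# under the assumed alternative.
from scipy.stats import norm

mu0, mu1, se = 10, 13, 1
for k in (11.0, 11.645, 12.0, 12.5):
    type1 = norm.sf(k, loc=mu0, scale=se)    # P(reject H0 | H0 true)
    type2 = norm.cdf(k, loc=mu1, scale=se)   # P(accept H0 | mu = 13)
    print(f"k = {k:6.3f}   P(Type I) = {type1:.3f}   P(Type II) = {type2:.3f}")
```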


The above diagram used an alternative hypothesis value of \(\mu = 13\). The alternative hypothesis allows any value of \(\mu > 10\), and the probability of a Type II error decreases as \(\mu\) increases. The diagram below illustrates this for a decision rule with a 5% significance level.

This is as should be expected: the further the true value of \(\mu\) is above 10, the more likely we are to detect from the sample mean that it is higher than 10.
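
A short sketch of this (Python with scipy, using the critical value \(k \approx 11.645\) that gives a 5% significance level) shows the Type II error probability shrinking as the true mean moves further above 10:

```python
# Type II error probability for the 5% decision rule (k about 11.645)
# at several true values of mu under the alternative hypothesis.
from scipy.stats import norm

k, se = 11.645, 1
for mu in (10.5, 11, 12, 13, 14):
    type2 = norm.cdf(k, loc=mu, scale=se)    # P(X-bar < k | true mean mu)
    print(f"mu = {mu:4.1f}   P(Type II error) = {type2:.3f}")
```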

The decision rule affects the probabilities of Type I and Type II errors and there is always a trade-off between these two probabilities.