Significance level
The decision rule affects the probabilities of both Type I and Type II errors, and there is always a trade-off between these two probabilities: selecting a critical value that reduces one error probability increases the other.
In practice, we usually concentrate on the probability of a Type I error. The decision rule is chosen to make the probability of a Type I error equal to a pre-chosen value, often 5% or 1%. This probability is called the significance level of the test and its choice should depend on the type of problem. The worse the consequence of incorrectly rejecting H0, the lower the significance level that should be used.
If the significance level of the test is set to 5% and we decide to reject H0 then we say that H0 is rejected at the 5% significance level.
Reducing the significance level of the test increases the probability of a Type II error.
In many applications the significance level is set at 5%.
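This trade-off can be made concrete with a short sketch (assuming scipy is available). It uses the same one-sided test as the illustration below, H0: μ = 10 with n = 16 and known σ = 4, so the standard error of the sample mean is 1; the true mean of 12 is a hypothetical value chosen purely to illustrate the Type II error probability.

```python
from scipy.stats import norm

# Settings from the illustration: H0: mu = 10, n = 16, sigma = 4,
# so the standard error of the sample mean is sigma / sqrt(n) = 1.
mu0, se = 10.0, 1.0
mu_true = 12.0  # hypothetical true mean under HA (an assumption for illustration)

for alpha in (0.05, 0.01):
    # Critical value: reject H0 when the sample mean exceeds k.
    k = mu0 + norm.ppf(1 - alpha) * se
    # P(Type II error) = P(sample mean <= k) when the true mean is mu_true.
    beta = norm.cdf(k, loc=mu_true, scale=se)
    print(f"alpha = {alpha:.2f}: k = {k:.2f}, P(Type II error) = {beta:.2f}")
```

Lowering the significance level from 5% to 1% raises the critical value from about 11.64 to about 12.33, and the probability of a Type II error (for this hypothetical alternative) rises from about 0.36 to about 0.63.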
Illustration
The diagram below is identical to the one on the previous page.
With the top slider, adjust k to make the probability of a Type I error as close as possible to 5%. This is the decision rule for a test with significance level 5%.
From the normal distribution, the appropriate value of k for a test with 5% significance level is 11.64.
Drag the top slider to reduce the significance level to 1% and note that the critical value for the test increases to about k = 12.3.
P-values and decisions
The critical value for a hypothesis test about a population mean (known standard deviation) at any significance level (e.g. 5% or 1%) can be obtained from the quantiles of a normal distribution. For other hypothesis tests, similar critical values can be found from quantiles of the relevant test statistic's distribution.
For example, when testing the mean of a normal population when the population standard deviation is unknown, the test statistic is a t-value and its critical values are quantiles of a t distribution.
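As a minimal sketch (assuming scipy), the two kinds of critical value can be compared for a one-sided 5% test with a sample of n = 16:

```python
from scipy.stats import norm, t

alpha, n = 0.05, 16

# One-sided critical value of the z statistic (sigma known):
z_crit = norm.ppf(1 - alpha)
# One-sided critical value of the t statistic (sigma unknown, n - 1 df):
t_crit = t.ppf(1 - alpha, df=n - 1)

print(f"z critical value: {z_crit:.3f}")  # 1.645
print(f"t critical value: {t_crit:.3f}")  # 1.753
```

The t critical value is slightly larger than the normal one, reflecting the extra uncertainty from estimating the standard deviation.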
It would seem that different methodology is needed to find decision rules for different types of hypothesis test, but this is only partially true. Although some of the underlying theory depends on the type of test, the decision rule for any test can be based on its p-value. For example, for a test with significance level 5%, the decision rule is always:
| p-value | Decision |
|---|---|
| p-value > 0.05 | accept H0 |
| p-value < 0.05 | reject H0 |
For a test with significance level 1%, the null hypothesis, H0, should be rejected if the p-value is less than 0.01.
If computer software provides the p-value for a hypothesis test, it is therefore easy to translate it into a decision to accept or reject the null hypothesis at the 5% or 1% significance level.
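The decision rule itself is a one-line translation; a small helper function (illustrative, not from the text) makes this explicit:

```python
def decide(p_value, significance_level=0.05):
    """Translate a p-value into a decision at the given significance level."""
    return "reject H0" if p_value < significance_level else "accept H0"

# The same p-value can lead to different decisions at different levels.
print(decide(0.032))        # reject H0 at the 5% level
print(decide(0.032, 0.01))  # accept H0 at the 1% level
```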
Illustration
The following diagram again investigates decision rules for testing the hypotheses
H0 : μ = 10
HA : μ > 10
based on a sample of n = 16 values from a normal population with known standard deviation σ = 4.
In the diagram, the decision rule is based on the p-value for the test. Use the slider to adjust the critical p-value and observe that the significance level (probability of Type I error) is always equal to the p-value used in the decision rule. Adjust the critical p-value to 0.01.
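The claim that the significance level always equals the critical p-value can also be checked by simulation, here a rough sketch assuming numpy and scipy. Samples are drawn with H0 true, so every rejection is a Type I error, and the rejection rate should match the critical p-value of 0.01.

```python
import numpy as np
from scipy.stats import norm

# Simulate the test from the diagram: H0: mu = 10, n = 16, sigma = 4,
# drawing all samples under H0 so every rejection is a Type I error.
rng = np.random.default_rng(1)
n_sims, n, sigma, mu0, alpha = 100_000, 16, 4.0, 10.0, 0.01

means = rng.normal(mu0, sigma, size=(n_sims, n)).mean(axis=1)
z = (means - mu0) / (sigma / np.sqrt(n))
p_values = norm.sf(z)  # one-sided p-value for HA: mu > 10

rejection_rate = np.mean(p_values < alpha)
print(f"Proportion of Type I errors: {rejection_rate:.3f}")
```

The printed proportion should be close to 0.01, as the diagram shows: the probability of a Type I error equals the critical p-value used in the decision rule.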
Although the probability of a Type II error, shown in the bottom row of the diagram, varies depending on the type of test, the top row (the probability of a Type I error) is the same for all kinds of hypothesis test.