Difference between parameter and estimate
Many hypothesis tests are about a single parameter of the model, with the null hypothesis specifying a particular value for that parameter.
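For a population proportion π, for example, the hypotheses could be written as follows (the H0/HA notation is added here for illustration):

\[
H_0:\ \pi = \pi_0 \qquad \text{against} \qquad H_A:\ \pi \neq \pi_0 .
\]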
It is natural to base a test about such a parameter on the corresponding sample statistic: for a population proportion, this is the sample proportion p.
If the value of the sample statistic is close to the hypothesised value of the parameter, there is no reason to doubt the null hypothesis. However, if they are far apart, the data are not consistent with the null hypothesis and we should conclude that the alternative hypothesis holds.
A large distance between the estimate and the hypothesised value is evidence against the null hypothesis.
Statistical distance
How do we tell what is a large distance between, say, p and a hypothesised value π0 for the population proportion? The empirical rule says that, if the null hypothesis is true, we expect p to be within two standard errors of π0 about 95% of the time. If we measure the distance in standard errors, then 2 standard errors is a large distance, 3 is a very large distance, and 1 is not much.
The number of standard errors is

\[
\frac{p - \pi_0}{\operatorname{se}(p)} .
\]
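As a quick illustration with made-up numbers: if the sample proportion is p = 0.56, the hypothesised value is π0 = 0.50, and se(p) = 0.02, then

\[
\frac{0.56 - 0.50}{0.02} = 3 ,
\]

so the estimate lies 3 standard errors from π0, a very large distance on the scale above.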
In general, the statistical distance from an estimate to a hypothesised value of the underlying parameter is

\[
z = \frac{\text{estimate} - \text{hypothesised value}}{\text{standard error of estimate}} .
\]
If z comes to more than 2 or less than -2, it suggests that the hypothesised value is wrong: the estimate is not consistent with the hypothesised parameter value. If, on the other hand, z is close to zero, the data give a result reasonably close to what we would expect under the hypothesis.
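A minimal sketch of this calculation in Python, using the same illustrative numbers as above and assuming (this is not stated in this section) that the standard error of p is computed under the null hypothesis as sqrt(π0(1 - π0)/n):

import math

# Illustrative (hypothetical) numbers: n observations, sample proportion p,
# hypothesised population proportion pi0.
n = 625
p = 0.56
pi0 = 0.50

# Assumed standard error of p under the null hypothesis: sqrt(pi0 * (1 - pi0) / n).
se = math.sqrt(pi0 * (1 - pi0) / n)

# Statistical distance: how many standard errors the estimate lies from pi0.
z = (p - pi0) / se

# Rough screen from the text: |z| > 2 casts doubt on the hypothesised value.
if abs(z) > 2:
    verdict = "estimate is not consistent with the hypothesised value"
else:
    verdict = "no reason to doubt the hypothesised value"

print(f"z = {z:.2f}: {verdict}")

With these numbers the script reports z = 3.00, matching the hand calculation above and pointing against the hypothesised value.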