Consider two partitions of the sample space, \(\{A_i, i=1,\dots, n_A\}\) and \(\{B, \overline{B}\}\). The probabilities for all possible events can be described in three different ways:

The joint probabilities
\(P(A_i \textbf{ and } B)\) and \(P(A_i \textbf{ and } \overline{B})\) for all \(i\)
The marginal probabilities for the \(A_i\) and the conditional probabilities for \(B\)
\(P(A_i)\), \(P(B \mid A_i)\) and \(P(\overline{B} \mid A_i)\) for all \(i\)
The marginal probabilities for \(B\) and the conditional probabilities for the \(A_i\)
\(P(B)\), \(P(\overline{B})\), \(P(A_i \mid B)\) and \(P(A_i \mid \overline{B})\) for all \(i\)

We have already shown how to find marginal and conditional probabilities from the joint probabilities, and how to find joint probabilities from marginal and conditional ones. The following theorem gives a formula that helps find one set of conditional probabilities from the other.
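For reference, the relationships referred to here are, for any \(A_i\) in the partition,

\[ P(A_i) = P(A_i \textbf{ and } B) + P(A_i \textbf{ and } \overline{B}), \qquad P(B \mid A_i) = \frac {P(A_i \textbf{ and } B)} {P(A_i)}, \qquad P(A_i \textbf{ and } B) = P(A_i) \times P(B \mid A_i) \]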

Bayes' Theorem

If \(\{A_1, \dots, A_k\}\) is a partition of the sample space, then

\[ P(A_j \mid B)   =  \frac {P(A_j) \times P(B \mid A_j) } {\sum_{i=1}^{k} {P(A_i) \times P(B \mid A_i) } } \]

(Proved in full version)

In actual examples, it is usually easiest to work out the probability from first principles:

\[ P(A_j \mid B)   =  \frac {P(B \textbf{ and } A_j) } {P(B) } \]

The numerator can be found from the definition of conditional probability

\[ P(B \textbf{ and } A_j)   =  P(A_j) \times P(B \mid A_j) \]

and the denominator can be evaluated using the law of total probability.
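In this notation, the law of total probability states that

\[ P(B)   =  \sum_{i=1}^{k} {P(A_i) \times P(B \mid A_i) } \]

Substituting the numerator and denominator into the ratio above recovers the formula in Bayes' Theorem.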

Example

Medical diagnostic tests for a disease are rarely 100% accurate. There are two types of error: a false negative, in which the test is negative for someone who has the disease, and a false positive, in which the test is positive for someone who does not have the disease.

Consider a diagnostic test with conditional probabilities

\[ P(\text{negative} \mid \text{disease}) = 0.05 \quad\quad\quad P(\text{positive} \mid \text{no disease}) = 0.10 \]

If 10% of people who are given the test have the disease,

\[ P(\text{disease}) = 0.10 \]

what is the probability that someone with a positive test result actually has the disease?

(Solved in full version)
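As a sketch of the working (the full version gives the complete solution), the first-principles approach above uses \(P(\text{positive} \mid \text{disease}) = 1 - 0.05 = 0.95\) and \(P(\text{no disease}) = 1 - 0.10 = 0.90\), so

\[ P(\text{positive} \textbf{ and } \text{disease}) = 0.10 \times 0.95 = 0.095 \]

\[ P(\text{positive}) = 0.10 \times 0.95 + 0.90 \times 0.10 = 0.185 \]

\[ P(\text{disease} \mid \text{positive}) = \frac {0.095} {0.185} \approx 0.51 \]

A positive test result therefore still leaves only about an even chance that the person has the disease.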