Two independent repetitions of a random experiment

One important situation that leads to independent random variables is when some random experiment is repeated twice in essentially the same way.

Examples

In these examples, not only can the two variables be assumed to be independent, but it is reasonable to assume that both have the same distribution. This allows us to dispense with the subscripts on their probability functions:

\[ p_X(\cdot) \;=\; p_Y(\cdot) \;=\; p(\cdot) \]

Random sample

If this is extended to \(n\) independent repetitions of a random experiment, we get \(n\) independent, identically distributed random variables. This is often abbreviated to \(n\) iid random variables, and the collection is also called a random sample from the distribution with probability function \(p(x)\).

Definition

A random sample of \(n\) values from a distribution is a collection of \(n\) independent random variables, each of which has this distribution.
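The definition can be illustrated with a short simulation. The sketch below draws a random sample of \(n\) independent values from a discrete distribution using Python's `random.choices`; the probability function `p` is an illustrative assumption, not taken from the text.

```python
import random

# Illustrative probability function p(x) (these values are assumptions).
p = {0: 0.2, 1: 0.5, 2: 0.3}

def random_sample(n, p, rng=random):
    """Draw n independent values, each with probability function p."""
    values = list(p.keys())
    weights = list(p.values())
    # choices() makes n independent draws, so the result is a random sample.
    return rng.choices(values, weights=weights, k=n)

sample = random_sample(5, p)
```

Because each draw is made independently with the same weights, the five values satisfy the definition of a random sample from this distribution.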

Random samples often arise in statistics, and the following theorem is central to their analysis.

Probabilities for random samples

The probability that the values in a discrete random sample are \(x_1, x_2, \ldots, x_n\) is

\[ P(X_1 = x_1, X_2 = x_2, \ldots, X_n = x_n) \;\; = \;\; \prod_{i=1}^n p(x_i) \]
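This product is straightforward to compute. A minimal sketch, again assuming an illustrative probability function `p`:

```python
from math import prod, isclose

# Illustrative probability function p(x) (these values are assumptions).
p = {0: 0.2, 1: 0.5, 2: 0.3}

def sample_probability(xs, p):
    """P(X_1 = x_1, ..., X_n = x_n) for a random sample: the product of the p(x_i)."""
    return prod(p[x] for x in xs)

# For example, P(X_1 = 1, X_2 = 1, X_3 = 0) = 0.5 * 0.5 * 0.2 = 0.05
prob = sample_probability([1, 1, 0], p)
```

Each factor in the product is one marginal probability \(p(x_i)\), exactly mirroring the formula above.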

The result can be proved by induction. For \(n = 2\) it holds directly from the definition of independence of two random variables:

\[ P(X_1 = x_1 \textbf{ and } X_2 = x_2) \; = \; P(X_1 = x_1) \times P(X_2 = x_2) \]

For a random sample of \(n=3\) values,

\[ P(X_1 = x_1 \textbf{ and } X_2 = x_2 \textbf{ and } X_3 = x_3) \; = \; P(X_1 = x_1 \textbf{ and } X_2 = x_2) \times P(X_3 = x_3) \]

since, in a random sample, \(X_3\) is independent of the pair \((X_1, X_2)\). Therefore

\[ P(X_1 = x_1 \textbf{ and } X_2 = x_2 \textbf{ and } X_3 = x_3) \; = \; P(X_1 = x_1) \times P(X_2 = x_2) \times P(X_3 = x_3) \]

The proof for \(n = 4\) proceeds in a similar way, and so on.
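As a numerical sanity check on the product formula, we can enumerate every possible outcome of a sample of \(n = 3\) values and confirm that the product probabilities form a valid joint distribution, i.e. they sum to 1. The probability function is again an illustrative assumption.

```python
from itertools import product as cartesian
from math import prod, isclose

# Illustrative probability function p(x) (these values are assumptions).
p = {0: 0.2, 1: 0.5, 2: 0.3}
n = 3

# Sum the product-formula probability over all 3**3 = 27 possible samples.
total = sum(prod(p[x] for x in xs) for xs in cartesian(p, repeat=n))
# total equals (0.2 + 0.5 + 0.3)**3 = 1, so the joint probabilities sum to 1.
```

The sum factorises as \(\bigl(\sum_x p(x)\bigr)^3 = 1^3 = 1\), which is exactly why the product formula always yields a valid joint probability function.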