The same definition of independence holds for both discrete and continuous random variables.

Definition

Two random variables, \(X\) and \(Y\), are independent if all events about the value of \(X\) are independent of all events about the value of \(Y\).

Independence of continuous random variables is usually deduced from the way that the variables are measured rather than from mathematical calculations. For example, if \(X\) and \(Y\) are measurements made on two individuals who are sampled independently from a population, it is reasonable to treat \(X\) and \(Y\) as independent random variables.

Characterisation of independence

For independent continuous random variables, \(X\) and \(Y\),

\[ \begin{aligned} P(x \lt X \lt x+\delta x &\textbf{ and } y \lt Y \lt y+\delta y) \\ &=\;\; P(x \lt X \lt x+\delta x) \times P(y \lt Y \lt y+\delta y) \\ &\approx\;\; f_X(x)\;f_Y(y) \times \delta x \; \delta y \end{aligned} \]

so

\[ P(X \approx x \textbf{ and } Y \approx y) \;\; \propto \;\; f_X(x)\;f_Y(y) \]
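This factorisation can be checked numerically. The following is a minimal Monte Carlo sketch (not part of the original text), using two independent standard normal variables as an arbitrary concrete choice; the sample size and interval width \(\delta x = \delta y = 0.1\) are also arbitrary.

```python
import math
import random

# Monte Carlo check that, for independent X and Y,
#   P(x < X < x+dx  and  y < Y < y+dy)  ≈  f_X(x) f_Y(y) dx dy
# Here X and Y are independent standard normals (an arbitrary choice).
random.seed(1)
n = 200_000
x, y, dx, dy = 0.0, 0.0, 0.1, 0.1

hits = 0
for _ in range(n):
    u = random.gauss(0, 1)   # a draw of X
    v = random.gauss(0, 1)   # an independent draw of Y
    if x < u < x + dx and y < v < y + dy:
        hits += 1

empirical = hits / n         # estimate of the joint probability

def phi(t):
    """Standard normal probability density function."""
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

product_approx = phi(x) * phi(y) * dx * dy
print(empirical, product_approx)   # the two values should be close
```

The empirical probability agrees with \(f_X(x)\,f_Y(y)\,\delta x\,\delta y\) up to Monte Carlo error and the error of approximating the density as constant over the small rectangle.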

This is closely related to the corresponding result for two independent discrete random variables,

\[ P(X=x \textbf{ and } Y=y) \;\;=\;\; p_X(x) \times p_Y(y) \]
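As a concrete check of the discrete result (our own illustration, not from the text above), consider two fair six-sided dice rolled independently. Enumerating the 36 equally likely ordered outcomes shows the joint probability function factorising exactly into the two marginals.

```python
from fractions import Fraction
from itertools import product

# Two fair dice rolled independently: the 36 ordered outcomes are
# equally likely, so every probability is an exact count out of 36.
outcomes = list(product(range(1, 7), repeat=2))

def p_joint(x, y):
    """P(X = x and Y = y) by direct enumeration."""
    return Fraction(sum(1 for a, b in outcomes if a == x and b == y),
                    len(outcomes))

def p_X(x):
    """Marginal probability function of X."""
    return Fraction(sum(1 for a, _ in outcomes if a == x), len(outcomes))

def p_Y(y):
    """Marginal probability function of Y."""
    return Fraction(sum(1 for _, b in outcomes if b == y), len(outcomes))

# The joint probability factorises for every pair of values.
factorises = all(p_joint(x, y) == p_X(x) * p_Y(y)
                 for x in range(1, 7) for y in range(1, 7))
print(factorises)   # True
```

Exact rational arithmetic (`Fraction`) avoids any floating-point rounding, so the equality \(p_{X,Y}(x,y) = p_X(x) \times p_Y(y)\) holds exactly here.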

Random samples

A collection of \(n\) independent random variables that all have the same distribution is called a random sample from that distribution.

Extending our earlier characterisation of independence to \(n\) continuous random variables with common probability density function \(f(\cdot)\),

\[ P(X_1 \approx x_1, X_2 \approx x_2, ..., X_n \approx x_n) \;\; \propto \;\; \prod_{i=1}^n f(x_i) \]
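To sketch how this product formula is used in practice (our own example, assuming a standard normal sample), the joint density of a random sample is simply the product of the individual density values, which for the normal collapses to a closed form.

```python
import math

def phi(t):
    """Standard normal pdf: the common density f of the sample."""
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def joint_density(xs):
    """Joint density of an independent N(0,1) sample: prod of f(x_i)."""
    return math.prod(phi(x) for x in xs)

sample = [0.3, -1.2, 0.8, 2.1]    # an arbitrary observed sample

# For a normal sample the product collapses to
#   (2*pi)^(-n/2) * exp(-sum(x_i^2) / 2)
n = len(sample)
closed_form = (2 * math.pi) ** (-n / 2) * math.exp(
    -sum(x * x for x in sample) / 2)

print(math.isclose(joint_density(sample), closed_form))   # True
```

Viewed as a function of unknown parameters rather than of the data, this same product is the likelihood function of the sample.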

This is again closely related to the corresponding formula for a random sample from a discrete distribution with probability function \(p(\cdot)\),

\[ P(X_1 = x_1, X_2 = x_2, ..., X_n = x_n) \;\; = \;\; \prod_{i=1}^n p(x_i) \]
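A small worked instance of the discrete product formula (our own illustration, using a Bernoulli distribution as the assumed model): the joint probability of an independent Bernoulli sample is the product of the individual probabilities, which simplifies to \(p^k (1-p)^{n-k}\) where \(k\) is the number of successes.

```python
from fractions import Fraction
import math

def pmf(x, p):
    """Bernoulli(p) probability function: p(1) = p, p(0) = 1 - p."""
    return p if x == 1 else 1 - p

p = Fraction(1, 3)            # an arbitrary success probability
xs = [1, 0, 0, 1, 1, 0, 0]    # an observed random sample

# Joint probability of the sample = product of the individual pmfs.
joint = math.prod((pmf(x, p) for x in xs), start=Fraction(1))

# For a Bernoulli sample this collapses to p^k (1-p)^(n-k).
k = sum(xs)
closed_form = p ** k * (1 - p) ** (len(xs) - k)
print(joint == closed_form)   # True
```

Exact rational arithmetic again makes the equality hold exactly: here the joint probability is \((1/3)^3 (2/3)^4 = 16/2187\).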