Transformation of a single variable
The method of finding a continuous random variable's probability density function via its cumulative distribution function can be simplified considerably when the variable is a monotonic function of another variable whose pdf is known. (A monotonic function is one that either always increases or always decreases.)
Monotonic function of X
If a continuous random variable \(X\) has probability density function \(f_X(x)\) and another variable is defined as \(Y = g(X)\) where the function \(g(\cdot)\) is a monotonic function with inverse \(X = h(Y)\), then the pdf of \(Y\) is
\[ f_Y(y) \;\;=\;\; f_X(h(y)) \times \left| h'(y) \right| \]If \(g(X)\) is a monotonic increasing function, the cumulative distribution function for \(Y\) is
\[ F_Y(y) \;\;=\;\; P(Y \le y) \;\;=\;\; P(X \le h(y)) \;\;=\;\; F_X(h(y)) \]We now differentiate to find the probability density function of \(Y\),
\[ f_Y(y) \;=\; F_Y'(y) \;=\;\; F_X'(h(y)) \times h'(y) \;\;=\;\; f_X(h(y)) \times h'(y) \]If \(g(X)\) is monotonic decreasing,
\[ F_Y(y) \;\;=\;\; P(Y \le y) \;\;=\;\; P(X \ge h(y)) \;\;=\;\; 1 - F_X(h(y)) \]Differentiating now gives \(f_Y(y) = -f_X(h(y)) \times h'(y)\). Since \(h'(y)\) is negative for a decreasing function, this equals \(f_X(h(y)) \times \left| h'(y) \right|\), so the same formula holds in both cases.
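The decreasing case can be checked numerically. The sketch below (our own illustration, not from the text) takes \(X \sim \ExponDistn(1)\) and the decreasing transform \(Y = e^{-X}\); the formula gives \(f_Y(y) = f_X(-\log y) \times \frac 1 y = 1\) on \((0, 1)\), so \(Y\) should be uniform, which simulation confirms.

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ Exponential(1); Y = exp(-X) is a monotonic *decreasing* transform.
# Inverse: x = h(y) = -log(y), so |h'(y)| = 1/y and
# f_Y(y) = f_X(-log y) * (1/y) = e^{log y} / y = 1 on (0, 1),
# i.e. Y should be Uniform(0, 1).
x = rng.exponential(scale=1.0, size=200_000)
y = np.exp(-x)

# Compare the empirical cdf of Y with the Uniform(0, 1) cdf, F(y) = y.
grid = np.linspace(0.05, 0.95, 19)
ecdf = np.array([(y <= g).mean() for g in grid])
max_err = np.abs(ecdf - grid).max()
print(round(max_err, 4))  # close to zero for a large sample
```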
The method will be clearer in an example.
Example: Log-normal distribution
If \(X \sim \NormalDistn(\mu,\; \sigma^2)\), what is the distribution of \(Y = \exp(X)\)?
In this example, \(y = g(x) = \exp(x)\) with inverse function \(x = h(y) = \log(y)\), the natural logarithm. The probability density function of \(Y\) is therefore
\[ \begin{align} f_Y(y) \;\;=\;\; f_X(h(y)) \times h'(y) \;\;&=\;\; f_X(\log(y)) \times \frac 1 y \\ &=\;\; \frac 1{y\sqrt{2\pi}\;\sigma} e^{- \frac{\large (\log(y)-\mu)^2}{\large 2 \sigma^2}} \end{align} \]Since \(Y\) must be greater than zero, its pdf is therefore
\[ f_Y(y) \;\;=\;\; \begin{cases} \displaystyle \frac 1{y\sqrt{2\pi}\;\sigma} e^{- \frac{\large (\log(y)-\mu)^2}{\large 2 \sigma^2}} & \text{if } y > 0 \\[0.4em] 0 & \text{otherwise} \end{cases} \]This distribution is called a log-normal distribution.
Note that if a random variable \(Y\) has a log-normal distribution, we can show in a similar way that \(X = \log(Y)\) will have a normal distribution.
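The derivation can be checked by simulation. The sketch below (the values of \(\mu\) and \(\sigma\) are arbitrary choices for illustration) compares the empirical cdf of \(Y = \exp(X)\) with the cdf implied by the derivation, \(F_Y(y) = F_X(\log y) = \Phi\left(\frac{\log(y) - \mu}{\sigma}\right)\):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
mu, sigma = 0.5, 0.8  # illustrative parameter values, not from the text

# Simulate X ~ Normal(mu, sigma^2) and transform: Y = exp(X).
y = np.exp(rng.normal(mu, sigma, size=200_000))

def lognormal_cdf(v):
    """F_Y(y) = Phi((log(y) - mu) / sigma), via the error function."""
    z = (np.log(v) - mu) / (sigma * sqrt(2))
    return 0.5 * (1 + erf(z))

grid = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
ecdf = np.array([(y <= g).mean() for g in grid])
theory = np.array([lognormal_cdf(g) for g in grid])
max_err = np.abs(ecdf - theory).max()
print(round(max_err, 4))  # close to zero for a large sample
```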
Another example shows the relationship between the exponential and Weibull distributions.
Example: Weibull distribution
If \(X \sim \ExponDistn(\lambda)\), show that \(Y = X^a\) has a \(\WeibullDistn(\diagfrac 1 a, \lambda^a)\) distribution.
To simplify the proof, we will write \(a = \large \frac 1 k\), so \(y = g(x) = x^{\large \frac 1 k}\) which has inverse function \(x = h(y) = y^k\). The probability density function of \(Y\) is therefore
\[ \begin{align} f_Y(y) \;\;=\;\; f_X\left(h(y)\right) \times h'(y) \;\;&=\;\; f_X(y^k) \times k \; y^{k - 1}\\ &=\;\; \lambda e^{-\lambda y^k} \times k \; y^{k - 1} \\ &=\;\; k\lambda y^{k-1} e^{-\lambda y^k} \end{align} \]which is the pdf of a \(\WeibullDistn(k, \lambda^{\large \frac 1 k})\) distribution. Since \(k = \diagfrac 1 a\), this is the claimed \(\WeibullDistn(\diagfrac 1 a, \lambda^a)\) distribution.
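This relationship can also be verified by simulation. The sketch below (the rate \(\lambda\) and exponent \(a = \frac 1 k\) are arbitrary illustrative choices) transforms exponential samples and compares their empirical cdf with the cdf that follows from the derivation, \(F_Y(y) = P(X \le y^k) = 1 - e^{-\lambda y^k}\):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, k = 2.0, 1.5  # illustrative rate and shape, not from the text

# X ~ Exponential(rate lam); NumPy parametrises by scale = 1/lam.
x = rng.exponential(scale=1.0 / lam, size=200_000)
y = x ** (1.0 / k)  # Y = X^a with a = 1/k

# From the derivation, F_Y(y) = P(X <= y^k) = 1 - exp(-lam * y^k).
grid = np.linspace(0.2, 1.5, 8)
ecdf = np.array([(y <= g).mean() for g in grid])
theory = 1.0 - np.exp(-lam * grid ** k)
max_err = np.abs(ecdf - theory).max()
print(round(max_err, 4))  # close to zero for a large sample
```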