Monte Carlo Integration
Monte Carlo integration uses randomization techniques to compute approximations to the value of integrals for which analytical methods can’t be applied and standard numerical methods, such as quadrature, are slow and difficult. Suppose the integral to be computed can be written
\begin{equation} I = \int h(x)\,f(x)\,dx \label{eq:boot_importoriginal} \end{equation}
where \(f(x)\) is a density function for a (convenient) probability distribution. Then \(I\) is the expected value of \(h\) over that distribution. If \(h\) is well-behaved, then the law of large numbers will apply to an independent random sample from the density \(f\), and \(I\) can be approximated by
\begin{equation} \hat I = \frac{1}{n}\sum_{i=1}^{n} h(x_i) \end{equation}
This approximation improves with the size of the random sample \(n\). The accuracy of the Monte Carlo estimates is governed by the Central Limit Theorem: if we approximate \(E\,h(x)\) with
\begin{equation} \frac{1}{n}\sum_{i=1}^{n} h^*(x_i) \end{equation}
for some random variable \(h^*\), the variance of the numerical error is approximately
\begin{equation} \frac{1}{n}\operatorname{var} h^* \end{equation}
While this process can be applied outside of statistical work (need to integrate a function \(g\)? Find a factorization \(g = h f\) with \(f\) a density you can sample from), it is mainly used in Bayesian statistics, where a complex posterior distribution can be “mapped out” by reporting the expected values of interesting functions. The practical examples in the Bootstrapping and Simulation Tools are actually all fairly trivial instances of Monte Carlo integration. For instance, a probability is just the expected value of an indicator variable on a particular event: \(h\) is one or zero depending upon whether the draw for \(x\) satisfies a condition.
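As a quick illustration of that last point, the following minimal sketch (not one of the example files; the variable names are purely illustrative) estimates \(P(x > 1.645)\) for a standard Normal \(x\) by averaging an indicator, with the Central Limit Theorem formula providing the standard error; %cdf supplies the exact value for comparison.
compute ndraws = 100000
compute total = 0.0, total2 = 0.0
do draw=1,ndraws
   * Indicator: 1 if the standard Normal draw exceeds 1.645, 0 otherwise
   compute hx = %if(%ran(1.0)>1.645,1.0,0.0)
   compute total = total+hx, total2 = total2+hx^2
end do draw
compute phat = total/ndraws
compute pvar = (total2/ndraws-phat^2)/ndraws
disp "Estimated probability" phat "Std Dev" sqrt(pvar) "Exact" %cdf(-1.645)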
The following prices a vanilla European call using Monte Carlo, estimates the standard deviation of the error in this calculation, and compares it with the Black-Scholes value. Because only the value at expiration matters, this can be simulated without generating a full path for the security price: under the risk-neutral lognormal model, the log price at expiration is the current log price plus a Normal draw with mean \((r - \sigma^2/2)T\) and standard deviation \(\sigma\sqrt{T}\) (rate, sigma and expire in the code). This is from the example file MCPriceEurope.RPF.
compute expire = 5.0/12.0, strike=52.0, price=50.0
compute rate =.1, sigma=.40
* Drift and standard deviation of the log price change over the option's life
compute hdrift = (rate-.5*sigma^2)*expire
compute hsigma = sigma*sqrt(expire)
compute ndraws = 10000
compute total = 0.0, total2=0.0
do draw=1,ndraws
   * Draw the log price change and compute the discounted payoff at expiration
   compute payoff = exp(-rate*expire)*$
      %max(0,price*exp(hdrift+%ran(hsigma))-strike)
   compute total = total+payoff,total2=total2+payoff^2
end do draw
compute mcvalue = total/ndraws
compute mcvar = (total2/ndraws-mcvalue^2)/ndraws
?"Value by Monte Carlo" total/ndraws "Std Dev" sqrt(mcvar)
*
@BSOption(price=price,sigma=sigma,rate=rate,strike=strike,expire=expire) bsvalue
?"Value by Black-Scholes" bsvalue
In antithetic acceleration, we use a clever choice of \(h^*\) to get a reduction in the variance. Suppose that the density \(f\) is symmetric around \(x_0\). Because of the symmetry, the function
\begin{equation} h^*(x) = \frac{1}{2}\left( h(x) + h(2x_0 - x) \right) \end{equation}
has the same expected value as \(h\). And its variance will be lower (for the same number of function evaluations) as long as there is a negative correlation between \(h(x)\) and \(h(2x_0 - x)\). In particular, if \(h\) is linear in \(x\), \(\operatorname{var} h^*\) is zero: every draw for \(x\) will produce \(h^*(x) = h(x_0)\), which is, of course, the correct expected value. If \(h\) is close to being linear over the region where \(f\) is high, we would see a very substantial reduction in variance for the antithetic method versus independent draws.
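To see why, note that the symmetry of \(f\) means \(h(2x_0 - x)\) has the same distribution, and hence the same variance, as \(h(x)\), so
\begin{equation} \operatorname{var} h^* = \frac{1}{4}\left( \operatorname{var} h(x) + \operatorname{var} h(2x_0 - x) + 2\operatorname{cov}\left( h(x),h(2x_0 - x) \right) \right) = \frac{1}{2}\left( \operatorname{var} h(x) + \operatorname{cov}\left( h(x),h(2x_0 - x) \right) \right) \end{equation}
Averaging two independent draws, which takes the same number of function evaluations, gives a variance of \(\operatorname{var} h(x)/2\), so the antithetic pair comes out ahead exactly when the covariance is negative.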
The option pricing example above seems a good candidate for antithetic acceleration, as flipping the sign of the draw turns a high security price into a low one and vice versa. Since you’re doing two function evaluations per draw, antithetic acceleration is paying off only if its variance is smaller than 1/2 that of standard Monte Carlo with the same number of draws.
compute total=0.0,total2=0.0
do draw=1,ndraws
   compute u = %ran(hsigma)
   * Antithetic pair: evaluate the discounted payoff at both +u and -u
   compute payoff1 = exp(-rate*expire)*$
      %max(0,price*exp(hdrift+u)-strike)
   compute payoff2 = exp(-rate*expire)*$
      %max(0,price*exp(hdrift-u)-strike)
   compute total  = total  + .5*(payoff1+payoff2)
   compute total2 = total2 + .25*(payoff1+payoff2)^2
end do draw
compute mcvalue = total/ndraws
compute mcvar = (total2/ndraws-mcvalue^2)/ndraws
?"Value by MC with Antithetic Acceleration" total/ndraws "Std Dev" sqrt(mcvar)