Fixed Regressor Bootstrap
The fixed regressor bootstrap was introduced in Hansen (1996). This is a bootstrap technique for producing approximate significance levels for tests where, because of the nature of the hypothesis being tested, standard asymptotics don't apply and standard bootstrap shuffling of the data set isn't possible.
There are two key assumptions:
1. You have a search over a range of "nuisance parameters" which are unidentified under the null.
2. The test statistic would have an asymptotic chi-squared distribution if those nuisance parameters were known.
A test for a break point at an unknown location is an example of this. Under the null that there is no break, there is no break point, but if we choose to look at a break at a particular entry, the test for a difference between the regressions in the two regimes will be asymptotically chi-squared (it's just a Chow test). Hansen demonstrates that the significance level of a test done by optimizing a functional of the basic test statistics (such as the supremum or an average) over the nuisance parameters can be approximated by randomizing only the residuals, treating the regressors (and possible threshold variables) as fixed (hence the name). Note that this is for evaluating test statistics only---not for the more general task of evaluating the full distribution of the estimates.
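As a concrete illustration of such a functional, the sketch below (written in Python rather than RATS, with hypothetical names such as wald_break_stat and sup_wald) computes a Chow-type break statistic at each candidate entry in a trimmed range and takes the supremum. It is a minimal outline of the idea under those assumptions, not the implementation used by the RATS procedures.

```python
# Minimal sketch (Python, not RATS): sup of Chow-type break statistics over a
# trimmed range of candidate break entries. Names here are hypothetical.
import numpy as np

def wald_break_stat(y, x, t0):
    """Chow/Wald-type statistic for a break in all coefficients at entry t0;
    x is the fixed regressor matrix (including the constant)."""
    n, k = x.shape
    d = (np.arange(n) >= t0).astype(float)[:, None]
    xb = np.hstack([x, x * d])                      # separate coefficients from t0 on
    ssr_r = np.sum((y - x @ np.linalg.lstsq(x, y, rcond=None)[0]) ** 2)
    ssr_u = np.sum((y - xb @ np.linalg.lstsq(xb, y, rcond=None)[0]) ** 2)
    return n * (ssr_r - ssr_u) / ssr_u              # asymptotically chi-squared(k) for known t0

def sup_wald(y, x, trim=0.15):
    """Supremum of the break statistics over the trimmed range of break entries."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    return max(wald_break_stat(y, x, t0) for t0 in range(lo, hi))
```

An average (or other) functional of the break statistics would simply replace the max in the last line.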
The procedure is to run the regression under the null, then, for each bootstrap draw, multiply the original residuals by a series of i.i.d. \(N(0,1)\) random numbers and compute the test statistic treating those products as the dependent variable, keeping the original regressors. Note that this is the procedure regardless of the type of model you have under the null. For instance, if the model is an AR(1) with a (possible) structural break at the unknown location \(T_0\):
\begin{equation} y_t = \alpha + \rho y_{t - 1} + \gamma y_{t - 1} I(t \ge T_0 ) + u_t \label{eq:boot_fixedregar} \end{equation}
the regression under the null is
\begin{equation} y_t = \alpha + \rho y_{t - 1} + u_t \end{equation}
If \(\hat u_t\) are the residuals from the null regression and \(\xi_t\) is a series of i.i.d. \(N(0,1)\) draws, each bootstrap replication estimates
\begin{equation} \hat u_t \xi _t = \alpha + \rho y_{t - 1} + \gamma y_{t - 1} I(t \ge T_0 ) + v_t \end{equation}
across the candidate break points \(T_0\) and recomputes the chosen functional (supremum, average, etc.) of the resulting test statistics.
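The bootstrap loop itself can be sketched in the same hedged fashion (Python, not RATS; fixed_regressor_pvalue and its arguments are hypothetical names, and it assumes a statistic function like the sup_wald outlined earlier). It estimates the null regression once, then on each draw forms \(\hat u_t \xi_t\) and recomputes the statistic with the regressors unchanged.

```python
# Minimal sketch (Python, not RATS) of the fixed regressor bootstrap p-value.
# stat_fn is any functional of the break statistics, e.g. the sup_wald sketch above.
import numpy as np

def fixed_regressor_pvalue(y, x, stat_fn, ndraws=999, seed=0):
    rng = np.random.default_rng(seed)
    observed = stat_fn(y, x)                               # statistic on the actual data
    uhat = y - x @ np.linalg.lstsq(x, y, rcond=None)[0]    # residuals under the null
    exceed = 0
    for _ in range(ndraws):
        ystar = uhat * rng.standard_normal(len(y))         # u-hat_t * xi_t; regressors fixed
        if stat_fn(ystar, x) >= observed:
            exceed += 1
    return (exceed + 1) / (ndraws + 1)                     # add-one p-value convention

# Usage for the AR(1) example: the regressors are a constant and y_{t-1}
# (y assumed to be a 1-D numpy array holding the series).
# x = np.column_stack([np.ones(len(y) - 1), y[:-1]])
# pval = fixed_regressor_pvalue(y[1:], x, sup_wald)
```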
Because the regressors are fixed (only the dependent variable changes), it's possible to substantially reduce the computation time from what would be required to run a full set of regressions across the different settings of the nuisance parameters on each draw. (Most of the work for each break point can be done once and saved.)
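One way to exploit that, again as a hedged Python sketch with hypothetical names, is to compute an orthonormal basis (for example via QR) for the null regressor matrix and for each candidate break's augmented matrix once; every draw then needs only matrix-vector products against those cached bases to recover the sums of squared residuals.

```python
# Minimal sketch (Python, not RATS): cache per-break QR bases once, since the
# regressor matrices never change across bootstrap draws. Names are hypothetical.
import numpy as np

def precompute_projections(x, breaks):
    """Orthonormal bases for the null model and each candidate break's model."""
    n = x.shape[0]
    q_null, _ = np.linalg.qr(x)
    q_break = {}
    for t0 in breaks:
        d = (np.arange(n) >= t0).astype(float)[:, None]
        q_break[t0], _ = np.linalg.qr(np.hstack([x, x * d]))
    return q_null, q_break

def sup_wald_cached(ystar, q_null, q_break):
    """Sup statistic for a new dependent variable using only the cached bases
    (SSR = ||y||^2 - ||Q'y||^2 for an orthonormal basis Q of the regressors)."""
    n, yy = len(ystar), np.sum(ystar ** 2)
    ssr_r = yy - np.sum((q_null.T @ ystar) ** 2)
    best = 0.0
    for q in q_break.values():
        ssr_u = yy - np.sum((q.T @ ystar) ** 2)
        best = max(best, n * (ssr_r - ssr_u) / ssr_u)
    return best
```

Each draw then costs a handful of matrix-vector products per break point rather than a fresh regression.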
The fixed regressor bootstrap is used in the @TAR, @THRESHTEST and @REGHBREAK procedures.