Statistics and Algorithms / Vector Autoregressions / Conditional Forecasting
The type of analysis in Historical Decomposition/Counterfactual Simulation is relatively straightforward because it constrains the behavior of the shocks. It’s much harder if you want to constrain the behavior of the endogenous variables themselves—that becomes a special case of conditional forecasting.
In a conditional forecast, certain values during the forecast period are fixed in advance. The simplest type of conditional forecast is where time paths for the exogenous variables are input, as is usually done in simultaneous equations models.
When forecasts are conditional upon future values of endogenous variables, the procedure is more complicated. Consider the formula for the error in an H-step forecast (forecasting \(t+H\) using information through \(t\)):
\begin{equation} \sum\limits_{s = 0}^{H - 1} {\Psi _s } {\bf{u}}_{t + H - s} \label{eq:var_condforebasemar} \end{equation}
When we constrain a future value of the process, we are setting a value for this forecast error, since subtracting the unconstrained forecast from the constrained value produces the forecast error. We have thus put a linear constraint upon the innovations of the periods between \(t+1\) and \(t+H\).
With a one-step-ahead constraint, the forecast error in a variable consists of just one innovation since \(\Psi _0 = {\bf{I}}\) (non-orthogonal innovations). However, two steps out, the forecast error in a variable depends upon the first-step innovations in all variables plus its own second-step innovation. There are now many ways to achieve the particular value, since we have only a single constraint on a linear combination of \(N+1\) innovations. Thus, it’s much more difficult to achieve a counterfactual based upon the value of \(\mathbf{Y}\).
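To see the counting explicitly, write out the two-step case (here \({\bf{e}}_i\) denotes the \(i\)-th unit vector and \(c\) the required forecast error, notation introduced just for this illustration). Constraining variable \(i\) at \(t+2\) imposes
\[
{\bf{e}}_i ^\prime \left( {\Psi _1 {\bf{u}}_{t + 1} + {\bf{u}}_{t + 2} } \right) = c
\]
which is a single linear equation in the \(N\) period \(t+1\) innovations plus the variable’s own period \(t+2\) innovation: \(N+1\) unknowns in all.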
Computing Conditional Forecasts
Conditional forecasting (in a linear model) is actually a special case of Kalman smoothing. However, it’s simpler to solve the problem directly than to cast the VAR into a state-space form. In calculating forecasts subject to constraints on the forecast error \eqref{eq:var_condforebasemar}, it’s most convenient to switch to orthogonalized residuals, so the forecast error becomes
\begin{equation} \sum\limits_{s = 0}^{H - 1} {\Psi _s {\bf{Fv}}_{t + H - s} } \label{eq:var_condforemar} \end{equation}
where \(\mathbf{F}\) is a factor of the covariance matrix. By stacking the orthogonalized innovations during the forecast period into a vector, we can write the constraints in the form
\begin{equation} {\bf{RV}} = {\bf{r}} \end{equation}
The conditional forecasts are then made by first computing the \(\mathbf{V}\) vector which minimizes \({\bf{V'}}{\kern 1pt} {\bf{V}}\) subject to the constraint. The solution to this is
\begin{equation} {\bf{V}}^{\bf{*}} = {\bf{R'(RR')}}^{{\bf{ - 1}}} {\bf{r}} \end{equation}
The \(\mathbf{V}\)’s are then translated back into non-orthogonalized shocks, and the model is forecast with those added shocks. This is similar to what is done in Counterfactual Simulations; the calculation is just complicated by the need to solve out the minimization problem rather than simply zero out shocks.
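As a concrete illustration of this algebra (plain Python, not RATS code, with made-up model matrices), the following sketch works a bivariate VAR(1) with a single constraint on the two-step-ahead forecast of the first variable. The covariance matrix is taken as the identity, so the factor \(\mathbf{F}\) is the identity and orthogonalized and non-orthogonalized shocks coincide:

```python
# Conditional forecast for a bivariate VAR(1): y_t = A y_{t-1} + u_t.
# Illustrative numbers only; with Sigma = I, the factor F = I and u = v.

A = [[0.5, 0.1],
     [0.2, 0.4]]

def mat_vec(M, x):
    """Matrix-vector product for small nested-list matrices."""
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

y0 = [1.0, 2.0]

# Unconstrained forecasts for t+1 and t+2
f1 = mat_vec(A, y0)
f2 = mat_vec(A, f1)

# Constrain variable 0 at t+2; the implied forecast error is target - f2[0]
target = 1.0
r = target - f2[0]

# Forecast error at t+2 in variable 0 is e0'(Psi_1 v_{t+1} + v_{t+2}),
# with Psi_1 = A and Psi_0 = I.  Stack V = (v_{t+1}, v_{t+2}).
R = [A[0][0], A[0][1], 1.0, 0.0]   # single constraint row

# Least-norm solution V* = R'(RR')^{-1} r; with one constraint,
# RR' is just the scalar sum of squares of R.
rr = sum(c * c for c in R)
V = [c * r / rr for c in R]

# Translate the shocks back in and forecast with them added
v1, v2 = V[:2], V[2:]
c1 = [f1[i] + v1[i] for i in range(2)]
c2 = [mat_vec(A, c1)[i] + v2[i] for i in range(2)]

print(round(c2[0], 10))  # 1.0 -- the constrained forecast hits the target
```

The constrained path hits the target exactly because the added forecast error is \({\bf{RV}}^* = {\bf{r}}\) by construction; among all shock sequences that do so, this one has the smallest sum of squares.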
The @CONDITION Procedure
This can all be done using the procedure @CONDITION. Before you execute it, you must define the VAR and estimate it. @CONDITION can handle any number of constraints on single values of future endogenous variables. With the GENERAL option, you can use more general linear constraints on future values.
When setting up constraints, it’s a good idea to keep them as loose as possible. For instance, if you’re trying to constrain a growth rate, it’s better to do this as year over year changes, rather than constraining each quarter or month. That is, to constrain the series Y (in logs) to 4% annual growth, use
@condition(model=kpsw,from=2018:1,steps=8,results=cforecasts) 1
# y 2018:4 y(2017:4)+.04
rather than
@condition(model=kpsw,from=2018:1,steps=8,results=cforecasts) 4
# y 2018:1 y(2017:4)+.01
# y 2018:2 y(2017:4)+.02
# y 2018:3 y(2017:4)+.03
# y 2018:4 y(2017:4)+.04
The potential problem with the tighter constraints is that they might force some “whipsawing” innovations to hit all the values.
CONDITION.RPF Example
Example CONDITION.RPF demonstrates conditional forecasting, using the same data set as IMPULSES.RPF (a small open economy model of Canada with US GDP and exchange rates). It computes forecasts conditional on 5% year over year growth in US GDP. This type of constraint is easy to handle because US GDP is in logs. As suggested above, it constrains the growth rate only as year over year growth of 5% for each of the two years (2007:4 at 5% over the base of 2006:4, and 2008:4 at a cumulative 10% over 2006:4), rather than pinning down individual quarters.
@condition(model=canmodel,from=fstart,to=fend,results=condfore) 2
# logusagdp 2007:4 logusagdp(2006:4)+.05
# logusagdp 2008:4 logusagdp(2006:4)+.10
The conditional forecasts are in series CONDFORE(1) to CONDFORE(6) in the order of the dependent variables in the model.
Conditional Forecasting with Restricted Shocks
Antolin-Diaz, Petrella and Rubio-Ramirez (2021) apply conditional forecasting to a Vector Autoregression, but need to restrict which shocks can be used to accomplish the post-sample restrictions. In a standard implementation of conditional forecasting, the most likely set of shocks may not be very useful from a policy standpoint: for instance, to hit optimistic goals for GDP into the future, in most cases the most likely path is to have large positive shocks to GDP in the first few periods.
There is nothing particularly complicated about this: the stock @CONDITION procedure can be used. All you need to do is feed it a “factor” matrix which isn't full rank and includes columns only for the shocks that you want to allow. For instance, in the CONDITION.RPF example, to use only money and interest rate shocks, first compute a Cholesky factor with M (#3) and R (#5) ordered first
compute mrfirst=%psdfactor(%sigma,||3,5,1,2,4,6||)
then creates a matrix which zeros out all but those two columns:
dec rect msectoronly(%nvar,%nvar)
ewise msectoronly(i,j)=%if(j==3.or.j==5,mrfirst(i,j),0.0)
Adding FACTOR=MSECTORONLY to the @CONDITION procedure call will solve the conditional forecast with the desired restrictions. Note that the (orthogonalized) shocks to M and R will, as this is written, have contemporaneous effects on the other variables because the Cholesky factor is ordered with those before the other four variables. As the authors point out, this type of exercise depends upon an identification of the shocks, while a standard conditional forecast (what they call “conditional-on-observables” forecasting) doesn't.
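The effect of zeroing out factor columns is that the least-norm solution automatically assigns zero to the disallowed shocks, since their coefficients in the constraint rows are all zero. A small Python sketch (made-up numbers, mimicking the idea of the MSECTORONLY matrix rather than reproducing the RATS code) with one allowed shock ordered first and a one-step constraint:

```python
# Restricting which shocks a conditional forecast may use by zeroing
# columns of the factor.  Illustrative numbers only, not the RATS example.

# A lower-triangular factor with the "allowed" shock ordered first
F = [[1.0, 0.0, 0.0],
     [0.5, 1.0, 0.0],
     [0.3, 0.2, 1.0]]

# Zero out every column except column 0 (the only shock permitted)
Fz = [[F[i][j] if j == 0 else 0.0 for j in range(3)] for i in range(3)]

# One-step constraint on variable 2: the forecast error is row 2 of Fz
# dotted with v, so the single constraint row is
r = 0.6          # required forecast error (made up)
R = Fz[2]        # [0.3, 0.0, 0.0]

# Least-norm solution V* = R'(RR')^{-1} r
rr = sum(c * c for c in R)
V = [c * r / rr for c in R]

print(V)  # only the first (allowed) shock is nonzero
```

Because the zeroed columns contribute nothing to \(\mathbf{R}\), the minimization never uses those shocks, which is exactly the restriction the rank-deficient factor imposes.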
You need to be careful in using this that you don’t have too many conditions for the number of shocks that you permit. If, for instance, you are only allowing one shock, you can have no more than one condition per horizon (or more precisely, the count of restrictions through every horizon can’t exceed the number of steps—you could, for instance, impose two restrictions at two steps out as long as you have no restrictions at one step).
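The feasibility problem can be seen directly in the algebra: with too many conditions for the permitted shocks, \(\mathbf{R}\) has more rows than its rank, so \({\bf{RR'}}\) is singular and the solution formula breaks down. A minimal Python check (made-up two-variable factor, one allowed shock, two same-horizon constraints):

```python
# With one permitted shock, two constraints at the same horizon make
# RR' singular, so RV = r generally has no solution.  Made-up numbers.

Fz = [[1.0, 0.0],    # factor with the second column zeroed out:
      [0.5, 0.0]]    # only shock 0 is allowed

# One-step constraints on BOTH variables: R has two rows but rank 1
R = [Fz[0], Fz[1]]

# det(RR') for the 2x2 case
rr = [[sum(a * b for a, b in zip(ri, rj)) for rj in R] for ri in R]
det = rr[0][0] * rr[1][1] - rr[0][1] * rr[1][0]

print(det)  # 0.0: the two constraints cannot both be imposed
```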
Copyright © 2025 Thomas A. Doan