RATS 11.1

All of the examples seen so far have produced a complete factorization of the covariance matrix. An alternative approach is to isolate one shock with certain characteristics. This might be the requirement that a shock produce certain responses, or that a shock be a particular linear combination of the non-orthogonal shocks. The Blanchard-Quah factorization is actually a form of this: one shock (the demand shock) is required to have a zero long-run response, while the other is just whatever shock is needed to complete the factorization.

 

You can construct a factorization around any single (non-zero) shock—some rescaling of it will always be part of a factorization. Why do we need a factorization if we’re just interested in the one shock? This is mainly because the decomposition of variance can only be computed if you have a full factorization. In general, there will be many ways to complete the factorization, but the fraction of the variance explained by the shock of interest will be the same for all of them.
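To see why (a sketch of the standard argument, with notation introduced just for this point): write the moving average representation of the VAR as \(y_t  = \sum\nolimits_{s \ge 0} {\Psi _s u_{t - s} } \) with \(Eu_t u_t ' = \Sigma \), and let \({\bf{F}}\) be any factorization with \({\bf{FF'}} = \Sigma \) whose first column is the (rescaled) shock of interest \(f_1 \). The h-step forecast error variance of variable \(i\) is \(\sum\nolimits_{s = 0}^{h - 1} {\left( {\Psi _s \Sigma \Psi _s '} \right)_{ii} } \), and the part attributed to the first orthogonal shock is \(\sum\nolimits_{s = 0}^{h - 1} {\left( {\Psi _s f_1 } \right)_i^2 } \). Neither quantity depends upon the remaining columns of \({\bf{F}}\), so the fraction of variance attributed to the shock of interest is the same however the factorization is completed.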

 

Using @ForcedFactor

The procedure @ForcedFactor computes a factorization which includes (a scale of) a specified column, or which includes (a scale of) a specified row in the inverse of the factorization. Forcing a column makes one orthogonalized component hit the variables in a specific pattern; forcing a row of the inverse makes one orthogonalized component a particular linear combination of the innovations. For example, in a four variable system where the first two variables are interest rates:

 

@ForcedFactor sigma ||1.0,-1.0,0.0,0.0|| f1

@ForcedFactor(force=column) sigma ||1.0,1.0,0.0,0.0|| f2

 

F1 will be a factorization where the first orthogonal component is the innovation in the difference between the rates. F2 will be a factorization where the first orthogonal component loads equally onto the two interest rates, and hits none of the other variables contemporaneously.
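As a sketch of how such a factor would typically be used (the model name VARMODEL and the 24-step horizon are assumptions, not part of the example above), the forced factor can be passed to the impulse response and variance decomposition instructions through their FACTOR options:

impulse(model=varmodel,factor=f1,steps=24,results=impresp)
errors(model=varmodel,factor=f1,steps=24)

Only the forced component has a structural interpretation, but (as noted above) the fraction of variance attributed to it does not depend on how the rest of the factorization was completed.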

 

Another example is from King, Plosser, Stock and Watson (1991). They need a factorization in which one shock hits the three variables in the system equally in the long run. By using %VARLAGSUMS, this can be done very simply with:

 

compute [rect] a=||1.0|1.0|1.0||

compute x=%varlagsums*a

@forcedfactor(force=column) %sigma x factor
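The reasoning behind this (sketched briefly): %VARLAGSUMS is the VAR lag polynomial evaluated at one, \(\Phi (1) = {\bf{I}} - A_1  -  \cdots  - A_p \), so the long-run response to an impulse vector \(x\) is \(\Phi (1)^{ - 1} x\). Choosing \(x = \Phi (1)a\) with \(a = (1,1,1)'\) makes the long-run response equal to \(a\) itself, that is, equal across the three variables.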

 

@ForcedFactor will also allow you to control more than one column in the factorization, but each column after the first will come out as a linear combination of itself and the columns to its left. (That is, you can control the space spanned by a set of leading columns, but not the individual columns themselves.)
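For instance (an illustrative sketch only: supplying the forced columns as a multi-column matrix, and the particular vectors shown, are assumptions rather than something taken from the example above), forcing the first two orthogonal components to load only on the two interest rates might look like:

compute [rect] cols=||1.0,0.0|-1.0,1.0|0.0,0.0|0.0,0.0||
@forcedfactor(force=column) sigma cols f3

Here only the space spanned by the two columns is controlled; the procedure is free to replace the second column with a linear combination of the two.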

 

Sign Restrictions

Parametric SVAR’s have too frequently been unable to produce models in which the shocks have the desired properties, and the types of zero restrictions which allow isolated shocks to be identified as described above aren’t always reasonable. Uhlig (2005) and Canova and De Nicolo (2002) proposed an even less structured approach, in which a shock is identified by sign restrictions chosen to match the prior understanding of how that shock should behave.

 

Because it is likely that there are many shocks which satisfy a given set of sign restrictions, Uhlig’s approach uses a randomization procedure to explore the space of possible shocks, which requires techniques described in Simulations and Bootstrapping. The basic idea behind it is that if you take any factorization \({\bf{FF'}} = \Sigma \), then any column vector that is part of any factorization of \(\Sigma\) (Uhlig calls these impulse vectors) can be written in the form \({\bf{F}}\alpha \) where \(\left\| \alpha  \right\| = 1\). Thus, the space of single shocks in an SVAR model can be explored by examining the unit sphere in the appropriate dimension.
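A minimal sketch of the draw step (the use of %RANSPHERE, the dimension of 4, and the variable names here are assumptions for illustration; the full procedure also computes the responses to each candidate vector and keeps only draws whose signs satisfy the restrictions):

* any factorization of the covariance matrix will do; %DECOMP gives the Cholesky factor
compute f0=%decomp(%sigma)
* draw alpha uniformly on the unit sphere (dimension 4 for a four-variable VAR)
compute alpha=%ransphere(4)
* candidate impulse vector F*alpha, to be checked against the sign restrictions
compute impvec=f0*alpha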

 

