Faust CRCSPP 1998
This is a replication file for one of the two models from Faust (1998). From the abstract:
"This paper presents a new way to assess robustness of claims [about money] from identified VAR work. All possible identifications are checked for the one that is worst for the claim, subject to the restriction that the VAR produce reasonable impulse responses to shocks. The statistic on which the claim is based need not be identified; thus, one can assess claims in large models using minimal restrictions. The technique reveals only weak support for the claim that monetary policy shocks contribute a small portion of the forecast error variance of post-War U.S. output in standard 6-variable and 13-variable models."
This uses the same data set as Leeper, Sims and Zha (1998). Instead of offering a full Structural (or Identified) Vector Autoregression as was done in that paper, Faust's approach is to provide an upper bound on the percentage of the variance of a series that can be attributed to a shock with the properties of a contractionary monetary shock. It is based upon the following: given any chosen orthogonalizing factorization of the covariance matrix in a Vector Autoregression, the fraction of the variance of a variable at any horizon explained by a linear combination \(\bf{x}\) (note that Faust uses \(\alpha\) for this) of the orthonormal shocks can be written as the quadratic form \({\bf{x'V}}_h {\bf{x}}\), where \({\bf{V}}_h\) is obtained from the IRF's for the original factorization. Since the combination must itself define a unit-variance shock, we need \({\bf{x}}'{\bf{x}} = 1\). His approach is to solve the problem
\begin{equation} \max_{\bf{x}} \,{\bf{x'V}}_h {\bf{x}} \quad \text{subject to} \quad {\bf{x}}'{\bf{x}} = 1 \;\text{and}\; {\bf{C}}_r {\bf{x}} \ge 0 \end{equation}
where the final set of restrictions applies to the impulse responses (signs of responses, signs of changes between periods). In other applications, simply looking at the solution without those sign restrictions might be interesting; in Faust's case, since he is trying to isolate (potential) monetary policy shocks, the restrictions are critical.
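Without the \({\bf{C}}_r {\bf{x}} \ge 0\) restrictions, this is a standard Rayleigh quotient problem: the maximum is the largest eigenvalue of \({\bf{V}}_h\), attained at the corresponding unit-length eigenvector,
\begin{equation} \max_{{\bf{x}}'{\bf{x}} = 1} {\bf{x'V}}_h {\bf{x}} = \lambda_{\max}({\bf{V}}_h) \end{equation}
which is the calculation done with EIGEN below. The sign restrictions are what force the move to a numerical optimizer.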
The paper examines both a 6-variable and a 13-variable model; the RATS code shows only the smaller one. The data are monthly U.S. data from 1959:1 through 1996:6. The series are a monthly interpolated value of real GDP (RGDPMON), the CPI price index (CPISA), a commodity price index (PCM), non-borrowed reserves (NBRSA), total reserves (TRSA) and the Federal Funds rate (FEDFUNDS). All the variables except the interest rate are transformed to 100 x logs:
set rgdpmon = 100.0*log(rgdpmon)
set cpisa = 100.0*log(cpisa)
set pcm = 100.0*log(pcm)
set nbrsa = 100.0*log(nbrsa)
set trsa = 100.0*log(trsa)
The model is a six-lag VAR, with a nine-year (108-month) horizon for the impulse responses:
compute nlags =6
compute nsteps=108
system(model=sixvar)
variables rgdpmon cpisa pcm nbrsa fedfunds trsa
lags 1 to nlags
det constant
end(system)
estimate(noprint,noftests)
*
impulse(responses=irf,model=sixvar,factor=%decomp(%sigma),noprint,steps=nsteps)
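For readers working outside RATS, here is a minimal numpy sketch of the orthogonalized IRF computation, assuming the VAR lag coefficient matrices and residual covariance matrix have already been estimated by other means (all names here are illustrative):
import numpy as np

def orthogonalized_irfs(A, sigma, nsteps):
    # A: list of the p (n x n) lag coefficient matrices A(1)..A(p)
    # sigma: (n x n) residual covariance matrix
    # Returns irf with shape (nsteps, n, n); irf[h, i, j] is the
    # response of variable i at step h to Cholesky-factor shock j.
    n, p = sigma.shape[0], len(A)
    P = np.linalg.cholesky(sigma)      # lower-triangular factor, sigma = P @ P.T
    Psi = [np.eye(n)]                  # reduced-form moving average matrices
    for h in range(1, nsteps):
        Psi.append(sum(A[j - 1] @ Psi[h - j] for j in range(1, min(h, p) + 1)))
    return np.array([Ph @ P for Ph in Psi])

# e.g. irf = orthogonalized_irfs(A_list, sigma_hat, 108), with A_list and
# sigma_hat taken from whatever VAR estimation routine is being used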
The target variable for the analysis is GDP (the first variable in the model); the aim is to bound how much of the variance of GDP can be attributed to a monetary shock. The responses of GDP can be extracted using:
dec vect[series] gdpresp(nvar)
do i=1,nvar
set gdpresp(i) = irf(1,i)
end do i
These are the responses of GDP to each of the Cholesky factor shocks; that is, GDPRESP(1) is the response of GDP to the GDP shock, GDPRESP(2) the response to the CPI shock, and so on. Note, however, that the results will end up being the same regardless of what factor is used in computing the IRF's.
The CMOM (cross-moment matrix) of the series of IRF's gives the quadratic form matrix that converts weights on the orthogonal components into the variance they explain. The forecast error variance (for NSTEPS steps) is the sum of the diagonal elements, so the matrix is scaled by that sum to make the diagonal sum to one:
cmom 1 nsteps
# gdpresp
compute vh=%cmom/%sum(%xdiag(%cmom))
(Prior to scaling, the diagonal of %CMOM sums to the variance of the NSTEPS-ahead forecast error of GDP from the model.)
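In numpy terms, reusing the hypothetical irf array from the sketch above (with GDP as variable 0), the same matrix is:
R = irf[:, 0, :]              # responses of GDP at each step to each shock
V = R.T @ R                   # x @ V @ x = variance explained by shock with weights x
vh = V / np.trace(V)          # scale so the diagonal sums to one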
Compute the eigen decomposition. The largest eigenvalue is the maximum fraction of the forecast error variance of GDP that any single shock can explain.
eigen vh eigval eigvect
The first eigenvector gives the weights on the columns of the original orthogonalization for the maximizing shock (by default, EIGEN normalizes the eigenvectors to unit length, which is what we want). Pre-multiplying it by the factor used in computing the orthogonalized IRF's gives the impact responses:
compute [vector] x=%xcol(eigvect,1)
compute [vector] z=%decomp(%sigma)*x
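The numpy equivalent of these two steps, continuing the earlier sketch (note that eigh returns eigenvalues in ascending order, so the last column is the one of interest; P is the Cholesky factor from the first sketch):
eigval, eigvect = np.linalg.eigh(vh)   # vh is symmetric, so eigh applies
x = eigvect[:, -1]                     # unit-length weights for the largest eigenvalue
z = P @ x                              # impact responses of the maximizing shock
max_share = eigval[-1]                 # maximum explainable fraction of variance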
This displays the maximum percentage explained and the impact response:
?"Maximal explanatory percentage" 100.0*eigval(1)
?"Impact Response of maximizer" *.#### tr(z)
The results are:
Maximal explanatory percentage 92.90157
Impact Response of maximizer
0.2376 -0.0209 -0.3500 -0.0613 -0.3182 -0.4904
Obviously, this is a very high percentage, but does the shock behave like a contractionary monetary policy shock? The assumption is that such a shock would have a positive impact on the interest rate (variable 5) and negative impacts on all the others. The sign of an eigenvector as a whole is arbitrary, so we could flip the whole vector; that would give the correct signs for the interest rate and GDP, but the wrong signs for all the others.
Faust proposes a modified eigenvalue procedure for solving the problem with sign constraints on the impacts. However, with RATS, it's simpler to just solve the maximization problem directly using FIND with constraints. The base model has:
nonlin(parmset=base) x %normsqr(x)==1
that is, we optimize over the vector X subject to the unit-length constraint.
We can apply the sign constraints using:
compute [rect] step0 = %xt(irf,1)
compute [vect] flipper=||-1.0,-1.0,-1.0,-1.0,+1.0,-1.0||
nonlin(parmset=impacts) (step0*x).*flipper>=0.0
which multiplies the first-step responses (which form the same matrix as the factor) by the weight vector X, then takes the elementwise product with the FLIPPER vector of desired signs. In a PARMSET, a >= applied to a vector or matrix constrains all the elements.
The guess value for X is the eigenvector, sign-corrected to give the correct (negative) sign for the response of GDP (variable 1):
compute [vector] x=%xcol(eigvect,1),x=x*(-%sign(z(1)))
The optimization is done with
find(method=bfgs,parmset=base+impacts,iters=400) max %qform(vh,x)
end find
which produces:
FIND Optimization - Estimation by BFGS with inequalities
Convergence in 12 Iterations. Final criterion was 0.0000058 <= 0.0000100
Function Value 0.5803
Variable Coeff
**********************************************
1. X(1) -0.682540584
2. X(2) -0.063050776
3. X(3) 0.052112205
4. X(4) -0.293403908
5. X(5) 0.657792187
6. X(6) -0.093118769
so the maximum subject to the constraints is 58%. The X values can't be interpreted directly, but if we transform them to the impacts, we get:
Impact Response of maximizer
-0.3038 0.0000 0.0000 -0.4562 0.3402 0.0000
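For comparison, a sketch of the same constrained maximization in Python, reusing vh, P, eigvect and z from the earlier sketches; scipy's SLSQP method accepts both the equality and the vector-valued inequality constraints directly:
from scipy.optimize import minimize

flipper = np.array([-1.0, -1.0, -1.0, -1.0, +1.0, -1.0])
x0 = -np.sign(z[0]) * eigvect[:, -1]   # sign-corrected eigenvector as guess

res = minimize(
    lambda x: -(x @ vh @ x),           # minimize the negative = maximize
    x0,
    method="SLSQP",
    constraints=[
        {"type": "eq",   "fun": lambda x: x @ x - 1.0},        # unit length
        {"type": "ineq", "fun": lambda x: flipper * (P @ x)},  # sign restrictions
    ],
)
x_star, max_share = res.x, -res.fun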
This computes and graphs the responses (over a slightly shorter range) to this shock. Note that this does all the graphs on a common scale, which is not ordinarily a good idea (these are responses of different variables to a single shock, not responses of a single variable to different shocks). However, given the data transformations as 100 x logs, everything (including the interest rate) is at least roughly on the same scale:
compute z=step0*x
impulse(shock=z,model=sixvar,steps=nsteps,results=policyresponses,noprint)
spgraph(header="Impulse Responses to Policy Shock",vfields=2,hfields=3)
table(noprint) / policyresponses
do i=1,6
graph(hlabel=vl(i),number=0,max=%maximum,min=%minimum)
# policyresponses(i,1) 1 49
end do i
spgraph(done)
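An illustrative matplotlib version of the same layout, again building on the earlier sketches (Psi from the IRF sketch, P and x_star from the constrained optimization):
import matplotlib.pyplot as plt

labels = ["RGDPMON", "CPISA", "PCM", "NBRSA", "FEDFUNDS", "TRSA"]
shock = P @ x_star                             # impact vector of the policy shock
resp = np.array([Ph @ shock for Ph in Psi])    # (nsteps x nvar) responses

fig, axes = plt.subplots(2, 3, sharey=True, figsize=(10, 6))
fig.suptitle("Impulse Responses to Policy Shock")
for i, ax in enumerate(axes.flat):
    ax.plot(resp[:49, i])                      # slightly shorter range, as above
    ax.axhline(0.0, linewidth=0.5)
    ax.set_xlabel(labels[i])
plt.show()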
Copyright © 2025 Thomas A. Doan