Following your recommendation, I used the F and MU options in DLM directly.
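For other readers: here is how I understand where the F and MU inputs enter the filtering recursion. This is a rough numpy sketch of one textbook Kalman predict/update step, not RATS's actual implementation, and the matrix names just mirror the options.

```python
import numpy as np

def kalman_step(x, P, y, mu, A, F, SW, C, SV):
    """One predict/update step for the DLM
         x[t] = A x[t-1] + F w[t],     w ~ N(0, SW)
         y[t] = mu[t] + C x[t] + v[t], v ~ N(0, SV)
    MU carries the exogenous part of the measurement equation (in the
    program below, the regression component W*z[t]); F maps a
    low-dimensional shock vector into the states, so SW can stay small."""
    xp = A @ x                           # state prediction
    Pp = A @ P @ A.T + F @ SW @ F.T      # predicted state covariance
    e  = y - mu - C @ xp                 # innovation nets out mu
    S  = C @ Pp @ C.T + SV               # innovation covariance
    K  = Pp @ C.T @ np.linalg.inv(S)     # Kalman gain
    return xp + K @ e, Pp - K @ C @ Pp   # updated mean and covariance
```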
Code:
*
* Replication for Fabiani & Mestre (2004), "A system approach for
* measuring the euro area NAIRU," Empirical Economics, vol. 29, no. 2,
* pages 311-341
*
* Baseline model
* For numerical reasons, this estimates the variances in log form and
* scales up the constant in the inflation equation by a factor of 1000.
*
calendar(q) 1970
allocate 2000:4
*
* options
*
compute [real] infinity=1e4 ;*** initial loose conditions
compute [integer] nstates=8 ;*** number of states
compute [integer] exclude=nstates ;*** number of initial obs to drop
compute [integer] nlags=3 ;*** number of lags in measurement equation
*
* Note that you don't *need* to condition on a value as high as
* "nstates" (the EXCLUDE value above). With three observables, it takes
* only three data points (3 x 3 = 9 pieces of information) to resolve
* the eight diffuse initial states.
*
* read data
*
open data awm110500.rat
data(format=rats) / pcd yed mtd compr urx yer
log pcd / lpcd
log yed / lyed
log mtd / lmtd
log compr / lcompr
log yer / lyer
diff lpcd / dlpcd
diff lyed / dlyed
diff lmtd / dlmtd
diff lcompr / dlcompr
diff lyer / dlyer
diff dlpcd / ddlpcd
diff dlyed / ddlyed
diff dlcompr / ddlcompr
diff dlmtd / ddlmtd
*
* Observable data
*
set ddp = ddlpcd
set ddm = ddlmtd
set y = lyer
set u = urx
*
* Observables from measurement equation(s)
*
equation yeqn *
# y u ddp
*
* Co-states from measurement equation(s)
*
equation zeqn *
# ddp{1 to nlags} ddm{0 to nlags} constant
*
* Figure out the observable range
*
inquire(reglist) start end
# %rlfromtable(%eqntable(zeqn)) %rlfromtable(%eqntable(yeqn))
*
* Starting values
*
filter(type=hp) y start-nlags end y_hp
set y_gap = y-y_hp
filter(type=hp) u start-nlags end u_hp
set u_gap = u-u_hp
lin y_gap
# u_gap{1}
compute phi1 = %beta(1)
compute lnvyc = log(%seesq)
lin ddp
# u_gap{1} ddp{1 to nlags} ddm{0 to nlags} constant
compute rho2 = %beta(1)
compute sup1 = %beta(2), sup2 = %beta(3), sup3 = %beta(4)
compute sup4 = %beta(5), sup5 = %beta(6), sup6 = %beta(7), sup7 = %beta(8)
compute kappa= 1000.0*%beta(9)
compute lnvh = log(%seesq)
diff(diffs=2) y_hp / ddy_hp
stat(noprint) ddy_hp start-nlags end
compute lnvbeta=log(%variance)
compute [real] lnvp=-20.0
diff(diffs=2) u_hp / ddu_hp
stat(noprint) ddu_hp start-nlags end
compute lnvzeta=log(%variance)
compute [real] lnvn=-20.0
*
lin u_gap
# u_gap{1 to 2}
compute lnvuc = log(%seesq)
compute delt1 = %beta(1), delt2 = %beta(2)
*
* Initial conditions for SSM
*
declare vector x0(nstates)
compute x0=%const(0)
declare symmetric v0(nstates,nstates)
compute v0=infinity*%identity(nstates)
nonlin phi1 rho2 delt1 delt2 $
lnvh lnvp lnvbeta lnvyc lnvn lnvzeta lnvuc $
sup1 sup2 sup3 sup4 sup5 sup6 sup7 kappa
*
dec rect a f m w
dec symm sw sv
*
* This sets up the input matrices for the DLM; they are fixed across
* time. The eight states are:
*   (1) potential output        (2) trend growth of potential output
*   (3) output gap              (4) output gap, lagged
*   (5) NAIRU                   (6) trend change in the NAIRU
*   (7) unemployment gap        (8) unemployment gap, lagged
*
function DLMSetup
compute a = ||$
1,1,0,0,0,0,0,0|$
0,1,0,0,0,0,0,0|$
0,0,0,0,0,0,phi1,0|$
0,0,1,0,0,0,0,0|$
0,0,0,0,1,1,0,0|$
0,0,0,0,0,1,0,0|$
0,0,0,0,0,0,delt1,delt2|$
0,0,0,0,0,0,1,0||
compute m = ||$
1,0,1,0,0,0,0,0|$
0,0,0,0,1,0,1,0|$
0,0,0,0,0,0,0,rho2||
compute w = ||$
0,0,0,0,0,0,0,0| $
0,0,0,0,0,0,0,0| $
sup1,sup2,sup3,sup4,sup5,sup6,sup7,.001*kappa||
compute f=||$
1,0,0,0,0,0|$
0,1,0,0,0,0|$
0,0,1,0,0,0|$
0,0,0,0,0,0|$
0,0,0,1,0,0|$
0,0,0,0,1,0|$
0,0,0,0,0,1|$
0,0,0,0,0,0||
compute sw=%diag(%exp(||lnvp,lnvbeta,lnvyc,lnvn,lnvzeta,lnvuc||))
compute sv=%diag(||0.0,0.0,exp(lnvh)||)
end
*
* Defines the observable
*
dec frml[vector] yf
*frml yf = %eqnxvector(yeqn,t)-w*%eqnxvector(zeqn,t)
frml yf = || y,u,ddp||
dec frml[vector] z
frml z = || ddp{1}, ddp{2}, ddp{3}, ddm, ddm{1},ddm{2},ddm{3},constant||
dec frml[vector] m2
frml m2 = w * z
*
* This sequence reproduces the results in the paper. The initial pegging
* of the lnvzeta parameter (which governs the change in the NAIRU trend)
* prevents the estimation from heading towards a second mode with a much
* higher value for that parameter. (That mode has only a slightly
* smaller likelihood, but implies a NAIRU which tracks the observed
* unemployment almost exactly.)
*
* For reasons we have yet to determine, it doesn't work to shift to BFGS
* for the second set of iterations.
*
nonlin(parmset=ssmparms) phi1 rho2 delt1 delt2
nonlin(parmset=regparms) sup1 sup2 sup3 sup4 sup5 sup6 sup7 kappa
nonlin(parmset=variances) lnvh lnvp lnvbeta lnvyc=-50.0 lnvn lnvzeta lnvuc
nonlin(parmset=pegs) lnvzeta=-18.5
dlm(start=DLMSetup(),parmset=ssmparms+regparms+variances+pegs,y=yf,a=a,c=tr(m),mu=m2,f=f,sw=sw,sv=sv,$
x0=x0,sx0=v0,condition=exclude, method=simplex,iters=50,noprint) start end
dlm(start=DLMSetup(),parmset=ssmparms+regparms+variances,y=yf,a=a,c=tr(m),mu=m2,f=f,sw=sw,sv=sv,$
x0=x0,sx0=v0,condition=exclude, method=simplex,iters=100) start end
*
* This computes the Kalman smoothed estimates of the states at the final estimates.
*
dlm(start=DLMSetup(),parmset=ssmparms+regparms+variances,y=yf,a=a,c=tr(m),mu=m2,f=f,sw=sw,sv=sv,$
x0=x0,sx0=v0, type=smooth,likelihood=loglik,vhat=vhat) start end xstates vstates
*
* This is a more technically correct set of estimates. The state space
* model has four unit roots in eight states. Using a fully diffuse prior
* (which is what is done above) produces a decided difference in the
* first forty or fifty observations as the high variance on the four
* stationary states resolves out. This has almost no effect on the
* smoothed values, but the filtered values are what determine the
* coefficient estimates.
*
* BFGS *does* work with this.
*
dlm(start=DLMSetup(),parmset=ssmparms+regparms+variances+pegs,y=yf,a=a,c=tr(m),mu=m2,f=f,sw=sw,sv=sv,$
presample=ergodic,condition=exclude, method=simplex,iters=50) start end
dlm(start=DLMSetup(),parmset=ssmparms+regparms+variances,y=yf,a=a,c=tr(m),mu=m2,f=f,sw=sw,sv=sv,$
presample=ergodic,condition=exclude, method=bfgs,iters=200) start end
dlm(start=DLMSetup(),parmset=ssmparms+regparms+variances,y=yf,a=a,c=tr(m),mu=m2,f=f,sw=sw,sv=sv,$
presample=ergodic, type=smooth,likelihood=loglik,vhat=vhat) start end xstates vstates
*
* Pull out the smoothed estimates of the NAIRU and potential GDP. Graph these
* against the actual unemployment and GDP.
*
set nairu start end = xstates(t)(5)
set yp start end = xstates(t)(1)
*
spgraph(vfields=1,hfields=2,footer="Smoothed Series")
graph(hlabel="Output") 2
# y
# yp
graph(hlabel="Unemployment") 2
# u
# nairu
spgraph(done)
*
* Pull out the smoothed estimates of the local trend rates for GDP and U
*
set ytrend start end = xstates(t)(2)
diff yp / dyp
set utrend start end = xstates(t)(6)
diff nairu / dnairu
spgraph(vfields=1,hfields=2,footer="Smoothed Series")
graph(hlabel="Trend in Output") 2
# dyp
# ytrend
graph(hlabel="Trend in Unemployment") 2
# dnairu
# utrend
spgraph(done)
*
* Pull out the smoothed estimates of the gap. Compare with the gaps
* generated by univariate HP filters.
*
set ygap start end = xstates(t)(3)
set ugap start end = xstates(t)(7)
spgraph(vfields=1,hfields=2,footer="Cyclical Series")
graph(hlabel="Output") 2
# y_gap
# ygap
graph(hlabel="Unemployment") 2
# u_gap
# ugap
spgraph(done)
*
* Pull out the smoothed residual in the inflation measurement equation.
* The fitted value (regression component) will be the difference between
* this and the actual ddp series.
*
set pres start end = vhat(t)(3)
set pfit start end = ddp-pres
spgraph(footer="Phillips Curve",vfields=1,hfields=2)
graph(key=none,hlabel="Actual vs Fit") 2
# ddp
# pfit
graph(key=none,hlabel="Residual") 1
# pres
spgraph(done)
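A note on PRESAMPLE=ERGODIC in the last set of estimates: for the stationary states, the ergodic prior is the fixed point of the variance recursion, P = A P A' + Q. A minimal numpy sketch of the standard vec/Kronecker solution (an illustration only — the unit-root states have no ergodic distribution, which is why DLM has to treat them separately, and the delt1/delt2 numbers here are made up):

```python
import numpy as np

def ergodic_cov(A, Q):
    """Unconditional state covariance: the fixed point of P = A P A' + Q,
    solved with the identity vec(A P A') = (A kron A) vec(P).
    Valid only when A's eigenvalues are inside the unit circle."""
    n = A.shape[0]
    vecP = np.linalg.solve(np.eye(n * n) - np.kron(A, A), Q.reshape(n * n))
    return vecP.reshape(n, n)

# Example: the AR(2) unemployment-gap block x = [u_gap, u_gap(-1)],
# with illustrative values standing in for delt1, delt2
A = np.array([[0.5, 0.2],
              [1.0, 0.0]])
Q = np.diag([1.0, 0.0])        # only the first state receives a shock
P = ergodic_cov(A, Q)
```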
I hope I have written the code correctly using the new feature. It would also help to add more explanatory comments, since the model is complex.
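On why the variances are estimated in log form (the comment at the top of the program): the optimizer then works on an unconstrained parameter, and exp(lnv) is positive by construction, so no boundary constraints are needed. A toy numpy illustration, unrelated to the DLM code itself:

```python
import numpy as np

# ML estimate of a zero-mean normal variance via Newton steps on lnv.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 2.0, size=500)    # true variance = 4
n, S = len(data), np.sum(data ** 2)

lnv = 0.0                                # unconstrained starting value
for _ in range(50):
    v = np.exp(lnv)
    grad = 0.5 * (n - S / v)             # d(-loglik)/d(lnv)
    hess = 0.5 * S / v                   # d2(-loglik)/d(lnv)^2
    lnv -= grad / hess                   # Newton step, never leaves the reals

sigma2_hat = np.exp(lnv)                 # converges to mean(data**2)
```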