Scale problem in MNZ03
Posted: Thu Jun 24, 2010 9:24 am
by ivory4
Code:
open data lgdp.txt
calendar(q) 1947
data(format=free,org=columns) 1947:01 1998:02 lgdp
*
* Everything is run on 100*log data
*
set lgdp = 100.0*lgdp
*
@NBERCycles(down=recession)
*
* UC-0 decomposition with AR(2) cycle, fixed trend rate.
*
nonlin mu sn ph1 ph2 se
*
dec frml[rect] af
dec frml[vect] zf
dec frml[symm] swf
*
frml af = ||1.0,0.0,0.0|$
0.0,ph1,ph2|$
0.0,1.0,0.0||
frml zf = ||mu,0.0,0.0||
frml swf = %diag(||sn^2,se^2||)
*
compute [vect] c=||1.0,1.0,0.0||
compute [rect] f=%identity(2)~~%zeros(1,2)
*
* Get initial guess values
*
filter(type=hp) lgdp / gdp_hp
set gap_hp = lgdp - gdp_hp
linreg gap_hp
# gap_hp{1 2}
compute ph1=%beta(1),ph2=%beta(2),se=sqrt(%seesq)
set trend = t
linreg gdp_hp
# constant trend
compute mu=%beta(2)
compute sn=sqrt(.1*%seesq)
*
* In order to be comparable with the BN decomposition (which is a
* "filtered" calculation), this uses filtered rather than smoothed
* estimates of the components.
*
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter) / states0
*
set cycle0 = states0(t)(2)
set trend0 = states0(t)(1)
*
graph(footer="Figure 1 UC-0 Cycle, US Real GDP. Percent Deviation from Trend",$
shaded=recession)
# cycle0
This is the original code for MNZ_RESTAT_2003. I notice that if the 100 scaling is removed, the results sometimes become chaotic and meaningless, while at other times they come out correctly scaled by 1/100. What could be the reasons for this? I used the CLEAR PROGRAM button each time before re-running.
Re: Scale problem in MNZ03
Posted: Thu Jun 24, 2010 10:14 am
by ivory4
By the way, in the original file mnz_restat_2003.prg
Code:
frml zf = ||mu,0.0,0.0||
frml swf = ||sn^2|rhone*sn*se,se^2||
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter) / statesur
set cycleur = statesur(t)(2)
set trendur = statesur(t)(1)
*
graph(footer="Figure 3 UC-UR Cycle, US Real GDP",$
shaded=recession)
# cyclebn
The last line should be # cycleur.
Re: Scale problem in MNZ03
Posted: Thu Jun 24, 2010 10:17 am
by ivory4
When the UC-UR model is estimated in the MNZ paper, why is the covariance in the DLM defined as RHONE*SN*SE?
Re: Scale problem in MNZ03
Posted: Thu Jun 24, 2010 3:05 pm
by TomDoan
ivory4 wrote:When the UC-UR model is estimated in the MNZ paper, why is the covariance in the DLM defined as RHONE*SN*SE?
That's the parameterization from the original paper, so that might be better addressed to one of the authors. However, their main point is that the value of rho is what causes the difference in the results between the different ways of extracting the trends, so parameterizing the covariance matrix so rho is estimated directly makes sense.
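Concretely, that means rho enters the NONLIN list directly as RHONE, with the SW formula built from it. A minimal sketch (the guess value rhone=0.0 is an assumption; the other guesses carry over from the UC-0 setup above):
Code:
*
* UC-UR: rho estimated directly as RHONE. The guess value of 0.0 is an
* assumption here; the other guesses are the ones computed for UC-0.
*
nonlin mu sn ph1 ph2 se rhone
compute rhone=0.0
*
* Covariance between trend and cycle shocks parameterized as rhone*sn*se
*
frml swf = ||sn^2|rhone*sn*se,se^2||
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter) / statesur
set cycleur = statesur(t)(2)
set trendur = statesur(t)(1)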
Re: Scale problem in MNZ03
Posted: Fri Jun 25, 2010 8:36 am
by ivory4
To get the likelihood figure in the paper, I use the following:
Code:
compute bench = %logl
set ll = 0.0
set rho = 0.0
compute xrho = -0.95
* From -0.95 to 0.95
do j = 1, 39, 1
nonlin mu sn ph1 ph2 se rhone=xrho
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter) / statesur
compute ll(j) = %logl
compute rho(j) = xrho
compute xrho = xrho+0.05
end do
set ratio = 0.0
do j = 1, 39, 1
compute ratio(j) = -2*(ll(j)-bench)
end do
set chi2 1 39 = 3.8415
scatter(style=lines,header="Likelihood Ratio Statistics For Testing RhoNE = RhoNE,0",$
hlabel="RhoNE,0",$
vlabel="LR Statistics") 2
# rho ratio 1 39
# rho chi2 1 39 3
Some of the DLM estimations with RHONE above 0.5 do not converge, which I think makes the likelihood calculation inaccurate. If I change 39 to 29, which restricts RHONE to the range from -0.95 to 0.5, it seems better. Are there any suggestions on this issue?
Re: Scale problem in MNZ03
Posted: Fri Jun 25, 2010 10:13 am
by TomDoan
Apparently, the odd behavior of the likelihood ratio in the paper at around rho=.6 is correct. I've added the replication of figure 4 to the example, and also corrected the error in the earlier graph that you reported.
This is the code that I used for generating the grid of values. With BFGS over sequential estimates, it's helpful to reuse the previous estimate for the Hessian, which is done here. For all i's except 1, this uses the option HESSIAN=%XX, which feeds the previous covariance matrix in.
Code:
@gridseries(from=-.99,to=.99,size=.01,pts=ngrid) rhos
set lrstat 1 ngrid = 0.0
nonlin mu sn ph1 ph2 se
*
* Reset to the full optimum coefficients
*
compute %parmspoke(fullset,baseparms)
do i=1,ngrid
compute rhone=rhos(i)
if i<>1
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,hessian=%xx,type=filter)
else
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter)
compute lrstat(i)=-2.0*(%logl-baselogl)
end do i
scatter(style=line,footer="Figure 4 Likelihood Ratio Tests for Testing Value of Rho")
# rhos lrstat
Re: Scale problem in MNZ03
Posted: Fri Jun 25, 2010 2:14 pm
by ivory4
TomDoan wrote:Apparently, the odd behavior of the likelihood ratio in the paper at around rho=.6 is correct. I've added the replication of figure 4 to the example, and also corrected the error in the earlier graph that you reported.
I extended the range of the data, and it seems that once rho passes 0.6 the DLM estimation frequently fails to converge. I also changed the model to a UC-UR with an ARMA(2,1) cycle, and for rho above 0.0 DLM fails to converge in many cases.
Re: Scale problem in MNZ03
Posted: Fri Jun 25, 2010 5:26 pm
by TomDoan
First off, note that the likelihood function is VERY flat in rho. It's possible that the model has multiple modes and the sequential estimates across a rho grid have discovered that when the coefficients change sharply on a small change in rho. You might want to see what happens if you take the original data, fix rho at .58 (anywhere left of the quick change) and give it guess values from the coefficients at .6 (or anywhere right of the quick change). If that gives you a higher likelihood than you originally got for .58, try reversing the order of estimation on the grid and see what happens.
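A minimal sketch of that check, reusing the PARMSET name fullset from the grid code above (the .60 and .58 values are just example points on either side of the jump):
Code:
*
* Estimate with rho fixed to the right of the jump and save those coefficients
*
compute rhone=0.60
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter)
compute rightparms=%parmspeek(fullset)
*
* Re-estimate at .58 starting from the right-side coefficients and compare
* %logl with the value the left-to-right grid produced at .58
*
compute %parmspoke(fullset,rightparms)
compute rhone=0.58
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter)
disp "logl at rho=.58 from right-side guesses =" %logl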
Re: Scale problem in MNZ03
Posted: Sun Aug 29, 2010 1:47 pm
by ivory4
In this revised code, LR = -2*(%logl - baselogl).
Is the %logl from DLM calculated in the same way as the %logl from LINREG, as shown on page 178 of the User's Guide?
Re: Scale problem in MNZ03
Posted: Sun Aug 29, 2010 5:00 pm
by TomDoan
All log likelihoods in RATS are computed using all the integrating constants so they will be compatible between models done with different estimators. In comparing DLM with LINREG, you would need to make sure that the estimation ranges are the same, since DLM can handle pre-sample missing values, while LINREG can't.
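For instance, a minimal sketch of pinning both instructions to one explicit range (the AR(2) LINREG and the 1947:3 start are assumptions purely for illustration; the point is only that both %logl values then cover identical observations):
Code:
linreg lgdp 1947:3 1998:2
# constant lgdp{1 2}
compute logl_ols=%logl
dlm(presample=ergodic,a=af,z=zf,sw=swf,c=c,f=f,y=lgdp,method=bfgs,type=filter) 1947:3 1998:2 states0
compute logl_dlm=%logl
disp "LINREG logl" logl_ols "DLM logl" logl_dlm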
Re: Scale problem in MNZ03
Posted: Sun Aug 29, 2010 9:34 pm
by ivory4
I thought the LR test would be -2*(L(null) - L(alternative))?
Re: Scale problem in MNZ03
Posted: Sun Aug 29, 2010 9:36 pm
by ivory4
TomDoan wrote:All log likelihoods in RATS are computed using all the integrating constants so they will be compatible between models done with different estimators. In comparing DLM with LINREG, you would need to make sure that the estimation ranges are the same, since DLM can handle pre-sample missing values, while LINREG can't.
I was wondering about the formula for the log likelihood in DLM because I may need to compare results from RATS with results in other papers that were not necessarily obtained with RATS, and the reported values are sometimes at quite different levels.
Re: Scale problem in MNZ03
Posted: Mon Aug 30, 2010 6:46 am
by TomDoan
ivory4 wrote:I thought the LR test would be -2*(L(null) - L(alternative))?
That's what it is. baselogl is the likelihood with rho chosen freely, while the %logl's are with rho fixed at various choices.
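In the notation of the grid code, with baselogl the log likelihood at the unrestricted optimum (rho free) and %logl the maximized log likelihood with rhone pinned at a grid value, the statistic being plotted is
$$\mathrm{LR}(\rho_0) = -2\left[\log L\bigl(\rho=\rho_0,\hat\theta(\rho_0)\bigr) - \log L\bigl(\hat\rho,\hat\theta\bigr)\right],$$
so the fixed-rho fit plays the role of the null and the freely estimated model is the alternative.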
Re: Scale problem in MNZ03
Posted: Mon Aug 30, 2010 8:25 am
by TomDoan
ivory4 wrote:TomDoan wrote:All log likelihoods in RATS are computed using all the integrating constants so they will be compatible between models done with different estimators. In comparing DLM with LINREG, you would need to make sure that the estimation ranges are the same, since DLM can handle pre-sample missing values, while LINREG can't.
I was wondering about the formula for the log likelihood in DLM because I may need to compare results from RATS with results in other papers that were not necessarily obtained with RATS, and the reported values are sometimes at quite different levels.
RATS includes the constant -(T/2)*log(2π), where T is the number of valid observations (counting multiples if Y is a vector). The rest should be the same for any software that's doing the calculation correctly.
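So if another program reports a log likelihood that omits that constant, it can be put on the RATS scale by adding the constant back. A minimal sketch (otherlogl is a hypothetical hand-entered value; %nobs is the observation count from the last DLM estimation):
Code:
*
* OTHERLOGL is a hypothetical externally reported log likelihood that omits
* the constant; add -(T/2)*log(2*pi) to make it comparable with RATS %logl.
*
compute otherlogl = -123.456
compute adjusted  = otherlogl - (%nobs/2.0)*log(2.0*%pi)
disp "external logl with constant included:" adjusted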