Clark (1987) Model sensitivity to sample size

Post by timduy »

I'm working with the Clark (1987) model of US GNP, applied instead to Oregon non-farm payrolls. Estimation works if I begin the sample in 1950, but if I shorten the sample (starting in 1960 instead), estimation breaks down. Any ideas on why it would be so sensitive? I have played around with changing the initial guesses for the variances, but with no success.
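
For reference, the model is the usual Clark decomposition: a random walk with drift for the trend plus an AR(2) cycle. A stripped-down sketch of the DLM setup is below (the series name LORNFP, the starting guesses, and the mapping of SN/SZ/SE to the trend, drift, and cycle shocks are illustrative, not lifted from my program; the attached ORNAv4.rpf has the real thing):

Code:

* Clark (1987):  y(t) = n(t) + z(t)
*   trend:  n(t) = n(t-1) + g(t-1) + v(t),  g(t) = g(t-1) + w(t)
*   cycle:  z(t) = ph1*z(t-1) + ph2*z(t-2) + e(t)
* State vector: [n(t), g(t), z(t), z(t-1)]
nonlin sn ph1 ph2 se sz
dec frml[rect] af
dec frml[symm] swf
* Transition matrix
frml af  = ||1.0,1.0,0.0,0.0|$
             0.0,1.0,0.0,0.0|$
             0.0,0.0,ph1,ph2|$
             0.0,0.0,1.0,0.0||
* State shock covariance (sn=trend, sz=drift, se=cycle; assumed mapping)
frml swf = ||sn^2|$
             0.0,sz^2|$
             0.0,0.0,se^2|$
             0.0,0.0,0.0,0.0||
* Observation loadings: y = n + z
compute [vect] c=||1.0,0.0,1.0,0.0||
* Illustrative starting guesses
compute sn=0.5,ph1=1.5,ph2=-0.6,se=0.5,sz=0.1
dlm(a=af,sw=swf,c=c,y=lornfp,presample=diffuse,$
   method=bfgs,piters=100,type=filter) 1950:1 2015:2 states0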

Program (with the 1950 beginning): ORNAv4.rpf (attached)
Output with longer sample:

Code:

DLM - Estimation by BFGS
Convergence in     2 Iterations. Final criterion was  0.0000080 <=  0.0000100
Quarterly Data From 1950:01 To 2015:02
Usable Observations                       262
Rank of Observables                       260
Log Likelihood                      -322.0541

    Variable                        Coeff      Std Error      T-Stat      Signif
************************************************************************************
1.  SN                           -0.540186595  0.040730620    -13.26242  0.00000000
2.  PH1                           1.798589245  0.009537075    188.58919  0.00000000
3.  PH2                          -0.817249515  0.009048379    -90.32000  0.00000000
4.  SE                            0.514894796  0.045477307     11.32202  0.00000000
5.  SZ                            0.029441744  0.017593677      1.67343  0.09424316
Output with the shorter sample:

Code:

DLM - Estimation by BFGS
NO CONVERGENCE IN 4 ITERATIONS
LAST CRITERION WAS  0.0000000
ESTIMATION POSSIBLY HAS STALLED OR MACHINE ROUNDOFF IS MAKING FURTHER PROGRESS DIFFICULT
TRY HIGHER SUBITERATIONS LIMIT, TIGHTER CVCRIT, DIFFERENT SETTING FOR EXACTLINE OR ALPHA ON NLPAR
RESTARTING ESTIMATION FROM LAST ESTIMATES OR DIFFERENT INITIAL GUESSES MIGHT ALSO WORK
Quarterly Data From 1960:01 To 2015:02
Usable Observations                       222
Rank of Observables                       219
Log Likelihood                      -207.6280

    Variable                        Coeff      Std Error       T-Stat       Signif
**************************************************************************************
1.  SN                           -0.010929528  0.000013466     -811.62177  0.00000000
2.  PH1                           1.459218643  0.000000276  5283294.24210  0.00000000
3.  PH2                          -0.459193269  0.000000072 -6374876.66831  0.00000000
4.  SE                            0.685293944  0.000024336    28159.69119  0.00000000
5.  SZ                            0.280365867  0.000066049     4244.82895  0.00000000
Re: Clark (1987) Model sensitivity to sample size

Post by TomDoan »

You're hitting the stationarity boundary with the smaller data set: notice that the PHs sum to effectively one (1.45922 - 0.45919 = 1.00003). The AR(2) cycle has a unit root exactly when PH1 + PH2 = 1, and the trend already contributes two unit roots (random-walk level plus random-walk drift), so at that boundary the model has three unit roots, which causes a discontinuity in the likelihood function. If you add the option CONDITION=3, you eliminate that problem by having it condition on the first three observations whether or not the PHs give you a third unit root.

Code:

dlm(...,condition=3,$
   method=bfgs,piters=100,type=filter) 1950:1 2015:2 states0

Note that you probably want to scale PITERS back to maybe 10 or so. With 100 preliminary simplex iterations, the estimates are already close to converged by the time BFGS starts, so you don't get enough BFGS iterations to build a good estimate of the covariance matrix of the coefficients.
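
That is, rerun with something along these lines (again, only the tail of the DLM instruction is shown; keep your other options as they are):

Code:

dlm(...,condition=3,$
   method=bfgs,piters=10,type=filter) 1950:1 2015:2 states0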
Re: Clark (1987) Model sensitivity to sample size

Post by timduy »

I was thinking that was it. Thank you!