The 'Davies' Test

Questions and discussions on Vector Autoregressions
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

The 'Davies' Test

Unread post by alexecon »

I was going through Chapter 7 of Applied Econometric Time Series by W. Enders (4th ed.), trying to find out more about how to test for the presence of non-linearity in a VAR. Apparently, the usual LR test (asymptotically chi-squared) is not recommended because there are parameters that are unidentified under the null. Terasvirta (2006) also discusses this.

The solution (?) is to use Davies's (1987) sup test. Enders outlines in his book how to construct critical values but does not accompany the discussion with a program. I have seen implementations of Davies's test both in a univariate context (Garcia and Perron, 1996, http://www.jstor.org/stable/2109851) and in a multivariate one (Balcilar et al., 2015, https://doi.org/10.1016/j.eneco.2015.01.026).

Would it be worth adding this functionality to the wealth of procedures already available in RATS?
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

Re: The 'Davies' Test

Unread post by alexecon »

Assuming that the likelihood ratio, viewed as a function of the unidentified parameter, has a single peak, an upper bound for the p-value of this test can be calculated as follows:
Pr[χ²(q) > M] + 2(M/2)^(q/2) exp(−M/2) / Γ(q/2)
where M = 2(lnL1 − lnL0) is the observed LR statistic, q is the number of parameters that appear only under the alternative hypothesis, and Γ is the Gamma function. [Davies (1987) and Garcia and Perron (1996)]
Any suggestions as to how this can be written in RATS? Many thanks in advance for any help.

PS: This test is also relevant to non-VAR settings, of course, so I may have posted in a less-than-ideal topic. Please move it if you think that's the case!
Last edited by alexecon on Thu Jan 26, 2017 11:26 am, edited 1 time in total.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: The 'Davies' Test

Unread post by TomDoan »

You're missing a 2 on the LR statistic, but isn't everything in that directly computable?
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

Re: The 'Davies' Test

Unread post by alexecon »

TomDoan wrote:You're missing a 2 on the LR statistic, but isn't everything in that directly computable?
Yes (I will edit), and yes. What I wasn't sure about was how to invoke the Gamma function, but I have now found it in the documentation. Thanks!
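For the record, here is a minimal sketch of how I intend to compute the bound, assuming %CHISQR returns the upper-tail probability of a chi-squared and %LNGAMMA the log of the Gamma function (the log likelihoods and q below are placeholders to be replaced with values from the actual estimations):

Code: Select all

* Davies upper bound -- sketch only, placeholders to be replaced
* logl1, logl0 = log likelihoods of the unrestricted and restricted models
* q = number of parameters that appear only under the alternative
compute logl1  = 0.0
compute logl0  = 0.0
compute q      = 1.0
compute m      = 2.0*(logl1-logl0)
compute davies = %chisqr(m,q)+2.0*(m/2.0)^(q/2.0)*exp(-m/2.0)/exp(%lngamma(q/2.0))
display "Davies upper bound on the p-value =" davies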
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

Re: The 'Davies' Test

Unread post by alexecon »

I am trying to apply this test to the MS-VECM model in the Balcilar et al (2015) paper. (The authors report that they reject the restricted linear model using the Davies upper bound.) Following estimation of the non-switching and switching VECMs using the code here viewtopic.php?f=8&t=2536&hilit=msvecm, is it possible to obtain the two log likelihoods?
Last edited by alexecon on Wed Aug 09, 2017 9:27 am, edited 1 time in total.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: The 'Davies' Test

Unread post by TomDoan »

Sure. The ESTIMATE (which does the one-regime VECM) produces %LOGL, and the MAXIMIZE (which does the MS-VECM) does also. Save the values into other variables and compute the LR test statistic as 2 x their difference.
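Schematically (a sketch; the ESTIMATE and MAXIMIZE setups themselves come from that program):

Code: Select all

* right after the ESTIMATE for the one-regime VECM
compute logllinear = %logl
* right after the MAXIMIZE for the MS-VECM
compute loglms = %logl
compute lrstat = 2.0*(loglms-logllinear)
display "LR statistic =" lrstat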
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: The 'Davies' Test

Unread post by TomDoan »

I'm not convinced that this is a proper use of Davies (1987). Davies never mentions applicability to asymptotic chi-squared distributions---only actual finite-sample chi-squared distributions (from linear models with normal residuals). He also allows for only a single unidentified parameter under the null (the adjustment is based upon an approximation to the integral over that), while an MS model will have two in testing two regimes against one (the two transition probabilities). The other references all go back to Garcia and Perron, which gives an incorrect summary of the result from Davies.
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

Re: The 'Davies' Test

Unread post by alexecon »

Thanks for the nudge. Davies (1987) is a 30-year-old reference now, so the literature has indeed moved on. It is interesting, however, that it is still used in recent MS papers, e.g. in Balcilar et al., and discussed at some length in Enders' book. Looking at more recent contributions, the following claim to offer tangible improvements:
  • HANSEN, B. (1996): “Inference When a Nuisance Parameter Is Not Identified Under the Null Hypothesis,” Econometrica, 64, 413–430.
  • GARCIA, R. (1998): “Asymptotic Null Distribution of the Likelihood Ratio Test in Markov Switching Models,” International Economic Review, 39, 763–788.
  • CHO, J. S., AND H. WHITE (2007): “Testing for Regime Switching,” Econometrica, 75, 1671–1720.
  • CARRASCO, M., HU, L., AND PLOBERGER, W. (2014): “Optimal Test for Markov Switching Parameters,” Econometrica, 82, 765–784. http://onlinelibrary.wiley.com/doi/10.3 ... A8609/full
Clearly there is value in testing for constant means and variances across regimes instead of presuming the validity of a regime-switching model; for example, Carrasco et al. cannot reject the null of a constant mean in Hamilton's model. I'm working in a VAR/VECM setting and allow for shifts in the process, but I cannot say with any confidence whether this is the appropriate modelling decision. I can only rely on the statistical significance of the estimated parameters across regimes and on the regime-dependent impulse responses, so the lack of a formal testing procedure places a serious limitation on this kind of work.

What do you think of the above papers? Carrasco supplies the GAUSS code here: https://www.webdepot.umontreal.ca/Usage ... /index.htm. The data and programme needed to replicate Table 3 in their paper follow.

Data:

Code: Select all

2123.6
2160.8
2204.1
2208.7
2229.9
2232.4
2247
2320.5
2363.4
2382
2366.1
2328.9
2318.8
2321.7
2347.9
2396.4
2465.3
2505.5
2539
2553.4
2543.7
2563.4
2560.6
2599.9
2618.7
2613.8
2637.9
2606.3
2536.6
2552.4
2611.2
2671.5
2725.1
2793.6
2791.5
2802.2
2864
2851.1
2856.5
2821.2
2839
2890.6
2937.5
2997.3
3050.5
3086
3114.6
3125.4
3164.7
3203.2
3263.5
3288.9
3364.4
3401.6
3448.3
3455.9
3543.6
3592.2
3662.4
3747.6
3839.9
3852.6
3877.2
3909.5
3943.4
3943.5
3977.4
4006.7
4089.4
4158.9
4188.1
4205.9
4271.7
4283
4308.6
4288.5
4282.5
4291.2
4328.7
4280.7
4402.9
4429.4
4461.4
4475.3
4556.1
4662.9
4710
4786.8
4913.7
4972.8
4953.7
5000.8
4966.3
4975.6
4921.9
4895.5
4829.3
4866.2
4950.6
5022.6
5134.3
5174.4
5200
5238.6
5306.1
5409.2
5504.3
5496.9
5523.4
5728.7
5788.8
5872.6
5883.8
5896.8
5952.3
5967.8
5988.3
5860.9
5845.6
5938.6
6064.8
6013.2
6089.3
6022.1
5920.6
5960.6
5926.3
5928.6
6000.8
6138.3
6259.3
6389.9
6507.2
6618.8
6681.9
6728.7
6780
6840
6937.6
6994.5
7055.9
7073
7144.3
7168.5
7210.9
7293.5
7355.6
7483.8
7530.3
7623.1
7658.6
7763.7
7835.6
7892.6
7957.7
7983.3
8063.5
8096.6
8089.6
8050.1
7994.7
8033.4
8062.6
8104.4
8192.3
8278.3
8359.1
8447.8
8472.8
8518
8570.1
8663
8755.9
8870
8924
9022.1
9056.1
9078.7
9137.3
9214.6
9285
9434.1
9511.9
9621.4
9685
9837.4
9951.3
10019.8
10124.7
10212.5
10331.5
10512.2
10619
10707.5
10839.7
11045.6
11069.2
11288.1
11292.8
11386.8
11333.9
11416.9
11360.3
11468.8
11523.3
11564.5
11638.7
11661.7
11694.9
11808.1
12000.8
12136
12234
12285.1
12386.7
12460.7
12623.4
12667.7
12776.9
12812.4
12974.7
13021.6
13009.6
13107.3
13122.6
13248.4
13405.8
13511
13431.7
13476.6
13367.4
12991.9
12785.6
12770.7
12844.9
12971.6
13092.9
13238.4
13328.9
13383.9

Programme:

Code: Select all

/* This program applies CHP to test for Markov Switching in Hamilton's AR model on GNP */
/* test for switching in the mean only */
/* Hamilton's original series: t = 135 for gnp82.dat */
/* Extended series: t = 239 for gnpc96.txt */
/* critical value and p-values are reported */
/* Table 3 in the resubmission */
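/* Outline: estimate the AR(4) under the linear null by OLS; build the supTS and expTS */
/* statistics from the scores over a grid of rho in [-rho_b, rho_b]; then parametric-   */
/* bootstrap the null distribution (itnb replications) for critical values and p-values */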

new;

rndseed 2938438;

t = 239;
rho_b = 0.7;
itnb = 3000;
load gnp[]=c:/paper/werner/revision/application/hamilton/gnpc96.txt;

gnp = 100*ln(gnp);
dat = gnp[2:t+1] - gnp[1:t];

nar = 4;         

nn = t-nar;

y = dat[1+nar:t];
x = dat[nar:t-1];

it = 2; 
do while it <= nar;
  x = x~dat[nar-it+1:t-it];
  it = it + 1; 
endo;

/* first estimate the AR under H0 */

xx = ones(nn,1)~x;

b0 = inv(xx'xx)*(xx'y);
u = y - xx*b0;
v0 = sqrt((u'u)/rows(u));
var0 = inv(xx'xx)*((xx.*u)'(xx.*u))*inv(xx'xx);
se0 = sqrt(diag(var0));

phi = b0[2:1+nar];      @AR coefficients@
mu0 = b0[1]/(1-sumc(phi));  @ mean of GNP growth @

/* get the 1st derivative of the log likelihood, use lt as prefix */

ltmu = u*(1-sumc(phi))/v0^2;

ltphi = u.*(x[.,1]-mu0)/v0^2;

for i(2,nar,1);
    ltphi = ltphi~(u.*(x[.,i]-mu0)/v0^2);
endfor;

ltsig2 = u^2/(2*v0^4)-1/(2*v0^2);

ltmx = ltmu~ltphi~ltsig2;

/* get the 2nd derivative of the log likelihood, use mt as prefix */
/* In this case, we only need the derivative wrt mu */

mtmu = -(1-sumc(phi))^2/v0^2;   @scalar@


/* calculate our test statistic supTS and expTS */

rho = -rho_b;   @lower bound of rho@
ir = 1;           @iterations for rho@

cv = zeros(200,1); @stores supTS critical values for each rho@

cv2 = zeros(200,1);    @stores expTS critical values for each rho@


do while rho<=rho_b + 0.01;

mu2t = zeros(nn,1);
xs = zeros(nn,1);

xs[2] = rho * ltmu[1];
mu2t[2] = ltmu[2] * xs[2];

for i(3,nn,1);
    xs[i] = rho * (xs[i-1] + ltmu[i-1]);
    mu2t[i] = ltmu[i] * xs[i];

endfor;

mu2t = (mtmu+ltmu^2)/2+mu2t;
gamma_e = 1/sqrt(t)*sumc(mu2t);

epsilont = mu2t-ltmx*inv(ltmx'ltmx)*ltmx'mu2t; @projection of mu2t on lt1@
esqe = meanc(epsilont^2);

tspe = gamma_e/sqrt(esqe);

if esqe<1e-5;
    cv[ir] = 0;
    cv2[ir] = 1;
else;
    cv[ir] = 1/2*maxc(tspe|0)^2;    @supTS test statistic@ 
    cv2[ir] = sqrt(2*pi)*exp((tspe-1)^2/2)*cdfn(tspe-1);  @expTS test statistic@
endif;

rho = rho+0.01;
ir = ir+1;
endo;

cv = cv[1:ir-1];

cv2 = cv2[1:ir-1];

supts = maxc(cv);
expts = meanc(cv2);
print /flush cv~cv2;
/* bootstrap the critical values */
supb = zeros(itnb,1);   @stores the bootstrapped supTS critical value@
expb = zeros(itnb,1);   @stores the bootstrapped expTS critical value@


for itb(1,itnb,1);

/* first simulate the series itnb(e.g., 3000) times according to ML estimators */

ys = zeros(t,1);
ys[1:nar] = meanc(y)+stdc(y)*rndn(nar,1);   @start from stationary distribution@

for i(nar+1,t,1);
    ys[i] = b0'(1|ys[i-1:i-nar])+v0*rndn(1,1);
endfor;

yb = ys[1+nar:t];
xb = ys[nar:t-1];

it = 2; 
do while it <= nar;
  xb = xb~ys[nar-it+1:t-it];
  it = it + 1; 
endo;

/* first estimate the AR under H0 */

xxb = ones(nn,1)~xb;

b0b = inv(xxb'xxb)*(xxb'yb);
ub = yb - xxb*b0b;
v0b = sqrt((ub'ub)/rows(ub));

phib = b0b[2:1+nar];      @AR coefficients@
mu0b = b0b[1]/(1-sumc(phib));  @ mean of GNP growth @

/* get the 1st derivative of the log likelihood, use lt as prefix */

ltmub = ub*(1-sumc(phib))/v0b^2;

ltphib = ub.*(xb[.,1]-mu0b)/v0b^2;

for i(2,nar,1);
    ltphib = ltphib~(ub.*(xb[.,i]-mu0b)/v0b^2);
endfor;

ltsig2b = ub^2/(2*v0b^4)-1/(2*v0b^2);

ltmxb = ltmub~ltphib~ltsig2b;

/* get the 2nd derivative of the log likelihood, use mt as prefix */

mtmub = -(1-sumc(phib))^2/v0b^2;


/* calculate our test statistic supTS and expTS */

rho = -rho_b;   @lower bound of rho@
ir = 1;           @iterations for rho@

cv = zeros(200,1); @stores supTS critical values for each rho@

cv2 = zeros(200,1);    @stores expTS critical values for each rho@


do while rho<=rho_b + 0.01;


mu2tb = zeros(nn,1);
xsb = zeros(nn,1);

xsb[2] = rho * ltmub[1];
mu2tb[2] = ltmub[2] * xsb[2];

for i(3,nn,1);
    xsb[i] = rho * (xsb[i-1] + ltmub[i-1]);
    mu2tb[i] = ltmub[i] * xsb[i];

endfor;

mu2tb = (mtmub+ltmub^2)/2+mu2tb;
gamma_e = 1/sqrt(t)*sumc(mu2tb);


epsilont = mu2tb-ltmxb*inv(ltmxb'ltmxb)*ltmxb'mu2tb; @projection of mu2t on lt1@
esqe = meanc(epsilont^2);
tspe = gamma_e/sqrt(esqe);  @test statistic process@

if esqe<1e-5;
    cv[ir] = 0;
    cv2[ir] = 1;
else;
    cv[ir] = 1/2*maxc(tspe|0)^2;    @supTS test statistic@ 
    cv2[ir] = sqrt(2*pi)*exp((tspe-1)^2/2)*cdfn(tspe-1);  @expTS test statistic@
endif;

rho = rho+0.01;
ir = ir+1;
endo;

cv = cv[1:ir-1];

cv2 = cv2[1:ir-1];

supb[itb] = maxc(cv);
expb[itb] = meanc(cv2);

endfor;

/*empirical critical values */

supb = sortc(supb,1);
expb = sortc(expb,1);

crtsup99=supb[floor(itnb*0.99)];
crtsup95=supb[floor(itnb*0.95)];
crtsup90=supb[floor(itnb*0.90)];   

crtexp99=expb[floor(itnb*0.99)];
crtexp95=expb[floor(itnb*0.95)];
crtexp90=expb[floor(itnb*0.90)];   

"Estimation Under Null of Linearity";
" Regression Parameters   Standard Error"
b0~se0;
"";
"Standard Deviation : " v0;
"";
"our test statstic ";
"supTS is " supts;
"expTS is " expts;

"Iteration of simulation is " itnb;
"supTS critical values are " crtsup99 crtsup95 crtsup90;
"expTS critical values are " crtexp99 crtexp95 crtexp90;
"supTS p-value is " meanc(supts.<supb);
"expTS p-value is " meanc(expts.<expb);

end;
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: The 'Davies' Test

Unread post by TomDoan »

Hansen and Garcia both involve a staggering amount of number-crunching.

Is there anything wrong with using SBC? It's a lot simpler than doing bootstrapping.
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

Re: The 'Davies' Test

Unread post by alexecon »

I don’t know. Is testing with the SBC not subject to the issue of nuisance parameters? It would be surprising that so much work has gone into resolving this if the SBC were the answer.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: The 'Davies' Test

Unread post by TomDoan »

alexecon wrote:I don’t know. Is testing with the SBC not subject to the issue of nuisance parameters? It would be surprising that so much work has gone into resolving this if the SBC were the answer.
You don't get papers published in Econometrica by using well-known methods.

There are certain regularity conditions in the derivation of the SBC that aren't met by nested Markov switching models. However, in practice it appears to give effectively the same results as the very complicated calculations of the actual posterior odds of the models (the SBC is based upon asymptotic posterior odds). See the discussion in Fruhwirth-Schnatter's (2006) "Finite Mixture and Markov Switching Models".
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

Re: The 'Davies' Test

Unread post by alexecon »

Thank you for the reference. I can see that Fruhwirth-Schnatter uses the SBC to select the number of regimes in chapter 11. I have to read a couple of the references therein to understand more, as I got confused with the "bridge", "importance", and "reciprocal importance" sampling estimators. Still, I take it that these aren't necessary and that the log-likelihoods obtained from RATS estimation of Ehrmann et al (2003) and Balcilar et al (2015) can be used to construct the BIC given on page 347? Or am I being naive here?
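To be concrete, I have in mind something like the following sketch, using the textbook form BIC = -2 lnL + k lnT (I would still need to check that this matches the exact expression on page 347; the log likelihoods, parameter counts and sample size below are placeholders):

Code: Select all

* BIC sketch -- fill in the placeholders from the two estimations
* logl0, logl1 = saved %LOGL values; k0, k1 = free-parameter counts; nobs = sample size
compute logl0 = 0.0
compute logl1 = 0.0
compute k0    = 0.0
compute k1    = 0.0
compute nobs  = 1.0
compute bic0  = -2.0*logl0+k0*log(nobs)
compute bic1  = -2.0*logl1+k1*log(nobs)
display "BIC (linear) =" bic0 "BIC (MS) =" bic1

With this form, the model with the smaller BIC would be the preferred one.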
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: The 'Davies' Test

Unread post by TomDoan »

alexecon wrote:Thank you for the reference. I can see that Fruhwirth-Schnatter uses the SBC to select the number of regimes in chapter 11. I have to read a couple of the references therein to understand more, as I got confused with the "bridge", "importance", and "reciprocal importance" sampling estimators.
Those are all MCMC techniques for computing the posterior odds.
alexecon wrote: Still, I take it that these aren't necessary and that the log-likelihoods obtained from RATS estimation of Ehrmann et al (2003) and Balcilar et al (2015) can be used to construct the BIC given on page 347? Or am I being naive here?
EEV (Ehrmann, Ellison and Valla) should be fine. Balcilar et al. is a different situation because of the unbounded likelihood function due to the prolonged flat spot in the oil price. The randomized values allow the model to be estimated, but the peak log likelihood is still misleadingly high.
alexecon
Posts: 72
Joined: Fri Oct 30, 2015 12:16 pm

Re: The 'Davies' Test

Unread post by alexecon »

I'm not using their (Balcilar et al.'s) data, and 'my' likelihood is better behaved, so I guess it should be OK.