VECMGARCH.RPF is an example of some of the calculations done in Pardo and Torro (2007). It is applied to a different data set than the one in the paper; in this case, the weekly adjusted closing values of the Dow Jones Industrial Average and the Russell 2000 for the U.S. stock market. Note that, unlike most applications of GARCH to stock market data, these are not done in returns—if you converted to returns, you would lose the (possible) cointegration, and analyzing this as an error-correction GARCH model is the whole point. The two series are transformed to 100 times the log and normalized so that both start at 0. The normalization has no effect on the model, but makes it easier to see (graphically) that the two series might be cointegrated.
set logdjia = 100.0*(log(djia/djia(1)))
set logrut = 100.0*(log(russell2000/russell2000(1)))
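A quick graph (taken from the full program below) shows the two transformed series together:
graph(footer="Log Stock Indexes",key=upleft,klabels=||"DJIA","Russell 2000"||) 2
# logdjia
# logrut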
AIC and SBC (and HQ) pick very different lag lengths. SBC and HQ pick 1, which seems too short. We use 4 in the subsequent analysis, a compromise which is at least a local minimum of the AIC. Note that the lag selection criteria assume homoscedasticity, and so can be thrown off by GARCH error processes.
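The criteria are computed with @VARLAGSELECT (these lines are from the full program), and the chosen lag length is saved for use in the later instructions:
@varlagselect(lags=10,crit=aic)
# logdjia logrut
@varlagselect(lags=10,crit=sbc)
# logdjia logrut
compute nlags=4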
There's no strong reason to believe that the two stock indexes are cointegrated, since they have different risk-return structures. The Johansen test allowing for trending data (DET=CONSTANT) is done with @JOHMLE; the test is known to be at least somewhat robust to a GARCH error process.
@johmle(lags=nlags,det=constant,cv=cv)
# logdjia logrut
The Johansen test comes down marginally against cointegration (the trace statistic falls just short of the 95% critical value), but we'll go ahead and assign cointegration rank one for illustrative purposes. This uses the estimated cointegrating vector. In other applications (for instance with spot vs futures prices), the cointegrating vector will be known theoretically, and you should use that, not an estimated one.
This sets up a VECM for the DET=CONSTANT model (including a CONSTANT as a DET term in the VAR). For this, you put in the original dependent variables, not the differences. The rearrangement of the regression is done internally when you ESTIMATE the model.
equation(coeffs=cv) ecteq *
# logdjia logrut
*
system(model=basevecm)
variables logdjia logrut
lags 1 to nlags
det constant
ect ecteq
end(system)
The error correction model is first estimated by least squares, and a test for ARCH is done on the residuals with @MVARCHTEST, which overwhelmingly rejects homoscedasticity.
estimate(resids=vecmresids)
@mvarchtest(lags=5)
# vecmresids
This estimates an asymmetric BEKK GARCH model, saving the jointly standardized residuals (STDU), the conditional covariance matrices (HH), the (non-standardized) residuals (RR) and the VECH representation of the model (VECHCOMPS):
garch(model=basevecm,mv=bekk,asymmetric,stdresids=stdu,hmatrices=hh,rvectors=rr,$
pmethod=simplex,piters=10,method=bfgs,iters=500,vechmat=vechcomps)
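For reference, the covariance recursion of an asymmetric BEKK model has the general form shown below. This is the standard textbook parameterization, given here only as a sketch; the exact transpose and sign conventions are those used by the GARCH instruction itself:
$$H_t = C C' + A\,u_{t-1}u_{t-1}'\,A' + B\,H_{t-1}\,B' + D\,v_{t-1}v_{t-1}'\,D',\qquad v_t = \min(u_t,0)$$
where $u_t$ is the vector of VECM residuals and the min is taken element by element, so the D terms allow negative shocks to move the covariance matrix differently than positive shocks of the same size.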
One thing to note in the output is that neither of the error correction terms is significant. If the series were truly cointegrated (again, we are doing this as a VECM for illustration), at least one of the two should be significant. These t-statistics are the tests for long-run causality.
This does the recommended set of multivariate residual diagnostics (for remaining ARCH and for serial correlation in the mean):
@mvarchtest(lags=5)
# stdu
@mvqstat(lags=5)
# stdu
The residuals pass both tests easily.
These next lines do causality tests in the mean. The first is a joint test of whether the dx2 lags (here, the lags of D_LOGRUT) and the lagged ECT are zero in the first equation (the first block of mean model coefficients); the second is a similar test on the dx1 lags (the lags of D_LOGDJIA) and the ECT in the second block of mean model coefficients. The simplest way to set these up is with the Statistics-Regression Tests (Exclusion Restrictions) wizard. Note that the Pardo-Torro paper doesn't include the lagged ECT in the test, but it should be included, since if it's non-zero the lagged "other" variable does help to predict.
For the x2-->x1 causality test, the first block of regressors belongs to the first equation, so the test covers the dx2 lags and the ECT lag in that block:
test(zeros,title="x2-->x1 causality")
# 4 5 6 8
test(zeros,title="x1-->x2 causality")
# 9 10 11 16
Neither direction is significant. There are also Wald tests on the BEKK cross effects (a joint test of all the off-diagonal A, B and D coefficients) and a joint test on all the asymmetry (D) coefficients; both of those reject strongly. Again, it's easiest to set these up using the Regression Tests wizard.
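From the full program, the corresponding TEST instructions are (the coefficient numbers refer to the GARCH output):
test(zeros,title="BEKK cross effects")
# 21 22 25 26 29 30
test(zeros,title="Asymmetry")
# 28 29 30 31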
There are also diagnostics on the univariate standardized residuals:
set std1 = rr(t)(1)/sqrt(hh(t)(1,1))
set std2 = rr(t)(2)/sqrt(hh(t)(2,2))
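These are then run through the standard univariate diagnostics (again taken from the full program): STATISTICS reports skewness, excess kurtosis and the Jarque-Bera normality test, while @BDINDTESTS produces the Ljung-Box and McLeod-Li statistics:
stats std1
stats std2
@bdindtests(number=20) std1
@bdindtests(number=20) std2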
You can only do univariate tests on these (not joint ones like @MVARCHTEST). Again, the residuals pass all the tests. However, note well that we chose to fit this under the assumption of cointegration even though the initial test suggested that assumption is likely incorrect—the model is probably more complicated than it needs to be to explain the data adequately.
None of the remaining analysis has anything to do with this being a VECM-GARCH model; it applies to multivariate GARCH models in general. Graphs are done of the conditional volatility (in standard deviations, which aren't dominated by the more extreme values the way the variances are), the conditional correlation and the conditional beta. All of these can be done in much the same way for any type of multivariate GARCH model.
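From the full program, all three are computed directly from the saved HH matrices; the correlation and beta graphs add a grid line at the corresponding value from the original VECM estimate (%SIGMA):
set sig11 = sqrt(hh(t)(1,1))
set sig22 = sqrt(hh(t)(2,2))
graph(footer="Conditional Volatility in Std Deviations",$
   key=below,klabels=||"DJIA","Russell 2000"||) 2
# sig11
# sig22
set rho12 = %cvtocorr(hh(t))(1,2)
graph(footer="Conditional correlation",vgrid=%cvtocorr(%sigma)(1,2))
# rho12
set beta12 = hh(t)(1,2)/hh(t)(1,1)
graph(footer="Conditional beta of Russell 2000 with respect to DJIA",$
   vgrid=%sigma(1,2)/%sigma(1,1))
# beta12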
The final calculations are impact surfaces for the two variances and the covariance with respect to the lagged residuals. These can be computed (easily) for any multivariate GARCH model which has a VECH representation (DVECH, VECH or any form of BEKK), but the graphs are specific to a two-variable model, since each is a contour graph with one axis per shock.
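As a sketch of what the double loop in the full program computes: for each pair of lagged shocks $u=(u_1,u_2)$ on the grid, the impact on the vech of the covariance matrix is built from the "A" and "D" blocks of the VECH representation saved in VECHCOMPS,
$$\text{impact}(u) = A_{vech}\,\mathrm{vech}(u u') + D_{vech}\,\mathrm{vech}(v v'),\qquad v = \min(u,0),$$
and the three elements of the result (the two variances and the covariance) are what GCONTOUR graphs against the grid of shocks.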
Full Program
open data "usstocks.xls" calendar(w) 2006:1:2 * * The data were taken from Yahoo Finance and are in reversed * chronological order. RATS v9.1 automatically detects that and * rearranges the data. (The REVERSE option has no effect, but creates an * immediate error with earlier versions of RATS). * data(format=xls,org=columns,reverse) / djia russell2000 * * Normalize both to 100 * log relative to the first entry * set logdjia = 100.0*(log(djia/djia(1))) set logrut = 100.0*(log(russell2000/russell2000(1))) * graph(footer="Log Stock Indexes",key=upleft,klabels=||"DJIA","Russell 2000"||) 2 # logdjia # logrut * * AIC and SBC (or HQ) pick very different lag lengths. The 1 favored by * SBC and HQ seems too short. AIC has a local minimum at 4, so we choose * that. (None of these criteria are designed for heteroscedastic data, * so they can be thrown off by outliers). * @varlagselect(lags=10,crit=aic) # logdjia logrut @varlagselect(lags=10,crit=sbc) # logdjia logrut * compute nlags=4 * * There's no strong reason to believe that the two stock indexes are * cointegrated since they have different risk-return structures. The * Johansen test marginally rejects cointegration, but we'll go ahead and * assign cointegration rank one for illustrative purposes. This uses the * estimated cointegrating vector. In other applications (for instance * with spot vs futures prices), the cointegrating vector will be known * theoretically, and you should use that, not an estimated one. * * This analyzes cointegration using DET=CONSTANT, which allows for a * trend in the series. * @johmle(lags=nlags,det=constant,cv=cv) # logdjia logrut * * This sets up a VECM for the DET=CONSTANT model (including a CONSTANT * as a DET term in the VAR). For this, you put in the original dependent * variables, not the differences. The rearrangement of the regression is * done internally when you ESTIMATE the model. * equation(coeffs=cv) ecteq * # logdjia logrut * system(model=basevecm) variables logdjia logrut lags 1 to nlags det constant ect ecteq end(system) * estimate(resids=vecmresids) * * Note that the number of lags used in the diagnostic tests has nothing * to do with the choice of the number of lags in the VAR model. (The * paper does some tests later on the OLS VECM residuals and uses 5 for * those). Somewhere in the 3-5 lag range generally seems appropriate for * this. * @mvarchtest(lags=5) # vecmresids * * Estimate an asymmetric BEKK GARCH model. This saves the jointly * standardized residuals (as STDU), the conditional covariance matrices * (HH) and (non-standardized) residuals (RR) and the VECH representation * (VECHCOMPS). * garch(model=basevecm,mv=bekk,asymmetric,stdresids=stdu,hmatrices=hh,rvectors=rr,$ pmethod=simplex,piters=10,method=bfgs,iters=500,vechmat=vechcomps) * * These are (our) recommended diagnostic tests on the jointly standardized * residuals. Tests on the univariate standardized residuals come later. * @mvarchtest(lags=5) # stdu @mvqstat(lags=5) # stdu * * Causality tests in the mean. The first tests does a joint test for * whether the dx2 lags and the ect lag are zero in the first equation * (first block of mean model coefficients) and the second does a similar * test for the dx1 lags and ect in the second block of mean model * coefficients. (You can use the Statistics-Regression Tests, Exclusion * Restrictions wizard to set these up.) * * (The paper doesn't include the ECT in the test, but it should be * included since if it's non-zero, the lagged "other" does help to * predict). 
*
test(zeros,title="x2-->x1 causality")
# 4 5 6 8
test(zeros,title="x1-->x2 causality")
# 9 10 11 16
*
* Wald tests of BEKK coefficients
*
test(zeros,title="BEKK cross effects")
# 21 22 25 26 29 30
test(zeros,title="Asymmetry")
# 28 29 30 31
*
* Tests on the univariate standardized residuals
*
set std1 = rr(t)(1)/sqrt(hh(t)(1,1))
set std2 = rr(t)(2)/sqrt(hh(t)(2,2))
*
* Skewness, (excess) kurtosis and normality (Jarque-Bera) are all part
* of the standard STATISTICS output.
*
stats std1
stats std2
*
* The Q(20) (Ljung-Box) and Q^2(20) (McLeod-Li) are included in the
* BDINDTESTS procedure.
*
@bdindtests(number=20) std1
@bdindtests(number=20) std2
*
* You can also do the LB and McLeod-Li tests separately using @RegCorrs
* (for LB) and @McLeodLi. The results are very slightly different
* because they use a slightly different calculation for the
* autocorrelations.
*
@regcorrs(qstats,number=20) std1
@mcleodli(number=20) std1
@archtest(lags=20) std1
*
@regcorrs(qstats,number=20) std2
@mcleodli(number=20) std2
@archtest(lags=20) std2
*
* These graph the conditional volatility (in standard deviations to
* allow more detail to be seen), the conditional correlation and the
* conditional beta. The latter two also include a horizontal line at the
* corresponding value from the original VECM estimates.
*
set sig11 = sqrt(hh(t)(1,1))
set sig22 = sqrt(hh(t)(2,2))
graph(footer="Conditional Volatility in Std Deviations",$
   key=below,klabels=||"DJIA","Russell 2000"||) 2
# sig11
# sig22
*
set rho12 = %cvtocorr(hh(t))(1,2)
graph(footer="Conditional correlation",vgrid=%cvtocorr(%sigma)(1,2))
# rho12
*
set beta12 = hh(t)(1,2)/hh(t)(1,1)
graph(footer="Conditional beta of Russell 2000 with respect to DJIA",$
   vgrid=%sigma(1,2)/%sigma(1,1))
# beta12
*
* Compute impact surfaces (done as contour graphs)
*
compute ngrid=21
*
* This picks the grids in each direction based upon the observed
* absolute values of the residuals from the VECM (going from - to + of
* the 99%-ile of the absolute value).
*
set plus1 = abs(vecmresids(1))
stats(fract,noprint) plus1
compute grid1=%seqrange(-%fract99,+%fract99,ngrid)
set plus2 = abs(vecmresids(2))
stats(fract,noprint) plus2
compute grid2=%seqrange(-%fract99,+%fract99,ngrid)
*
dec rect var1g(ngrid,ngrid) var2g(ngrid,ngrid) covarg(ngrid,ngrid)
*
do i=1,ngrid
   do j=1,ngrid
      compute [symm] response=%vectosymm($
         vechcomps("A")*%vec(%outerxx(||grid1(i)|grid2(j)||))+$
         vechcomps("D")*%vec(%outerxx(||%minus(grid1(i))|%minus(grid2(j))||)),$
         2)
      compute var1g(i,j)=response(1,1)
      compute var2g(i,j)=response(2,2)
      compute covarg(i,j)=response(1,2)
   end do j
end do i
*
* Do contour plots
*
gcontour(x=grid1,y=grid2,f=var1g,hlabel="DJIA",vlabel="Russell 2000",$
   footer="Impact Surface for DJIA Variance")
gcontour(x=grid1,y=grid2,f=var2g,hlabel="DJIA",vlabel="Russell 2000",$
   footer="Impact Surface for Russell 2000 Variance")
gcontour(x=grid1,y=grid2,f=covarg,hlabel="DJIA",vlabel="Russell 2000",$
   footer="Impact Surface for Covariance")
Output
VAR Lag Selection
Lags AICC
0 15.0836694
1 8.2546087
2 8.2497805
3 8.2534275
4 8.2522532
5 8.2588972
6 8.2653806
7 8.2519615
8 8.2438783*
9 8.2507098
10 8.2599726
VAR Lag Selection
Lags SBC/BIC
0 15.0994319
1 8.3018144*
2 8.3283199
3 8.3631895
4 8.3931256
5 8.4307666
6 8.4681323
7 8.4854793
8 8.5080449
9 8.5454065
10 8.5850792
Likelihood Based Analysis of Cointegration
Variables: LOGDJIA LOGRUT
Estimated from 2006:01:30 to 2016:08:15
Data Points 551 Lags 4 with Constant
Unrestricted eigenvalues and -T log(1-lambda)
Rank EigVal Lambda-max Trace Trace-95% LogL
0 -2259.2280
1 0.0237 13.2395 13.7391 15.4100 -2252.6083
2 0.0009 0.4996 0.4996 3.8400 -2252.3585
Cointegrating Vector for Largest Eigenvalue
LOGDJIA LOGRUT
-0.245027 0.205644
VAR/System - Estimation by Cointegrated Least Squares
Weekly Data From 2006:01:30 To 2016:08:15
Usable Observations 551
Dependent Variable LOGDJIA
Mean of Dependent Variable 0.0966126059
Std Error of Dependent Variable 2.3943280777
Standard Error of Estimate 2.3731239708
Sum of Squared Residuals 3058.0225377
Durbin-Watson Statistic 2.0053
Variable Coeff Std Error T-Stat Signif
************************************************************************************
1. D_LOGDJIA{1} -0.117250369 0.092965160 -1.26123 0.20776788
2. D_LOGDJIA{2} 0.198132421 0.093274441 2.12419 0.03410572
3. D_LOGDJIA{3} -0.049368614 0.092594345 -0.53317 0.59413340
4. D_LOGRUT{1} 0.045348956 0.068184110 0.66510 0.50627169
5. D_LOGRUT{2} -0.112910823 0.068212529 -1.65528 0.09844555
6. D_LOGRUT{3} -0.048833025 0.067742853 -0.72086 0.47130668
7. Constant 0.135572452 0.126626677 1.07065 0.28480391
8. EC1{1} 0.039519176 0.101098480 0.39090 0.69602609
Dependent Variable LOGRUT
Mean of Dependent Variable 0.0937932045
Std Error of Dependent Variable 3.2664309624
Standard Error of Estimate 3.2462321131
Sum of Squared Residuals 5722.1464523
Durbin-Watson Statistic 1.9991
Variable Coeff Std Error T-Stat Signif
************************************************************************************
1. D_LOGDJIA{1} 0.009589463 0.127168447 0.07541 0.93991821
2. D_LOGDJIA{2} 0.246595306 0.127591516 1.93269 0.05379364
3. D_LOGDJIA{3} -0.124696596 0.126661203 -0.98449 0.32531346
4. D_LOGRUT{1} -0.030392175 0.093270073 -0.32585 0.74466240
5. D_LOGRUT{2} -0.129190732 0.093308949 -1.38455 0.16675935
6. D_LOGRUT{3} -0.007213948 0.092666471 -0.07785 0.93797723
7. Constant -0.038951147 0.173214544 -0.22487 0.82216324
8. EC1{1} -0.181780727 0.138294138 -1.31445 0.18924975
Statistic Degrees Signif
459.73 45 0.00000
MV-GARCH, BEKK - Estimation by BFGS
Convergence in 149 Iterations. Final criterion was 0.0000093 <= 0.0000100
Weekly Data From 2006:01:30 To 2016:08:15
Usable Observations 551
Log Likelihood -2113.9993
Variable Coeff Std Error T-Stat Signif
************************************************************************************
Mean Model(LOGDJIA)
1. D_LOGDJIA{1} -0.114967410 0.074941399 -1.53410 0.12500568
2. D_LOGDJIA{2} 0.057624087 0.058718625 0.98136 0.32641544
3. D_LOGDJIA{3} -0.099604168 0.073320260 -1.35848 0.17431116
4. D_LOGRUT{1} 0.045666848 0.051505676 0.88664 0.37527426
5. D_LOGRUT{2} -0.045497591 0.048996151 -0.92860 0.35309891
6. D_LOGRUT{3} 0.042292345 0.054703623 0.77312 0.43945267
7. Constant 0.164919563 0.078706844 2.09536 0.03613856
8. EC1{1} 0.040635646 0.063921796 0.63571 0.52496626
Mean Model(LOGRUT)
9. D_LOGDJIA{1} -0.091530068 0.107792863 -0.84913 0.39580940
10. D_LOGDJIA{2} -0.018276775 0.087878011 -0.20798 0.83524541
11. D_LOGDJIA{3} -0.102141767 0.101853292 -1.00283 0.31594180
12. D_LOGRUT{1} 0.021765320 0.076116949 0.28595 0.77491965
13. D_LOGRUT{2} 0.008455941 0.071850440 0.11769 0.90631481
14. D_LOGRUT{3} 0.018670360 0.074985031 0.24899 0.80337020
15. Constant 0.042281694 0.108946436 0.38810 0.69794487
16. EC1{1} -0.168240647 0.090430531 -1.86044 0.06282319
17. C(1,1) 0.641116480 0.086131128 7.44349 0.00000000
18. C(2,1) 0.437823136 0.131507423 3.32927 0.00087075
19. C(2,2) 0.000000227 0.259694634 8.75396e-07 0.99999930
20. A(1,1) 0.097044995 0.110349145 0.87944 0.37916500
21. A(1,2) 0.462880872 0.132043927 3.50551 0.00045574
22. A(2,1) -0.156338746 0.073031896 -2.14069 0.03229894
23. A(2,2) -0.461553946 0.104720567 -4.40748 0.00001046
24. B(1,1) 0.743245370 0.049482823 15.02027 0.00000000
25. B(1,2) -0.210384590 0.058574904 -3.59172 0.00032850
26. B(2,1) 0.077251184 0.028655197 2.69589 0.00702014
27. B(2,2) 1.023907483 0.029695396 34.48034 0.00000000
28. D(1,1) 0.202370505 0.131970878 1.53345 0.12516546
29. D(1,2) 0.177560780 0.135340875 1.31195 0.18953621
30. D(2,1) 0.307113829 0.082476934 3.72363 0.00019638
31. D(2,2) 0.382522857 0.096319289 3.97140 0.00007145
Statistic Degrees Signif
46.48 45 0.41136
Multivariate Q(5)= 7.03995
Significance Level as Chi-Squared(20)= 0.99655
x2-->x1 causality
Chi-Squared(4)= 4.961551 or F(4,*)= 1.24039 with Significance Level 0.29126545
x1-->x2 causality
Chi-Squared(4)= 4.568218 or F(4,*)= 1.14205 with Significance Level 0.33453509
BEKK cross effects
Chi-Squared(6)= 68.634679 or F(6,*)= 11.43911 with Significance Level 0.00000000
Asymmetry
Chi-Squared(4)= 94.128382 or F(4,*)= 23.53210 with Significance Level 0.00000000
Statistics on Series STD1
Weekly Data From 2006:01:30 To 2016:08:15
Observations 551
Sample Mean -0.008876 Variance 0.996227
Standard Error 0.998112 SE of Sample Mean 0.042521
t-Statistic (Mean=0) -0.208741 Signif Level (Mean=0) 0.834727
Skewness -0.640238 Signif Level (Sk=0) 0.000000
Kurtosis (excess) 0.646048 Signif Level (Ku=0) 0.002099
Jarque-Bera 47.225248 Signif Level (JB=0) 0.000000
Statistics on Series STD2
Weekly Data From 2006:01:30 To 2016:08:15
Observations 551
Sample Mean 0.000072 Variance 0.969000
Standard Error 0.984378 SE of Sample Mean 0.041936
t-Statistic (Mean=0) 0.001707 Signif Level (Mean=0) 0.998639
Skewness -0.510877 Signif Level (Sk=0) 0.000001
Kurtosis (excess) 0.266219 Signif Level (Ku=0) 0.204979
Jarque-Bera 25.595161 Signif Level (JB=0) 0.000003
Independence Tests for Series STD1
Test Statistic P-Value
Ljung-Box Q(20) 22.062687 0.3371
McLeod-Li(20) 21.780566 0.3525
Turning Points 1.518071 0.1290
Difference Sign -1.032094 0.3020
Rank Test 0.134117 0.8933
Independence Tests for Series STD2
Test Statistic P-Value
Ljung-Box Q(20) 25.1032284 0.1975
McLeod-Li(20) 10.7815777 0.9517
Turning Points 0.4048190 0.6856
Difference Sign 0.5897678 0.5553
Rank Test 0.3393461 0.7343
McLeod-Li Test for Series STD1
Using 551 Observations from 2006:01:30 to 2016:08:15
Test Stat Signif
McLeod-Li(20-0) 21.5791962 0.36377
Test for ARCH in STD1
Using data from 2006:01:30 to 2016:08:15
Lags Statistic Signif. Level
20 1.006 0.45330
McLeod-Li Test for Series STD2
Using 551 Observations from 2006:01:30 to 2016:08:15
Test Stat Signif
McLeod-Li(20-0) 10.6375076 0.95509
Test for ARCH in STD2
Using data from 2006:01:30 to 2016:08:15
Lags Statistic Signif. Level
20 0.485 0.97189
Graphs
This is a graph of the normalized log data.
Graph of the conditional correlation, with the (fixed) correlation from the original VECM estimates shown as a horizontal line.
One of the news impact surfaces, drawn as a contour graph of the impact against the lagged shocks in the two variables. Even though this is the surface for the DJIA variance, it is most heavily affected by negative lagged shocks in the Russell 2000 (the contours are steepest in the negative Y direction).