RATS 10.1

GARCHMV.RPF is an illustrative example which includes several variants on multivariate GARCH models, including "stock" estimates for DVECH, BEKK, CC and DCC models, plus GARCH with non-standard mean models (different explanatory variables in each equation) and GARCH with several types of M effects.

 

Note that this illustrates a wide range of GARCH models applied to a single set of data. In practice, you would focus on one or two model types. Specifying, estimating and testing these types of models forms a large part of the RATS ARCH/GARCH and Volatility Models e-course, and we strongly recommend it if you are planning to work in this area.

 

This takes quite a while to run—it's a fairly large data set and it's doing many models. The notes on the instructions are mixed in with the output below.


Full Program

 

open data g10xrate.xls

data(format=xls,org=columns) 1 6237 usxjpn usxfra usxsui

*

set xjpn = 100.0*log(usxjpn/usxjpn{1})

set xfra = 100.0*log(usxfra/usxfra{1})

set xsui = 100.0*log(usxsui/usxsui{1})

*

* Examples with the different choices for the MV option

*

garch(p=1,q=1,pmethod=simplex,piters=10) / xjpn xfra xsui

garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10) / xjpn xfra xsui

*

* Restricted correlation models

*

garch(p=1,q=1,mv=cc) / xjpn xfra xsui

garch(p=1,q=1,mv=dcc)  / xjpn xfra xsui

garch(p=1,q=1,mv=cholesky)  / xjpn xfra xsui

*

* CC with VARMA variances

* This needs some special treatment to get convergence. Along with extra

* iterations on the GARCH, it uses NLPAR with DERIVE=FOURTH which does

* slower, but more accurate, numerical derivatives. EXACTLINE does

* slower exact line minimizations. These usually aren't necessary, but

* some model types can have some optimization problems that can require

* more careful treatment.

*

nlpar(derive=fourth,exactline)

garch(p=1,q=1,mv=cc,variances=varma,iters=500,pmethod=simplex,piters=10) / $

   xjpn xfra xsui

*

* This resets the NLPAR to the standard values

*

nlpar(derive=first,noexactline)

*

* EWMA with t-errors with an estimated degrees of freedom parameter

*

garch(p=1,q=1,mv=ewma,distrib=t) / xjpn xfra xsui

*

* CC-EGARCH with asymmetry

*

garch(p=1,q=1,mv=cc,asymmetric,variances=exp) / xjpn xfra xsui

*

* Estimates with graphs of conditional correlations

*

garch(p=1,q=1,mv=dcc,variances=koutmos,hmatrices=hh,iters=500) / $

   xjpn xfra xsui

*

set jpnfra = %cvtocorr(hh(t))(1,2)

set jpnsui = %cvtocorr(hh(t))(1,3)

set frasui = %cvtocorr(hh(t))(2,3)

*

spgraph(vfields=3,footer="Conditional Correlations")

 graph(header="Japan with France",min=-1.0,max=1.0)

 # jpnfra

 graph(header="Japan with Switzerland",min=-1.0,max=1.0)

 # jpnsui

 graph(header="France with Switzerland",min=-1.0,max=1.0)

 # frasui

spgraph(done)

*

* Estimates for a BEKK with t errors, saving the residuals and the

* variances (in the VECT[SERIES] and SYMM[SERIES] forms), and using them

* to compute the empirical probability of a residual (for Japan) being

* in the left .05 tail.

*

garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10,distrib=t,$

   rseries=rs,mvhseries=hhs) / xjpn xfra xsui

*

compute fixt=(%shape-2)/%shape

set trigger = %tcdf(rs(1)/sqrt(hhs(1,1)*fixt),%shape)<.05

sstats(mean) / trigger>>VaRp

disp "Probability of being below .05 level" #.#### VaRp

*

* Univariate AR(1) mean models for each series, DCC model for the variance

*

equation(constant) jpneq xjpn 1

equation(constant) fraeq xfra 1

equation(constant) suieq xsui 1

group ar1 jpneq fraeq suieq

garch(p=1,q=1,model=ar1,mv=dcc,pmethod=simplex,piter=10)

*

* VAR(1) model for the mean, BEKK for the variance

*

system(model=var1)

variables xjpn xfra xsui

lags 1

det constant

end(system)

*

garch(p=1,q=1,model=var1,mv=bekk,pmethod=simplex,piters=10)

*

* GARCH-M model

*

dec symm[series] hhs(3,3)

clear(zeros) hhs

*

equation jpneq xjpn

# constant hhs(1,1) hhs(1,2) hhs(1,3)

equation fraeq xfra

# constant hhs(2,1) hhs(2,2) hhs(2,3)

equation suieq xsui

# constant hhs(3,1) hhs(3,2) hhs(3,3)

*

group garchm jpneq fraeq suieq

garch(model=garchm,p=1,q=1,pmethod=simplex,piters=10,iters=500,$

   mvhseries=hhs)

*

* GARCH-M with using custom function of the variance (in this case, the

* square root of the variance of an equally weighted sum of the

* currencies).

*

set commonsd = 0.0

system(model=customm)

variables xjpn xfra xsui

det constant commonsd

end(system)

*

compute %nvar=%modelsize(customm)

compute weights=%fill(%nvar,1,1.0/%nvar)

garch(model=customm,p=1,q=1,mv=cc,variances=spillover,pmethod=simplex,piters=5,$

   iters=500,hmatrices=hh,hadjust=(commonsd=sqrt(%qform(hh,weights))))

*

* VMA(1) mean model

*

dec vect[series] u(3)

clear(zeros) u

system(model=varma)

variables xjpn xfra xsui

det constant u(1){1} u(2){1} u(3){1}

end(system)

*

garch(model=varma,p=1,q=1,mv=bekk,asymmetric,rseries=u,$

   pmethod=simplex,piters=10,iters=500)

*

* Estimate an asymmetric BEKK on the VAR1 with t errors, saving various statistics

* for diagnostics

*

nlpar(derive=fourth,exactline)

garch(model=var1,mv=bekk,asymmetric,p=1,q=1,distrib=t,$

   pmethod=simplex,piters=10,iters=500,$

   rseries=rs,mvhseries=hhs,stdresids=zu,derives=dd)

*

* Diagnostics on (univariate) standardized residuals

*

set z1 = rs(1)/sqrt(hhs(1,1))

set z2 = rs(2)/sqrt(hhs(2,2))

set z3 = rs(3)/sqrt(hhs(3,3))

@bdindtests(number=40) z1

@bdindtests(number=40) z2

@bdindtests(number=40) z3

*

* Multivariate Q statistic and ARCH test on jointly standardized

* residuals.

*

@mvqstat(lags=5)

# zu

@mvarchtest(lags=5)

# zu

*

* Fluctuations test

*

@flux

# dd

 


 

Output

This is generated by 

 

garch(p=1,q=1,pmethod=simplex,piters=10) / xjpn xfra xsui

 

which is a standard (or DVECH) model, the default model for GARCH. Since the covariance matrix is symmetric, only the lower triangle needs to be modeled. In the output, C(i,j) is the variance constant, A(i,j) is the lagged squared residual ("ARCH") coefficient and B(i,j) is the lagged variance ("GARCH") coefficient. Because the components of the covariance matrix are modeled separately, the H matrix can fail to be positive definite for some parameter values (even if all the parameters are positive). As a result, a DVECH model can be hard to estimate, particularly if applied to data series which don't have relatively similar behavior.
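
For reference, the standard DVECH(1,1) recursion (using \(\circ\) for the element-by-element product and \({\bf{u}}_t\) for the vector of residuals) is

\[
{\bf{H}}_t = {\bf{C}} + {\bf{A}} \circ \left( {\bf{u}}_{t - 1} {\bf{u}}'_{t - 1} \right) + {\bf{B}} \circ {\bf{H}}_{t - 1}
\]

where \({\bf{C}}\), \({\bf{A}}\) and \({\bf{B}}\) are symmetric matrices whose lower triangles correspond to the C(i,j), A(i,j) and B(i,j) coefficients in the output.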

 

MV-GARCH - Estimation by BFGS

Convergence in   122 Iterations. Final criterion was  0.0000063 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -11835.6549

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.004660940  0.006798380      0.68560  0.49296810

2.  Mean(XFRA)                   -0.003498719  0.006868955     -0.50935  0.61050525

3.  Mean(XSUI)                   -0.002360634  0.008084067     -0.29201  0.77027846

 

4.  C(1,1)                        0.009016960  0.001272675      7.08504  0.00000000

5.  C(2,1)                        0.005697569  0.000749382      7.60303  0.00000000

6.  C(2,2)                        0.011503488  0.001287755      8.93298  0.00000000

7.  C(3,1)                        0.006015917  0.000806761      7.45688  0.00000000

8.  C(3,2)                        0.009933856  0.001236798      8.03192  0.00000000

9.  C(3,3)                        0.012775891  0.001565976      8.15842  0.00000000

10. A(1,1)                        0.105800775  0.007736000     13.67642  0.00000000

11. A(2,1)                        0.093889723  0.006062327     15.48741  0.00000000

12. A(2,2)                        0.128144732  0.007036947     18.21027  0.00000000

13. A(3,1)                        0.088759105  0.005470840     16.22404  0.00000000

14. A(3,2)                        0.113620201  0.005943420     19.11697  0.00000000

15. A(3,3)                        0.111522114  0.006146301     18.14459  0.00000000

16. B(1,1)                        0.883968076  0.007890261    112.03280  0.00000000

17. B(2,1)                        0.891020282  0.006097508    146.12859  0.00000000

18. B(2,2)                        0.860886644  0.007137910    120.60766  0.00000000

19. B(3,1)                        0.897670507  0.005628248    159.49376  0.00000000

20. B(3,2)                        0.874866959  0.006324082    138.33897  0.00000000

21. B(3,3)                        0.877454252  0.006493130    135.13579  0.00000000


 

This is generated by
 

garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10) / xjpn xfra xsui

 

which estimates a BEKK GARCH.

 

Because of the standard use of the transpose of \({\bf{A}}\) as the pre-multiplying matrix, the coefficients (unfortunately) have the opposite interpretation to the one they have in almost all other forms of GARCH models: A(i,j) is the effect of residual i on variable j, rather than of j on i. However, note that it is very difficult to interpret the individual coefficients anyway.
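
For reference, the BEKK(1,1) recursion in this parameterization (with \({\bf{C}}\) lower triangular) is

\[
{\bf{H}}_t = {\bf{C}}{\bf{C}}' + {\bf{A}}'\,{\bf{u}}_{t - 1}{\bf{u}}'_{t - 1}\,{\bf{A}} + {\bf{B}}'\,{\bf{H}}_{t - 1}{\bf{B}}
\]

Because each term is a quadratic form, \({\bf{H}}_t\) is guaranteed to be positive semi-definite regardless of the parameter values, which is the main attraction of the BEKK form.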

 

MV-GARCH, BEKK - Estimation by BFGS

Convergence in    86 Iterations. Final criterion was  0.0000086 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -11821.7457

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.005284238  0.005779110      0.91437  0.36052308

2.  Mean(XFRA)                   -0.002360430  0.004150748     -0.56868  0.56957616

3.  Mean(XSUI)                   -0.002505826  0.004919824     -0.50933  0.61051925

 

4.  C(1,1)                        0.082827551  0.005082889     16.29537  0.00000000

5.  C(2,1)                        0.029966933  0.006774462      4.42352  0.00000971

6.  C(2,2)                        0.055802023  0.004748283     11.75204  0.00000000

7.  C(3,1)                        0.037995437  0.007654196      4.96400  0.00000069

8.  C(3,2)                       -0.004017902  0.006806450     -0.59031  0.55498413

9.  C(3,3)                        0.058506480  0.006245748      9.36741  0.00000000

10. A(1,1)                        0.359535262  0.012077169     29.76983  0.00000000

11. A(1,2)                        0.102691494  0.009048917     11.34848  0.00000000

12. A(1,3)                        0.111082248  0.011812194      9.40403  0.00000000

13. A(2,1)                        0.038123247  0.014041254      2.71509  0.00662580

14. A(2,2)                        0.403444341  0.016261862     24.80923  0.00000000

15. A(2,3)                       -0.066355330  0.013370391     -4.96286  0.00000069

16. A(3,1)                       -0.047522551  0.010449514     -4.54782  0.00000542

17. A(3,2)                       -0.125553482  0.012149506    -10.33404  0.00000000

18. A(3,3)                        0.291344292  0.010526939     27.67607  0.00000000

19. B(1,1)                        0.935272064  0.003791535    246.67373  0.00000000

20. B(1,2)                       -0.026717483  0.003105677     -8.60279  0.00000000

21. B(1,3)                       -0.028574502  0.004086792     -6.99192  0.00000000

22. B(2,1)                       -0.012475081  0.005562790     -2.24259  0.02492300

23. B(2,2)                        0.909746081  0.006295125    144.51597  0.00000000

24. B(2,3)                        0.029269416  0.005403727      5.41652  0.00000006

25. B(3,1)                        0.016548608  0.004489013      3.68647  0.00022739

26. B(3,2)                        0.048830173  0.004991829      9.78202  0.00000000

27. B(3,3)                        0.946852761  0.004722801    200.48544  0.00000000

 


This is generated by

 

garch(p=1,q=1,mv=cc) / xjpn xfra xsui

 

which does a CC (Constant Correlation) model. The variances themselves are computed using a separate equation for each variable; this uses the default, which is a simple univariate GARCH model for each.

 

In the output, C(i) is the variance constant, A(i) is the lagged squared residual ("ARCH") coefficient and B(i) is the lagged variance ("GARCH") coefficient. Only the off-diagonal of the constant correlation \({\bf{R}}\) matrix needs to be estimated, since it's symmetric with ones on the diagonal. R(i,j) is the correlation between the residuals for variables i and j.
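
For reference, the CC model combines univariate GARCH variances with a constant correlation matrix:

\[
h_{ii,t} = c_i + a_i u_{i,t - 1}^2 + b_i h_{ii,t - 1}, \qquad {\bf{H}}_t = {\bf{D}}_t {\bf{R}} {\bf{D}}_t, \qquad {\bf{D}}_t = \mathrm{diag}\left( \sqrt{h_{11,t}}, \ldots, \sqrt{h_{nn,t}} \right)
\]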

 

MV-CC GARCH  - Estimation by BFGS

Convergence in    52 Iterations. Final criterion was  0.0000029 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -12817.3747

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.000774432  0.006309040     -0.12275  0.90230536

2.  Mean(XFRA)                   -0.004265646  0.006300943     -0.67699  0.49841526

3.  Mean(XSUI)                    0.003649864  0.007495332      0.48695  0.62629259

 

4.  C(1)                          0.016829173  0.001976098      8.51636  0.00000000

5.  C(2)                          0.028388500  0.002563720     11.07317  0.00000000

6.  C(3)                          0.032309814  0.003032845     10.65330  0.00000000

7.  A(1)                          0.164136506  0.010230563     16.04374  0.00000000

8.  A(2)                          0.133212898  0.008828392     15.08915  0.00000000

9.  A(3)                          0.112705972  0.006884732     16.37042  0.00000000

10. B(1)                          0.812626460  0.011288583     71.98658  0.00000000

11. B(2)                          0.804331051  0.011894292     67.62328  0.00000000

12. B(3)                          0.831306635  0.010391077     80.00197  0.00000000

13. R(2,1)                        0.564319761  0.008374994     67.38151  0.00000000

14. R(3,1)                        0.579295418  0.008040597     72.04632  0.00000000

15. R(3,2)                        0.828697521  0.003726935    222.35361  0.00000000

 

The next output is generated by

 

garch(p=1,q=1,mv=dcc)  / xjpn xfra xsui

 

DCC ("Dynamic Conditional Correlations") was proposed by Engle(2002) to handle a bigger set of variables than the more fully parameterized models (such as DVECH and BEKK), without requiring the conditional correlations to be constant as in the CC. This adds two scalar parameters which govern a “GARCH(1,1)” model on the covariance matrix as a whole.

 

The output adds the DCC(A) and DCC(B) parameters which are the \(a\) and \(b\) in the \(Q\) recursion.
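
In Engle's formulation (the exact variance targeting used internally is not shown here), the recursion is

\[
{\bf{Q}}_t = (1 - a - b)\,{\bf{\bar Q}} + a\,{\bf{z}}_{t - 1}{\bf{z}}'_{t - 1} + b\,{\bf{Q}}_{t - 1}, \qquad {\bf{R}}_t = \mathrm{diag}({\bf{Q}}_t)^{ - 1/2}\,{\bf{Q}}_t\,\mathrm{diag}({\bf{Q}}_t)^{ - 1/2}
\]

where \({\bf{z}}_{t - 1}\) is the vector of univariate standardized residuals and \({\bf{\bar Q}}\) is their unconditional correlation matrix.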

 

Note that, while CC and DCC are not nested, the fact that the log likelihood difference is roughly 1000 in favor of DCC indicates that the CC is clearly inadequate.
 

MV-DCC GARCH  - Estimation by BFGS

Convergence in    48 Iterations. Final criterion was  0.0000054 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -11814.4403

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.003986947  0.006357057      0.62717  0.53054863

2.  Mean(XFRA)                   -0.003134785  0.005767888     -0.54349  0.58679300

3.  Mean(XSUI)                   -0.003074034  0.006627697     -0.46382  0.64277935

 

4.  C(1)                          0.008500425  0.001063138      7.99560  0.00000000

5.  C(2)                          0.012488407  0.001255711      9.94528  0.00000000

6.  C(3)                          0.016570623  0.001779164      9.31371  0.00000000

7.  A(1)                          0.151681007  0.008998322     16.85659  0.00000000

8.  A(2)                          0.138405513  0.007920769     17.47375  0.00000000

9.  A(3)                          0.123714323  0.007255348     17.05147  0.00000000

10. B(1)                          0.851988485  0.007806082    109.14419  0.00000000

11. B(2)                          0.848502716  0.008012928    105.89172  0.00000000

12. B(3)                          0.857976417  0.007724640    111.07009  0.00000000

13. DCC(A)                        0.053241934  0.003272021     16.27188  0.00000000

14. DCC(B)                        0.939057943  0.003968913    236.60331  0.00000000


 

The next output is generated by

 

garch(p=1,q=1,mv=cholesky)  / xjpn xfra xsui

 

The Cholesky model is a rarely used alternative restricted covariance model which borrows some ideas from the structural VAR literature.

 

MV-Cholesky GARCH  - Estimation by BFGS

Convergence in    45 Iterations. Final criterion was  0.0000021 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -12173.5176

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.000188426  0.005612913      0.03357  0.97321995

2.  Mean(XFRA)                   -0.006990799  0.005886029     -1.18769  0.23495414

3.  Mean(XSUI)                   -0.004037401  0.007076942     -0.57050  0.56833812

 

4.  C(1)                          0.007428676  0.001051529      7.06464  0.00000000

5.  C(2)                          0.007284625  0.001161357      6.27251  0.00000000

6.  C(3)                          0.005181124  0.000836962      6.19039  0.00000000

7.  A(1)                          0.176287644  0.011581222     15.22185  0.00000000

8.  A(2)                          0.146646259  0.011453410     12.80372  0.00000000

9.  A(3)                          0.183556643  0.015164645     12.10425  0.00000000

10. B(1)                          0.837268494  0.009216301     90.84648  0.00000000

11. B(2)                          0.841732018  0.011376399     73.98932  0.00000000

12. B(3)                          0.807020065  0.015552839     51.88892  0.00000000

13. F(2,1)                        0.561328027  0.010939180     51.31354  0.00000000

14. F(3,1)                        0.675957508  0.012167015     55.55656  0.00000000

15. F(3,2)                        0.919538827  0.009235754     99.56294  0.00000000

 


The next output is generated by

 

nlpar(derive=fourth,exactline)

garch(p=1,q=1,mv=cc,variances=varma,iters=500,pmethod=simplex,piters=5) / $

   xjpn xfra xsui

 

which does a CC model but with VARMA variances rather than the (default) variances used earlier. Note that VARIANCES=VARMA is a rather difficult model to fit, which is why we use some high-end adjustments to the non-linear estimation process via NLPAR.

 

In the output, C(i) is the constant in the variance equation for variable i, A(i,j) is the coefficient for the "ARCH" term in the variance for i using the lagged squared residuals for variable j and B(i,j) is the coefficient in the "GARCH" term for computing the variance of i using the lagged variance of variable j.
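
In other words, each variance equation has the form

\[
h_{i,t} = c_i + \sum_j a_{ij}\, u_{j,t - 1}^2 + \sum_j b_{ij}\, h_{j,t - 1}
\]

so both the lagged squared residuals and the lagged variances of the other variables can feed into the variance of variable i.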

 

 

MV-CC GARCH with VARMA Variances - Estimation by BFGS

Convergence in   102 Iterations. Final criterion was  0.0000000 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -12679.6117

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.000736365  0.005687626     -0.12947  0.89698745

2.  Mean(XFRA)                   -0.004928502  0.006021537     -0.81848  0.41308370

3.  Mean(XSUI)                    0.004522104  0.007369896      0.61359  0.53948548

 

4.  C(1)                          0.011009851  0.001106113      9.95364  0.00000000

5.  C(2)                          0.011489265  0.001804615      6.36660  0.00000000

6.  C(3)                          0.021304610  0.001955298     10.89584  0.00000000

7.  A(1,1)                        0.129910659  0.009444814     13.75471  0.00000000

8.  A(1,2)                        0.032903256  0.004351554      7.56127  0.00000000

9.  A(1,3)                       -0.002625706  0.000774117     -3.39187  0.00069416

10. A(2,1)                        0.029611897  0.003823820      7.74406  0.00000000

11. A(2,2)                        0.142254621  0.010294643     13.81832  0.00000000

12. A(2,3)                       -0.012085976  0.001802078     -6.70669  0.00000000

13. A(3,1)                        0.002481219  0.005370488      0.46201  0.64407418

14. A(3,2)                        0.032937641  0.007797229      4.22428  0.00002397

15. A(3,3)                        0.089359548  0.007678690     11.63734  0.00000000

16. B(1,1)                        0.860433868  0.008348932    103.05915  0.00000000

17. B(1,2)                       -0.032049330  0.004814092     -6.65740  0.00000000

18. B(1,3)                        0.003083861  0.002378905      1.29634  0.19485967

19. B(2,1)                       -0.027854446  0.003415045     -8.15639  0.00000000

20. B(2,2)                        0.787363196  0.017806848     44.21688  0.00000000

21. B(2,3)                        0.054014152  0.010142159      5.32571  0.00000010

22. B(3,1)                       -0.007390051  0.006071894     -1.21709  0.22356937

23. B(3,2)                       -0.027511277  0.011884229     -2.31494  0.02061621

24. B(3,3)                        0.877652457  0.012249078     71.65049  0.00000000

25. R(2,1)                        0.559873267  0.008657402     64.66990  0.00000000

26. R(3,1)                        0.573176839  0.008445591     67.86699  0.00000000

27. R(3,2)                        0.830630501  0.003937859    210.93455  0.00000000


 

The next output is generated by

 

garch(p=1,q=1,mv=ewma,distrib=t) / xjpn xfra xsui

 

which does an EWMA (Exponentially Weighted Moving Average) model. This is a very tightly parameterized variance model which has just a single real parameter (Alpha in the output) governing the evolution of the covariance matrix.
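
Assuming the standard (RiskMetrics-style) parameterization, with Alpha as the weight on the lagged outer product of the residuals, the recursion is

\[
{\bf{H}}_t = (1 - \alpha)\,{\bf{H}}_{t - 1} + \alpha\,{\bf{u}}_{t - 1}{\bf{u}}'_{t - 1}
\]

which has no constant term and imposes the same dynamics on every element of the covariance matrix.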

 

Note that the log likelihood is substantially better than for the previous models because this uses Student t errors (the DISTRIB=T option); all the previous examples assumed conditionally Normal residuals.

 

MV-GARCH, EWMA - Estimation by BFGS

Convergence in    12 Iterations. Final criterion was  0.0000058 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -10516.1908

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.005311325  0.003704962     -1.43357  0.15169474

2.  Mean(XFRA)                   -0.004245558  0.004184190     -1.01467  0.31026475

3.  Mean(XSUI)                   -0.005260662  0.005173644     -1.01682  0.30923931

 

4.  Alpha                         0.049804403  0.001974483     25.22402  0.00000000

5.  Shape                         5.253165403  0.139850219     37.56280  0.00000000


 

The next is from

 

garch(p=1,q=1,mv=cc,asymmetric,variances=exp) / xjpn xfra xsui

 

This is a CC model with the individual variances computed using E-GARCH with asymmetry, which is chosen using the combination of VARIANCES=EXP and ASYMMETRIC. In the output, C(i) is the constant in the log variance equation for variable i, A(i) is the coefficient on the lagged absolute standardized residual, and B(i) is the coefficient on the lagged log variance. D(i) is the asymmetry coefficient: it will be zero if the variance responds identically to positive and negative residuals, and negative if the variance increases more with a negative residual than with a similarly sized positive one.
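
Up to details of the parameterization (the centering of the absolute residual and the exact placement of the asymmetry term are assumptions here), each log variance equation has the general E-GARCH form

\[
\log h_{i,t} = c_i + a_i \left| z_{i,t - 1} \right| + d_i\, z_{i,t - 1} + b_i \log h_{i,t - 1}, \qquad z_{i,t - 1} = u_{i,t - 1} / \sqrt{h_{i,t - 1}}
\]

so a negative \(d_i\) makes a negative residual raise the (log) variance by more than a positive residual of the same size.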


 

MV-CC GARCH with E-GARCH Variances - Estimation by BFGS

Convergence in    46 Iterations. Final criterion was  0.0000050 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -12733.4127

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.001061134  0.005995626     -0.17698  0.85952042

2.  Mean(XFRA)                   -0.007507476  0.006656264     -1.12788  0.25937002

3.  Mean(XSUI)                    0.006646038  0.008248918      0.80569  0.42042391

 

4.  C(1)                         -0.388302831  0.016520648    -23.50409  0.00000000

5.  C(2)                         -0.280006611  0.014158350    -19.77678  0.00000000

6.  C(3)                         -0.212549903  0.012904538    -16.47094  0.00000000

7.  A(1)                          0.390917065  0.016363203     23.89001  0.00000000

8.  A(2)                          0.265106474  0.013130072     20.19079  0.00000000

9.  A(3)                          0.216811643  0.012890599     16.81936  0.00000000

10. B(1)                          0.894099733  0.007099411    125.93999  0.00000000

11. B(2)                          0.912114139  0.006201980    147.06823  0.00000000

12. B(3)                          0.926770474  0.006197222    149.54612  0.00000000

13. D(1)                          0.019518794  0.009788554      1.99404  0.04614740

14. D(2)                         -0.011375872  0.006560111     -1.73410  0.08290078

15. D(3)                          0.043824402  0.006302749      6.95322  0.00000000

16. R(2,1)                        0.561037832  0.008673803     64.68187  0.00000000

17. R(3,1)                        0.575786420  0.008422018     68.36680  0.00000000

18. R(3,2)                        0.826049552  0.004068032    203.05878  0.00000000

 

The next is generated by

 

garch(p=1,q=1,mv=dcc,variances=koutmos,hmatrices=hh) / $

   xjpn xfra xsui

 

This is a DCC model with the individual variances computed using the option VARIANCES=KOUTMOS, a specification introduced in Koutmos(1996). It is an extension of the multivariate EGARCH which allows for "spillover" effects among the residuals.

 

In the output, A(i,j) is the loading of the index for variable j on the variance for i. D(j) is the asymmetry coefficient for variable j.
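
As a rough sketch (the exact centering and scaling are assumptions here), the Koutmos specification makes each log variance depend on an asymmetric "index" function of every lagged standardized residual:

\[
\log h_{i,t} = c_i + \sum_j a_{ij}\, f_j\!\left( z_{j,t - 1} \right) + b_i \log h_{i,t - 1}, \qquad f_j(z) = \left| z \right| - E\left| z \right| + d_j\, z
\]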

 

MV-DCC GARCH with Koutmos EGARCH Variances - Estimation by BFGS

Convergence in   230 Iterations. Final criterion was  0.0000098 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -11667.5734

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.006024778  0.004630050      1.30123  0.19317842

2.  Mean(XFRA)                   -0.000146329  0.005191954     -0.02818  0.97751558

3.  Mean(XSUI)                    0.003879846  0.006243991      0.62137  0.53435445

 

4.  C(1)                         -0.333025563  0.014229055    -23.40462  0.00000000

5.  C(2)                         -0.318645749  0.011582791    -27.51027  0.00000000

6.  C(3)                         -0.256314323  0.011379009    -22.52519  0.00000000

7.  A(1,1)                        0.368281353  0.014178033     25.97549  0.00000000

8.  A(1,2)                        0.065355123  0.017493779      3.73591  0.00018704

9.  A(1,3)                       -0.038663116  0.015996067     -2.41704  0.01564735

10. A(2,1)                        0.089174917  0.008514294     10.47355  0.00000000

11. A(2,2)                        0.312079544  0.016205921     19.25713  0.00000000

12. A(2,3)                       -0.021657608  0.015265041     -1.41877  0.15596558

13. A(3,1)                        0.038346197  0.008701091      4.40706  0.00001048

14. A(3,2)                        0.104655778  0.012043972      8.68947  0.00000000

15. A(3,3)                        0.167712826  0.012842656     13.05905  0.00000000

16. B(1)                          0.938873078  0.003994526    235.03992  0.00000000

17. B(2)                          0.949210072  0.003458963    274.42045  0.00000000

18. B(3)                          0.952207255  0.003918152    243.02459  0.00000000

19. D(1)                          0.074075026  0.020036638      3.69698  0.00021818

20. D(2)                         -0.008025059  0.018932972     -0.42387  0.67166293

21. D(3)                          0.164340488  0.028595478      5.74708  0.00000001

22. DCC(A)                        0.050598072  0.003298070     15.34172  0.00000000

23. DCC(B)                        0.943670624  0.003834867    246.07648  0.00000000

 

 


This estimates a BEKK with t errors, saving the residuals and the variances (in the VECT[SERIES] and SYMM[SERIES] forms). Those extra outputs will be used in the next calculation. The only addition to the output is the SHAPE parameter, the estimated degrees of freedom of the t.

 

garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10,distrib=t,$

   rseries=rs,mvhseries=hhs) / xjpn xfra xsui


 

MV-GARCH, BEKK - Estimation by BFGS

Convergence in    92 Iterations. Final criterion was  0.0000055 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -10260.5399

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.007107660  0.003858492     -1.84208  0.06546307

2.  Mean(XFRA)                   -0.004612319  0.004092372     -1.12705  0.25972018

3.  Mean(XSUI)                   -0.004487324  0.005066468     -0.88569  0.37578413

 

4.  C(1,1)                       -0.017751180  0.004269828     -4.15735  0.00003220

5.  C(2,1)                        0.031717592  0.007622471      4.16106  0.00003168

6.  C(2,2)                        0.027368262  0.008516552      3.21354  0.00131111

7.  C(3,1)                        0.029154447  0.013872244      2.10164  0.03558493

8.  C(3,2)                       -0.067679325  0.008945581     -7.56567  0.00000000

9.  C(3,3)                       -0.000003986  0.083151803 -4.79320e-05  0.99996176

10. A(1,1)                        0.291241803  0.013001384     22.40083  0.00000000

11. A(1,2)                        0.001808702  0.012235897      0.14782  0.88248537

12. A(1,3)                        0.008848857  0.016965762      0.52157  0.60196873

13. A(2,1)                       -0.004305175  0.009650004     -0.44613  0.65550192

14. A(2,2)                        0.315102162  0.016932181     18.60966  0.00000000

15. A(2,3)                       -0.088257932  0.020523901     -4.30025  0.00001706

16. A(3,1)                       -0.007935956  0.006432005     -1.23382  0.21726880

17. A(3,2)                       -0.033315781  0.011216219     -2.97032  0.00297488

18. A(3,3)                        0.347586918  0.018646535     18.64083  0.00000000

19. B(1,1)                        0.963713100  0.002641411    364.84781  0.00000000

20. B(1,2)                        0.000680630  0.002835135      0.24007  0.81027619

21. B(1,3)                        0.002794142  0.004115180      0.67898  0.49714786

22. B(2,1)                        0.000460633  0.003131357      0.14710  0.88305048

23. B(2,2)                        0.941278157  0.005761179    163.38289  0.00000000

24. B(2,3)                        0.045037314  0.007406098      6.08111  0.00000000

25. B(3,1)                        0.002189996  0.002497809      0.87677  0.38061333

26. B(3,2)                        0.021980487  0.004689952      4.68672  0.00000278

27. B(3,3)                        0.927099189  0.006927691    133.82513  0.00000000

28. Shape(t degrees)              4.229200497  0.138321210     30.57521  0.00000000

 

 

This takes the residuals and covariance matrices just saved, and uses them to compute the empirical probability of a residual (for Japan) being in the left .05 tail.
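
The FIXT factor rescales the residual so it can be compared with a standard t distribution: a t with \(\nu\) degrees of freedom has variance \(\nu /(\nu - 2)\), so if \(u_t\) has conditional variance \(h_t\), then under the model

\[
\frac{u_t}{\sqrt{h_t (\nu - 2)/\nu}} \sim t_\nu
\]

and TRIGGER is 1 whenever the t CDF of that ratio is below .05. The SSTATS instruction then averages TRIGGER over the sample.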

 

compute fixt=(%shape-2)/%shape

set trigger = %tcdf(rs(1)/sqrt(hhs(1,1)*fixt),%shape)<.05

sstats(mean) / trigger>>VaRp

disp "Probability of being below .05 level" #.#### VaRp

 

This is the output from that calculation.

 

Probability of being below .05 level 0.0420


 

This does a DCC variance model combined with a mean model that uses different explanatory variables in each equation (separate univariate AR(1) models). The more complicated mean model is shown in the output with a subheader for each equation followed by the coefficients which apply to it.

 

 

equation(constant) jpneq xjpn 1

equation(constant) fraeq xfra 1

equation(constant) suieq xsui 1

group ar1 jpneq fraeq suieq

garch(p=1,q=1,model=ar1,mv=dcc,pmethod=simplex,piter=10)

 

 

MV-DCC GARCH  - Estimation by BFGS

Convergence in    50 Iterations. Final criterion was  0.0000000 <=  0.0000100

 

Usable Observations                      6235

Log Likelihood                    -11810.9128

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                      0.004012130  0.006527278      0.61467  0.53877188

2.  XJPN{1}                       0.025541364  0.012106316      2.10976  0.03487944

Mean Model(XFRA)

3.  Constant                     -0.003121533  0.005934792     -0.52597  0.59890789

4.  XFRA{1}                       0.005552823  0.010310136      0.53858  0.59017737

Mean Model(XSUI)

5.  Constant                     -0.003058080  0.006973783     -0.43851  0.66101596

6.  XSUI{1}                      -0.001890715  0.010112046     -0.18698  0.85167907

 

7.  C(1)                          0.008296064  0.000941860      8.80817  0.00000000

8.  C(2)                          0.012328174  0.001261632      9.77161  0.00000000

9.  C(3)                          0.016479037  0.001792536      9.19314  0.00000000

10. A(1)                          0.151627200  0.009158629     16.55567  0.00000000

11. A(2)                          0.137359939  0.007952320     17.27294  0.00000000

12. A(3)                          0.123129448  0.007012873     17.55763  0.00000000

13. B(1)                          0.852578357  0.007747172    110.05026  0.00000000

14. B(2)                          0.849687450  0.007917933    107.31177  0.00000000

15. B(3)                          0.858564973  0.007545403    113.78650  0.00000000

16. DCC(A)                        0.053146867  0.003102971     17.12774  0.00000000

17. DCC(B)                        0.939177245  0.003732452    251.62473  0.00000000


 

This estimates a BEKK model with the mean model given by a VAR(1).

 

system(model=var1)

variables xjpn xfra xsui

lags 1

det constant

end(system)

*

garch(p=1,q=1,model=var1,mv=bekk,pmethod=simplex,piters=10)


 

MV-GARCH, BEKK - Estimation by BFGS

Convergence in    96 Iterations. Final criterion was  0.0000080 <=  0.0000100

 

Usable Observations                      6235

Log Likelihood                    -11809.4182

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  XJPN{1}                       0.054047272  0.011942861      4.52549  0.00000603

2.  XFRA{1}                       0.023703172  0.012434190      1.90629  0.05661260

3.  XSUI{1}                      -0.034353787  0.010046576     -3.41945  0.00062747

4.  Constant                      0.006198638  0.005824749      1.06419  0.28724271

Mean Model(XFRA)

5.  XJPN{1}                       0.028507734  0.010648194      2.67724  0.00742322

6.  XFRA{1}                       0.023745977  0.011737459      2.02309  0.04306350

7.  XSUI{1}                      -0.009397730  0.009744988     -0.96437  0.33486271

8.  Constant                     -0.001867249  0.004229528     -0.44148  0.65886617

Mean Model(XSUI)

9.  XJPN{1}                       0.040997212  0.013027364      3.14701  0.00164951

10. XFRA{1}                       0.025073306  0.012911897      1.94188  0.05215209

11. XSUI{1}                      -0.018020929  0.009489028     -1.89913  0.05754694

12. Constant                     -0.001920828  0.005069808     -0.37888  0.70478004

 

13. C(1,1)                        0.080435930  0.004793012     16.78192  0.00000000

14. C(2,1)                        0.026820313  0.007480192      3.58551  0.00033642

15. C(2,2)                        0.055106684  0.005020203     10.97698  0.00000000

16. C(3,1)                        0.034208095  0.007693822      4.44618  0.00000874

17. C(3,2)                       -0.004050227  0.009476016     -0.42742  0.66907434

18. C(3,3)                        0.058671874  0.005634198     10.41353  0.00000000

19. A(1,1)                        0.356771446  0.011256418     31.69494  0.00000000

20. A(1,2)                        0.099064871  0.009417503     10.51923  0.00000000

21. A(1,3)                        0.107409312  0.012000034      8.95075  0.00000000

22. A(2,1)                        0.033552159  0.014691203      2.28383  0.02238173

23. A(2,2)                        0.398536021  0.016364355     24.35391  0.00000000

24. A(2,3)                       -0.070314051  0.018541044     -3.79235  0.00014923

25. A(3,1)                       -0.047183060  0.009893482     -4.76911  0.00000185

26. A(3,2)                       -0.122502522  0.012120930    -10.10669  0.00000000

27. A(3,3)                        0.292916086  0.013683608     21.40635  0.00000000

28. B(1,1)                        0.936382926  0.003618094    258.80557  0.00000000

29. B(1,2)                       -0.025339939  0.002987649     -8.48156  0.00000000

30. B(1,3)                       -0.027037423  0.003927755     -6.88368  0.00000000

31. B(2,1)                       -0.010670134  0.005503253     -1.93888  0.05251625

32. B(2,2)                        0.911878410  0.006460352    141.14997  0.00000000

33. B(2,3)                        0.030707222  0.006589697      4.65988  0.00000316

34. B(3,1)                        0.016110359  0.003907448      4.12299  0.00003740

35. B(3,2)                        0.047482297  0.005203515      9.12504  0.00000000

36. B(3,3)                        0.946088060  0.004837787    195.56215  0.00000000


 

To do GARCH-M with a multivariate model, you have to plan ahead a bit. On your GARCH instruction, you need to use the MVHSERIES option, which saves the paths of the variances and covariances into a SYMM[SERIES] (example: MVHSERIES=HHS). Your regression equations for the means will include references to the elements of this array of series. Since those equations need to be created in advance, you need to declare HHS as a SYMM[SERIES] first as well. The model here includes in each equation all the covariances which involve the residuals from that equation.
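
With the equations set up this way, the mean model being estimated is (for each variable i)

\[
y_{i,t} = \beta_{i,0} + \sum_{j = 1}^{3} \beta_{i,j}\, h_{ij,t} + u_{i,t}
\]

where the \(h_{ij,t}\) are the elements of the conditional covariance matrix from the GARCH recursion (the HHS series).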

 

dec symm[series] hhs(3,3)

clear(zeros) hhs

*

equation jpneq xjpn

# constant hhs(1,1) hhs(1,2) hhs(1,3)

equation fraeq xfra

# constant hhs(2,1) hhs(2,2) hhs(2,3)

equation suieq xsui

# constant hhs(3,1) hhs(3,2) hhs(3,3)

*

group garchm jpneq fraeq suieq

garch(model=garchm,p=1,q=1,pmethod=simplex,piters=10,iters=500,$

   mvhseries=hhs)


 

MV-GARCH - Estimation by BFGS

Convergence in   144 Iterations. Final criterion was  0.0000078 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -11832.2625

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                     -0.007849113  0.009863216     -0.79580  0.42615031

2.  HHS(1,1)                      0.004435082  0.032556473      0.13623  0.89164156

3.  HHS(2,1)                     -0.041757909  0.089146786     -0.46842  0.63948610

4.  HHS(3,1)                      0.091558025  0.076002136      1.20468  0.22832802

Mean Model(XFRA)

5.  Constant                     -0.010751782  0.009107112     -1.18059  0.23776489

6.  HHS(2,1)                      0.069406060  0.044387649      1.56363  0.11790342

7.  HHS(2,2)                     -0.026189907  0.038262864     -0.68447  0.49367638

8.  HHS(3,2)                      0.010818971  0.041245292      0.26231  0.79308398

Mean Model(XSUI)

9.  Constant                     -0.004234646  0.011894647     -0.35601  0.72183103

10. HHS(3,1)                      0.065188220  0.047639040      1.36838  0.17119375

11. HHS(3,2)                      0.002382359  0.047947064      0.04969  0.96037159

12. HHS(3,3)                     -0.025794190  0.035166037     -0.73350  0.46325523

 

13. C(1,1)                        0.008963132  0.000875514     10.23756  0.00000000

14. C(2,1)                        0.005670072  0.000503384     11.26392  0.00000000

15. C(2,2)                        0.011533285  0.000754783     15.28027  0.00000000

16. C(3,1)                        0.005999971  0.000496130     12.09355  0.00000000

17. C(3,2)                        0.009941202  0.000386110     25.74709  0.00000000

18. C(3,3)                        0.012772167  0.000463267     27.56980  0.00000000

19. A(1,1)                        0.105756073  0.006443928     16.41174  0.00000000

20. A(2,1)                        0.093982155  0.005531661     16.98986  0.00000000

21. A(2,2)                        0.128299751  0.006464150     19.84789  0.00000000

22. A(3,1)                        0.088856383  0.005016732     17.71200  0.00000000

23. A(3,2)                        0.113612844  0.005621794     20.20936  0.00000000

24. A(3,3)                        0.111350277  0.005739693     19.40004  0.00000000

25. B(1,1)                        0.884137252  0.006391777    138.32418  0.00000000

26. B(2,1)                        0.891025082  0.005215409    170.84473  0.00000000

27. B(2,2)                        0.860654492  0.005303599    162.27744  0.00000000

28. B(3,1)                        0.897618057  0.004711600    190.51235  0.00000000

29. B(3,2)                        0.874814093  0.004491115    194.78771  0.00000000

30. B(3,3)                        0.877555588  0.004651904    188.64440  0.00000000


 

If you need some function of the variances or covariances, use the HADJUST option to define it based upon the values that you save with either MVHSERIES or HMATRICES. The following, for instance, adds the conditional standard deviation (generated into the series COMMONSD) of an equally weighted sum of the three currencies. This does a CC model using VARIANCES=SPILLOVER, which includes cross terms (allowing for "spillover") on the lagged squared residuals.
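
In this example the HADJUST expression evaluates, at each entry,

\[
\mathrm{COMMONSD}_t = \sqrt{ {\bf{w}}' {\bf{H}}_t {\bf{w}} }, \qquad {\bf{w}} = \left( 1/3, 1/3, 1/3 \right)'
\]

using the %QFORM function, and writes the result into the COMMONSD series that appears in the mean equations.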

 

set commonsd = 0.0

system(model=customm)

variables xjpn xfra xsui

det constant commonsd

end(system)

*

compute %nvar=%modelsize(customm)

compute weights=%fill(%nvar,1,1.0/%nvar)

garch(model=customm,p=1,q=1,mv=cc,variances=spillover,pmethod=simplex,piters=5,$

   iters=500,hmatrices=hh,hadjust=(commonsd=sqrt(%qform(hh,weights))))


 

MV-CC GARCH with Spillover Variances - Estimation by BFGS

Convergence in    93 Iterations. Final criterion was  0.0000088 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -12740.4462

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                     -0.029842586  0.019071561     -1.56477  0.11763711

2.  COMMONSD                      0.053667188  0.035746474      1.50133  0.13327071

Mean Model(XFRA)

3.  Constant                     -0.047162588  0.021717377     -2.17165  0.02988191

4.  COMMONSD                      0.082817429  0.040949910      2.02241  0.04313421

Mean Model(XSUI)

5.  Constant                     -0.028134340  0.025811939     -1.08997  0.27572462

6.  COMMONSD                      0.060123324  0.047615815      1.26268  0.20670582

 

7.  C(1)                          0.013447146  0.001870851      7.18772  0.00000000

8.  C(2)                          0.023003846  0.002332264      9.86331  0.00000000

9.  C(3)                          0.032222689  0.002844326     11.32876  0.00000000

10. A(1,1)                        0.165972437  0.010884665     15.24828  0.00000000

11. A(1,2)                        0.013450984  0.002765010      4.86471  0.00000115

12. A(1,3)                       -0.002086714  0.000322548     -6.46946  0.00000000

13. A(2,1)                        0.023299804  0.003531985      6.59680  0.00000000

14. A(2,2)                        0.153443254  0.010282048     14.92341  0.00000000

15. A(2,3)                        0.009559265  0.002955966      3.23389  0.00122117

16. A(3,1)                       -0.000452464  0.003489263     -0.12967  0.89682495

17. A(3,2)                        0.026380541  0.005891969      4.47737  0.00000756

18. A(3,3)                        0.121021238  0.008060161     15.01474  0.00000000

19. B(1)                          0.811204171  0.010975985     73.90719  0.00000000

20. B(2)                          0.772015008  0.011621376     66.43061  0.00000000

21. B(3)                          0.808656548  0.010370612     77.97578  0.00000000

22. R(2,1)                        0.561778133  0.008001589     70.20833  0.00000000

23. R(3,1)                        0.576805047  0.007896678     73.04401  0.00000000

24. R(3,2)                        0.830563269  0.003711043    223.80858  0.00000000


 

This is an asymmetric BEKK model with a VMA (Vector Moving Average) mean model. The residuals (the U(1), U(2) and U(3) series) are generated recursively as the function is evaluated and are saved using the RSERIES option. They are then used in computing the mean model for the next period.
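
The mean model being estimated is a VMA(1):

\[
{\bf{y}}_t = {\bf{c}} + \Theta\, {\bf{u}}_{t - 1} + {\bf{u}}_t
\]

where row i of \(\Theta\) is made up of the U(1){1}, U(2){1} and U(3){1} coefficients in equation i of the output.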

 

dec vect[series] u(3)

clear(zeros) u

system(model=varma)

variables xjpn xfra xsui

det constant u(1){1} u(2){1} u(3){1}

end(system)

*

garch(model=varma,p=1,q=1,mv=bekk,asymmetric,rseries=u,$

   pmethod=simplex,piters=10,iters=500)

 

 

MV-GARCH, BEKK - Estimation by BFGS

Convergence in    98 Iterations. Final criterion was  0.0000091 <=  0.0000100

 

Usable Observations                      6236

Log Likelihood                    -11750.8048

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                      0.000517049  0.005483577      0.09429  0.92487840

2.  U(1){1}                       0.056542853  0.010905316      5.18489  0.00000022

3.  U(2){1}                       0.023799767  0.011324202      2.10167  0.03558195

4.  U(3){1}                      -0.034939036  0.008870599     -3.93875  0.00008191

Mean Model(XFRA)

5.  Constant                     -0.004620079  0.003787255     -1.21990  0.22250207

6.  U(1){1}                       0.026529499  0.009291297      2.85531  0.00429954

7.  U(2){1}                       0.024502516  0.011042365      2.21896  0.02648976

8.  U(3){1}                      -0.004608662  0.007019772     -0.65653  0.51148581

Mean Model(XSUI)

9.  Constant                     -0.003730446  0.004553395     -0.81927  0.41263412

10. U(1){1}                       0.039552904  0.011341454      3.48746  0.00048763

11. U(2){1}                       0.024216883  0.010049424      2.40978  0.01596222

12. U(3){1}                      -0.011612455  0.009982280     -1.16331  0.24470501

 

13. C(1,1)                        0.071440677  0.004666243     15.31011  0.00000000

14. C(2,1)                        0.019904067  0.006920749      2.87600  0.00402751

15. C(2,2)                        0.049758322  0.005031042      9.89026  0.00000000

16. C(3,1)                        0.029643632  0.007841235      3.78048  0.00015653

17. C(3,2)                       -0.006359708  0.010108110     -0.62917  0.52923851

18. C(3,3)                        0.057138519  0.005507015     10.37559  0.00000000

19. A(1,1)                        0.337768489  0.009543796     35.39142  0.00000000

20. A(1,2)                        0.101314155  0.008208917     12.34196  0.00000000

21. A(1,3)                        0.095841256  0.011174256      8.57697  0.00000000

22. A(2,1)                        0.001144970  0.014701361      0.07788  0.93792201

23. A(2,2)                        0.376879863  0.017268295     21.82496  0.00000000

24. A(2,3)                       -0.092997279  0.017098999     -5.43876  0.00000005

25. A(3,1)                       -0.038116822  0.009680120     -3.93764  0.00008229

26. A(3,2)                       -0.126887422  0.012663180    -10.02019  0.00000000

27. A(3,3)                        0.303054636  0.013129243     23.08242  0.00000000

28. B(1,1)                        0.934735090  0.003065463    304.92461  0.00000000

29. B(1,2)                       -0.027758350  0.002866379     -9.68412  0.00000000

30. B(1,3)                       -0.027667110  0.003421433     -8.08641  0.00000000

31. B(2,1)                       -0.006977941  0.005582143     -1.25005  0.21128239

32. B(2,2)                        0.907298431  0.006209658    146.11086  0.00000000

33. B(2,3)                        0.035183606  0.006772729      5.19489  0.00000020

34. B(3,1)                        0.014406438  0.004140365      3.47951  0.00050233

35. B(3,2)                        0.052042648  0.005061386     10.28229  0.00000000

36. B(3,3)                        0.942431126  0.005114451    184.26827  0.00000000

37. D(1,1)                        0.184171852  0.021219408      8.67941  0.00000000

38. D(1,2)                        0.019941521  0.019292952      1.03362  0.30131529

39. D(1,3)                        0.108275699  0.025745289      4.20565  0.00002603

40. D(2,1)                        0.092772704  0.016025850      5.78894  0.00000001

41. D(2,2)                        0.108997998  0.021529893      5.06264  0.00000041

42. D(2,3)                        0.105291651  0.024522423      4.29369  0.00001757

43. D(3,1)                       -0.027926024  0.015632011     -1.78646  0.07402416

44. D(3,2)                        0.072495801  0.012881016      5.62811  0.00000002

45. D(3,3)                       -0.025898035  0.021433408     -1.20830  0.22693102

 

 

This estimates an asymmetric BEKK, with the VAR(1) mean model and t distributed errors, saving various statistics for diagnostics. This again requires some extra care with the NLPAR settings to estimate properly.
 

nlpar(derive=fourth,exactline)

garch(model=var1,mv=bekk,asymmetric,p=1,q=1,distrib=t,$

   pmethod=simplex,piters=10,iters=500,$

   rseries=rs,mvhseries=hhs,stdresids=zu,derives=dd)

 

 

MV-GARCH, BEKK - Estimation by BFGS

Convergence in   128 Iterations. Final criterion was  0.0000096 <=  0.0000100

 

Usable Observations                      6235

Log Likelihood                    -10204.7792

 

    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  XJPN{1}                      -0.013922256  0.011718449     -1.18806  0.23480856

2.  XFRA{1}                       0.015596795  0.009916931      1.57274  0.11577808

3.  XSUI{1}                      -0.008653617  0.007005863     -1.23520  0.21675742

4.  Constant                     -0.008317954  0.003622105     -2.29644  0.02165061

Mean Model(XFRA)

5.  XJPN{1}                      -0.010041997  0.010743524     -0.93470  0.34994172

6.  XFRA{1}                      -0.004013781  0.008893109     -0.45134  0.65174735

7.  XSUI{1}                       0.020068634  0.008117448      2.47228  0.01342529

8.  Constant                     -0.003343760  0.004088702     -0.81780  0.41346871

Mean Model(XSUI)

9.  XJPN{1}                      -0.000913145  0.013654436     -0.06688  0.94668097

10. XFRA{1}                       0.033303662  0.010263716      3.24480  0.00117535

11. XSUI{1}                      -0.032648518  0.010025903     -3.25642  0.00112828

12. Constant                     -0.003634622  0.005161392     -0.70419  0.48131189

 

13. C(1,1)                        0.015388925  0.004323636      3.55926  0.00037191

14. C(2,1)                       -0.029941041  0.007329806     -4.08483  0.00004411

15. C(2,2)                        0.021794519  0.008962191      2.43183  0.01502278

16. C(3,1)                       -0.015010306  0.016045852     -0.93546  0.34954960

17. C(3,2)                       -0.063157209  0.008171624     -7.72884  0.00000000

18. C(3,3)                        0.000003465  0.057349090  6.04191e-05  0.99995179

19. A(1,1)                        0.280572594  0.014279402     19.64876  0.00000000

20. A(1,2)                        0.010020816  0.013238732      0.75693  0.44909081

21. A(1,3)                        0.011505341  0.018402599      0.62520  0.53183854

22. A(2,1)                       -0.003963025  0.009942508     -0.39859  0.69019231

23. A(2,2)                        0.312372338  0.018349368     17.02360  0.00000000

24. A(2,3)                       -0.075759059  0.021118346     -3.58736  0.00033405

25. A(3,1)                       -0.006188012  0.006596246     -0.93811  0.34818729

26. A(3,2)                       -0.023218635  0.012992926     -1.78702  0.07393404

27. A(3,3)                        0.345681109  0.019083399     18.11423  0.00000000

28. B(1,1)                        0.959879223  0.002999470    320.01626  0.00000000

29. B(1,2)                       -0.000179505  0.003237216     -0.05545  0.95577955

30. B(1,3)                        0.001566655  0.004553897      0.34403  0.73082734

31. B(2,1)                        0.000665128  0.003233844      0.20568  0.83704312

32. B(2,2)                        0.939704790  0.006320441    148.67709  0.00000000

33. B(2,3)                        0.041077450  0.008056670      5.09856  0.00000034

34. B(3,1)                        0.001479810  0.002558464      0.57840  0.56299552

35. B(3,2)                        0.020074720  0.005238068      3.83247  0.00012686

36. B(3,3)                        0.928790712  0.007365728    126.09626  0.00000000

37. D(1,1)                        0.185748046  0.024666145      7.53049  0.00000000

38. D(1,2)                       -0.030184227  0.027193491     -1.10998  0.26700771

39. D(1,3)                        0.055188975  0.042029155      1.31311  0.18914531

40. D(2,1)                        0.008546134  0.014572386      0.58646  0.55756587

41. D(2,2)                        0.037431586  0.035568899      1.05237  0.29263055

42. D(2,3)                        0.117933534  0.035422148      3.32937  0.00087042

43. D(3,1)                       -0.002750566  0.009722616     -0.28290  0.77725054

44. D(3,2)                        0.071494173  0.020963580      3.41040  0.00064868

45. D(3,3)                       -0.088057871  0.034007411     -2.58937  0.00961509

46. Shape(t degrees)              4.185348587  0.135853155     30.80789  0.00000000


 

These do "univariate" diagnostics on the residuals. The Z variables are the residuals standardized by their variances, so they should (if the model is correct) be mean zero, variance one and be serially uncorrelated. The one diagnostic which fails badly across all three variables is the Ljung-Box Q statistic which tests for serial correlation in the mean. This might indicate that the VAR(1) mean model isn't adequate, though it can also mean more serious problems exist.

 

set z1 = rs(1)/sqrt(hhs(1,1))

set z2 = rs(2)/sqrt(hhs(2,2))

set z3 = rs(3)/sqrt(hhs(3,3))

@bdindtests(number=40) z1

@bdindtests(number=40) z2

@bdindtests(number=40) z3


 

Independence Tests for Series Z1

Test            Statistic  P-Value

Ljung-Box Q(40)  121.34208     0.0000

McLeod-Li(40)     56.02381     0.0476

Turning Points    -1.32167     0.1863

Difference Sign   -0.76761     0.4427

Rank Test         -1.05722     0.2904

 

 

Independence Tests for Series Z2

Test            Statistic  P-Value

Ljung-Box Q(40)  100.66701     0.0000

McLeod-Li(40)      9.85788     1.0000

Turning Points     0.00000     1.0000

Difference Sign   -0.24125     0.8094

Rank Test         -0.42783     0.6688

 

 

Independence Tests for Series Z3

Test            Statistic  P-Value

Ljung-Box Q(40)  96.737938     0.0000

McLeod-Li(40)    45.299177     0.2604

Turning Points   -1.231559     0.2181

Difference Sign  -0.021932     0.9825

Rank Test        -2.172303     0.0298

 

 

These are tests on the jointly standardized residuals, which should (if the model is correct) be mutually and serially uncorrelated, with mean zero and an identity covariance matrix. (The univariate standardized residuals don't have any predictable "cross-variable" properties.) @MVQSTAT checks for serial correlation in the mean, while @MVARCHTEST checks for residual cross-variable ARCH. These also reject very strongly. The failure of the multivariate Q is not a surprise given the univariate Q results. The failure of the multivariate ARCH test is more of a surprise given that the univariate McLeod-Li tests weren't too much of a problem.
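
"Jointly standardized" here means the residual vector is rescaled by a factor of its conditional covariance matrix (the particular factor used by the STDRESIDS option, for instance a Cholesky factor, is an assumption here), so that

\[
{\bf{z}}_t = {\bf{F}}_t^{ - 1} {\bf{u}}_t, \qquad {\bf{F}}_t {\bf{F}}'_t = {\bf{H}}_t, \qquad E\, {\bf{z}}_t {\bf{z}}'_t = {\bf{I}}
\]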
 

@mvqstat(lags=5)

# zu

@mvarchtest(lags=5)

# zu

 

 

Multivariate Q Test

Test Run Over 3 to 6237

Lags Tested          5

Degrees of Freedom  45

Q Statistic        143.1558

Signif Level         0.0000

 

 

Multivariate ARCH Test

Statistic Degrees Signif

   520.49     180 0.00000

 

 

This does a Nyblom fluctuations test, which is a fairly general test for structural breaks in the time sequence. It provides a joint test on the entire coefficient vector, plus tests on the individual parameters. You can check the full GARCH output above to see what each coefficient number refers to. The individual coefficient for which stability is most strongly rejected is the SHAPE (#46). #28 (B(1,1), which is the own variance persistence for Japan) and #35 (B(3,2), which is the variance term for France from Switzerland) also have test statistics far beyond the point at which the p-value rounds to 0.00.

 

In practice, a test like this will almost always reject stability in a GARCH model if you have this large a model with this much data. (All GARCH models are approximations in some form.) The real question with any diagnostic rejection is whether it points you to a better model or some other adjustment. In this case, a look at the Japanese returns data and the Japanese residuals (the Z1 series) will show that roughly the first 1500 observations just don't look like the last 75% of the data set: overall the data are much "quieter", with a few very, very large changes. Most likely, this is due to central bank intervention. It is very unlikely that a minor adjustment will allow a GARCH model to fit both the period of relatively tight control and a later period with a generally freer market. Instead, a better approach would be to choose a subrange during which the market structure is closer to being uniform.

 

@flux

# dd

 

Test  Statistic  P-Value MaxBreak

Joint 33.7516152    0.00     1881

 

1      0.1349037    0.42     2526

2      1.8625182    0.00      723

3      1.0381692    0.00      856

4      0.2837077    0.15      731

5      0.0804031    0.67     3779

6      0.1188751    0.48     2460

7      0.0811202    0.67     5439

8      0.6483756    0.02     3403

9      0.1165895    0.49     4024

10     0.0412048    0.92     4489

11     0.0604203    0.80     3891

12     1.1238170    0.00     2700

13     1.2118427    0.00      867

14     1.1811254    0.00     1950

15     0.4665242    0.05     3408

16     1.1236611    0.00     3409

17     0.8525660    0.01     2666

18     0.7245221    0.01     4230

19     0.8873992    0.00      867

20     0.6174838    0.02     2511

21     0.2002908    0.26     4114

22     0.3249779    0.11      813

23     0.9102751    0.00     2479

24     0.4951248    0.04     3324

25     0.2702358    0.16     1304

26     1.9144147    0.00     2605

27     0.2420621    0.19     4339

28     2.8383894    0.00      867

29     0.5976814    0.02     2511

30     0.2141506    0.23     3918

31     0.8854576    0.00      754

32     1.9153297    0.00     2608

33     0.4733080    0.05     4339

34     0.8525894    0.01      513

35     2.6453120    0.00     2608

36     0.3470989    0.10     4339

37     0.6553186    0.02     1293

38     0.1300776    0.44     1795

39     0.2167433    0.23     4107

40     0.5152266    0.04     1293

41     0.1353293    0.42     1937

42     0.1415885    0.40     3114

43     0.2394900    0.20     4107

44     0.1191656    0.48     1937

45     0.3434281    0.10     2364

46     7.7646334    0.00     1685

 

 

 

Graphs

garch(p=1,q=1,mv=dcc,variances=koutmos,hmatrices=hh) / $

   xjpn xfra xsui

*

set jpnfra = %cvtocorr(hh(t))(1,2)

set jpnsui = %cvtocorr(hh(t))(1,3)

set frasui = %cvtocorr(hh(t))(2,3)

*

spgraph(vfields=3,footer="Conditional Correlations")

 graph(header="Japan with France",min=-1.0,max=1.0)

 # jpnfra

 graph(header="Japan with Switzerland",min=-1.0,max=1.0)

 # jpnsui

 graph(header="France with Switzerland",min=-1.0,max=1.0)

 # frasui

spgraph(done)

 

This generates and graphs the conditional correlations for the series from the DCC-Koutmos estimates. This can be done with any type of multivariate model (though for a CC model the correlations are constant, so the graphs won't be very interesting) as long as you save the series of covariance matrices using the HMATRICES option. In the SET instructions, %CVTOCORR(HH(T)) returns the covariance matrix converted to a correlation matrix, and the (i,j) subscripts pull that particular element out of it, so %CVTOCORR(HH(T))(1,2) is the conditional correlation of Japan (variable 1) with France (variable 2) at time T.
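
In other words, the series being graphed are

\[
\rho_{ij,t} = \frac{h_{ij,t}}{\sqrt{h_{ii,t}\, h_{jj,t}}}
\]

computed from the saved conditional covariance matrices.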


 


 


Copyright © 2024 Thomas A. Doan