GARCHMV.RPF is an illustrative example covering several variants of multivariate GARCH models: "stock" estimates for DVECH, BEKK, CC and DCC models, GARCH with non-standard mean models (different explanatory variables in each equation), and GARCH with several types of M effects. It's discussed in considerable detail in Section 9.4 of the User's Guide.


Note that this illustrates a wide range of GARCH models applied to a single set of data. In practice, you would focus on one or two model types. Specifying, estimating and testing these types of models forms a large part of the RATS ARCH/GARCH and Volatility Models e-course, which we strongly recommend if you are planning to work in this area.


This takes quite a while to run—it's a fairly large data set and it's doing many models.


Full Program


open data g10xrate.xls
data(format=xls,org=columns) 1 6237 usxjpn usxfra usxsui
*
set xjpn = 100.0*log(usxjpn/usxjpn{1})
set xfra = 100.0*log(usxfra/usxfra{1})
set xsui = 100.0*log(usxsui/usxsui{1})
*
* Examples with the different choices for the MV option
*
garch(p=1,q=1,pmethod=simplex,piters=10) / xjpn xfra xsui
garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10) / xjpn xfra xsui
*
* Restricted correlation models
*
garch(p=1,q=1,mv=cc) / xjpn xfra xsui
garch(p=1,q=1,mv=dcc)  / xjpn xfra xsui
garch(p=1,q=1,mv=choleski)  / xjpn xfra xsui
*
* CC with VARMA variances
*
garch(p=1,q=1,mv=cc,variances=varma,pmethod=simplex,piters=10) / $
   xjpn xfra xsui
*
* EWMA with t errors and an estimated degrees-of-freedom parameter
*
garch(p=1,q=1,mv=ewma,distrib=t) / xjpn xfra xsui
*
* CC-EGARCH with asymmetry
*
garch(p=1,q=1,mv=cc,asymmetric,variances=exp) / xjpn xfra xsui
*
* Estimates with graphs of conditional correlations
*
garch(p=1,q=1,mv=dcc,variances=koutmos,hmatrices=hh) / $
   xjpn xfra xsui
*
set jpnfra = %cvtocorr(hh(t))(1,2)
set jpnsui = %cvtocorr(hh(t))(1,3)
set frasui = %cvtocorr(hh(t))(2,3)
*
spgraph(vfields=3,footer="Conditional Correlations")
 graph(header="Japan with France",min=-1.0,max=1.0)
 # jpnfra
 graph(header="Japan with Switzerland",min=-1.0,max=1.0)
 # jpnsui
 graph(header="France with Switzerland",min=-1.0,max=1.0)
 # frasui
spgraph(done)
*
* Estimates for a BEKK with t errors, saving the residuals and the
* variances (in the VECT[SERIES] and SYMM[SERIES] forms), and using them
* to compute the empirical probability of a residual (for Japan) being
* in the left .05 tail.
*
garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10,distrib=t,$
   rseries=rs,mvhseries=hhs) / xjpn xfra xsui
*
compute fixt=(%shape-2)/%shape
set trigger = %tcdf(rs(1)/sqrt(hhs(1,1)*fixt),%shape)<.05
sstats(mean) / trigger>>VaRp
disp "Probability of being below .05 level" #.#### VaRp
*
* Univariate AR(1) mean models for each series, DCC model for the variance
*
equation(constant) jpneq xjpn 1
equation(constant) fraeq xfra 1
equation(constant) suieq xsui 1
group ar1 jpneq fraeq suieq
garch(p=1,q=1,model=ar1,mv=dcc,pmethod=simplex,piters=10)
*
* VAR(1) model for the mean, BEKK for the variance
*
system(model=var1)
variables xjpn xfra xsui
lags 1
det constant
end(system)
*
garch(p=1,q=1,model=var1,mv=bekk,pmethod=simplex,piters=10)
*
* GARCH-M model
*
dec symm[series] hhs(3,3)
clear(zeros) hhs
*
equation jpneq xjpn
# constant hhs(1,1) hhs(1,2) hhs(1,3)
equation fraeq xfra
# constant hhs(2,1) hhs(2,2) hhs(2,3)
equation suieq xsui
# constant hhs(3,1) hhs(3,2) hhs(3,3)
*
group garchm jpneq fraeq suieq
garch(model=garchm,p=1,q=1,pmethod=simplex,piters=10,$
   mvhseries=hhs)
*
* GARCH-M using a custom function of the variance (in this case, the
* square root of the variance of an equally weighted sum of the
* currencies).
*
set commonsd = 0.0
system(model=customm)
variables xjpn xfra xsui
det constant commonsd
end(system)
*
compute %nvar=%modelsize(customm)
compute weights=%fill(%nvar,1,1.0/%nvar)
garch(model=customm,p=1,q=1,mv=cc,variances=spillover,$
   hmatrices=hh,hadjust=(commonsd=sqrt(%qform(hh,weights))))
*
* VMA(1) mean model
*
dec vect[series] u(3)
clear(zeros) u
system(model=varma)
variables xjpn xfra xsui
det constant u(1){1} u(2){1} u(3){1}
end(system)
*
garch(model=varma,p=1,q=1,mv=bekk,asymmetric,rseries=u,$
   pmethod=simplex,piters=10,iters=500)
*
* Diagnostics on (univariate) standardized residuals
*
garch(model=var1,mv=bekk,asymmetric,p=1,q=1,distrib=t,$
   pmethod=simplex,piters=10,iters=500,$
   rseries=rs,mvhseries=hhs,stdresids=zu,derives=dd)
set z1 = rs(1)/sqrt(hhs(1,1))
set z2 = rs(2)/sqrt(hhs(2,2))
set z3 = rs(3)/sqrt(hhs(3,3))
@bdindtests(number=40) z1
@bdindtests(number=40) z2
@bdindtests(number=40) z3
*
* Multivariate Q statistic and ARCH test on jointly standardized
* residuals.
*
@mvqstat(lags=5)
# zu
@mvarchtest(lags=5)
# zu
*
* Fluctuations test
*
@flux
# dd


Output

garch(p=1,q=1,pmethod=simplex,piters=10) / xjpn xfra xsui


For a standard (or DVECH) model (the default model for GARCH), there's a separate equation for each component of the covariance matrix:


\({\bf{H}}_{ij,t} = c_{ij} + a_{ij}\,u_{i,t-1}\,u_{j,t-1} + b_{ij}\,{\bf{H}}_{ij,t-1}\)


Since the covariance matrix is symmetric, only the lower triangle needs to be modeled. In the output, C(i,j) is the variance constant, A(i,j) is the lagged squared residual ("ARCH") coefficient and B(i,j) is the lagged variance ("GARCH") coefficient. Because the components of the covariance matrix are modeled separately, it's possible for the H matrix to be non-positive definite for some parameters (even if all are positive). As a result, a DVECH can be hard to estimate, particularly if applied to data series which don't have relatively similar behavior.
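As a quick illustration of this recursion, here is a minimal NumPy sketch of one DVECH step for a hypothetical 2-variable case; all coefficient values are made up, not taken from the estimates below:

```python
import numpy as np

def dvech_step(H_prev, u_prev, C, A, B):
    """One step of the diagonal VECH recursion: each element of H
    follows its own scalar equation,
    H[i,j] = C[i,j] + A[i,j]*u[i]*u[j] + B[i,j]*H_prev[i,j]."""
    return C + A * np.outer(u_prev, u_prev) + B * H_prev

# Made-up 2-variable coefficient matrices (lower triangle mirrored)
C = np.array([[0.010, 0.005], [0.005, 0.010]])
A = np.full((2, 2), 0.10)
B = np.full((2, 2), 0.85)

H = np.array([[1.0, 0.5], [0.5, 1.0]])   # H at t-1
u = np.array([0.3, -0.2])                # residuals at t-1
H_next = dvech_step(H, u, C, A, B)       # symmetric, but not guaranteed PSD
```

Because each element evolves independently, nothing in the recursion forces H_next to remain positive definite, which is the estimation difficulty described above.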


MV-GARCH - Estimation by BFGS

Convergence in   169 Iterations. Final criterion was  0.0000051 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -11835.6510


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.004646969  0.007064672      0.65778  0.51068231

2.  Mean(XFRA)                   -0.003473262  0.007241505     -0.47963  0.63148870

3.  Mean(XSUI)                   -0.002331525  0.008693786     -0.26818  0.78855853


4.  C(1,1)                        0.009026181  0.001051883      8.58097  0.00000000

5.  C(2,1)                        0.005697145  0.000701699      8.11907  0.00000000

6.  C(2,2)                        0.011473368  0.001290261      8.89229  0.00000000

7.  C(3,1)                        0.006003505  0.000744173      8.06735  0.00000000

8.  C(3,2)                        0.009885913  0.001210674      8.16562  0.00000000

9.  C(3,3)                        0.012694509  0.001481653      8.56780  0.00000000

10. A(1,1)                        0.105889668  0.007467950     14.17921  0.00000000

11. A(2,1)                        0.093909954  0.005676249     16.54437  0.00000000

12. A(2,2)                        0.127781756  0.005754532     22.20541  0.00000000

13. A(3,1)                        0.088713290  0.005240428     16.92863  0.00000000

14. A(3,2)                        0.113209884  0.004860551     23.29158  0.00000000

15. A(3,3)                        0.111076067  0.004654186     23.86584  0.00000000

16. B(1,1)                        0.883884251  0.007285389    121.32286  0.00000000

17. B(2,1)                        0.891015563  0.005782334    154.09272  0.00000000

18. B(2,2)                        0.861224733  0.006389700    134.78329  0.00000000

19. B(3,1)                        0.897759886  0.005427159    165.41986  0.00000000

20. B(3,2)                        0.875295362  0.005333268    164.11990  0.00000000

21. B(3,3)                        0.877950585  0.004919708    178.45584  0.00000000



garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10) / xjpn xfra xsui


For a BEKK, the variance constant is represented by a lower triangular matrix whose outer product is used. (Note that this is often represented as an upper triangular matrix—the resulting product matrix will be the same in either case). The "ARCH" and "GARCH" terms are formed by a sandwich product with an \(n \times n\) matrix of coefficients around a symmetric matrix. In keeping with the literature, the pre-multiplying matrix is the transposed one:


\({\bf{H}}_t = {\bf{C}}{\bf{C}}' + {\bf{A}}'\,{\bf{u}}_{t-1}{\bf{u}}'_{t-1}\,{\bf{A}} + {\bf{B}}'\,{\bf{H}}_{t-1}\,{\bf{B}}\)


It's important to note that the signs of each of the coefficient matrices aren't identified statistically—multiply \({\bf{A}}\) (or \({\bf{B}}\) or \({\bf{C}}\)) by -1 and you get exactly the same recursion. The guess values used by RATS tend to force them towards positive values on the diagonal, but if a GARCH model doesn't fit the data particularly well, it's possible for them to "flip" signs at some point in the estimation.

Also, it is not unreasonable (and in fact not unexpected) for some off-diagonal elements of \({\bf{A}}\) (in particular) and \({\bf{B}}\) to be negative even where the diagonal elements are positive. This is most easily seen with \({\bf{A}}\): define \({{\bf{v}}_{t - 1}} = {\bf{A'}}{{\bf{u}}_{t - 1}}\), which is an \(n\) vector. The contribution of the "ARCH" term to the covariance matrix is then \({{\bf{v}}_{t - 1}}{{{\bf{v'}}}_{t - 1}}\), which means that the squares of the elements of \({\bf{v}}\) are the contributions to the variances themselves. To have "spillover" effects, so that shocks in one component affect the variance of another, \({\bf{v}}\) must be a linear combination of the different components of \({\bf{u}}\). Negative coefficients in the off-diagonals of \({\bf{A}}\) mean that the variance is affected more when the shocks move in opposite directions than when they move in the same direction, which isn't unreasonable in many situations. It's also possible (unlikely in practice, but still possible) for the diagonal elements in \({\bf{A}}\) (or less likely \({\bf{B}}\)) to have opposite signs. For instance, if the correlation between two components is near zero, the sign of any column of \({\bf{A}}\) (or \({\bf{B}}\)) has little effect on the likelihood. In most applications, the correlations among the variables tend to be high and positive, but increasingly GARCH models are being applied to series for which that is not the case.


Another common question is how it’s possible for the off-diagonals in the \({\bf{A}}\) and \({\bf{B}}\) matrices to be larger than the diagonals, since one would expect that the “own” effect would be dominant. However, the values of the coefficients are sensitive to the scales of the variables, since nothing in the recursion is standardized to a common variance. If you multiply component i by .01 relative to j, its residuals also go down by a factor of .01, so the coefficient A(i,j) which applies residual i to the variance of j has to go up by a factor of 100. Rescaling a variable keeps the diagonals of \({\bf{A}}\) and \({\bf{B}}\) the same, but forces a change in scale of the off-diagonals. Even without asymmetrical scalings, the tendency will be for (relatively) higher variance series to have lower off-diagonal coefficients than lower variance series.


Because of the standard use of the transpose of \({\bf{A}}\) as the pre-multiplying matrix, the coefficients (unfortunately) have the opposite interpretation to the one they have in almost all other forms of GARCH models: A(i,j) is the effect of residual i on variable j, rather than j on i. However, note that it is very difficult to interpret the individual coefficients anyway.
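The sign indeterminacy is easy to verify numerically. A minimal NumPy sketch of one BEKK step (all coefficient values made up for illustration) shows that negating \({\bf{A}}\) reproduces exactly the same \({\bf{H}}\), and that the recursion delivers a positive definite matrix by construction:

```python
import numpy as np

def bekk_step(H_prev, u_prev, C, A, B):
    """One BEKK step: H = CC' + A' (u u') A + B' H_prev B.
    C is lower triangular; A and B are full n x n matrices."""
    return C @ C.T + A.T @ np.outer(u_prev, u_prev) @ A + B.T @ H_prev @ B

C = np.array([[0.08, 0.00], [0.03, 0.06]])    # lower triangular constant
A = np.array([[0.35, 0.10], [0.04, 0.40]])
B = np.array([[0.93, -0.03], [-0.01, 0.91]])

H = np.array([[1.0, 0.5], [0.5, 1.0]])
u = np.array([0.3, -0.2])

H_next = bekk_step(H, u, C, A, B)
H_flip = bekk_step(H, u, C, -A, B)    # flipping the sign of A changes nothing
```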


MV-GARCH, BEKK - Estimation by BFGS

Convergence in    99 Iterations. Final criterion was  0.0000095 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -11821.7455


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.005274621  0.005993779      0.88002  0.37885072

2.  Mean(XFRA)                   -0.002371319  0.005337535     -0.44427  0.65684579

3.  Mean(XSUI)                   -0.002518778  0.006411297     -0.39287  0.69441873


4.  C(1,1)                        0.082828746  0.005178298     15.99536  0.00000000

5.  C(2,1)                        0.029936102  0.007492155      3.99566  0.00006451

6.  C(2,2)                        0.055798917  0.005548758     10.05611  0.00000000

7.  C(3,1)                        0.037973459  0.008612781      4.40897  0.00001039

8.  C(3,2)                       -0.003978644  0.010336388     -0.38492  0.70029944

9.  C(3,3)                        0.058513092  0.006529872      8.96083  0.00000000

10. A(1,1)                        0.359533638  0.011779071     30.52309  0.00000000

11. A(1,2)                        0.102646412  0.009938548     10.32811  0.00000000

12. A(1,3)                        0.111026735  0.012909390      8.60046  0.00000000

13. A(2,1)                        0.038173251  0.015381725      2.48173  0.01307472

14. A(2,2)                        0.403485330  0.016928098     23.83524  0.00000000

15. A(2,3)                       -0.066141107  0.018546755     -3.56618  0.00036222

16. A(3,1)                       -0.047563779  0.010789285     -4.40843  0.00001041

17. A(3,2)                       -0.125567115  0.012877006     -9.75127  0.00000000

18. A(3,3)                        0.291199552  0.014670994     19.84866  0.00000000

19. B(1,1)                        0.935266231  0.003837748    243.70182  0.00000000

20. B(1,2)                       -0.026704236  0.003267176     -8.17349  0.00000000

21. B(1,3)                       -0.028562880  0.004291962     -6.65497  0.00000000

22. B(2,1)                       -0.012475984  0.006006868     -2.07695  0.03780587

23. B(2,2)                        0.909756881  0.006645208    136.90420  0.00000000

24. B(2,3)                        0.029206613  0.006931742      4.21346  0.00002515

25. B(3,1)                        0.016556233  0.004503752      3.67610  0.00023683

26. B(3,2)                        0.048816557  0.005701999      8.56131  0.00000000

27. B(3,3)                        0.946900974  0.005611772    168.73475  0.00000000



garch(p=1,q=1,mv=cc) / xjpn xfra xsui


For a CC (Constant Correlation) model, the variances are computed using a separate univariate equation for each variable. These are then combined into the overall covariance matrix using:


\({\bf{H}}_{ij,t} = {\bf{R}}_{ij}\sqrt{{\bf{H}}_{ii,t}\,{\bf{H}}_{jj,t}}\)


where the \({\bf{R}}\)'s are the constant correlations.


The default variance model (governed by the VARIANCES option, so this would be VARIANCES=SIMPLE) takes the form


\({H_{ii,t}} = {c_i} + {a_i}u_{i,t - 1}^2 + {b_i}{H_{ii,t - 1}}\)


In the output, C(i) is the variance constant, A(i) is the lagged squared residual ("ARCH") coefficient and B(i) is the lagged variance ("GARCH") coefficient. Only the off-diagonal of the constant correlation \({\bf{R}}\) matrix needs to be estimated, since it's symmetric with ones on the diagonal. R(i,j) is the correlation between the residuals for variables i and j.
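In code, the CC construction is just an element-by-element scaling of the correlation matrix by the univariate standard deviations; a minimal NumPy sketch (variance and correlation values made up for illustration):

```python
import numpy as np

def cc_covariance(h_diag, R):
    """Combine univariate variances with constant correlations:
    H[i,j] = R[i,j] * sqrt(h[i] * h[j])."""
    s = np.sqrt(h_diag)
    return R * np.outer(s, s)

R = np.array([[1.00, 0.56, 0.58],
              [0.56, 1.00, 0.83],
              [0.58, 0.83, 1.00]])   # constant correlation matrix
h = np.array([0.4, 0.5, 0.6])        # univariate GARCH variances at t

H = cc_covariance(h, R)              # diagonal of H reproduces h
```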


MV-CC GARCH  - Estimation by BFGS

Convergence in    47 Iterations. Final criterion was  0.0000099 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -12817.3747


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.000775585  0.007081544     -0.10952  0.91278842

2.  Mean(XFRA)                   -0.004266997  0.007222367     -0.59080  0.55465240

3.  Mean(XSUI)                    0.003648561  0.008713667      0.41872  0.67542294


4.  C(1)                          0.016827996  0.002079419      8.09264  0.00000000

5.  C(2)                          0.028385668  0.002678169     10.59891  0.00000000

6.  C(3)                          0.032306057  0.003053511     10.57997  0.00000000

7.  A(1)                          0.164130916  0.011353043     14.45700  0.00000000

8.  A(2)                          0.133203602  0.008892333     14.97960  0.00000000

9.  A(3)                          0.112696962  0.007653219     14.72543  0.00000000

10. B(1)                          0.812634601  0.012102769     67.14452  0.00000000

11. B(2)                          0.804346693  0.012209911     65.87654  0.00000000

12. B(3)                          0.831322084  0.010479082     79.33158  0.00000000

13. R(2,1)                        0.564320088  0.008522088     66.21852  0.00000000

14. R(3,1)                        0.579295504  0.008318390     69.64034  0.00000000

15. R(3,2)                        0.828697594  0.003977519    208.34536  0.00000000



garch(p=1,q=1,mv=dcc)  / xjpn xfra xsui


DCC ("Dynamic Conditional Correlations") was proposed by Engle(2002) to handle a bigger set of variables than the more fully parameterized models (such as DVECH and BEKK), without requiring the conditional correlations to be constant as in the CC. This adds two scalar parameters which govern a “GARCH(1,1)” model on the covariance matrix as a whole:


\({\bf{Q}}_t = (1 - a - b)\,{\bf{\bar Q}} + a\,{\bf{u}}_{t-1}{\bf{u}}'_{t-1} + b\,{\bf{Q}}_{t-1}\)


where \({{\bf{\bar Q}}}\) is the unconditional covariance matrix. However, \({\bf{Q}}\) isn’t the sequence of covariance matrices. Instead, it is used solely to provide the correlation matrix. The actual \({\bf{H}}\) matrix is generated using univariate GARCH models for the variances (controlled by the VARIANCES option), combined with the correlations produced by the \({\bf{Q}}\):


\({\bf{H}}_{ij,t} = {\bf{Q}}_{ij,t}\frac{\sqrt{{\bf{H}}_{ii,t}{\bf{H}}_{jj,t}}}{\sqrt{{\bf{Q}}_{ii,t}{\bf{Q}}_{jj,t}}}\)


Engle's proposal was to estimate this in two stages, with the univariate GARCH models done first, followed by estimating a and b (and thus the dynamic correlations) in a second stage. This gives consistent, but not efficient, estimates, and has the advantage that it can be applied to very large numbers of variables. The disadvantage (other than the statistical inefficiency) is that only a limited number of univariate variance models can be used in the two-step approach: of the five offered by RATS, only VARIANCES=SIMPLE and VARIANCES=EXPONENTIAL would work, since the others have interaction terms. As a result, the GARCH instruction with MV=DCC does a full maximum likelihood estimation of all the coefficients.


The output adds the DCC(A) and DCC(B) parameters, which are the a and b in the Q recursion. Note that while apparently a generalization of the CC model, the two don't formally nest, because CC estimates the correlation matrix freely and (if \(n > 2\)) actually has more free parameters than the DCC model. While you can't do a standard likelihood ratio test to compare CC with DCC, in this case DCC has a higher log likelihood by roughly 1000, so CC is clearly quite inadequate. The @TSECCTEST procedure offers a Lagrange Multiplier test for CC against a time-varying alternative, though that alternative is not DCC.
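The two pieces of the DCC construction (the \({\bf{Q}}\) recursion and the rescaling by the univariate variances) can be sketched in a few lines of NumPy; all input values here are made up for illustration:

```python
import numpy as np

def dcc_step(Q_prev, u_prev, Qbar, a, b):
    """Q recursion: Q_t = (1 - a - b) Qbar + a u u' + b Q_prev."""
    return (1 - a - b) * Qbar + a * np.outer(u_prev, u_prev) + b * Q_prev

def dcc_covariance(Q, h_diag):
    """Q supplies only the correlations; the variances come from the
    univariate models: H[i,j] = Q[i,j] sqrt(h[i] h[j] / (Q[i,i] Q[j,j]))."""
    q = np.sqrt(np.diag(Q))
    R = Q / np.outer(q, q)        # correlation matrix extracted from Q
    s = np.sqrt(h_diag)
    return R * np.outer(s, s)

Qbar = np.array([[1.0, 0.6], [0.6, 1.0]])      # unconditional covariance
Q = dcc_step(Qbar, np.array([0.5, -0.3]), Qbar, a=0.05, b=0.94)
H = dcc_covariance(Q, np.array([0.4, 0.7]))    # diagonal matches the h's
```

Note that the diagonal of H reproduces the univariate variances exactly; Q only contributes the off-diagonal correlations.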



MV-DCC GARCH  - Estimation by BFGS

Convergence in    42 Iterations. Final criterion was  0.0000033 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -11814.4403


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.003987506  0.006060129      0.65799  0.51054435

2.  Mean(XFRA)                   -0.003133447  0.006152936     -0.50926  0.61056964

3.  Mean(XSUI)                   -0.003070966  0.007508317     -0.40901  0.68253341


4.  C(1)                          0.008499092  0.001136736      7.47675  0.00000000

5.  C(2)                          0.012485541  0.001314571      9.49780  0.00000000

6.  C(3)                          0.016566320  0.001742031      9.50977  0.00000000

7.  A(1)                          0.151662080  0.009226265     16.43808  0.00000000

8.  A(2)                          0.138382721  0.008036770     17.21870  0.00000000

9.  A(3)                          0.123692585  0.006958449     17.77588  0.00000000

10. B(1)                          0.852005668  0.008113585    105.00977  0.00000000

11. B(2)                          0.848527776  0.008240241    102.97366  0.00000000

12. B(3)                          0.858001790  0.007341456    116.87079  0.00000000

13. DCC(A)                        0.053230310  0.003340516     15.93476  0.00000000

14. DCC(B)                        0.939072327  0.003985276    235.63544  0.00000000



garch(p=1,q=1,mv=cholesky)  / xjpn xfra xsui


The Cholesky model has some similarities to the CC and DCC models, but one important difference—while the other models apply a univariate model to observed data, the Cholesky model uses ideas from structural VAR modeling to map the observable residuals (\({\bf{u}}\)) to uncorrelated residuals (\({\bf{v}}\)) using \({{\bf{u}}_t} = {\bf{F}}{{\bf{v}}_t}\), where \(\bf{F}\) is lower triangular. The difference with the VAR literature is that the components of \({\bf{v}}\) are assumed to follow (univariate) GARCH processes rather than having a fixed (identity) covariance matrix. As in the VAR literature, it's necessary to settle on a normalization between \(\bf{F}\) and the variances of \({\bf{v}}\). For the Cholesky factorization in a VAR, the obvious choice is to fix the variances at 1. However, here the variances aren't fixed, so it's simpler to make the diagonals of \(\bf{F}\) equal to 1 and leave the component GARCH processes free; the free parameters in \(\bf{F}\) are then the elements below the diagonal.


The Cholesky model is sensitive to the order in which the variables are listed—by construction, the first \({\bf{v}}\) is identical to the first \({\bf{u}}\). Any of the choices for the VARIANCES option are permitted with MV=CHOLESKY; this example uses VARIANCES=SIMPLE.
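A minimal NumPy sketch of the covariance matrix implied by this setup (\(\bf{F}\) and the component variances are made up for illustration):

```python
import numpy as np

# u = F v, with F unit lower triangular and the components of v
# uncorrelated, each with a variance from its own univariate GARCH process.
F = np.array([[1.00, 0.00, 0.00],
              [0.56, 1.00, 0.00],
              [0.68, 0.92, 1.00]])   # diagonal fixed at 1 (normalization)
hv = np.array([0.3, 0.4, 0.5])       # component GARCH variances at t

H = F @ np.diag(hv) @ F.T            # implied covariance matrix of u
# Since the first v is identical to the first u, H[0,0] is just hv[0].
```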



MV-Cholesky GARCH  - Estimation by BFGS

Convergence in   114 Iterations. Final criterion was  0.0000025 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -12173.5176


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.000188257  0.006440919      0.02923  0.97668251

2.  Mean(XFRA)                   -0.006990951  0.006479695     -1.07890  0.28063176

3.  Mean(XSUI)                   -0.004037282  0.007499684     -0.53833  0.59035134


4.  C(1)                          0.007428328  0.001043410      7.11928  0.00000000

5.  C(2)                          0.007284154  0.000984301      7.40033  0.00000000

6.  C(3)                          0.005180700  0.000874994      5.92084  0.00000000

7.  A(1)                          0.176281319  0.009392566     18.76817  0.00000000

8.  A(2)                          0.146640647  0.011034329     13.28949  0.00000000

9.  A(3)                          0.183550187  0.014973932     12.25798  0.00000000

10. B(1)                          0.837274108  0.007741453    108.15464  0.00000000

11. B(2)                          0.841738507  0.010952723     76.85199  0.00000000

12. B(3)                          0.807028041  0.015512532     52.02426  0.00000000

13. F(2,1)                        0.561329167  0.010339981     54.28725  0.00000000

14. F(3,1)                        0.675958192  0.010563103     63.99239  0.00000000

15. F(3,2)                        0.919538839  0.007987286    115.12532  0.00000000



nlpar(derive=fourth,exactline)

garch(p=1,q=1,mv=cc,variances=varma,iters=500,pmethod=simplex,piters=5) / $

   xjpn xfra xsui


VARIANCES=VARMA is a particular model for calculating the variances (only) as part of a CC, DCC or similar model. Each variance equation includes not just the own lagged squared residual and own lagged variance, but all of the "other" lagged squared residuals and variances as well:


\({\bf{H}}_{ii,t} = c_{ii} + \sum\limits_j a_{ij}\,u_{j,t-1}^2 + \sum\limits_j b_{ij}\,{\bf{H}}_{jj,t-1}\)


In the output, C(i) is the constant in the variance equation for variable i, A(i,j) is the coefficient for the "ARCH" term in the variance for i using the lagged squared residuals for variable j and B(i,j) is the coefficient in the "GARCH" term for computing the variance of i using the lagged variance of variable j.


Note that VARIANCES=VARMA is a rather difficult model to fit, which is why we use some rather high-end adjustments to the non-linear estimation process using NLPAR. It has numerical problems because the coefficients can be either positive or negative, as you can see below. (If you try to impose non-negativity, the model fits only barely better than the model without the cross-variable terms.) If high-residual or high-volatility entries don't align very well among variables, the variance for some variable can approach zero at some observation. For instance, here A(2,3) is fairly large and negative, so a very big residual in the Swiss data (variable 3) might push the variance of France (variable 2) negative in the next entry. Any zero (or negative) value for H at any data point makes the likelihood uncomputable, resulting in quite a few "dead-end" parameter paths, where you approach a local maximum near a computability boundary. The more dissimilar the data series are, the harder it will be to get estimates with VARMA variances.
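The positivity problem is easy to reproduce. A minimal NumPy sketch of a 2-variable VARMA variance step (coefficients made up, with one negative cross "ARCH" term) shows a large shock in one series pushing the other series' variance negative:

```python
import numpy as np

def varma_variance_step(h_prev, u_prev, c, A, B):
    """VARMA variance recursion:
    h[i] = c[i] + sum_j A[i,j] u[j]^2 + sum_j B[i,j] h_prev[j].
    Nothing guarantees positivity when A or B has negative entries."""
    return c + A @ (u_prev ** 2) + B @ h_prev

c = np.array([0.011, 0.011])
A = np.array([[0.13, -0.012],       # negative cross term (illustrative)
              [0.03,  0.14]])
B = np.array([[0.860, 0.003],
              [0.054, 0.790]])

h = np.array([0.4, 0.5])
u = np.array([0.2, 10.0])           # an extreme shock in series 2
h_next = varma_variance_step(h, u, c, A, B)
# h_next[0] comes out negative, so the likelihood is uncomputable here
```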


MV-CC GARCH  with VARMA Variances - Estimation by BFGS

Convergence in   100 Iterations. Final criterion was  0.0000044 <=  0.0000100


Usable Observations                      6236

Log Likelihood                    -12679.6117


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.000736526  0.005800641     -0.12697  0.89896154

2.  Mean(XFRA)                   -0.004932286  0.006020712     -0.81922  0.41266110

3.  Mean(XSUI)                    0.004520785  0.007389388      0.61179  0.54067386


4.  C(1)                          0.011010474  0.001103742      9.97559  0.00000000

5.  C(2)                          0.011489506  0.001829405      6.28046  0.00000000

6.  C(3)                          0.021305583  0.001988511     10.71434  0.00000000

7.  A(1,1)                        0.129922935  0.009331420     13.92317  0.00000000

8.  A(1,2)                        0.032904582  0.004297358      7.65693  0.00000000

9.  A(1,3)                       -0.002626674  0.000726780     -3.61413  0.00030136

10. A(2,1)                        0.029612149  0.003842227      7.70703  0.00000000

11. A(2,2)                        0.142268555  0.010452185     13.61137  0.00000000

12. A(2,3)                       -0.012089233  0.001905454     -6.34454  0.00000000

13. A(3,1)                        0.002480943  0.005348943      0.46382  0.64277718

14. A(3,2)                        0.032942902  0.007925060      4.15680  0.00003227

15. A(3,3)                        0.089358992  0.007689536     11.62086  0.00000000

16. B(1,1)                        0.860420594  0.008337012    103.20491  0.00000000

17. B(1,2)                       -0.032051355  0.004687804     -6.83718  0.00000000

18. B(1,3)                        0.003086356  0.002261622      1.36466  0.17235850

19. B(2,1)                       -0.027854598  0.003416978     -8.15182  0.00000000

20. B(2,2)                        0.787331480  0.017259356     45.61766  0.00000000

21. B(2,3)                        0.054032656  0.009604093      5.62600  0.00000002

22. B(3,1)                       -0.007389152  0.006045852     -1.22219  0.22163756

23. B(3,2)                       -0.027518045  0.012180167     -2.25925  0.02386782

24. B(3,3)                        0.877652592  0.012406316     70.74240  0.00000000

25. R(2,1)                        0.559872016  0.008597643     65.11924  0.00000000

26. R(3,1)                        0.573176258  0.008452819     67.80889  0.00000000

27. R(3,2)                        0.830632130  0.003877817    214.20094  0.00000000



garch(p=1,q=1,mv=ewma,distrib=t) / xjpn xfra xsui


MV=EWMA (Exponentially Weighted Moving Average) is a very tightly parameterized variance model. There is just a single real parameter (α) governing the evolution of the variance:


\({\bf{H}}_t = (1 - \alpha)\,{\bf{H}}_{t-1} + \alpha\,{\bf{u}}_{t-1}{\bf{u}}'_{t-1}\)


This is a special case of the DVECH model with all the coefficients equal across all components and an "I-GARCH" restriction. Note that the log likelihood is substantially better than the previous models because this uses Student t errors (DISTRIB=T option)—all the previous examples assumed conditional Normal residuals.
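The EWMA update is simple enough to state in a couple of lines; a minimal NumPy sketch (the value of alpha is made up, not the estimate below):

```python
import numpy as np

def ewma_step(H_prev, u_prev, alpha):
    """EWMA update: H = (1 - alpha) H_prev + alpha u u'.
    A single parameter; the weights on past outer products decay
    geometrically and sum to one (the I-GARCH restriction)."""
    return (1 - alpha) * H_prev + alpha * np.outer(u_prev, u_prev)

H = np.eye(2)                                  # pre-sample covariance
for u in [np.array([0.3, -0.1]), np.array([-0.5, 0.4])]:
    H = ewma_step(H, u, alpha=0.05)
```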


MV-GARCH, EWMA - Estimation by BFGS

Convergence in    12 Iterations. Final criterion was  0.0000058 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -10516.1908


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.005311325  0.003704962     -1.43357  0.15169474

2.  Mean(XFRA)                   -0.004245558  0.004184190     -1.01467  0.31026475

3.  Mean(XSUI)                   -0.005260662  0.005173644     -1.01682  0.30923931


4.  Alpha                         0.049804403  0.001974483     25.22402  0.00000000

5.  Shape                         5.253165403  0.139850219     37.56280  0.00000000



garch(p=1,q=1,mv=cc,asymmetric,variances=exp) / xjpn xfra xsui


This is a CC model with the individual variances computed using E-GARCH with asymmetry, chosen using the combination of VARIANCES=EXP and ASYMMETRIC. The individual variances are computed using


\(\log H_{ii,t} = c_{ii} + a_i\frac{\left|u_{i,t-1}\right|}{\sqrt{H_{ii,t-1}}} + d_i\frac{u_{i,t-1}}{\sqrt{H_{ii,t-1}}} + b_i\log H_{ii,t-1}\)


This, by construction, produces a positive value for \(H\) regardless of the signs of the coefficients. In the output, C(i) is the constant in the log variance equation for variable i, A(i) is the coefficient on the standardized lagged absolute residual, B(i) is the coefficient on the lagged log variance, and D(i) is the asymmetry coefficient, which will be zero if the variance responds identically to positive and negative residuals, and negative if the variance increases more with a negative residual than with a similarly sized positive one.

Note that the C coefficients will not look at all like the corresponding coefficients in a standard GARCH recursion, since this is an equation for \(\log H\) rather than \(H\) itself. The A (and D) coefficients are on standardized residuals and don't affect the overall persistence of the variance—only the B coefficient does that. Note that this is often written with the expected value subtracted off from the \(\left| u \right|/\sqrt H\) term. That's omitted above since it just washes into the constant, and it depends rather strongly on the distribution of u. (If you use t or GED errors, for instance, it will change with the shape parameter, and again, its presence or absence affects only the values of C.)
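A minimal sketch of one asymmetric E-GARCH variance step (plain Python, coefficients made up for illustration) shows how exponentiating keeps the variance positive and how a negative d makes negative shocks matter more:

```python
import math

def egarch_step(h_prev, u_prev, c, a, b, d):
    """log h = c + a|z| + d z + b log h_prev, with z = u / sqrt(h_prev).
    Exponentiating guarantees h > 0 whatever the coefficient signs."""
    z = u_prev / math.sqrt(h_prev)
    return math.exp(c + a * abs(z) + d * z + b * math.log(h_prev))

# With d < 0, a negative residual raises the variance more than a
# positive residual of the same size (made-up coefficient values)
h_pos = egarch_step(1.0, 0.5, c=-0.3, a=0.3, b=0.9, d=-0.05)
h_neg = egarch_step(1.0, -0.5, c=-0.3, a=0.3, b=0.9, d=-0.05)
```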



MV-CC GARCH  with E-GARCH Variances - Estimation by BFGS

Convergence in    85 Iterations. Final criterion was  0.0000044 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -12733.4127


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                   -0.001060332  0.004956105     -0.21394  0.83059028

2.  Mean(XFRA)                   -0.007506138  0.004511126     -1.66392  0.09612906

3.  Mean(XSUI)                    0.006647523  0.006109784      1.08801  0.27658944


4.  C(1)                         -0.388295949  0.014171400    -27.39997  0.00000000

5.  C(2)                         -0.279987895  0.013542497    -20.67476  0.00000000

6.  C(3)                         -0.212536916  0.012447451    -17.07473  0.00000000

7.  A(1)                          0.390913505  0.013837912     28.24946  0.00000000

8.  A(2)                          0.265094072  0.012079158     21.94640  0.00000000

9.  A(3)                          0.216801697  0.012399646     17.48451  0.00000000

10. B(1)                          0.894103282  0.006442689    138.77797  0.00000000

11. B(2)                          0.912123027  0.005926409    153.90821  0.00000000

12. B(3)                          0.926777214  0.005831397    158.92885  0.00000000

13. D(1)                          0.019517848  0.007972415      2.44817  0.01435829

14. D(2)                         -0.011373668  0.006065338     -1.87519  0.06076643

15. D(3)                          0.043824490  0.005322117      8.23441  0.00000000

16. R(2,1)                        0.561039801  0.008207476     68.35717  0.00000000

17. R(3,1)                        0.575788380  0.007999527     71.97780  0.00000000

18. R(3,2)                        0.826050110  0.003391556    243.56082  0.00000000



garch(p=1,q=1,mv=dcc,variances=koutmos,hmatrices=hh) / $
   xjpn xfra xsui


This is a DCC model with the individual variances computed using the option VARIANCES=KOUTMOS, introduced in Koutmos (1996). This extends the asymmetric E-GARCH (VARIANCES=EXP with ASYMMETRIC) by allowing for "spillover" effects among the residuals. The variance recursion takes the form:



\(\log \,{H_{ii,t}} = {c_{ii}} + \sum\limits_j {{a_{ij}}\left( {\frac{{\left| {{u_{j,t - 1}}} \right|}}{{\sqrt {{H_{jj,t - 1}}} }} + {d_j}\frac{{{u_{j,t - 1}}}}{{\sqrt {{H_{jj,t - 1}}} }}} \right)} + {b_i}\log \,{H_{ii,t - 1}}\)


The EGARCH model would be this without the off-diagonal A terms. Note that the asymmetry enters in a tightly restricted way: each variable has a single asymmetric "index" at a given point in time, which applies to every term that uses it. In the output, A(i,j) is the loading of the index for variable j on the variance for variable i, and D(j) is the asymmetry coefficient for variable j.
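A one-step Python sketch of this recursion may help; the parameter layout here (A as the loading matrix, d as the per-variable asymmetry coefficients) is hypothetical, not the RATS internals.

```python
import math

def koutmos_step(u_prev, H_prev, c, A, b, d):
    """One step of the Koutmos-style recursion from the text:
    log H_ii,t = c_i + sum_j A[i][j]*(|z_j| + d[j]*z_j) + b[i]*log H_ii,t-1
    with z_j = u_{j,t-1}/sqrt(H_jj,t-1). Each variable j has one asymmetric
    "index" |z_j| + d_j*z_j that loads on every variance."""
    n = len(u_prev)
    z = [u_prev[j] / math.sqrt(H_prev[j]) for j in range(n)]
    index = [abs(z[j]) + d[j] * z[j] for j in range(n)]
    return [math.exp(c[i]
                     + sum(A[i][j] * index[j] for j in range(n))
                     + b[i] * math.log(H_prev[i]))
            for i in range(n)]

# with a diagonal A and d = 0 this collapses to a symmetric E-GARCH step
H_next = koutmos_step([1.0, 0.0], [1.0, 1.0], c=[0.0, 0.0],
                      A=[[0.2, 0.0], [0.0, 0.2]], b=[0.9, 0.9], d=[0.0, 0.0])
```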



MV-DCC GARCH  with Koutmos EGARCH Variances - Estimation by BFGS

Convergence in    42 Iterations. Final criterion was  0.0000061 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -11667.5715


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

1.  Mean(XJPN)                    0.005989776  0.005036023      1.18939  0.23428774

2.  Mean(XFRA)                   -0.000170947  0.004443345     -0.03847  0.96931087

3.  Mean(XSUI)                    0.003852159  0.005970849      0.64516  0.51882282


4.  C(1)                         -0.333034803  0.010777764    -30.90017  0.00000000

5.  C(2)                         -0.318301003  0.011343927    -28.05916  0.00000000

6.  C(3)                         -0.256016986  0.011908892    -21.49797  0.00000000

7.  A(1,1)                        0.368494853  0.007279952     50.61776  0.00000000

8.  A(1,2)                        0.088850652  0.006908113     12.86178  0.00000000

9.  A(1,3)                        0.038046669  0.009960447      3.81978  0.00013357

10. A(2,1)                        0.065570711  0.015020497      4.36542  0.00001269

11. A(2,2)                        0.312389935  0.008568187     36.45928  0.00000000

12. A(2,3)                        0.104876318  0.010641738      9.85519  0.00000000

13. A(3,1)                       -0.039139969  0.015169378     -2.58020  0.00987443

14. A(3,2)                       -0.022110875  0.013142124     -1.68244  0.09248296

15. A(3,3)                        0.167400052  0.013712578     12.20777  0.00000000

16. B(1)                          0.938816563  0.003642546    257.73637  0.00000000

17. B(2)                          0.949220617  0.003384572    280.45516  0.00000000

18. B(3)                          0.952244390  0.003716941    256.19032  0.00000000

19. D(1)                          0.074120039  0.022222215      3.33540  0.00085176

20. D(2)                         -0.007741055  0.020207464     -0.38308  0.70166119

21. D(3)                          0.164427268  0.025485235      6.45186  0.00000000

22. DCC(A)                        0.050573164  0.003246238     15.57900  0.00000000

23. DCC(B)                        0.943697656  0.003785154    249.31555  0.00000000




garch(p=1,q=1,mv=bekk,pmethod=simplex,piters=10,distrib=t,$
   rseries=rs,mvhseries=hhs) / xjpn xfra xsui


This estimates a BEKK with t errors, saving the residuals and the variances (in the VECT[SERIES] and SYMM[SERIES] forms). Those extra outputs will be used in the next calculation. The only addition to the output is the SHAPE parameter for degrees of freedom of the t.



MV-GARCH, BEKK - Estimation by BFGS

Convergence in   174 Iterations. Final criterion was  0.0000017 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -10260.5399


    Variable                        Coeff      Std Error      T-Stat       Signif

*************************************************************************************

1.  Mean(XJPN)                   -0.007107571  0.003520367      -2.01899  0.04348872

2.  Mean(XFRA)                   -0.004612078  0.004012708      -1.14937  0.25040435

3.  Mean(XSUI)                   -0.004487020  0.004795857      -0.93560  0.34947740


4.  C(1,1)                        0.017750872  0.004250848       4.17584  0.00002969

5.  C(2,1)                       -0.031718066  0.007177749      -4.41894  0.00000992

6.  C(2,2)                        0.027367999  0.007390060       3.70335  0.00021277

7.  C(3,1)                       -0.029154724  0.012310787      -2.36823  0.01787363

8.  C(3,2)                       -0.067679807  0.008805753      -7.68586  0.00000000

9.  C(3,3)                       -0.000002633  0.092546709 -2.84488e-005  0.99997730

10. A(1,1)                        0.291241915  0.013392552      21.74656  0.00000000

11. A(1,2)                        0.001808593  0.012405060       0.14579  0.88408337

12. A(1,3)                        0.008848868  0.017283922       0.51197  0.60867128

13. A(2,1)                       -0.004304572  0.009865373      -0.43633  0.66259625

14. A(2,2)                        0.315102049  0.016579256      19.00580  0.00000000

15. A(2,3)                       -0.088257323  0.021391275      -4.12586  0.00003694

16. A(3,1)                       -0.007936246  0.006790998      -1.16864  0.24254785

17. A(3,2)                       -0.033315032  0.012407640      -2.68504  0.00725208

18. A(3,3)                        0.347587032  0.018458040      18.83120  0.00000000

19. B(1,1)                        0.963713123  0.002617505     368.18005  0.00000000

20. B(1,2)                        0.000680668  0.002804945       0.24267  0.80826316

21. B(1,3)                        0.002794144  0.003981563       0.70177  0.48282229

22. B(2,1)                        0.000460476  0.003022010       0.15237  0.87889201

23. B(2,2)                        0.941278388  0.005686360     165.53266  0.00000000

24. B(2,3)                        0.045037363  0.008192856       5.49715  0.00000004

25. B(3,1)                        0.002190083  0.002451443       0.89339  0.37165084

26. B(3,2)                        0.021980135  0.004998175       4.39763  0.00001094

27. B(3,3)                        0.927099000  0.007229271     128.24238  0.00000000

28. Shape                         4.229194265  0.130653096      32.36964  0.00000000




compute fixt=(%shape-2)/%shape
set trigger = %tcdf(rs(1)/sqrt(hhs(1,1)*fixt),%shape)<.05
sstats(mean) / trigger>>VaRp
disp "Probability of being below .05 level" #.#### VaRp


This uses the residuals and covariance matrices just saved to compute the empirical probability of a residual (for Japan) being in the left .05 tail. The output is shown below.


Probability of being below .05 level 0.0420
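The same check can be sketched outside RATS as follows. Two assumptions to flag: the t CDF here is computed by crude numerical integration in place of %tcdf, and, as in the RATS code, the saved variance is that of the t itself, hence the (nu-2)/nu rescaling before evaluating the unit-scale t CDF.

```python
import math

def t_cdf(x, nu, steps=20000):
    """CDF of a Student-t with nu d.f., via trapezoid integration of the
    density from a far-left cutoff (a sketch; production code would use an
    incomplete-beta routine)."""
    const = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    lo = -50.0
    if x <= lo:
        return 0.0
    h = (x - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        t = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * const * (1 + t * t / nu) ** (-(nu + 1) / 2)
    return total * h

def tail_hit_rate(resids, variances, nu, level=0.05):
    """Empirical probability of landing in the left `level` tail, mirroring
    the RATS fragment above."""
    fixt = (nu - 2) / nu
    hits = [t_cdf(u / math.sqrt(h * fixt), nu) < level
            for u, h in zip(resids, variances)]
    return sum(hits) / len(hits)

# toy data: one far-left residual out of three should register as a tail hit
p = tail_hit_rate([-10.0, 0.0, 10.0], [1.0, 1.0, 1.0], nu=4.2)
```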




equation(constant) jpneq xjpn 1
equation(constant) fraeq xfra 1
equation(constant) suieq xsui 1
group ar1 jpneq fraeq suieq
garch(p=1,q=1,model=ar1,mv=dcc,pmethod=simplex,piter=10)


This is a DCC variance model in which the mean model uses different explanatory variables in each equation (a separate univariate AR(1) for each series). The more complicated mean model is shown with a subheader for each equation, followed by the coefficients which apply to it.


MV-DCC GARCH  - Estimation by BFGS

Convergence in    46 Iterations. Final criterion was  0.0000000 <=  0.0000100

Usable Observations                      6235

Log Likelihood                    -11810.8881


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                      0.004014562  0.005991287      0.67007  0.50281525

2.  XJPN{1}                       0.025553751  0.011332247      2.25496  0.02413591

Mean Model(XFRA)

3.  Constant                     -0.003110255  0.005798073     -0.53643  0.59166204

4.  XFRA{1}                       0.005578945  0.008777423      0.63560  0.52503606

Mean Model(XSUI)

5.  Constant                     -0.003047090  0.006896665     -0.44182  0.65861887

6.  XSUI{1}                      -0.001867027  0.008487257     -0.21998  0.82588673


7.  C(1)                          0.008298121  0.001065518      7.78787  0.00000000

8.  C(2)                          0.012349739  0.001345125      9.18111  0.00000000

9.  C(3)                          0.016510023  0.001705979      9.67774  0.00000000

10. A(1)                          0.151651156  0.008649248     17.53345  0.00000000

11. A(2)                          0.137409295  0.007843544     17.51878  0.00000000

12. A(3)                          0.123174851  0.006552406     18.79842  0.00000000

13. B(1)                          0.852560638  0.007249044    117.61008  0.00000000

14. B(2)                          0.849633627  0.008057941    105.44053  0.00000000

15. B(3)                          0.858506129  0.007250363    118.40871  0.00000000

16. DCC(A)                        0.053195549  0.003431160     15.50366  0.00000000

17. DCC(B)                        0.939101210  0.004130209    227.37380  0.00000000




system(model=var1)
variables xjpn xfra xsui
lags 1
det constant
end(system)
*
garch(p=1,q=1,model=var1,mv=bekk,pmethod=simplex,piters=10)


This estimates a BEKK model with the mean given by a VAR(1).



MV-GARCH, BEKK - Estimation by BFGS

Convergence in    92 Iterations. Final criterion was  0.0000096 <=  0.0000100

Usable Observations                      6235

Log Likelihood                    -11809.4143


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  XJPN{1}                       0.054037654  0.011182386      4.83239  0.00000135

2.  XFRA{1}                       0.023682621  0.013255290      1.78665  0.07399337

3.  XSUI{1}                      -0.034327923  0.010181658     -3.37155  0.00074748

4.  Constant                      0.006201575  0.005449407      1.13803  0.25510902

Mean Model(XFRA)

5.  XJPN{1}                       0.028493123  0.007798861      3.65350  0.00025869

6.  XFRA{1}                       0.023713659  0.010571715      2.24312  0.02488888

7.  XSUI{1}                      -0.009351366  0.008488737     -1.10162  0.27062673

8.  Constant                     -0.001862777  0.003988431     -0.46705  0.64046759

Mean Model(XSUI)

9.  XJPN{1}                       0.040977801  0.009376496      4.37027  0.00001241

10. XFRA{1}                       0.025030406  0.012992675      1.92650  0.05404179

11. XSUI{1}                      -0.017960025  0.009721875     -1.84738  0.06469167

12. Constant                     -0.001914884  0.004623543     -0.41416  0.67875739


13. C(1,1)                        0.080420869  0.004610689     17.44227  0.00000000

14. C(2,1)                        0.026790278  0.005889919      4.54850  0.00000540

15. C(2,2)                        0.055096257  0.004810186     11.45408  0.00000000

16. C(3,1)                        0.034166293  0.007852979      4.35074  0.00001357

17. C(3,2)                       -0.004060758  0.008276084     -0.49066  0.62366564

18. C(3,3)                       -0.058667313  0.005427901    -10.80847  0.00000000

19. A(1,1)                        0.356745196  0.011327049     31.49498  0.00000000

20. A(1,2)                        0.099016521  0.009757899     10.14732  0.00000000

21. A(1,3)                        0.107351092  0.012190861      8.80587  0.00000000

22. A(2,1)                        0.033518944  0.015356110      2.18278  0.02905233

23. A(2,2)                        0.398503867  0.015871681     25.10785  0.00000000

24. A(2,3)                       -0.070336064  0.018314828     -3.84039  0.00012284

25. A(3,1)                       -0.047181157  0.009773714     -4.82735  0.00000138

26. A(3,2)                       -0.122497792  0.011869847    -10.32008  0.00000000

27. A(3,3)                        0.292921264  0.013931915     21.02520  0.00000000

28. B(1,1)                        0.936389658  0.003620075    258.66581  0.00000000

29. B(1,2)                       -0.025326977  0.003018928     -8.38939  0.00000000

30. B(1,3)                       -0.027019383  0.003971902     -6.80263  0.00000000

31. B(2,1)                       -0.010660383  0.005715656     -1.86512  0.06216461

32. B(2,2)                        0.911888451  0.006231551    146.33410  0.00000000

33. B(2,3)                        0.030709661  0.006744664      4.55318  0.00000528

34. B(3,1)                        0.016113537  0.004129170      3.90237  0.00009526

35. B(3,2)                        0.047484087  0.004999802      9.49719  0.00000000

36. B(3,3)                        0.946092066  0.005254531    180.05260  0.00000000



dec symm[series] hhs(3,3)
clear(zeros) hhs
*
equation jpneq xjpn
# constant hhs(1,1) hhs(1,2) hhs(1,3)
equation fraeq xfra
# constant hhs(2,1) hhs(2,2) hhs(2,3)
equation suieq xsui
# constant hhs(3,1) hhs(3,2) hhs(3,3)
*
group garchm jpneq fraeq suieq
garch(model=garchm,p=1,q=1,pmethod=simplex,piters=10,iters=500,$
   mvhseries=hhs)


To do GARCH-M with a multivariate model, you have to plan ahead a bit. On your GARCH instruction, you need to use the MVHSERIES option, which saves the paths of the variances and covariances into a SYMM[SERIES] (example: MVHSERIES=HHS). Your regression equations for the means will include references to the elements of this array of series, and since those equations need to be created in advance, you also need to declare the array as a SYMM[SERIES] first. The model here includes, in each equation, all the covariance terms that involve that equation's residuals.



MV-GARCH - Estimation by BFGS

Convergence in   307 Iterations. Final criterion was  0.0000065 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -11832.2529


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                     -0.007826198  0.009458987     -0.82738  0.40802045

2.  HHS(1,1)                      0.004330142  0.035786484      0.12100  0.90369152

3.  HHS(2,1)                     -0.041418733  0.096875687     -0.42755  0.66898229

4.  HHS(3,1)                      0.091415703  0.079952015      1.14338  0.25287998

Mean Model(XFRA)

5.  Constant                     -0.010748239  0.009321273     -1.15309  0.24887470

6.  HHS(2,1)                      0.069507553  0.048175312      1.44280  0.14907557

7.  HHS(2,2)                     -0.026451605  0.035767541     -0.73954  0.45957783

8.  HHS(3,2)                      0.011003924  0.037422503      0.29405  0.76872302

Mean Model(XSUI)

9.  Constant                     -0.004246433  0.011234855     -0.37797  0.70545320

10. HHS(3,1)                      0.065304691  0.049066030      1.33096  0.18320373

11. HHS(3,2)                      0.002358563  0.052118159      0.04525  0.96390473

12. HHS(3,3)                     -0.025805514  0.037254256     -0.69269  0.48850642


13. C(1,1)                        0.008904091  0.000955334      9.32040  0.00000000

14. C(2,1)                        0.005617118  0.000510199     11.00965  0.00000000

15. C(2,2)                        0.011424107  0.000975179     11.71488  0.00000000

16. C(3,1)                        0.005944968  0.000599493      9.91666  0.00000000

17. C(3,2)                        0.009836654  0.000927038     10.61084  0.00000000

18. C(3,3)                        0.012644626  0.001257515     10.05525  0.00000000

19. A(1,1)                        0.105317240  0.005856045     17.98436  0.00000000

20. A(2,1)                        0.093487282  0.003615995     25.85382  0.00000000

21. A(2,2)                        0.127516767  0.005670313     22.48849  0.00000000

22. A(3,1)                        0.088424588  0.003935099     22.47074  0.00000000

23. A(3,2)                        0.112939266  0.004930254     22.90740  0.00000000

24. A(3,3)                        0.110733029  0.005216830     21.22611  0.00000000

25. B(1,1)                        0.884651950  0.006372744    138.81805  0.00000000

26. B(2,1)                        0.891673687  0.003993003    223.30906  0.00000000

27. B(2,2)                        0.861543181  0.005714278    150.77027  0.00000000

28. B(3,1)                        0.898187207  0.004150039    216.42864  0.00000000

29. B(3,2)                        0.875620493  0.005011153    174.73433  0.00000000

30. B(3,3)                        0.878296876  0.005263227    166.87422  0.00000000



set commonsd = 0.0
system(model=customm)
variables xjpn xfra xsui
det constant commonsd
end(system)
*
compute %nvar=%modelsize(customm)
compute weights=%fill(%nvar,1,1.0/%nvar)
garch(model=customm,p=1,q=1,mv=cc,variances=spillover,pmethod=simplex,piters=5,$
   iters=500,hmatrices=hh,hadjust=(commonsd=sqrt(%qform(hh,weights))))


If you need some function of the variances or covariances, you need to use the HADJUST option to define it from the values that you save with either MVHSERIES or HMATRICES. This example, for instance, generates (into the series COMMONSD) the conditional standard deviation of an equally weighted sum of the three currencies.
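The quantity being computed inside HADJUST is just the quadratic form sqrt(w'Hw). A plain-Python sketch of that per-period calculation (illustrative; %qform does this inside RATS):

```python
import math

def portfolio_sd(H, weights):
    """Conditional standard deviation of a weighted sum of the series:
    sqrt(w' H w), evaluated from the period's covariance matrix H."""
    n = len(weights)
    q = sum(weights[i] * H[i][j] * weights[j]
            for i in range(n) for j in range(n))
    return math.sqrt(q)

# equally weighted three-currency combination, as in the example
w = [1.0 / 3] * 3
H = [[1.0, 0.5, 0.5],
     [0.5, 1.0, 0.5],
     [0.5, 0.5, 1.0]]
sd = portfolio_sd(H, w)
```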


VARIANCES=SPILLOVER is another choice for the VARIANCES option. This includes the cross terms (allowing for "spillover") on the lagged squared residuals, but just an "own" term on the lagged variance:


\({H_{ii,t}} = {c_{ii}} + \sum\limits_j {{a_{ij}}{\kern 1pt} u_{j,t - 1}^2 + \,} {b_i}{\kern 1pt} {H_{ii,t - 1}}\)


Thus it's a generalization of VARIANCES=SIMPLE and a restriction of VARIANCES=VARMA. In the output, A(i,j) is the contribution of the (squared) residuals for variable j to the variance for variable i, and B(i) is the coefficient on the lagged variance in computing the variance of i.
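A one-step sketch of the spillover recursion in Python (the parameter layout is hypothetical, chosen to match the equation above):

```python
def spillover_step(u_prev, H_prev, c, A, b):
    """One step of the VARIANCES=SPILLOVER recursion from the text:
    H_ii,t = c_i + sum_j A[i][j]*u_{j,t-1}^2 + b[i]*H_ii,t-1.
    Cross terms on the squared residuals, but only an "own" term on the
    lagged variance."""
    n = len(u_prev)
    return [c[i]
            + sum(A[i][j] * u_prev[j] ** 2 for j in range(n))
            + b[i] * H_prev[i]
            for i in range(n)]

# with a diagonal A this reduces to the VARIANCES=SIMPLE recursion
H_next = spillover_step([1.0, 2.0], [1.0, 1.0], c=[0.1, 0.1],
                        A=[[0.2, 0.0], [0.0, 0.2]], b=[0.8, 0.8])
```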


MV-CC GARCH  with Spillover Variances - Estimation by BFGS

Convergence in   119 Iterations. Final criterion was  0.0000086 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -12756.3977


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                     -0.022568963  0.025081358     -0.89983  0.36821063

2.  COMMONSD                      0.041373595  0.043324744      0.95496  0.33959562

Mean Model(XFRA)

3.  Constant                     -0.053071880  0.027336040     -1.94146  0.05220228

4.  COMMONSD                      0.098511447  0.048195888      2.04398  0.04095549

Mean Model(XSUI)

5.  Constant                     -0.037672632  0.032147471     -1.17187  0.24124955

6.  COMMONSD                      0.082253879  0.057070245      1.44127  0.14950720


7.  C(1)                          0.017700054  0.002151240      8.22784  0.00000000

8.  C(2)                          0.027393116  0.002443877     11.20888  0.00000000

9.  C(3)                          0.031550488  0.002699347     11.68819  0.00000000

10. A(1,1)                        0.167750546  0.012382366     13.54754  0.00000000

11. A(1,2)                        0.017118685  0.010184291      1.68089  0.09278404

12. A(1,3)                       -0.041865504  0.008373446     -4.99979  0.00000057

13. A(2,1)                       -0.053353465  0.006970518     -7.65416  0.00000000

14. A(2,2)                        0.162574231  0.011612094     14.00042  0.00000000

15. A(2,3)                       -0.007820467  0.008080816     -0.96778  0.33315330

16. A(3,1)                       -0.030341878  0.005180577     -5.85685  0.00000000

17. A(3,2)                       -0.007273839  0.007633572     -0.95287  0.34065348

18. A(3,3)                        0.125687264  0.008003167     15.70469  0.00000000

19. B(1)                          0.822608394  0.012289923     66.93357  0.00000000

20. B(2)                          0.814725928  0.010651896     76.48647  0.00000000

21. B(3)                          0.839040858  0.009307537     90.14639  0.00000000

22. R(2,1)                        0.571210137  0.007732364     73.87264  0.00000000

23. R(3,1)                        0.584929593  0.007706896     75.89691  0.00000000

24. R(3,2)                        0.831118224  0.003520824    236.05785  0.00000000



dec vect[series] u(3)
clear(zeros) u
system(model=varma)
variables xjpn xfra xsui
det constant u(1){1} u(2){1} u(3){1}
end(system)
*
garch(model=varma,p=1,q=1,mv=bekk,asymmetric,rseries=u,$
   pmethod=simplex,piters=10,iters=500)


This is an asymmetric BEKK model with a VMA(1) (Vector Moving Average) mean model. The residuals (the series U(1), U(2) and U(3)) are generated recursively as the function is evaluated and saved using the RSERIES option; they are then used in computing the mean model for the next period.
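The recursive residual generation can be sketched in Python (hypothetical layout; the residual vector starts at zero, matching CLEAR(ZEROS), and each period's residual feeds the next period's mean):

```python
def vma1_residuals(y, mu, Theta):
    """Generate VMA(1) residuals recursively, as the GARCH instruction does
    via RSERIES: u_t = y_t - mu - Theta u_{t-1}, with u_0 = 0."""
    n = len(mu)
    u_prev = [0.0] * n
    out = []
    for y_t in y:
        u_t = [y_t[i] - mu[i]
               - sum(Theta[i][j] * u_prev[j] for j in range(n))
               for i in range(n)]
        out.append(u_t)
        u_prev = u_t            # lagged residual enters next period's mean
    return out

# two periods of toy data with a diagonal MA coefficient matrix
u = vma1_residuals([[1.0, 1.0], [1.0, 1.0]], mu=[0.0, 0.0],
                   Theta=[[0.5, 0.0], [0.0, 0.5]])
```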


MV-GARCH, BEKK - Estimation by BFGS

Convergence in   108 Iterations. Final criterion was  0.0000033 <=  0.0000100

Usable Observations                      6236

Log Likelihood                    -11750.8005


    Variable                        Coeff      Std Error      T-Stat      Signif

************************************************************************************

Mean Model(XJPN)

1.  Constant                      0.000523303  0.005813198      0.09002  0.92827146

2.  U(1){1}                       0.056554362  0.009808178      5.76604  0.00000001

3.  U(2){1}                       0.023803688  0.011465595      2.07610  0.03788498

4.  U(3){1}                      -0.034958557  0.009508359     -3.67661  0.00023635

Mean Model(XFRA)

5.  Constant                     -0.004613971  0.004694026     -0.98295  0.32563432

6.  U(1){1}                       0.026529120  0.007421725      3.57452  0.00035087

7.  U(2){1}                       0.024498547  0.010730269      2.28313  0.02242300

8.  U(3){1}                      -0.004617997  0.007544712     -0.61208  0.54048223

Mean Model(XSUI)

9.  Constant                     -0.003721936  0.005520153     -0.67425  0.50015553

10. U(1){1}                       0.039553340  0.008607083      4.59544  0.00000432

11. U(2){1}                       0.024237283  0.011630158      2.08400  0.03715991

12. U(3){1}                      -0.011646145  0.010604136     -1.09826  0.27208907


13. C(1,1)                        0.071422771  0.004516114     15.81509  0.00000000

14. C(2,1)                        0.019883625  0.006323327      3.14449  0.00166378

15. C(2,2)                        0.049749451  0.004647989     10.70343  0.00000000

16. C(3,1)                        0.029611499  0.005290450      5.59716  0.00000002

17. C(3,2)                       -0.006374633  0.008271129     -0.77071  0.44087948

18. C(3,3)                       -0.057137171  0.005141634    -11.11265  0.00000000

19. A(1,1)                        0.337710951  0.008658665     39.00266  0.00000000

20. A(1,2)                        0.101237507  0.004177367     24.23476  0.00000000

21. A(1,3)                        0.095746615  0.005565063     17.20495  0.00000000

22. A(2,1)                        0.001127453  0.012262610      0.09194  0.92674386

23. A(2,2)                        0.376841707  0.015655157     24.07141  0.00000000

24. A(2,3)                       -0.093041436  0.015675601     -5.93543  0.00000000

25. A(3,1)                       -0.038118765  0.009517395     -4.00517  0.00006197

26. A(3,2)                       -0.126869832  0.012222708    -10.37985  0.00000000

27. A(3,3)                        0.303089438  0.012548885     24.15270  0.00000000

28. B(1,1)                        0.934752661  0.002876731    324.93568  0.00000000

29. B(1,2)                       -0.027739103  0.002345306    -11.82750  0.00000000

30. B(1,3)                       -0.027638434  0.002440450    -11.32514  0.00000000

31. B(2,1)                       -0.006973578  0.005422110     -1.28614  0.19839511

32. B(2,2)                        0.907310830  0.006752667    134.36333  0.00000000

33. B(2,3)                        0.035199366  0.006556973      5.36823  0.00000008

34. B(3,1)                        0.014409585  0.003967165      3.63221  0.00028100

35. B(3,2)                        0.052041664  0.005570214      9.34285  0.00000000

36. B(3,3)                        0.942421445  0.005391111    174.81026  0.00000000

37. D(1,1)                        0.184149303  0.023016895      8.00061  0.00000000

38. D(1,2)                        0.019930251  0.020548357      0.96992  0.33208666

39. D(1,3)                        0.108268772  0.026514164      4.08343  0.00004438

40. D(2,1)                        0.092760590  0.017406768      5.32900  0.00000010

41. D(2,2)                        0.108974929  0.020496916      5.31665  0.00000011

42. D(2,3)                        0.105254202  0.022420676      4.69452  0.00000267

43. D(3,1)                       -0.027936003  0.017903316     -1.56038  0.11866975

44. D(3,2)                        0.072493050  0.015363244      4.71860  0.00000237

45. D(3,3)                       -0.025909857  0.015084676     -1.71763  0.08586455




garch(model=var1,mv=bekk,asymmetric,p=1,q=1,distrib=t,$
   pmethod=simplex,piters=10,iters=500,$
   rseries=rs,mvhseries=hhs,stdresids=zu,derives=dd)


This is an asymmetric BEKK with a VAR(1) mean model. It saves quite a bit of extra information for use in the diagnostics which follow.


MV-GARCH, BEKK - Estimation by BFGS

Convergence in   345 Iterations. Final criterion was  0.0000037 <=  0.0000100

Usable Observations                      6235

Log Likelihood                    -10204.7966


    Variable                        Coeff      Std Error      T-Stat       Signif

*************************************************************************************

Mean Model(XJPN)

1.  XJPN{1}                      -0.013921151  0.011194265      -1.24360  0.21364801

2.  XFRA{1}                       0.015594486  0.008867192       1.75867  0.07863317

3.  XSUI{1}                      -0.008652645  0.006518585      -1.32738  0.18438277

4.  Constant                     -0.008318846  0.003284624      -2.53266  0.01131998

Mean Model(XFRA)

5.  XJPN{1}                      -0.010038844  0.009868755      -1.01724  0.30904162

6.  XFRA{1}                      -0.004017891  0.008766931      -0.45830  0.64673638

7.  XSUI{1}                       0.020068910  0.009535854       2.10457  0.03532842

8.  Constant                     -0.003345475  0.003861700      -0.86632  0.38631370

Mean Model(XSUI)

9.  XJPN{1}                      -0.000910601  0.012528476      -0.07268  0.94205878

10. XFRA{1}                       0.033291046  0.014106353       2.36000  0.01827475

11. XSUI{1}                      -0.032641038  0.014289210      -2.28431  0.02235310

12. Constant                     -0.003636505  0.004782305      -0.76041  0.44701052


13. C(1,1)                       -0.015392658  0.004343307      -3.54400  0.00039411

14. C(2,1)                        0.029951772  0.007645844       3.91739  0.00008951

15. C(2,2)                       -0.021807417  0.008736865      -2.49602  0.01255945

16. C(3,1)                        0.015015021  0.015619442       0.96130  0.33639970

17. C(3,2)                        0.063187456  0.008216776       7.69005  0.00000000

18. C(3,3)                       -0.000018067  0.087576711 -2.06294e-004  0.99983540

19. A(1,1)                        0.280604635  0.014683508      19.11019  0.00000000

20. A(1,2)                        0.010016035  0.013843954       0.72350  0.46937566

21. A(1,3)                        0.011491007  0.019573140       0.58708  0.55714971

22. A(2,1)                       -0.003966151  0.010040058      -0.39503  0.69281880

23. A(2,2)                        0.312460200  0.018676057      16.73052  0.00000000

24. A(2,3)                       -0.075822038  0.022894095      -3.31186  0.00092678

25. A(3,1)                       -0.006185871  0.006748768      -0.91659  0.35935611

26. A(3,2)                       -0.023251431  0.013402764      -1.73482  0.08277204

27. A(3,3)                        0.345784467  0.020605075      16.78152  0.00000000

28. B(1,1)                        0.959869243  0.002985897     321.46766  0.00000000

29. B(1,2)                       -0.000180508  0.003202131      -0.05637  0.95504610

30. B(1,3)                        0.001570903  0.004669265       0.33643  0.73654301

31. B(2,1)                        0.000666509  0.003214894       0.20732  0.83576056

32. B(2,2)                        0.939664925  0.006161669     152.50170  0.00000000

33. B(2,3)                        0.041120339  0.008141994       5.05040  0.00000044

34. B(3,1)                        0.001479435  0.002533738       0.58389  0.55929140

35. B(3,2)                        0.020100283  0.005007705       4.01387  0.00005973

36. B(3,3)                        0.928741374  0.007323588     126.81508  0.00000000

37. D(1,1)                       -0.185790914  0.025936078      -7.16342  0.00000000

38. D(1,2)                        0.030160685  0.031051947       0.97130  0.33140006

39. D(1,3)                       -0.055270115  0.047589871      -1.16138  0.24548577

40. D(2,1)                       -0.008550264  0.015076597      -0.56712  0.57063159

41. D(2,2)                       -0.037446781  0.037124812      -1.00867  0.31313167

42. D(2,3)                       -0.117934969  0.035581661      -3.31449  0.00091811

43. D(3,1)                        0.002752208  0.010059619       0.27359  0.78439993

44. D(3,2)                       -0.071491809  0.023186020      -3.08340  0.00204649

45. D(3,3)                        0.088110333  0.035118148       2.50897  0.01210842

46. Shape                         4.185325813  0.130614594      32.04332  0.00000000



set z1 = rs(1)/sqrt(hhs(1,1))
set z2 = rs(2)/sqrt(hhs(2,2))
set z3 = rs(3)/sqrt(hhs(3,3))
@bdindtests(number=40) z1
@bdindtests(number=40) z2
@bdindtests(number=40) z3


These do "univariate" diagnostics on the residuals. The Z variables are the residuals standardized by their own variances, so if the model is correct they should be mean zero, variance one, and serially uncorrelated. The one diagnostic which fails badly across all three variables is the Ljung-Box Q statistic, which tests for serial correlation in the mean. This might indicate that the VAR(1) mean model isn't adequate, though it can also signal more serious problems.
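For reference, the Ljung-Box Q that drives the rejection can be sketched as follows (a simple textbook implementation, not the @BDINDTESTS code; under the null it is approximately chi-squared with `lags` degrees of freedom):

```python
def ljung_box_q(z, lags):
    """Ljung-Box Q statistic: n(n+2) * sum_k r_k^2/(n-k), where r_k is the
    lag-k sample autocorrelation of the (standardized) residual series."""
    n = len(z)
    zbar = sum(z) / n
    denom = sum((x - zbar) ** 2 for x in z)
    q = 0.0
    for k in range(1, lags + 1):
        rk = sum((z[t] - zbar) * (z[t - k] - zbar)
                 for t in range(k, n)) / denom
        q += rk * rk / (n - k)
    return n * (n + 2) * q

# a strongly autocorrelated series gives a large Q, signaling serial
# correlation in the mean
alternating = [float((-1) ** t) for t in range(100)]
q1 = ljung_box_q(alternating, 1)
```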


Independence Tests for Series Z1
Test            Statistic  P-Value
Ljung-Box Q(40)  121.33672     0.0000
McLeod-Li(40)     55.97498     0.0480
Turning Points    -1.32167     0.1863
Difference Sign   -0.76761     0.4427
Rank Test         -1.05736     0.2903

Independence Tests for Series Z2
Test            Statistic  P-Value
Ljung-Box Q(40)  100.67880     0.0000
McLeod-Li(40)      9.87095     1.0000
Turning Points     0.00000     1.0000
Difference Sign   -0.24125     0.8094
Rank Test         -0.42778     0.6688

Independence Tests for Series Z3
Test            Statistic  P-Value
Ljung-Box Q(40)  96.734741     0.0000
McLeod-Li(40)    45.290929     0.2607
Turning Points   -1.231559     0.2181
Difference Sign  -0.021932     0.9825
Rank Test        -2.172267     0.0298




@mvqstat(lags=5)
# zu
@mvarchtest(lags=5)
# zu


These are tests on the jointly standardized residuals which are (if the model is correct) mutually serially uncorrelated, mean zero with an identity covariance matrix. (The univariate standardized residuals don't have any predictable "cross-variable" properties). @MVQSTAT checks for serial correlation in the mean, while @MVARCHTEST checks for residual cross-variable ARCH. These also reject very strongly. The failure of the multivariate Q is not a surprise given the univariate Q results. The failure of the multivariate ARCH test is more of a surprise given that the univariate McLeod-Li tests weren't too much of a problem.
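The multivariate Q statistic generalizes the univariate Ljung-Box form to the matrix of lagged cross-covariances. A hedged numpy sketch of that idea (an illustration, not the @MVQSTAT implementation):

```python
import numpy as np

def mv_q(z, lags):
    """Multivariate portmanteau Q on an (n x m) matrix of jointly
    standardized residuals; asymptotically chi-squared with m*m*lags
    degrees of freedom under the null of no serial correlation."""
    z = np.asarray(z, dtype=float)
    n, m = z.shape
    zc = z - z.mean(axis=0)
    c0inv = np.linalg.inv(zc.T @ zc / n)   # inverse of lag-0 covariance
    q = 0.0
    for k in range(1, lags + 1):
        ck = zc[k:].T @ zc[:-k] / n        # lag-k cross-covariance matrix
        q += np.trace(ck.T @ c0inv @ ck @ c0inv) / (n - k)
    return n * n * q, m * m * lags         # statistic, degrees of freedom

rng = np.random.default_rng(1)
stat, dof = mv_q(rng.standard_normal((1000, 3)), 5)
```

With three variables and five lags the degrees of freedom are 3 x 3 x 5 = 45, which matches the Chi-Squared(45) reference distribution in the output.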



Multivariate Q(5)=     143.15596
Significance Level as Chi-Squared(45)=  3.68438e-012

Test for Multivariate ARCH
Statistic Degrees Signif
   520.29     180 0.00000




@flux
# dd


This does a Nyblom fluctuations test, which is a fairly general test for structural breaks in the time sequence. It reports a joint test on the entire coefficient vector, plus tests on the individual parameters. You can check the full GARCH output above to see which parameter each coefficient number refers to. The individual coefficient for which stability is most decisively rejected is the SHAPE (#46). #28 (B(1,1), which is the own variance persistence on Japan) and #35 (B(3,2), which is the variance term for France from Switzerland) also show p-values of 0.00.


In practice, this will almost always reject stability in a GARCH model if you have this large a model with this much data. (All GARCH models are approximations in some form.) The real question with any diagnostic rejection is whether it points you to a better model or some other adjustment. In this case, a look at the Japanese returns data and the Japanese residuals (the Z1 series) will show that roughly the first 1500 observations just don't look at all like the last 75% of the data set: overall the early data are much "quieter", punctuated by a few very large changes. Most likely, this is due to central bank intervention. It is very unlikely that any minor adjustment will make a GARCH model fit both the period of relatively tight control and the later period with a generally freer market. Instead, a better approach would be to choose a subrange during which the market structure is closer to being uniform.
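The Nyblom statistic itself is computed from the per-observation gradient (score) contributions of the log likelihood. A numpy sketch of the construction, on simulated scores (an illustration of the formula only, not the @FLUX implementation):

```python
import numpy as np

def nyblom(g):
    """Nyblom fluctuation statistics from an (n x k) matrix of
    per-observation score contributions: joint statistic plus one
    statistic per parameter, based on cumulative sums of the scores."""
    g = np.asarray(g, dtype=float)
    n = g.shape[0]
    s = np.cumsum(g - g.mean(axis=0), axis=0)  # centered cumulated scores
    v = g.T @ g / n                            # outer-product covariance
    joint = np.trace(np.linalg.solve(v, s.T @ s)) / n**2
    individual = (s**2).mean(axis=0) / (n * np.diag(v))
    return joint, individual

rng = np.random.default_rng(2)
joint, individual = nyblom(rng.standard_normal((500, 4)))
```

Large values mean the cumulated scores wander too far from zero for the parameter to be treated as constant over the sample.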


Test  Statistic  P-Value
Joint 33.7520472    0.00

    1  0.1348914    0.42
    2  1.8624746    0.00
    3  1.0380136    0.00
    4  0.2837474    0.15
    5  0.0804013    0.67
    6  0.1188408    0.48
    7  0.0811365    0.67
    8  0.6485388    0.02
    9  0.1165840    0.49
   10  0.0412141    0.92
   11  0.0604180    0.80
   12  1.1239040    0.00
   13  1.2120879    0.00
   14  1.1816754    0.00
   15  0.4669988    0.05
   16  1.1253165    0.00
   17  0.8529359    0.01
   18  0.7246583    0.01
   19  0.8872770    0.00
   20  0.6176391    0.02
   21  0.2002857    0.26
   22  0.3249375    0.11
   23  0.9103172    0.00
   24  0.4956675    0.04
   25  0.2701695    0.16
   26  1.9152490    0.00
   27  0.2420097    0.19
   28  2.8383045    0.00
   29  0.5980212    0.02
   30  0.2142316    0.23
   31  0.8854486    0.00
   32  1.9161060    0.00
   33  0.4738197    0.05
   34  0.8524883    0.01
   35  2.6470491    0.00
   36  0.3472104    0.10
   37  0.6553268    0.02
   38  0.1302127    0.44
   39  0.2168067    0.23
   40  0.5151937    0.04
   41  0.1351453    0.42
   42  0.1417446    0.40
   43  0.2395982    0.20
   44  0.1190260    0.48
   45  0.3437533    0.10
   46  7.7629884    0.00


Graphs

garch(p=1,q=1,mv=dcc,variances=koutmos,hmatrices=hh) / $
   xjpn xfra xsui
*
set jpnfra = %cvtocorr(hh(t))(1,2)
set jpnsui = %cvtocorr(hh(t))(1,3)
set frasui = %cvtocorr(hh(t))(2,3)
*
spgraph(vfields=3,footer="Conditional Correlations")
 graph(header="Japan with France",min=-1.0,max=1.0)
 # jpnfra
 graph(header="Japan with Switzerland",min=-1.0,max=1.0)
 # jpnsui
 graph(header="France with Switzerland",min=-1.0,max=1.0)
 # frasui
spgraph(done)


This generates and graphs the conditional correlations for the series from the DCC-Koutmos estimates. This can be done with any type of multivariate model (though a CC model's correlations are constant, so the graphs won't be very interesting) as long as you save the series of covariance matrices using the HMATRICES option. In the SET instructions, %CVTOCORR(HH(T)) converts the covariance matrix at time T to a correlation matrix, and the (i,j) subscripts pull a particular element out of that matrix, so %CVTOCORR(HH(T))(1,2) is the conditional correlation of Japan (variable 1) with France (variable 2) at time T.
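The covariance-to-correlation conversion is just a division of each covariance by the product of the two standard deviations. A small numpy illustration of what a function like %CVTOCORR computes (the matrix here is made up):

```python
import numpy as np

def cv_to_corr(h):
    """Convert a covariance matrix to a correlation matrix:
    r[i,j] = h[i,j] / (sqrt(h[i,i]) * sqrt(h[j,j]))."""
    d = np.sqrt(np.diag(h))
    return h / np.outer(d, d)

h = np.array([[4.0, 1.2],
              [1.2, 9.0]])
r = cv_to_corr(h)
# off-diagonal: 1.2 / (2.0 * 3.0) = 0.2; diagonal entries are 1.0
```

Applied to each saved HH(T) in turn, the (1,2), (1,3), and (2,3) elements of the result are exactly the three correlation series being graphed.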