Recursive VECM - Johansen ML technique

Questions and discussions on Vector Autoregressions
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

A VECM will have a certain number of roots forced onto the unit circle. Examining the roots generally only makes sense if you are unsure of the cointegrating rank, since you might see roots that are just barely inside the unit circle, which could indicate that, whether or not they are "statistically" significantly different from unit roots, they may not be practically different.
ac_1
Posts: 495
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Unread post by ac_1 »

TomDoan wrote: Tue Feb 27, 2024 12:14 pm A VECM will have a certain number of roots forced onto the unit circle. Examining the roots generally only makes sense if you are unsure of the cointegrating rank, since you might see roots that are just barely inside the unit circle, which could indicate that, whether or not they are "statistically" significantly different from unit roots, they may not be practically different.

The VAR in lutkp077.RPF is a good example. I have tried this on other VARs, and the inverse roots are plotted as calculated:

Code: Select all

compute companion=%modelcompanion(varmodel)
eigen(cvalues=cv) companion

The calculations, and hence the plots, from VECMs are not as expected. Here is an example from the User's Guide, Section 7.8, ECT.RPF:

Code: Select all

*
* Allowing for two cointegrating vectors. (The results of the
* cointegration test suggest that one is adequate).
*
@johmle(lags=6,det=rc,vectors=cvectors)
# ftbs3 ftb12 fcm7
equation(coeffs=%xcol(cvectors,1)) ect1 *
# ftbs3 ftb12 fcm7 constant
equation(coeffs=%xcol(cvectors,2)) ect2 *
# ftbs3 ftb12 fcm7 constant
*
system(model=ect2model)
variables ftbs3 ftb12 fcm7
lags 1 to 6
ect ect1 ect2
end(system)
estimate

Running the procedure and displaying cv gives:

( 0.13852, 0.00000) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind))
( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind)) ( -nan(ind), -nan(ind))

i.e. there is output with space for 18 inverse roots, but only 1 inverse root is plotted. Why are there 'spaces' for 18 inverse roots?

Shouldn't the number of VECM inverse roots = (3 variables or equations * 5 lags) + (3 variables or equations * 2 ECT terms * 1 lag)?
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

Do

compute companion=%modelcompanion(%modelsubstect(ect2model))
eigen(cvalues=cv) companion
ac_1
Posts: 495
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Unread post by ac_1 »

TomDoan wrote: Wed Feb 28, 2024 7:31 am Do

compute companion=%modelcompanion(%modelsubstect(ect2model))
eigen(cvalues=cv) companion

1) Here's the output from the procedure for the ECT.RPF example:

(i) acECT.RGF (attached) depicting the inverse roots: 1 unit root, and 17 inside the complex unit circle.

(ii) Table with 18 inverse roots, Modulus and Period as in LAGPOLYROOTS.SRC

Code: Select all

inverse AR roots:
Real              Imag     Modulus Period
          1.00000  0.00000 1.00000
          0.95722 -0.00000 0.95722
         -0.47818  0.58345 0.75437  2.78342
         -0.47818 -0.58345 0.75437
          0.73863  0.05914 0.74099 78.64495
          0.73863 -0.05914 0.74099
          0.34590 -0.64913 0.73554
          0.34590  0.64913 0.73554  5.81133
         -0.21104 -0.68605 0.71778
         -0.21104  0.68605 0.71778  3.36138
          0.53261  0.41448 0.67489  9.50113
          0.53261 -0.41448 0.67489
          0.03510 -0.58437 0.58542
          0.03510  0.58437 0.58542  4.15884
         -0.50067 -0.00000 0.50067  2.00000
          0.42779  0.00000 0.42779
          0.18694 -0.00000 0.18694
         -0.08272  0.00000 0.08272  2.00000
Question: Why are there 18 inverse roots?


2) In the E&G examples earlier in this topic (Thu Jan 25, 2024 12:56 pm), another reason for using the second ECT version is that 6 inverse roots (1 unit root + 5 others) are calculated and plotted, as opposed to the RESIDS (i.e. VAR) version, which has only 4 inverse roots.


3) UG-247 says: "If Π is full-rank, there is nothing to be gained by writing the system in form (31) rather than (30): the two are equivalent."

If I compare a VAR in levels vs. a VECM including all the ECTs w.r.t. AIC & BIC and OOS forecasts, they can be very different.

Thus, algebraically they are equivalent, but not so empirically. Is that fair?
Attachments
acECT.rgf
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

3 variables x 6 lags = 18.

I have no idea what you are thinking. The number of roots is no indication at all of the quality of the model. You seem to have learned something very, very wrong about the roots of dynamic processes.

Clearly you did something wrong. A VAR and a VECM with a complete set of ECTs should give exactly the same results to as many decimal places as you are likely to want.
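
To see where the 18 comes from directly, here is a minimal check using the model names from the ECT.RPF code above (the companion matrix of an n-variable, p-lag system in levels is np x np, regardless of the cointegrating rank):

Code: Select all

* The companion matrix of an n-variable system with p lags in levels is
* (n*p) x (n*p), so this 3-variable, 6-lag model has 18 roots whatever
* the cointegrating rank happens to be.
compute companion=%modelcompanion(%modelsubstect(ect2model))
disp 'companion is' %rows(companion) 'x' %cols(companion)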
ac_1
Posts: 495
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Unread post by ac_1 »

TomDoan wrote: Sun Mar 03, 2024 8:43 pm Clearly you did something wrong. A VAR and a VECM with a complete set of ECTs should give exactly the same results to as many decimal places as you are likely to want.

This is using ECT.RPF.

Unfortunately, I cannot get the VAR and the VECM with all the ECTs to be equivalent with regard to:
- aic & bic
- OOS forecasts

Code: Select all

*===============================
* Equivalent Fit and OOS forecasts
* --------------------------------

comp nsteps = 10


* VAR
* ---
system(model=ratemodel)
variables ftbs3 ftb12 fcm7
lags 1 to 6
det constant
end(system)
estimate(print,noftests) 1975:07 2001:06-nsteps
*
comp N = %NREGSYSTEM
comp aic = %nobs*%logdet + 2*N
comp sbc = %nobs*%logdet + N*log(%nobs)
dis 'aic = '  aic 'sbc = ' sbc
*
* characteristic roots
compute companion=%modelcompanion(ratemodel)
eigen(cvalues=cv1) companion
disp cv1
*
forecast(model=ratemodel,results=f_ratemodel,from=2001:6-nsteps+1,steps=nsteps)

prin / f_ratemodel

@uforeErrors ftbs3 f_ratemodel(1)
@uforeErrors ftb12 f_ratemodel(2)
@uforeErrors fcm7 f_ratemodel(3)


*===============================
* VECM
* ----
@johmle(lags=6,det=rc,vectors=cvectors,print) 1975:07 2001:06-nsteps
# ftbs3 ftb12 fcm7
equation(coeffs=%xcol(cvectors,1)) ect1 *
# ftbs3 ftb12 fcm7 constant
equation(coeffs=%xcol(cvectors,2)) ect2 *
# ftbs3 ftb12 fcm7 constant
*
system(model=ect2model)
variables ftbs3 ftb12 fcm7
lags 1 to 6
ect ect1 ect2
end(system)
estimate(print,noftests) 1975:07 2001:06-nsteps
*
comp N = %NREGSYSTEM
comp aic = %nobs*%logdet + 2*N
comp sbc = %nobs*%logdet + N*log(%nobs)
dis 'aic = '  aic 'sbc = ' sbc
*
* characteristic roots
compute companion=%modelcompanion(%modelsubstect(ect2model))
eigen(cvalues=cv2) companion
disp cv2
*
forecast(model=ect2model,results=f_ect2model,from=2001:6-nsteps+1,steps=nsteps)

prin / f_ect2model

@uforeErrors ftbs3 f_ect2model(1)
@uforeErrors ftb12 f_ect2model(2)
@uforeErrors fcm7 f_ect2model(3)
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

A full set of ECTs in a 3-variable model has 3, not 2. It's of no practical interest, since it's identical to the simpler VAR in levels. If you add a 3rd component, you'll see that the log likelihoods match. You won't be able to match the AIC and SBC, because the two-step VECMs have extra parameters embedded in the first step that aren't included in %NREGSYSTEM.
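
For what it's worth, a minimal sketch of that check, continuing the ECT.RPF code earlier in the thread (ect3 and ect3model are just names introduced here for illustration; everything else is as in the code above):

Code: Select all

* Add the third cointegrating vector so the ECTs span the full PI matrix.
* The log likelihood should then match the unrestricted VAR in levels.
equation(coeffs=%xcol(cvectors,3)) ect3 *
# ftbs3 ftb12 fcm7 constant
*
system(model=ect3model)
variables ftbs3 ftb12 fcm7
lags 1 to 6
ect ect1 ect2 ect3
end(system)
estimate(print,noftests) 1975:07 2001:06-nsteps
dis 'logl (3-ECT VECM) = ' %logl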
ac_1
Posts: 495
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Unread post by ac_1 »

TomDoan wrote: Tue Mar 05, 2024 8:34 pm A full set of ECTs in a 3-variable model has 3, not 2. It's of no practical interest, since it's identical to the simpler VAR in levels. If you add a 3rd component, you'll see that the log likelihoods match. You won't be able to match the AIC and SBC, because the two-step VECMs have extra parameters embedded in the first step that aren't included in %NREGSYSTEM.
:oops:

LL = 35.61781
aic = -2528.35224 sbc = -2316.85790

For the VECM, comp N = (((3*5)+1+3)*3)
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

That's the correct value for N, but not for the correct reasons. (The PI matrix is decomposed as alpha beta', and there are redundant parameters in those factors.) %NREGSYSTEM includes the alphas in the VECM, but not the betas. With 3 ECTs (and with the restricted constant), beta' is of dimension 3 x 4. But there are 3 x 3 restrictions needed to identify the alpha beta' combinations, so that adds just an extra 3 parameters.

Note, however, that the logic behind AIC and SBC doesn't apply to choosing the rank in a VECM, because of the asymptotic growth rate of unit root processes.
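
A rough sketch of that accounting, assuming it is organized as lagged-difference coefficients, plus the alphas that %NREGSYSTEM counts, plus the betas left free after the r x r identification restrictions:

Code: Select all

* nv variables, nlag lags in levels, nr ECTs, restricted constant:
*   nv*nv*(nlag-1)      lagged-difference coefficients
* + nv*nr               alpha loadings (counted by %NREGSYSTEM)
* + nr*(nv+1) - nr*nr   betas left free after identification
compute nv=3, nlag=6, nr=3
compute nfree = nv*nv*(nlag-1) + nv*nr + nr*(nv+1) - nr*nr
dis 'free parameters = ' nfree   ;* 45 + 9 + 12 - 9 = 57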
ac_1
Posts: 495
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Unread post by ac_1 »

TomDoan wrote: Wed Mar 06, 2024 9:31 pm That's the correct value for N, but not for the correct reasons. (The PI matrix is decomposed as alpha beta', and there are redundant parameters in those factors.) %NREGSYSTEM includes the alphas in the VECM, but not the betas. With 3 ECTs (and with the restricted constant), beta' is of dimension 3 x 4. But there are 3 x 3 restrictions needed to identify the alpha beta' combinations, so that adds just an extra 3 parameters.

Note, however, that the logic behind AIC and SBC doesn't apply to choosing the rank in a VECM, because of the asymptotic growth rate of unit root processes.

Thus, computing N is not straightforward.

A similar VECM:
- 5 variables
- restricted constant, and 3 significant ECTs
- lags 1 2 3; * 3 lagged levels are equivalent to 2 lagged changes

There are 65 betas, 13 in each equation? What is the rational way to comp N?
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

The betas are the coefficients that determine the cointegrating vectors (PI = alpha beta'). And you have to allow for the fact that they need to be normalized, and so aren't all free. But it really doesn't matter, because you can't use the AIC or SBC to choose the rank of the cointegrating space.
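
Applying the same (sketched) accounting as after the earlier post to the 5-variable example, assuming the r x r identification logic generalizes, with 3 lags in levels (2 lagged differences), 3 ECTs, and a restricted constant:

Code: Select all

compute nv=5, nlag=3, nr=3
compute nfree = nv*nv*(nlag-1) + nv*nr + nr*(nv+1) - nr*nr
dis 'free parameters = ' nfree   ;* 50 + 15 + 18 - 9 = 74 (the 65 plus 9 free betas)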
ac_1
Posts: 495
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Unread post by ac_1 »

TomDoan wrote: Fri Mar 08, 2024 9:46 am The betas are the coefficients that determine the cointegrating vectors (PI = alpha beta'). And you have to allow for the fact that they need to be normalized, and so aren't all free. But it really doesn't matter, because you can't use the AIC or SBC to choose the rank of the cointegrating space.
Yes, there are two tests to determine the rank of PI, i.e. the number of cointegrating vectors, and the test statistics are based on the characteristic roots, i.e. the eigenvalues (in order), as described here https://estima.com/ratshelp/index.html? ... edure.html:
(i) maximal eigenvalue statistic
(ii) trace statistic: a likelihood ratio test for the trace of the matrix

However, I am aiming to compare VAR-in-levels vs. VAR-in-differences vs. VECM with regard to in-sample estimation. Are you saying AIC or SBC is not appropriate for assessing overall model fit of a VECM? If not, then what is the logic for comp N? If so, then which information criterion should I use or calculate?

Also, noting for comparison, the VAR-in-differences and the VECM should have one lag fewer than the VAR-in-levels, despite their lag choice via @VARLAGSELECT.
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

This seemed like a bad idea two years ago (read the initial response), and it still seems like a bad idea. If there is no theoretical reason for the series to be cointegrated, then it would be a bad idea to force it onto the model---it's a very strong restriction which ties series together.

The unrestricted VAR and VECM use the same choice for the LAGS. (The VECM rearranges the inputs to the model). You can also do the VAR in differences with a blank ECT instruction using the same LAGS options.
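
A minimal sketch of that last option, reusing the variable names and sample range from the ECT.RPF code above (diffmodel is just a name introduced here for illustration):

Code: Select all

* VAR in differences: same LAGS option, but a blank ECT instruction,
* so no error-correction terms are included.
system(model=diffmodel)
variables ftbs3 ftb12 fcm7
lags 1 to 6
ect
end(system)
estimate(print,noftests) 1975:07 2001:06-nsteps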
ac_1
Posts: 495
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Unread post by ac_1 »

TomDoan wrote: Mon Mar 11, 2024 1:12 pm You can also do the VAR in differences with a blank ECT instruction using the same LAGS options.
Yes, that's useful, as it's straightforward to stay in levels and not run into the problems described here: https://estima.com/forum/viewtopic.php?t=1724


Questions

(i) I have noticed if I have, e.g.

equation(coeffs=%xcol(cvectors,1)) D_ect1 *
# ftbs3 ftb12 fcm7 constant

equation(coeffs=%xcol(cvectors,2)) D_ect2 *
# ftbs3 ftb12 fcm7 constant

they will appear as just EC1{1} EC2{1} in the VECM after ESTIMATE regardless of the D_ in front. Why?

Presumably, if, after the dynamic (D_) forecasts, I generate static 1-step-ahead VECM forecasts within the same RPF file, defining the ECTs as S_ECT1 and S_ECT2 using EQUATION and then ESTIMATE, there will be an overwrite, and the latest ECTs, i.e. S_ECT1 and S_ECT2, will be used?


(ii) To aid my understanding, given a VAR-in-levels with DET=NONE or DET=CONSTANT I can manually calculate the PI matrix

PI = - (I - GAMMA(1) - GAMMA(2) - ... - GAMMA(p))
where
I is the identity matrix
the GAMMAs are the matrices of coefficients up to the pth AR lag.

How do I calculate PI 'by hand' when including the deterministic terms: DET=RC, DET=TREND, and SEASONAL?


(iii) Does %VARLAGSUMS always = -PI?


(iv) I'll accept that I do not need to normalize the cointegrating vectors prior to forecasting, but I do not understand the normalization as in https://www.estima.com/ratshelp/index.h ... edure.html, enders4p389.rpf, and Enders (1996), RATS Handbook for Econometric Time Series, Chapter 6. Why normalize?


JohMLE.src

(v) The generalized eigenvalues and eigenvectors are calculated from submatrices of the S matrix, and not from the PI matrix.

Solving | s10_00_01 - (lambda*%%s11) | = 0

What is the relationship between PI and
(a) s10_00_01
(b) %%s00
(c) %%s11
(d) %%s01
?


(vi) Should the values of PI (long-run matrix of coefficients) be interpreted, or its factor matrices:
- alpha (speed of adjustment to equilibrium coefficients) and
- beta (long-run matrix of coefficients).
What do the sign and magnitude of the coefficients mean?
Further, how should one interpret the PI*Y(t-1) term in the VECM, UG-247 eqn (31)?


(vii) DET=CONSTANT allows for a linear trend in the data; DET=RC does not. But what exactly is the RC? If DET=RC there is an additional term in each ECT, e.g. 3 variables but 4 terms in each ECT. I have read the explanation in ECT.RPF but am none the wiser. Could you please provide a simpler numerical explanation?
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Unread post by TomDoan »

For the statistical questions (particularly regarding the restricted cointegrating vectors), you should consult:

Johansen, S. (1995), Likelihood-Based Inference in Cointegrated Vector Autoregressive Models, Oxford: Oxford University Press.
ac_1 wrote: Mon Mar 18, 2024 2:59 am
Questions

(i) I have noticed if I have, e.g.

equation(coeffs=%xcol(cvectors,1)) D_ect1 *
# ftbs3 ftb12 fcm7 constant

equation(coeffs=%xcol(cvectors,2)) D_ect2 *
# ftbs3 ftb12 fcm7 constant

they will appear as just EC1{1} EC2{1} in the VECM after ESTIMATE regardless of the D_ in front. Why?

Presumably, if, after the dynamic (D_) forecasts, I generate static 1-step-ahead VECM forecasts within the same RPF file, defining the ECTs as S_ECT1 and S_ECT2 using EQUATION and then ESTIMATE, there will be an overwrite, and the latest ECTs, i.e. S_ECT1 and S_ECT2, will be used?
That's exactly what it says it does in the description of the ECT instruction.
ac_1 wrote: Mon Mar 18, 2024 2:59 am (ii) To aid my understanding, given a VAR-in-levels with DET=NONE or DET=CONSTANT I can manually calculate the PI matrix

PI = - (I - GAMMA(1) - GAMMA(2) - ... - GAMMA(p))
where
I is the identity matrix
the GAMMAs are the matrices of coefficients up to the pth AR lag.

How do I calculate PI 'by hand' when including the deterministic terms: DET=RC, DET=TREND, and SEASONAL?
DET=RC and DET=RTREND are the only ones that are more complicated, and that is only because they anticipate a restriction on the intercept (or trend). The adjusted "PI" matrix obtained by rearranging the unrestricted VAR simply adds a column with the estimated coefficients on the constant (DET=RC) or the trend term (DET=RTREND).
ac_1 wrote: Mon Mar 18, 2024 2:59 am (iii) Does %VARLAGSUMS always = -PI?
For a VAR estimated in levels.
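
A quick check of that, as a sketch run right after the ESTIMATE of the levels VAR (ratemodel in the code above; pimat is just a name introduced here):

Code: Select all

* %VARLAGSUMS returns I - GAMMA(1) - ... - GAMMA(p) for the just-estimated
* levels VAR, so PI is simply its negative.
compute pimat = -1.0*%varlagsums
disp pimat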
ac_1 wrote: Mon Mar 18, 2024 2:59 am (iv) I'll accept that I do not need to normalize the cointegrating vectors prior to forecasting, but I do not understand the normalization as in https://www.estima.com/ratshelp/index.h ... edure.html, enders4p389.rpf, and Enders (1996), RATS Handbook for Econometric Time Series, Chapter 6. Why normalize?
You are misunderstanding that. You have to normalize (somehow) in order to fully estimate the model. It's just that the normalization washes out of forecasts and anything related to that.

The normalization of the beta in @JOHMLE is purely mechanical and inherited from the embedded generalized eigenvalue problem. It has no economic interpretation.
ac_1 wrote: Mon Mar 18, 2024 2:59 am (vi) Should the values of PI (long-run matrix of coefficients) be interpreted, or its factor matrices:
- alpha (speed of adjustment to equilibrium coefficients) and
- beta (long-run matrix of coefficients).
What do the sign and magnitude of the coefficients mean?
Further, how should one interpret the PI*Y(t-1) term in the VECM, UG-247 eqn (31)?
alpha depends upon beta. Different normalizations of beta lead to different alphas and different interpretations of alpha. (It's also possible to normalize alpha (the loadings) rather than beta, though that is less common.)