Recursive VECM - Johansen ML technique

Questions and discussions on Vector Autoregressions
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

I have managed to plot AR and MA inverse roots within the unit circle using SPGRAPH, GCONTOUR and GRTEXT.

I'd like to write a generic procedure to plot inverse roots from BOXJENK; however, it's complicated, as I need to take into account all the options: seasonals, intervention, RegARIMA...

Also,
ac_1 wrote: For (b), does this mean that for any series used with the model, the process implied is stationary and invertible (MA part)?
TomDoan
Posts: 7702
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Post by TomDoan

ac_1 wrote: I have managed to plot AR and MA inverse roots within the unit circle using SPGRAPH, GCONTOUR and GRTEXT.

I'd like to write a generic procedure to plot inverse roots from BOXJENK; however, it's complicated, as I need to take into account all the options: seasonals, intervention, RegARIMA...
If you DEFINE an EQUATION off the BOXJENK, you can extract the multiplied-out AR and MA polynomials from that.
Also,
ac_1 wrote: For (b), does this mean that for any series used with the model, the process implied is stationary and invertible (MA part)?
If applied to a self-contained ARIMA model, yes. (Stationary for the AR polynomial, invertible for the MA polynomial). If it's a RegARIMA or intervention model, the process itself isn't stationary since it floats around with the intervention and regression.
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

TomDoan wrote: If you DEFINE an EQUATION off the BOXJENK, you can extract the multiplied-out AR and MA polynomials from that.
Here's a basic setup:

Within the main program, e.g.

Code: Select all

BOXJENK(constant,ar=||1,4||,ma=||1||,define=bjeq) series
In the procedure:
a) equation(const=const,ar=ARNumb,ma=MANumb,coeffs=%BETA) arma y
b) calculate characteristic roots and inverse characteristic roots
c) plot the inverse characteristic roots

Code: Select all

* Plot inverse Characteristic roots within Complex Unit Circle
* https://estima.com/ratshelp/index.html?gcontourinstruction.html

@gridseries(from=-1.0,to=1.0,n=100) z1grid
@gridseries(from=-1.0,to=1.0,n=100) z2grid

* f(i,j) = squared distance from the origin; the contour at 1.0 is the unit circle
dec rect f(101,101)
ewise f(i,j) = z1grid(i)^2 + z2grid(j)^2

spgraph(hfi=2,vfi=1)
gcontour(f=f,x=z2grid,y=z1grid,number=1,$
         header="Inverse characteristic AR Roots",Hlabel="Real",Vlabel="Imaginary")
   do k = 1, (%NARMA-MANumb)
      grtext(x=iARroot_real(k),y=iARroot_imag(k),BOLD,BOX) "iAR"
   end do k
gcontour(f=f,x=z2grid,y=z1grid,number=1,$
         header="Inverse characteristic MA Roots",HLABEL="Real",VLABEL="Imaginary")
   do l = 1, (%NARMA-ARNumb)
      grtext(x=iMAroot_real(l),y=iMAroot_imag(l),BOLD,BOX) "iMA"
   end do l
spgraph(done)
Run the procedure, e.g.:

Code: Select all

@ACBJCR(const=1,ARNumb=2,MANumb=1,coeffs=%BETA) series
The above runs fine; however, on the first run of the main program, the left-hand AR plot does not draw the vertical and horizontal grid lines at zero (the AR plot does on the second run). Maybe it's OS-related: I am on a 2012 MacBook Pro running macOS Mojave 10.14.6?

To make the procedure generic for use with 'all' the BOXJENK options, how do I handle the following:
SAR, SMA, REGRESSORS? Are there any others?

TomDoan wrote: If applied to a self-contained ARIMA model, yes. (Stationary for the AR polynomial, invertible for the MA polynomial).
What does "self-contained" mean?
TomDoan
Posts: 7702
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Post by TomDoan

You should use the %EQNLAGPOLY function to pull the polynomials out:

https://estima.com/ratshelp/polynomialfunctions.html

* example ARMA(2,2) with AR coefficients 1.3, -.6 and MA coefficients .3, .2
equation(ma=2,ar=2,noconst,coeffs=||1.3,-.6,.3,.2||) ydef y
* lag polynomial on the dependent variable = the overall AR polynomial
compute ar=%eqnlagpoly(ydef,y)
* lag polynomial on %MVGAVGE (the moving average errors) = the overall MA polynomial
compute ma=%eqnlagpoly(ydef,%mvgavge)
* complex roots of each polynomial
compute arroots=%polycxroots(ar)
compute maroots=%polycxroots(ma)

The EQUATION that gets defined by BOXJENK multiplies out all the polynomials involved, so the lag polynomials for the dependent variable and %MVGAVGE give you the overall AR and MA polynomials. See the description of DEFINE=...

https://estima.com/ratshelp/boxjenkinstruction.html
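
For instance, a minimal sketch continuing from the code above (INVR is just a scratch variable; %CABS, %REAL and %IMAG are the complex modulus and component functions):

* The roots of the lag polynomials must lie outside the unit circle
* (stationary AR, invertible MA), so the inverse roots lie inside it.
do i=1,%rows(arroots)
   compute invr=%cmplx(1.0,0.0)/arroots(i)
   disp "inverse AR root" %real(invr) %imag(invr) "modulus" %cabs(invr)
end do i
do i=1,%rows(maroots)
   compute invr=%cmplx(1.0,0.0)/maroots(i)
   disp "inverse MA root" %real(invr) %imag(invr) "modulus" %cabs(invr)
end do i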

"Self-contained" means that there are no outside inputs such as regressors or intervention variables.
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

To take into account inverse characteristic roots which lie outside the complex unit circle, make the grid larger.

Code: Select all

* Plot Inverse Characteristic Roots within Complex Unit Circle
* https://estima.com/ratshelp/index.html?gcontourinstruction.html

@gridseries(from=-2.0,to=2.0,n=200) z1grid
@gridseries(from=-2.0,to=2.0,n=200) z2grid

dec rect f(201,201)
ewise f(i,j) = z1grid(i)^2 + z2grid(j)^2

* a single contour at 1.0 traces the unit circle
dec vect contours(1)
ewise contours(i)=1.0*i

spgraph(hfi=2,vfi=1)
gcontour(f=f,x=z2grid,y=z1grid,number=1,contours=contours,$
         header="Inverse characteristic AR Roots",Hlabel="Real",Vlabel="Imaginary")
   do k = 1, (%NARMA-MANumb)
      grtext(x=iARroot_real(k),y=iARroot_imag(k),BOLD,BOX,SIZE=9) "iAR"
   end do k
gcontour(f=f,x=z2grid,y=z1grid,number=1,contours=contours,$
         header="Inverse characteristic MA Roots",HLABEL="Real",VLABEL="Imaginary")
   do l = 1, (%NARMA-ARNumb)
      grtext(x=iMAroot_real(l),y=iMAroot_imag(l),BOLD,BOX,SIZE=9) "iMA"
   end do l
spgraph(done)
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

Hi Tom,

A few questions:

1)
A simple example using Engle and Granger:

Code: Select all

* LHS first-differences = c + RHS in first-differences + ECT's

*===============================
linreg(define=eq1) x start+1 begin resids1
# constant y

system(model=ectmodel1) 1 to 2
variables dx dy
lags 1 to 2
det constant resids1{1}
end(system)
estimate(noftests) start+1 begin
* free parameters per equation: 2 variables x 2 lags + constant + ECT = 6; times 2 equations
compute aic =  %nobs*%logdet + 2*(((2*2)+1+1)*2)
compute sbc =  %nobs*%logdet + (((2*2)+1+1)*2)*log(%nobs)
dis 'aic = '  aic 'sbc = ' sbc

Equivalently, as:

Code: Select all

* LHS in levels = c + RHS in first-differences + ECT's

*===============================
linreg(define=eq1) x start+1 begin resids1
# constant y

system(model=ectmodel2)
variables x y
lags 1 2 3; * 3 lagged levels are equivalent to 2 lagged changes
det constant
ect eq1
end(system)
estimate(print,resids=resids,noftests) start+2 begin
* free parameters per equation: 2 variables x 2 lags + constant + ECT = 6; times 2 equations
compute aic =  %nobs*%logdet + 2*(((2*2)+1+1)*2)
compute sbc =  %nobs*%logdet + (((2*2)+1+1)*2)*log(%nobs)
dis 'aic = '  aic 'sbc = ' sbc
ectmodel1 and ectmodel2 have the same AIC & SBC, and the same coefficients for all variables, but the ECTs have different signs in each model, and, importantly, the LHSs are different.

So how do I interpret the specifications? Is it
LHS is the first-difference = c + RHS in first-differences + ECT's
Or
LHS is the level = c + RHS in first-differences + ECT's ?


2)
I am using Johansen in all VECM analysis, as here: https://www.estima.com/ratshelp/index.h ... dure.html , and the LHS of the Error Correction Model is in levels.

Enders (2014), AETS, says the Johansen procedure is nothing more than a multivariate generalization of the Dickey–Fuller test. But isn't calculating the ECTs similar to calculating principal components (which happen to be stationary)?


3)
Is it fair to say 'rival' or comparison models for a VAR(p) in levels are:
(i) a VAR(p-1) in first-differences
(ii) a VECM(p-1, max (p-1) cointegrating vectors), as a VECM is just a VAR in first-differences with a maximum of (p-1) cointegrating vectors.

Or would a fairer comparison be: a VAR(4) in levels, versus a VAR(4) in first-differences, versus a VECM(4, max (4-1) cointegrating vectors)?


4)
In ECT.RPF, User's Guide, Section 7.8, the unit root tests say all 3 series (ftbs3, ftb12, fcm7) are I(1). Graphically, all 3 series show no strong trend. So the series are non-stationary as per the ADF test (even if I include a TREND term in @dfunit(lags=6,trend)), despite visually having no strong trend. How can that be?

Also with,

@johmle(lags=6,det=rc,cv=cvector)

what effect does DET=RC (restricted constant) have, compared to DET=CONSTANT?


5)
When generating multi-step-ahead forecasts from a VAR(p) in first-differences, I get wider fans than from a VAR(p) in levels. Is that to be expected?

I am aggregating the forecast SEs as per here: https://estima.com/forum/viewtopic.php ... 4&start=15, for a VAR(p) in first-differences, but there was no resolution as to why these

* forecast SEs of the DIFFERENCE series
prin / s(1) s(2) s(3) s(4)
prin / COREERRORS(1) COREERRORS(2) COREERRORS(3) COREERRORS(4)

were not the same, when they should be.

To aggregate back to levels, I am using

* forecast SEs of the ORIGINAL series in LEVELS
prin / COREERRORS(5) COREERRORS(6) COREERRORS(7) COREERRORS(8)

Also, if I am modelling the term structure of interest rates for a single country, should I log-transform all series? That would restrict the fans in all models (VAR in levels, VAR in first-differences, and VECM) to being greater than zero, since one could say negative interest rates are unlikely, unless it's Japan.

Further, how much emphasis should be placed on interpreting the magnitude and sign of the coefficients in all models, especially the ECT (speed of adjustment) parameters?


thanks,
Amarjit
TomDoan
Posts: 7702
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Post by TomDoan

ac_1 wrote: Thu Jan 25, 2024 12:56 pm Hi Tom,

A few questions:

1)
A simple example using Engle and Granger:

(ectmodel1 and ectmodel2 code as above.)
ectmodel1 and ectmodel2 have the same AIC & SBC, and the same coefficients for all variables, but the ECTs have different signs in each model, and, importantly, the LHSs are different.

So how do I interpret the specifications? Is it
LHS is the first-difference = c + RHS in first-differences + ECT's
Or
LHS is the level = c + RHS in first-differences + ECT's ?
The sign difference is because that's the sign convention on the ECT instruction. The LHSs are different because the VECM understands that this is a model for the levels, not the differences.

You don't interpret the regression coefficients.
ac_1 wrote: Thu Jan 25, 2024 12:56 pm 2)
I am using Johansen in all VECM analysis, as here: https://www.estima.com/ratshelp/index.h ... dure.html , and the LHS of the Error Correction Model is in levels.

Enders (2014), AETS, says the Johansen procedure is nothing more than a multivariate generalization of the Dickey–Fuller test. But isn't calculating the ECTs similar to calculating principal components (which happen to be stationary)?
It's really not either of those. It's a special case of a reduced rank regression. The "principal components" (at least the first few) are actually more closely related to the common trends (since they dominate the long-run variance), but just a straight analysis of the PC's doesn't work correctly because it ignores the underlying dynamic model.
ac_1 wrote: Thu Jan 25, 2024 12:56 pm 3)
Is it fair to say 'rival' or comparison models for a VAR(p) in levels are:
(i) a VAR(p-1) in first-differences
(ii) a VECM(p-1, max (p-1) cointegrating vectors), as a VECM is just a VAR in first-differences with a maximum of (p-1) cointegrating vectors.

Or would a fairer comparison be: a VAR(4) in levels, versus a VAR(4) in first-differences, versus a VECM(4, max (4-1) cointegrating vectors)?
The VAR(k) and VAR(k-1) on differences are just special cases of a VECM at the extreme ends (k cointegrating vectors and 0 cointegrating vectors).
ac_1 wrote: Thu Jan 25, 2024 12:56 pm 4)
In ECT.RPF, User's Guide, Section 7.8, the unit root tests say all 3 series (ftbs3, ftb12, fcm7) are I(1). Graphically, all 3 series show no strong trend. So the series are non-stationary as per the ADF test (even if I include a TREND term in @dfunit(lags=6,trend)), despite visually having no strong trend. How can that be?
The simplest case of a unit root series is a random walk, which has no trend.
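
For instance, a quick hedged illustration (RW and EPS are arbitrary names, not from the thread):

* simulate a driftless random walk by cumulating N(0,1) shocks
set eps 1 500 = %ran(1.0)
acc eps 1 500 rw
* @DFUNIT should fail to reject the unit root, with or without a
* trend term, even though RW has no trend
@dfunit(lags=6) rw
@dfunit(lags=6,trend) rw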

ac_1 wrote: Thu Jan 25, 2024 12:56 pm Also with,

@johmle(lags=6,det=rc,cv=cvector)

what effect does DET=RC (restricted constant) have, compared to DET=CONSTANT?
DET=CONSTANT allows for a linear trend in the data; DET=RC does not.
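
For instance, a hedged sketch using the series from question 4 (CV1 and CV2 are arbitrary output names):

* DET=CONSTANT: unrestricted constant, so the data can have linear trends
@johmle(lags=6,det=constant,cv=cv1)
# ftbs3 ftb12 fcm7
* DET=RC: the constant is restricted to the cointegrating relation,
* ruling out linear trends in the data
@johmle(lags=6,det=rc,cv=cv2)
# ftbs3 ftb12 fcm7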
ac_1 wrote: Thu Jan 25, 2024 12:56 pm 5)
When generating multi-step-ahead forecasts from a VAR(p) in first-differences, I get wider fans than from a VAR(p) in levels. Is that to be expected?
If the VAR in differences is misspecified, that wouldn't be unexpected.
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

TomDoan wrote: Wed Jan 31, 2024 3:20 pm
ac_1 wrote: Thu Jan 25, 2024 12:56 pm (Q1: the Engle-Granger ectmodel1/ectmodel2 example above.) So how do I interpret the specifications?
The sign difference is because that's the sign convention on the ECT instruction. The LHS's are different because the VECM understands that this is a model for the levels, not the differences.

You don't interpret the regression coefficients.
ectmodel1: I can forecast the difference, but only one step ahead. If I add back to the level, I get the same forecast results as ectmodel2. How do I forecast ectmodel1 multiple steps ahead, as I require the last residual value from the LINREG moved forward nsteps?

ectmodel2: I can forecast multi-step ahead, and the means and SEs are in levels.

The regression coefficients:
- In both models, ectmodel1 and ectmodel2, they are for the differences.
- And there's no interpretation, as VARs are like ARIMAs in just picking up the autocorrelation structure in the data. The main points being: in-sample, check the residuals visually for randomness, no autocorrelation, and normality, and the inverse characteristic roots for stability; and backtest for out-of-sample performance.
Correct?


TomDoan wrote: Wed Jan 31, 2024 3:20 pm
ac_1 wrote: (Q2 as above.)
It's really not either of those. It's a special case of a reduced rank regression. The "principal components" (at least the first few) are actually more closely related to the common trends (since they dominate the long-run variance), but just a straight analysis of the PC's doesn't work correctly because it ignores the underlying dynamic model.
Please expand: what does "ignores the underlying dynamic model" mean?


TomDoan wrote: Wed Jan 31, 2024 3:20 pm
ac_1 wrote: (Q3 as above.)
The VAR(k) and VAR(k-1) on differences are just special cases of a VECM at the extreme ends (k cointegrating vectors and 0 cointegrating vectors).
The RATS manual, UG-209, advises modelling VARs in levels, not differences. I agree.

Regarding VECMs, UG-247 says if the coefficient matrix PI is full rank, there is nothing to be gained by writing the system in form (31) rather than (30): the two are equivalent equations, i.e. I do not need to model the system as a VECM. How do I calculate the rank of the PI matrix to check whether it's full rank?

Are you saying there are no rival/comparison models for a VAR(p)? Is there an equivalent random-walk benchmark for multi-equation systems?
TomDoan
Posts: 7702
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Post by TomDoan

ac_1 wrote: Fri Feb 02, 2024 4:54 am ectmodel1: I can forecast the difference, but only one step ahead. If I add back to the level, I get the same forecast results as ectmodel2. How do I forecast ectmodel1 multiple steps ahead, as I require the last residual value from the LINREG moved forward nsteps?
You don't. That's the whole point of the VECM structure.
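
For instance, a minimal sketch (FSTART, FEND and the result names are placeholders, not from the thread):

* Multi-step forecasts come straight off the VECM in levels; the EC
* terms are generated internally by the model, so nothing from the
* LINREG needs to be carried forward.
forecast(model=ectmodel2,results=fore,stderrs=sefore,from=fstart,to=fend)
print fstart fend fore(1) fore(2)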
ac_1 wrote: Fri Feb 02, 2024 4:54 am ectmodel2: I can forecast multi-step ahead, and the means and SEs are in levels.

The regression coefficients:
- In both models, ectmodel1 and ectmodel2, they are for the differences.
- And there's no interpretation, as VARs are like ARIMAs in just picking up the autocorrelation structure in the data. The main points being: in-sample, check the residuals visually for randomness, no autocorrelation, and normality, and the inverse characteristic roots for stability; and backtest for out-of-sample performance.
Correct?
That's correct.

TomDoan wrote: Wed Jan 31, 2024 3:20 pm
ac_1 wrote: (Q2 as above.)
It's really not either of those. It's a special case of a reduced rank regression. The "principal components" (at least the first few) are actually more closely related to the common trends (since they dominate the long-run variance), but just a straight analysis of the PC's doesn't work correctly because it ignores the underlying dynamic model.
ac_1 wrote: Fri Feb 02, 2024 4:54 am Please expand: what does "ignores the underlying dynamic model" mean?
PC analysis is a model-free analysis of the correlations/covariance of data. It doesn't take into account the fact that the dynamic model has a certain number of common stochastic trends.
ac_1 wrote: Fri Feb 02, 2024 4:54 am The RATS manual, UG-209, advises modelling VARs in levels, not differences. I agree.

Regarding VECMs, UG-247 says if the coefficient matrix PI is full rank, there is nothing to be gained by writing the system in form (31) rather than (30): the two are equivalent equations, i.e. I do not need to model the system as a VECM. How do I calculate the rank of the PI matrix to check whether it's full rank?
That's the whole point of the Johansen method. PI will always be "full rank" empirically. The question is whether it is significantly different from a reduced rank matrix.
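
In practice, the trace test printed by @JOHMLE is what assesses that. A hedged sketch, reusing the call and series names from the earlier example:

* The trace test examines rank(PI) <= r for r = 0,1,...,n-1 against
* larger rank; the chosen rank is the first r you fail to reject.
* An estimated PI itself will essentially never have eigenvalues
* of exactly zero.
@johmle(lags=6,det=rc,cv=cvector)
# ftbs3 ftb12 fcm7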
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

TomDoan wrote: Fri Feb 02, 2024 9:05 am
ac_1 wrote: (on UG-247 and the rank of PI, as above)
That's the whole point of the Johansen method. PI will always be "full rank" empirically. The question is whether it is significantly different from a reduced rank matrix.
Naively, I was hoping for a simple %RANK function returning the number of linearly independent rows (columns) of the PI matrix.

But yes, it's Johansen's method and, as you say, "a special case of a reduced rank regression", and more involved.

Therefore, I will re-read Enders(2014) AETS, Chapter 6.
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

I have a better understanding than I had prior to re-reading.

Questions:

(A)

(i)
How do I plot the inverse roots from

compute companion=%modelcompanion(varmodel)
eigen(cvalues=cv) companion
disp cv; * =(2 * lags * number of variables)

in the unit circle?

Would that be similar to the previous approach for ARIMA models, or is there an easier way?


(ii) Also, why if I use e.g.

lags 1 2 3 6 12

in SYSTEM

are there 'in-between' inverse roots, i.e. the full 1-to-12 roots rather than just roots at lags 1, 2, 3, 6 and 12?


(B)

It is reasonably straightforward to generate bootstrap forecasts from a VAR.

How do I bootstrap multi-step and one-step ahead forecasts from Johansen's VECM?
TomDoan
Posts: 7702
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Post by TomDoan

ac_1 wrote: Fri Feb 23, 2024 2:54 am I have a better understanding than I had prior to re-reading.

Questions:

(A)

(i)
How do I plot the inverse roots from

compute companion=%modelcompanion(varmodel)
eigen(cvalues=cv) companion
disp cv; * =(2 * lags * number of variables)

in the unit circle?

Would that be similar to the previous approach for ARIMA models, or is there an easier way?
There are actually lags x number of variables complex numbers. And yes, it would be similar (effectively identical) to the process for doing univariate models, except there are no MA roots.
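
Concretely, a hedged sketch along the lines of your snippet (VARMODEL as in the quoted code):

compute companion=%modelcompanion(varmodel)
eigen(cvalues=cv) companion
* the companion eigenvalues are the inverse characteristic roots:
* lags x number of variables of them, counting multiplicities;
* stability requires every modulus to be below 1
do i=1,%rows(cv)
   disp "inverse root" %real(cv(i)) %imag(cv(i)) "modulus" %cabs(cv(i))
end do i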

ac_1 wrote: Fri Feb 23, 2024 2:54 am (ii) Also, why if I use e.g.

lags 1 2 3 6 12

in SYSTEM

are there 'in-between' inverse roots, i.e. the full 1-to-12 roots rather than just roots at lags 1, 2, 3, 6 and 12?
You asked the same question about univariate models, and the answer is the same. There will be lags x number of variables roots counting multiplicities, regardless of any zero constraints on lag coefficients.

ac_1 wrote: Fri Feb 23, 2024 2:54 am (B)

It is reasonably straightforward to generate bootstrap forecasts from a VAR.

How do I bootstrap multi-step and one-step ahead forecasts from Johansen's VECM?
That depends upon how deeply you intend to bootstrap. If you are bootstrapping out-of-sample residuals, the process is exactly the same.
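
A hedged sketch of that, with NVAR, NDRAWS, FSTART and FEND as placeholder names and RESIDS the VECT[SERIES] saved by ESTIMATE(RESIDS=...):

dec vect[series] shocks(nvar)
do draw=1,ndraws
   * draw random entry numbers from the estimation range
   boot entries fstart fend %regstart() %regend()
   * build the bootstrapped shock paths from the residuals
   do i=1,nvar
      set shocks(i) fstart fend = resids(i)(entries(t))
   end do i
   * push the resampled shocks through the model
   forecast(paths,model=ectmodel2,results=fore,from=fstart,to=fend)
   # shocks
   * ...accumulate FORE across draws for fan charts...
end do draw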
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

TomDoan wrote: Fri Feb 23, 2024 10:01 am
ac_1 wrote: (A)(i) as above: how do I plot the inverse roots from the companion matrix in the unit circle, and would that be similar to the ARIMA case?
There are actually lags x number of variables complex numbers. And yes, it would be similar (effectively identical) to the process for doing univariate models, except there are no MA roots.

I am having problems including all the series names in an SPGRAPH title. Within a procedure I have something like

Code: Select all

if %defined(title)
   compute ltitle=title
else
   compute ltitle="inverse roots of the characteristic polynomial"

declare vector[string] slabels(%modelsize(mtsmodel)); * mtsmodel is declared as type model mtsmodel
do i = 1, %modelsize(mtsmodel)
   compute slabels(i) = %modellabel(mtsmodel,i)
end do
disp slabels

spgraph(hfields=1,vfields=1,header=slabels+"\\"+ltitle,WIDTH=10.0,HEIGHT=5.0)
and I'd like to include the VECTOR of STRINGS (slabels) on the line above ltitle, but I get an error:

## SX22. Expected Type STRING, Got LIST[STRING] Instead
>>>>labels+"\\"+ltitle,<<<<

TomDoan wrote: Fri Feb 23, 2024 10:01 am
ac_1 wrote: (B) as above: how do I bootstrap forecasts from Johansen's VECM?
That depends upon how deeply you intend to bootstrap. If you are bootstrapping out-of-sample residuals, the process is exactly the same.

Yes, works well :)
TomDoan
Posts: 7702
Joined: Wed Nov 01, 2006 4:36 pm

Re: Recursive VECM - Johansen ML technique

Post by TomDoan

Even if that "worked", it wouldn't do what you want, since it would make a blob of the labels (GDPRATEMONEY) which is what string concatenation does. Presumably you want label1,label2,... which requires inserting "," between pairs except at the end. You would want to build up a string while looping over the labels.

However, I assume you aren't picking random series for the model, so why don't you just input the label you want instead of trying to figure out how to manufacture a header string?
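
For instance, a minimal sketch of that loop, reusing the MTSMODEL and LTITLE names from your procedure (ALLNAMES is an arbitrary name):

* start from the first label, then append ","+label for the rest, so
* the commas land between pairs and not at the end
dec string allnames
compute allnames=%modellabel(mtsmodel,1)
do i=2,%modelsize(mtsmodel)
   compute allnames=allnames+","+%modellabel(mtsmodel,i)
end do i
spgraph(hfields=1,vfields=1,header=allnames+"\\"+ltitle,width=10.0,height=5.0)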
ac_1
Posts: 415
Joined: Thu Apr 15, 2010 6:30 am

Re: Recursive VECM - Johansen ML technique

Post by ac_1

TomDoan wrote: Mon Feb 26, 2024 8:02 am Even if that "worked", it wouldn't do what you want, since it would make a blob of the labels (GDPRATEMONEY), which is what string concatenation does. Presumably you want label1,label2,..., which requires inserting "," between pairs except at the end. You would want to build up a string while looping over the labels.

However, I assume you aren't picking random series for the model, so why don't you just input the label you want instead of trying to figure out how to manufacture a header string?
Actually, it's a better idea to have mtsmodel as a string (i.e. the name of the model above the title); this is an option in the procedure.

And although they are important, including all the dependent series names (e.g. as a footer or elsewhere) on the SCATTER plot may be somewhat messy for a large VAR or long series names.

As an example, using lutkp077.RPF, I've plotted the inverse characteristic roots on the complex unit circle (attached). Correct?


Also, does it make sense to calculate and plot inverse characteristic roots for a VECM?
Attachments
lutkp077.rgf (8.54 KiB)