## VECM-GARCH Model

Discussions of ARCH, GARCH, and related models

### Re: VECM-GARCH Model

Hello Sir,

Thank you for clarifying this. As we know, stock market data is volatile, which is why we normalize it, but that is not the case for the bond interest rate. So my question is: should I normalize the bond interest rate, or should I simply take the log of the interest rate rather than multiplying it by 100 as in - set logpgsec = 100.0*(log(PGsec/PGsec(1))) - since the interest rate is already expressed in percent? That is, should I set it like this - set logPGSec = log(PGSec)?

Here I have one more query - what is the (1) in this code, if it is not the previous value? - set logpgsec = 100.0*(log(PGsec/PGsec(1)))
faaequah13

Posts: 36
Joined: Wed Jul 01, 2020 10:33 am

### Re: VECM-GARCH Model

faaequah13 wrote:Hello Sir,

Thank you for clarifying this. As we know, stock market data is volatile, which is why we normalize it, but that is not the case for the bond interest rate. So my question is: should I normalize the bond interest rate, or should I simply take the log of the interest rate rather than multiplying it by 100 as in - set logpgsec = 100.0*(log(PGsec/PGsec(1))) - since the interest rate is already expressed in percent? That is, should I set it like this - set logPGSec = log(PGSec)?

"Normalizing" in the sense of dividing through by some value to change the scale isn't done because of "volatility", as it has no effect on that (variance relative to size stays the same). Stock prices and the like are done in logs because in log form it's easier to compute relative returns to different portfolios, and returns (not absolute prices) are what drives investment decisions. Multiplying that by 100 is done to make the values presented more natural-looking (.4 is easier to read than .00004, which is what you can get for a variance or a variance parameter). It also has the side effect, for numerical purposes, of making the parameters more easily handled, for basically the same reason (they are generally within a few orders of magnitude of 1). But barring some numerical problem with a parameter getting so small that it's hard to distinguish from zero, you would get the same results whether or not you apply the 100x scale.
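The scale-invariance point above can be sketched numerically. This is a Python illustration (the thread itself uses RATS), and the price series is made up, not data from this discussion:

```python
import math
import statistics

# Hypothetical price series (illustration only, not data from this thread)
prices = [100.0, 101.5, 99.8, 102.3, 103.1]

# Log returns -- the quantity that actually drives investment decisions
log_ret = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# Multiplying by 100 merely rescales to "percent" units
scaled = [100.0 * r for r in log_ret]

# Any scale-free quantity (here std/|mean|) is unchanged, so estimates
# are the same apart from the scale of the parameters themselves.
ratio_raw = statistics.stdev(log_ret) / abs(statistics.mean(log_ret))
ratio_scaled = statistics.stdev(scaled) / abs(statistics.mean(scaled))
assert abs(ratio_raw - ratio_scaled) < 1e-9

# ...but the variance becomes far easier to read: roughly 2.4e-4 vs 2.4
print(statistics.pvariance(log_ret), statistics.pvariance(scaled))
```

The variance scales by 100 squared, which is exactly why a raw variance parameter near .00004 turns into a more readable .4 after the 100x rescaling.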

If you do bonds as yields, you've already converted to returns. If you are doing a multivariate analysis with bond yields and stock prices, you would keep the yields as is. OTOH, if you were doing bonds as bond prices, you would do those as log(price) (or 100 log(price)) for the same reason as you do for stocks.

faaequah13 wrote:Here I have one more query - what is the (1) in this code, if it is not the previous value? - set logpgsec = 100.0*(log(PGsec/PGsec(1)))

It's the value of the 1st entry of the series. If you look at the logpgsec data, you'll see that it starts at 0 since at entry 1, you're taking log(PGsec(1)/PGSec(1)). There is no particularly compelling statistical reason for doing that---all it does is shift the log data up or down to start at 0, and a data shift is fully handled by an equivalent shift in the intercept in the mean model. It just makes one particular graph easier to read.
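What dividing by entry 1 does can be sketched in Python (the numbers below are hypothetical stand-ins for PGsec, purely to illustrate the RATS expression):

```python
import math

# Hypothetical values standing in for the PGsec series (illustration only)
pgsec = [54.2, 55.0, 53.8, 56.1]

# Python analogue of: set logpgsec = 100.0*(log(PGsec/PGsec(1)))
# PGsec(1) is entry 1 of the series, so the log series starts at 0.
logpgsec = [100.0 * math.log(p / pgsec[0]) for p in pgsec]
assert logpgsec[0] == 0.0  # 100*log(PGsec(1)/PGsec(1)) = 100*log(1) = 0

# The division is a pure level shift in logs: entry-to-entry movements
# match those of plain 100*log(PGsec), so an intercept in the mean
# model fully absorbs the difference.
plain = [100.0 * math.log(p) for p in pgsec]
shift = plain[0] - logpgsec[0]
assert all(abs((a + shift) - b) < 1e-9 for a, b in zip(logpgsec, plain))
```

As noted above, the only practical effect is cosmetic: the transformed series starts at 0, which can make a graph easier to read.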
TomDoan

Posts: 7388
Joined: Wed Nov 01, 2006 5:36 pm

### Re: VECM-GARCH Model

Dear Sir,

I have attached the results of the Johansen cointegration test and the GARCH volatility model.
In the Johansen test, the trace statistic is greater than the critical value at R=0, which indicates cointegration, but the lambda-max statistic is smaller than the critical value. Please tell me whether I should accept the series as cointegrated.
Second, when I run the BEKK model, the ARCH diagnostics are not significant; will this make my analysis wrong?
Third, when I run the CC model, it shows "LOW ITERATION COUNT ON BFGS MAY LEAD TO POOR ESTIMATES FOR STANDARD ERRORS".
I have attached the results of my analysis; please check them.

thank you
Attachments: result today.docx
faaequah13

Posts: 36
Joined: Wed Jul 01, 2020 10:33 am

### Re: VECM-GARCH Model

1. Have you tested for unit roots first? It's possible that your logIR variable is borderline stationary/unit root. If it is, all the Johansen test would be uncovering is that same result. (If one of the variables is stationary, the cointegrating rank has to be at least 1). The cointegrating vector depends upon scale of the variables, but it looks to be mostly weighted on logIR rather than being a non-trivial linear combination.

2. Even if you seem to have three integrated series, given the amount of data, that's a very weak result. The result in https://www.estima.com/ratshelp/diagnos ... mples.html applies here---that difference in logL's is pretty small given the number of data points.

A CC model probably doesn't work. (It often doesn't unless you have very similar data series, which you don't.) Note that the log likelihood is much, much worse than what you get with the BEKK model. VARIANCES=VARMA also often doesn't work well.
TomDoan

Posts: 7388
Joined: Wed Nov 01, 2006 5:36 pm

### Re: VECM-GARCH Model

Dear Sir,

Yes, I have checked for unit roots; logIR is stationary in levels at the 10% significance level. I am attaching my results again; please check one more time.

thank you
Attachments: 2nd june faa.docx
faaequah13

Posts: 36
Joined: Wed Jul 01, 2020 10:33 am

### Re: VECM-GARCH Model

So the series aren't cointegrated---you could safely run a VAR with the stock indices done in returns rather than log levels.

Please do not post 500 lines of output and ask me to "check" it. If you have a question about it, let me know what it is. At any rate, you have a problem at the end where you are referencing (1,2,3) of a two dimensional object. I'm not sure what you are trying to do, but that won't work.
TomDoan

Posts: 7388
Joined: Wed Nov 01, 2006 5:36 pm

### Re: VECM-GARCH Model

I am sorry, Sir. Actually, I want to examine the transmission of shocks to the stock indices arising from changes in the exchange rate and the interest rate, that is, what happens to the returns of the indices due to these two markets.
faaequah13

Posts: 36
Joined: Wed Jul 01, 2020 10:33 am

### Re: VECM-GARCH Model

Mean to mean? Variance to variance?
TomDoan

Posts: 7388
Joined: Wed Nov 01, 2006 5:36 pm
