NearVAR - Identification and Diagnostic Tests

Questions and discussions on Vector Autoregressions

Post by ditadal89 »

Dear Tom or RATS Team,


I am totally new to RATS, but I really need to replicate two papers for my thesis (Cushman and Zha, 1997, and Kim and Roubini, 2000).
I want to use an SVAR with exogenous variables (a nearVAR).
I have already read related sources such as this forum, the User's Guide, and books, and I have emailed one of the authors.
However, what I have is still incomplete, or maybe I am missing something in what I found. Besides, the author I emailed was using an old version of RATS.
I also looked at Tao Zha's page (http://www.tzha.net/code). He uses an SVAR with block exogeneity in his paper "Identifying Monetary Policy in a Small Open Economy under Flexible Exchange Rates" (Cushman and Zha, 1997), but he also used an old version of RATS.

Regarding my questions, I have attached one of them here. The attached code shows a modified version of the model in Kim and Roubini (2000), "Exchange rate anomalies in the
industrial countries: A solution with a structural VAR approach". First I set up the model as stated in the paper, then I modified it to look at the monetary policy shock on the Indonesian stock market (that is what I have attached here).
Is it valid to do this in the identification?
In this case, both the exchange rate and the stock market are treated as forward-looking variables, so they react to all variables in the model (the first two rows of my identification matrix A).
The results look good; the significance level of the overidentification test is high.

Since I frequently use EViews, I know there are diagnostic tests to find out whether a VAR model is good or not, such as a VAR stability test and a serial correlation test. I have already browsed this forum for the stability test based on the eigenvalues (I also put it in my code in the attachment):

Code:

* Returns the largest root (in modulus) of a model's companion matrix
function %ModelLargestRoot model
type model model
*
local vect[complex] cv
* EIGEN returns the eigenvalues sorted by modulus, so cv(1) is the largest root
eigen(cvalues=cv) %modelcompanion(model)
compute %ModelLargestRoot=%cabs(cv(1))
end
*
* Largest companion-matrix root of the nearVAR model (called INANEARVAR in the attached code)
dis %ModelLargestRoot(inanearvar)
I got only one value, 0.99-something, from which I assumed my nearVAR model is OK. Is it right that there is only one value? Is my interpretation correct that, because this single value is less than one, my nearVAR model is stable?
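
For comparison, here is a sketch that lists the modulus of every eigenvalue of the companion matrix rather than just the largest one (it assumes the same model name, INANEARVAR, as above):

Code:

* Display the modulus of each eigenvalue of the companion matrix
* (INANEARVAR is the model defined earlier in the attached code)
dec vect[complex] cvall
eigen(cvalues=cvall) %modelcompanion(inanearvar)
do i=1,%rows(cvall)
   dis i %cabs(cvall(i))
end do i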

I also found these other pieces of code related to EIGEN:

Code:

eigen(scale) %sigma * eigen
dis eigen
and

Code:

dec rec co
compute co=%modelcompanion(inanearvar)
eigen co eigenvalue
dis eigenvalue
I really need guidance on this: which of these is actually the right stability test for a nearVAR, and can I use it to conclude that my model is good?

Besides, how can I run an LM serial correlation test, or any serial correlation test that is suitable for my nearVAR model (an SVAR with exogenous variables)?

Some of the sources I read say that this needs to be calculated.

Do you have any suggestions for VAR diagnostics, pre- or post-estimation?
For pre-estimation I only know about choosing the VAR lag length; I use both RATS and EViews for that, and the two together are enough (a sketch of the lag-length step is below).
For post-estimation I know of the VAR stability (eigenvalue) check and a serial correlation test.
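
Here is a minimal sketch of the lag-length step in RATS using the @VARLagSelect procedure (the variable names are just placeholders for my series, and AIC is only one of the possible criteria):

Code:

* Lag-length selection over up to 8 lags using AIC
* (y1, y2, y3 are placeholders for the actual series in the model)
@varlagselect(lags=8,crit=aic)
# y1 y2 y3
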
Thank you in advance. I am looking forward to your reply and would be very happy to get any comments or suggestions on the code.


Regards,
Attachments:
KRJSX.txt (the code in a TXT file)
ModifyKimandRoubini.RPF (the code)
Data.xls (the data)

Re: NearVAR - Identification and Diagnostic Tests

Post by TomDoan »

I'm not sure about Kim and Roubini, but Cushman and Zha are quite explicit about the fact that they don't assume stationarity, so a test for the maximum root of the companion matrix isn't really useful. You can do an @MVQSTAT test for remaining multivariate serial correlation. @CVSTABTEST can be used to test for stability of the covariance matrix.
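
For instance, a minimal sketch of the @MVQSTAT call (this assumes the near VAR is estimated with the RESIDS option saving the residuals into a VECT[SERIES] called RESIDS; the model name and the 12 lags are just placeholders):

Code:

* Estimate the near VAR, saving the residuals into RESIDS
estimate(model=nearvarmodel,resids=resids)
* Multivariate Q (portmanteau) test for remaining serial correlation
@mvqstat(lags=12)
# resids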

The Cushman and Zha RATS code, aside from being rather old, is also using an algorithm which I wouldn't recommend any longer---they do importance sampling on the large parameter space (28) rather than a more modern treatment with random walk Metropolis as is done in the MONTENEARSVAR program.

Re: NearVAR - Identification and Diagnostic Tests

Post by ditadal89 »

Dear Tom,


Thanks a lot for your reply.
I have read your code for Cushman and Zha (1997): http://www.estima.com/forum/viewtopic.php?f=8&t=1857
I should have asked you earlier regarding this.

I have some questions:
1. So, should I not use a near-VAR for Cushman and Zha?
2. Can I follow your code directly, with some modifications for my own data?
3. Related to compute c=.9*inv(%decomp(y2m2)): what is this actually doing?
4. Related to the comment "* These are the guess values used in the authors' program": can I just copy these guess values, or should I make up my own?
5. Related to compute fcvx=.25*%decomp(%xx): over what range can I change the .25?
6. Related to "Covariance Model-Likelihood - Estimation by BFGS": the significance level is 0.0000000, which means the overidentifying restrictions (OIR) are rejected, doesn't it? And can I actually use a model whose OIR are rejected?
7. Related to @MVQSTAT: is H0 "no autocorrelation"? So if the significance level is more than 0.95, does that mean there is no autocorrelation at the 5% level?
8. Related to @MVQSTAT and @WestChoTest: I am not sure about non-constant variances; which one should I use? Is there any example for @WestChoTest?
9. Related to @CVSTABTEST: is H0 "no stability", or the other way around? I got an approximate p-value of 0.000. If the null is "no stability", does that mean I cannot proceed or draw inferences from this, as I understood from the User's Guide?
10. Related to the IRFs: is it bad when the estimated IRF line crosses outside the confidence bands?

Sorry for my long and silly questions. Really looking forward to your feedback.


Thanks and regards,

Re: NearVAR - Identification and Diagnostic Tests

Post by TomDoan »

ditadal89 wrote: Dear Tom,


Thanks a lot for your reply.
I have read your code for Cushman and Zha (1997): http://www.estima.com/forum/viewtopic.php?f=8&t=1857
I should have asked you earlier regarding this.
I just did it in the last couple of days.
ditadal89 wrote: I have some questions:
1. So, should I not use a near-VAR for Cushman and Zha?
No. That's certainly appropriate here. The excluded block is very, very insignificant.
ditadal89 wrote: 2. Can I follow your code directly, with some modifications for my own data?
Yes.
ditadal89 wrote: 3. Related to compute c=.9*inv(%decomp(y2m2)): what is this actually doing?
That's a "BFGS" adjustment. Without the .9, it would actually be spot on the optimized values. If you start at the optimum, the BFGS algorithm can't estimate the curvature. This backs it away slightly from there. Read the section on BFGS in the User's Guide if you need more.
ditadal89 wrote: 4. Related to the comment "* These are the guess values used in the authors' program": can I just copy these guess values, or should I make up my own?
They are rather strange guess values as they don't adapt at all to the data. Apparently, the model fit OK even with these, so they never saw a need for anything different. I would just warn you that they may not work as well with other data.
ditadal89 wrote: 5. Related to compute fcvx=.25*%decomp(%xx): over what range can I change the .25?
I believe I tried 1.0, .5, .35 before finally settling on .25. The others had too low an acceptance rate.
ditadal89 wrote: 6. Related to "Covariance Model-Likelihood - Estimation by BFGS": the significance level is 0.0000000, which means the overidentifying restrictions (OIR) are rejected, doesn't it? And can I actually use a model whose OIR are rejected?
They don't address that in the paper that I've seen.
ditadal89 wrote: 7. Related to @MVQSTAT: is H0 "no autocorrelation"? So if the significance level is more than 0.95, does that mean there is no autocorrelation at the 5% level?
Yes. H0 is no autocorrelation. The rejection values on the significance levels are low ones, not high ones. .95 is showing no sign at all of autocorrelation; it's numbers like .05 that are of concern.
ditadal89 wrote: 8. Related to @MVQSTAT and @WestChoTest: I am not sure about non-constant variances; which one should I use? Is there any example for @WestChoTest?
@WestChoTest is strictly univariate, so they really aren't comparable.
ditadal89 wrote: 9. Related to @CVSTABTEST: is H0 "no stability", or the other way around? I got an approximate p-value of 0.000. If the null is "no stability", does that mean I cannot proceed or draw inferences from this, as I understood from the User's Guide?
The null is stability. So an approximate p-value of .000 means that there is rather strong evidence of instability. What the source is for that isn't clear, since you could have a constant covariance matrix on a model which has an (unmodeled) break elsewhere. You certainly might want to see if you can figure out where the problem lies. However, note that the vast majority of published work doesn't test for model stability, so you're a step ahead of most by even looking at it.
ditadal89 wrote: 10. Related to the IRFs: is it bad when the estimated IRF line crosses outside the confidence bands?
A lot depends upon what you're graphing as the "estimate". If it's the IRF from the point estimates, that's actually not all that uncommon, which is why it's more common to use either the mean or median of the "cloud" of estimates. You should read Sims and Zha, Econometrica 1999 if you want more information.
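
For instance, assuming the median response and the 16% and 84% fractiles of the cloud have already been computed into series called MEDRESP, LOWER and UPPER over steps 1 to NSTEP (all placeholder names), a sketch of graphing those rather than the point-estimate IRF would be:

Code:

* Graph the median of the Monte Carlo cloud with fractile bands
* (MEDRESP, LOWER, UPPER and NSTEP are placeholders for already-computed results)
graph(nodates,header="Response with 16%-84% fractile bands") 3
# medresp 1 nstep
# lower 1 nstep
# upper 1 nstep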

Re: NearVAR - Identification and Diagnostic Tests

Post by ditadal89 »

Dear Tom,


Thanks for the answer.
I have decided to use the nearVAR approach.

I am not good at math and I am confused about this.
I want to apply a near-SVAR with short-run restrictions as in my first post.
If my matrix has two rows with no zeros in them, is that possible? Is my matrix still a singular matrix?

When I tried it with JMulTi, it did not work, but when I did it with RATS it worked, as in my first post.

Looking forward to your response.


Thanks in advance

Re: NearVAR - Identification and Diagnostic Tests

Post by TomDoan »

ditadal89 wrote: Dear Tom,

Thanks for the answer.
I have decided to use the nearVAR approach.

I am not good at math and I am confused about this.
I want to apply a near-SVAR with short-run restrictions as in my first post.
If my matrix has two rows with no zeros in them, is that possible? Is my matrix still a singular matrix?

When I tried it with JMulTi, it did not work, but when I did it with RATS it worked, as in my first post.

Looking forward to your response.


Thanks in advance
Whether it's a near SVAR or a full SVAR, you can't have a covariance matrix model with two solid rows of free parameters (other than normalization). It won't be identified. In order for a model like that to work, you would have to have some type of other restriction (2nd and 3rd coefficients equal or something like that).
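
As a purely hypothetical three-variable sketch of the kind of extra restriction I mean: rows 2 and 3 of the A matrix both load on everything, but the two cross terms share a single parameter, which supplies the additional restriction (whether a particular pattern like this is actually identified still has to be checked):

Code:

* Hypothetical 3-variable A matrix (unit diagonal): rows 2 and 3 are both "full",
* but the (2,3) and (3,2) cells share the parameter AXY as the extra restriction.
* %sigma is assumed to come from a prior ESTIMATE.
nonlin a21 a31 axy
dec frml[rect] afrml
frml afrml = ||1.0,0.0,0.0|a21,1.0,axy|a31,axy,1.0||
cvmodel(method=bfgs) %sigma afrml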

Re: NearVAR - Identification and Diagnostic Tests

Post by irin »

Dear all,

I am currently estimating a near-VAR model using the MONTENEARSVAR procedure. However, some of my time series are I(1) and some are I(0). The block exogeneity assumption is important for the theoretical interpretation (I have a small home-country block and a global block), which is why I decided to use a model with the I(1) variables in differences and the I(0) variables in levels.

However, there is strong evidence of cointegration between two of the I(1) variables, and moreover one of them is in the home block while the other (oil prices) is in the global block; of course, the variables in the global block do not react to this cointegration. Is there some way to include the error correction term (with a known cointegrating vector) in the domestic block, so that the procedure will incorporate it in the impulse responses and, further on, in the variance decomposition?

Thanks in advance for any help!
Kr
Irin

Re: NearVAR - Identification and Diagnostic Tests

Post by TomDoan »

How big is the model? (Total number of regressors across equations). Smaller size models can be handled with brute force "SUR" sampling so you don't need specialized structure. It's the bigger ones (>500 regressors) where it becomes infeasible to continually do the full size multivariate regression draws that would be needed with special restrictions.

Re: NearVAR - Identification and Diagnostic Tests

Post by irin »

Thank you for answering, Tom!

It's a small model (due to the small sample size available): 4 variables in the home block and 2-3 in the global one (basically I'm interested in the home country; I only need the global block so that there is a shock originating globally), 60 observations, 2 lags. The version in differences only (without cointegration) works fine!

I thought of doing something like this (just as an example), where D means differenced, H home and G global:

linreg ha
#constant ga
equation(lastreg) coin

** or alternatively and even better set it manually as
** equation(coeffs=||1.0,-0.5,0.2||) coin
** # ha ga constant

equation(coeffs=||1.0,1.0||,identity) haid ha
# ha{1} dha
equation(coeffs=||1.0,1.0||,identity) gaid ga
# ga{1} dga

system(model=Hblock)
variables dha dhb dhc
lags 1 to nlags
det dga{1 to nlags} dgb{1 to nlags} constant coin{1}
end(system)

system(model=Gblock)
variables dga dgb
lags 1 to nlags
det constant
end(system)

compute nearvar = Hblock + Gblock ***** ?+ identities haid, gaid?

... and then proceed with the Gibbs sampling and impulse responses as in MONTENEARSVAR.
The problem, however, is that the impulses from dga also pass through the cointegration term, and I want to see this in the impulse responses obtained (in this case a 5x5 matrix). What do you think, does it make sense? If yes, how could these identities be incorporated in the code?

I really appreciate your help, thanks in advance!

Kr
Irin

PS: My referee insists strongly that cointegration should be taken into account. Really, thank you for your time; I'm very often a guest at this forum, it's really helpful!