Akawa wrote:
I have removed the g variable but the problem remains; the bug comes at this line:

Code:
@MCProcessIRF(model=MSSysRegModel,lower=lower,upper=upper,irf=irf)

TomDoan wrote:
The @MCProcessIRF procedure requires that you run the Gibbs sampling step, which you haven't done (that's in the eev_mcmc.rpf program). Plus, that entire block of instructions was pulled without change from the original EEV application and doesn't match your model (note the labels, which are still for the EEV variables).

Akawa wrote:
Concerning the Generalized FEVD, I found this paper from CREATES: http://econ.au.dk/fileadmin/site_files/ ... p14_17.pdf
It seems nice. Do you think we can implement these computations with your MS-VAR code? And the Generalized IRF? Thank you so much.

TomDoan wrote:
First of all, that isn't new to this paper---what they propose is exactly what is included in Diebold-Yilmaz (2012). And what it demonstrates is that if you take a set of non-negative numbers and divide each by their total, you decompose the sum. When you apply this to the standard FEVD, the sum being decomposed is the multi-step forecast error variance (hence the full name "Forecast Error Variance Decomposition"). Here, it isn't the variance of anything. It isn't the variance of generalized forecasts. It isn't the generalized variance of forecasts. It quite literally is the number you need to divide everything by to make these "contributions" sum to one.
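For concreteness, the normalization being described is nothing more than dividing each entry of the "contribution" table by its row total so that each row sums to one. A minimal sketch in RATS; the numbers in theta are made up for illustration and do not come from any model:

Code:
* theta holds made-up generalized "contributions" for 3 variables.
dec rect theta(3,3) share(3,3)
dec vect rowsum(3)
compute theta=||0.60,0.25,0.15|0.30,0.80,0.20|0.10,0.05,0.90||
* Divide each entry by its row total; each row of share sums to one.
ewise rowsum(i)=theta(i,1)+theta(i,2)+theta(i,3)
ewise share(i,j)=theta(i,j)/rowsum(i)
disp "Row-normalized contributions" share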
alexecon wrote:
I have adapted this for my own work and the output is very sensible, especially w.r.t. impulse responses. I'm using the MCMC program with 20,000 draws (4,000 burn-in) and find that the number of observations in each regime differs quite substantially from estimation to estimation. Should I increase the number of draws, or is there a more likely 'culprit' for this?

TomDoan wrote:
I assume you have a fair number of entries where the probability of the regimes is somewhere in the midrange rather than near 0 or 1.

alexecon wrote:
No, actually the regimes are very clearly defined, with most probabilities being close to 0 or 1. To see what's going on, I ran my program six times and got two sets of results (in each set the results were almost identical). It seems that what is making the difference is whether I clear the memory (with the yellow sponge) or restart RATS. Does clearing the memory in this way keep a few things that may affect the estimation that follows?

TomDoan wrote:
No. Unless you did something wrong---clearing the memory should have no effect on a re-run. If you're getting very different results from different runs, then you have an absorbing state in the chain, where it apparently either picks 1 or picks 0 for some set of entries and can't move away from those.

alexecon wrote:
You are right, I re-ran the program a few more times, and my theory of "restart" vs "memory clear" does not stand up to scrutiny.
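If you suspect a chain is getting trapped like that, one simple check is to save the regime-probability series (REGIME1 below) from two independent runs under different names and count the entries that end up pinned to opposite regimes. A minimal sketch; regime1_run1 and regime1_run2 are hypothetical placeholder names for the saved series, not variables in the program below:

Code:
* regime1_run1 and regime1_run2 are hypothetical names for the
* regime-1 probability series saved from two independent runs.
sstats gstart gend ((regime1_run1>0.99.and.regime1_run2<0.01).or.$
   (regime1_run1<0.01.and.regime1_run2>0.99))>>npinned
disp "Entries pinned to opposite regimes in the two runs:" npinned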
Code:
* EEV_MCMC.RPF
*
* Estimation of the model from Ehrmann, Ellison, Valla (2003),
* "Regime-dependent impulse response functions in a Markov-switching
* vector autoregression model", Economics Letters, Vol. 78, pp. 295-299.
* The data set is a reconstruction rather than the author's original
* data set.
*
* Estimation by Gibbs Sampling
*
open data 2018_july_turkey_daily_oil_ret_usd.xlsx
calendar(d) 1997:01:03
data(format=xlsx,org=columns) 1997:01:03 2018:06:27
table
set ret = TKNATAL
stats(fractile) EIAEBRT
CMOMENT(print, corr)
# EIAEBRT TURKLI TKVIBON RET
*
@varlagselect(Lags=10,crit=hq)
# EIAEBRT TURKLI TKVIBON RET
@mssysregression(states=3,switch=ch)
# EIAEBRT TURKLI TKVIBON RET
# constant EIAEBRT{1 to 1} TURKLI{1 to 1} TKVIBON{1 to 1} RET{1 to 1}
*
gset pt_t = %zeros(nstates,1)
gset pt_t1 = %zeros(nstates,1)
gset psmooth = %zeros(nstates,1)
*
compute gstart=1997:01:05, gend=2018:06:27
@MSSysRegInitial gstart gend
compute p=%mspexpand(p)
*
* Initialize with a random regime for each entry, giving equal weight
* to each of the three regimes.
*
gset MSRegime gstart gend = %ranbranch(||1.0,1.0,1.0||)
*
* Prior for transitions. Weak Dirichlet priors with preference for
* staying in a given regime.
*
dec vect[vect] gprior(nstates)
compute gprior(1)=||8.0,1.0,1.0||
compute gprior(2)=||1.0,8.0,1.0||
compute gprior(3)=||1.0,1.0,8.0||
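*
* With these weights, the prior mean of each staying probability is
* 8/(8+1+1)=0.8, with the remaining 0.2 split evenly between moves to
* the other two regimes.
*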
dec vect tcounts(nstates)
*
* Hierarchical prior for sigmas
* Uninformative prior on the common component.
*
* Priors for the relative Wisharts
*
dec vect nuprior(nstates)
compute nuprior=%fill(nstates,1,6.0)
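*
* The regime-specific covariances are tied together through a common
* matrix SIGMA: each sigmav(k) is drawn from an inverse Wishart whose
* scale combines that regime's residual sum of squares with
* nuprior(k)*SIGMA, so nuprior controls how strongly the regime
* covariances are shrunk toward the common component.
*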
*
dec vect[series] vresids(nvar)
dec vect[symm] uu(nstates)
*
* Flat prior for the regressions.
*
dec symm hprior
dec vect bprior
compute hprior=%zeros(nvar*nreg,nvar*nreg)
compute bprior=%zeros(nvar*nreg,1)
*
compute nburn=1000, ndraws=1000
nonlin(parmset=allparms) betasys sigmav p
*
* For relabeling
*
dec vect voil(nstates)
dec vect[rect] tempbeta(nstates)
dec vect[symm] tempsigmav(nstates)
dec rect tempp(nstates,nstates)
*
* Bookkeeping arrays
*
dec series[vect] bgibbs
gset bgibbs 1 ndraws = %parmspeek(allparms)
set regime1 gstart gend = 0.0
set regime2 gstart gend = 0.0
set regime3 gstart gend = 0.0
*
* Bookkeeping information for impulse responses
*
declare vect[rect] %%responses
dim %%responses(ndraws)
compute steps=15
*
infobox(action=define,lower=-nburn,upper=ndraws,progress) $
"Gibbs Sampling"
do draw=-nburn,ndraws
*
* Draw sigma's given beta's and regimes
* Compute the regime-specific residuals
*
@MSSysRegResids(regime=MSRegime) vresids gstart gend
*
* Compute the sums of squared residuals for each regime.
*
ewise uu(i)=%zeros(nvar,nvar)
ewise tcounts(i)=0.0
do time=gstart,gend
compute uu(MSRegime(time))=uu(MSRegime(time))+$
%outerxx(%xt(vresids,time))
compute tcounts(MSRegime(time))=tcounts(MSRegime(time))+1
end do time
compute uucommon=%zeros(nvar,nvar)
do k=1,nstates
compute uucommon=uucommon+nuprior(k)*inv(sigmav(k))
end do k
*
* Draw the common sigma given the regime-specific ones.
*
compute sigma=%ranwishartf(%decomp(inv(uucommon)),%sum(nuprior))
*
* Draw the regime-specific sigmas given the common one.
*
do k=1,nstates
compute sigmav(k)=%ranwisharti(%decomp(inv(uu(k)+nuprior(k)*sigma)),$
tcounts(k)+nuprior(k))
end do k
*
* Draw beta's given sigma's and regimes
*
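* Because the prior for the regressions is flat (hprior and bprior are
* zero), each regime's coefficient draw is centered on the least-squares
* estimates computed from that regime's own observations.
*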
do i=1,nstates
cmom(smpl=(MSRegime==i),model=MSSysRegModel) gstart gend
compute betasys(i)=%reshape($
%ranmvkroncmom(%cmom,inv(sigmav(i)),hprior,bprior),nreg,nvar)
end do i
*
* Check for label switching: reorder the regimes by the residual
* variance of the third variable (TKVIBON). The name voil is left over
* from the original EEV program, where the sort was on the oil-price
* variance.
*
ewise voil(i)=sigmav(i)(3,3)
compute swaps=%index(voil)
if swaps(1)<>1.or.swaps(2)<>2
   disp "Draw" draw "Executing swap"
*
* Relabel the beta's
*
ewise tempbeta(i)=betasys(i)
ewise betasys(i)=tempbeta(swaps(i))
*
* Relabel the sigma's
*
ewise tempsigmav(i)=sigmav(i)
ewise sigmav(i)=tempsigmav(swaps(i))
*
* Relabel the transitions
*
compute tempp=p
ewise p(i,j)=tempp(swaps(i),swaps(j))
*
* Draw the regimes
*
:skipbeta
@MSSysRegFilter gstart gend
:redrawregimes
@%mssample(start=gstart,end=gend) p pt_t pt_t1 MSRegime
do i=1,nstates
sstats gstart gend MSRegime==i>>tcounts(i)
end do i
if %minvalue(tcounts)<5 {
disp "Draw" draw "Redrawing regimes with regime of size" $
%minvalue(tcounts)
goto redrawregimes
}
*
* Draw p's
*
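* Each column j of P is drawn from a Dirichlet combining the counts of
* transitions out of regime j in the sampled regime path with the prior
* weights in gprior(j).
*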
do j=1,nstates
do i=1,nstates
sstats(smpl=(MSRegime{1}==j.and.MSRegime==i)) gstart+1 gend $
1>>tcounts(i)
end do i
compute pdraw=%randirichlet(tcounts+gprior(j))
do i=1,nstates
compute p(i,j)=pdraw(i)
end do i
end do j
infobox(current=draw)
if draw>0 {
*
* Do the bookkeeping
*
set regime1 gstart gend = regime1+(MSRegime==1)
set regime2 gstart gend = regime2+(MSRegime==2)
set regime3 gstart gend = regime3+(MSRegime==3)
compute bgibbs(draw)=%parmspeek(allparms)
*
* Save IRF's for regime 1. Choleski factor standardized to unit
* shock is used.
*
@MSSysRegSetModel(regime=1)
compute factor=%decomp(sigmav(1)),factor=factor*inv(%diag(%xdiag(factor)))
impulse(noprint,model=MSSysRegModel,results=impulses1,steps=steps,factor=factor)
*
* Save IRF's for regime 2
*
@MSSysRegSetModel(regime=2)
compute factor=%decomp(sigmav(2)),factor=factor*inv(%diag(%xdiag(factor)))
impulse(noprint,model=MSSysRegModel,results=impulses2,steps=steps,factor=factor)
*
* Save IRF's for regime 3
*
@MSSysRegSetModel(regime=3)
compute factor=%decomp(sigmav(3)),factor=factor*inv(%diag(%xdiag(factor)))
impulse(noprint,model=MSSysRegModel,results=impulses3,steps=steps,factor=factor)
*
* This will interleave the three regimes within each shock: for each
* shock, the stacked responses are ordered regime 1, regime 2, regime 3.
*
dim %%responses(draw)(%rows(impulses1)*%cols(impulses1)*3,steps)
ewise %%responses(draw)(i,j)=ix=%vec(%xt(impulses1,j)~~%xt(impulses2,j)~~%xt(impulses3,j)),ix(i)
}
end do draw
infobox(action=remove)
@mcmcpostproc(ndraws=ndraws,mean=bmeans,stderrs=bstderrs) bgibbs
*
report(action=define)
report(atrow=1,atcol=1,fillby=cols) %parmslabels(allparms)
report(atrow=1,atcol=2,fillby=cols) bmeans
report(atrow=1,atcol=3,fillby=cols) bstderrs
report(action=format,picture="*.###")
report(action=show)
*
set regime1 gstart gend = regime1/ndraws
graph(header="MCMC Probability of Regime 1")
# regime1
*
set regime2 gstart gend = regime2/ndraws
graph(header="MCMC Probability of Regime 2")
# regime2
*
set regime3 gstart gend = regime3/ndraws
graph(header="MCMC Probability of Regime 2")
# regime3
*
@MCProcessIRF(model=MSSysRegModel,lower=lower,upper=upper,irf=irf,percentile=||.10,.90||)
table
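*
* Because the regimes are interleaved within each shock, IRF(4,k),
* LOWER(4,k) and UPPER(4,k) for k=1,2,3 are the responses of RET
* (equation 4) to the first (EIAEBRT) shock in regimes 1, 2 and 3;
* IRF(4,4) is the response of RET to the second (TURKLI) shock in
* regime 1.
*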
set irf4_1 = IRF(4,1)
set lower4_1 = LOWER(4,1)
set upper4_1 = UPPER(4,1)
*
set irf4_2 = IRF(4,2)
set lower4_2 = LOWER(4,2)
set upper4_2 = UPPER(4,2)
*
set irf4_3 = IRF(4,3)
set lower4_3 = LOWER(4,3)
set upper4_3 = UPPER(4,3)
*
set irf4_4 = IRF(4,4)
set lower4_4 = LOWER(4,4)
set upper4_4 = UPPER(4,4)
*
* Variables: EIAEBRT TURKLI TKVIBON TKNAT30
*
spgraph(hfields=4,vfields=1)
graph(nodate, header ="Responses of ret to Oil Shocks: Regime 1") 3
# irf4_1 * 15
# lower4_1 * 15 2
# upper4_1 * 15 2
*
graph(nodate,header ="Responses of ret to oil Shocks: Regime 2") 3
# irf4_2 * 15
# lower4_2 * 15 2
# upper4_2 * 15 2
*
graph(nodate,header ="Responses of ret to oil Shocks: Regime 3") 3
# irf4_3 * 15
# lower4_3 * 15 2
# upper4_3 * 15 2
graph(nodate,header ="Responses of ret to oil Shocks: Regime 3") 3
# irf4_4 * 15
# lower4_4 * 15 2
# upper4_4 * 15 2
spgraph(done)
Then what about the STDERRS and CV options of @MCMCPostProc, Tom? Do you mean that any inference based on their elements is invalid for individual significance?

TomDoan wrote: You can't test anything with the MCMC estimates---any testing has to be done with the maximum likelihood. You can do an LR test for covariance-and-coefficient switching vs. just covariance switching by estimating the models with and without the coefficient switching. That will have standard asymptotics.
Is the attached code consistent with what you recommend, Tom?

TomDoan wrote: You can't test anything with the MCMC estimates---any testing has to be done with the maximum likelihood. You can do an LR test for covariance-and-coefficient switching vs. just covariance switching by estimating the models with and without the coefficient switching. That will have standard asymptotics.
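For reference, the LR comparison described above is just twice the difference between the two maximized log likelihoods, referred to a chi-squared distribution with degrees of freedom equal to the number of extra switching coefficients. A minimal sketch; logl_switch, logl_fixed and ndf are placeholder names for the two maximized log likelihoods and the difference in parameter counts, not variables defined in the program above:

Code:
* logl_switch and logl_fixed are the maximized log likelihoods from the
* models with and without coefficient switching; ndf is the number of
* extra coefficients in the switching model (placeholder names).
compute lrstat=2.0*(logl_switch-logl_fixed)
cdf chisqr lrstat ndf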