Re: time varying
Posted: Thu Oct 01, 2009 7:02 am
by unforgiven02
Hi Tom,
When replicating Özlale's model with the code you posted, RATS displays error messages such as "## DLM2. No Observations Produce Valid Output. Check Data and Initial Values", "## DLM5. Probable Model Error. Diffuse prior was not reduced to zero rank. The Error Occurred At Location 0238 of loop/block", and "## DLM5. Probable Model Error. Diffuse prior was not reduced to zero rank".
My focus is the estimation of the NAIRU by state-space Kalman filtering using a Phillips curve relation. Özlale's model is very interesting and original because it uses the extended Kalman filter (with unemployment and inflation). My RATS version is 7. Do you have any idea how I can deal with this situation? Thanks a lot.
Re: time varying
Posted: Fri Oct 02, 2009 2:55 pm
by TomDoan
You can ignore that warning message. It was generated in error (corrected with 7.2). The results should be correct. You're just getting a bunch of them because it's doing the same thing 40 times over.
Re: time varying
Posted: Thu Mar 18, 2010 7:07 am
by ecrgap
Hi
Does the DSGE instruction work for nonlinear state space models?
Is there an example?
Thanks a lot in advance.
Re: time varying
Posted: Thu Mar 18, 2010 12:42 pm
by TomDoan
Yes. It will linearize or log-linearize any type of dynamic model, even ones without expectational terms. The example below isn't non-linear, but it uses DSGE to create a standard state-space form from a more general dynamic model.
Code:
*
* Sargent77Bayes.prg
* Bayesian estimation of the model in Sargent (1977), "The Demand for Money
* During Hyperinflations under Rational Expectations: I", Int'l Economic
* Review, vol 18, no 1, 59-82.
*
* Example 5 in Barillas et al., "Practicing Dynare"
*
open data cagan_data.prn
data(format=prn,org=cols) 1 34 mu x
*
declare series a1 a2 eps eta
declare real alpha lambda sig_eta sig_eps
*
* There's no particular reason to substitute out the current a1 and a2
* in the first two equations.
*
* Note that the model in this form has no expectational terms. It could
* be coded directly for input into DLM; however, DSGE can produce the
* state-space model as well.
*
frml(identity) f1 = x-(x{1}+a1-lambda*a1{1})
frml(identity) f2 = mu-((1-lambda)*x{1}+lambda*mu{1}+a2-lambda*a2{1})
frml(identity) f3 = a1-1.0/(lambda+(1-lambda)*alpha)*(eps-eta)
frml(identity) f4 = a2-$
1.0/(lambda+(1-lambda)*alpha)*((1+alpha*(1-lambda))*eps-(1-lambda)*eta)
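*
* Shock equations: the non-identity formulas d1 and d2 carry the
* disturbances eps and eta.
*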
frml d1 = eps
frml d2 = eta
*
group cagan f1 f2 f3 f4 d1 d2
compute alpha=-2.344,lambda=.5921,sig_eta=.10,sig_eps=.10
*****
*
* This solves the model to produce the state space form
*
function SolveModel
dsge(model=cagan,a=adlm,f=fdlm) x mu a1 a2 eps eta
end SolveModel
*****
*
* Since the dimension of A can increase as part of the solution by DSGE,
* this solves the model once, then sets up the C matrix for DLM based
* upon the size of the generated A matrix.
*
compute SolveModel()
compute cdlm=%identity(2)~~%zeros(%rows(adlm)-2,2)
compute gdlm=%dlmgfroma(adlm)
*****
*
* This solves the model, then evaluates the log likelihood
*
function EvalModel
compute SolveModel()
dlm(a=adlm,f=fdlm,sw=%diag(||sig_eps^2,sig_eta^2||),y=||x,mu||,$
c=cdlm,g=gdlm,method=solve)
compute EvalModel=%logl
end
******
*
* Get the ranges for the priors. If you directly specify uniforms, just
* use the upper and lower bounds. We are using this because the original
* code specified the uniforms based upon mean and standard deviation.
*
source uniformparms.src
compute alphaparms =%UniformParms(-5.0,2.0)
compute lambdaparms =%UniformParms(0.68,0.5)
compute sig_etaparms=%UniformParms(0.5,0.25)
compute sig_epsparms=%UniformParms(0.5,0.25)
*****
*
* This solves the model, then evaluates the posterior. Since all
* components of the prior are uniform, we don't even have to worry about
* the densities; we just need to reject out of range values. This also
* discards values which come close to a zero divisor in f3 and f4.
*
function EvalPosterior
if abs(lambda+(1-lambda)*alpha)<1.e-3.or.$
alpha<alphaparms(1).or.alpha>alphaparms(2).or.$
lambda<lambdaparms(1).or.lambda>lambdaparms(2).or.$
sig_eta<sig_etaparms(1).or.sig_eta>sig_etaparms(2).or.$
sig_eps<sig_epsparms(1).or.sig_eps>sig_epsparms(2) {
compute EvalPosterior=%na
return
}
*
* Since all the priors are uniform, the log densities will be constant
* once we've excluded out of range values.
*
compute EvalPosterior=EvalModel()
end
*****
*
* Maximize the posterior. As with the ML for this model, we do this in
* two stages, first estimating the two standard deviations, then doing
* the full parameter set. Because the prior is flat, in this case, this
* is the same as ML.
*
nonlin sig_eta sig_eps
find(method=simplex,iters=5,noprint) maximum EvalPosterior()
end find
nonlin(parmset=pset) alpha lambda sig_eta sig_eps
find(method=bfgs,parmset=pset,stderrs) maximum EvalPosterior()
end find
*
* Start at the posterior mode
*
compute nbeta =%nreg
compute logplast=%funcval
*
compute nburn =5000
compute ndraws=25000
compute accept=0
*
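* bgibbs holds the parameter vector saved at each post-burn-in draw.
*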
dec series[vect] bgibbs
gset bgibbs 1 ndraws = %zeros(nbeta,1)
*
* Acceptance rate with a full step is a bit low, so we take a reduced size
* increment.
*
compute [rect] fxx=0.5*%decomp(%xx)
*
infobox(action=define,progress,lower=-nburn,upper=ndraws) "Random Walk MH"
do draw=-nburn,ndraws
compute parmslast=%parmspeek(pset)
compute %parmspoke(pset,parmslast+%ranmvnormal(fxx))
compute logptest=EvalPosterior()
if %valid(logptest)
compute %a=exp(logptest-logplast)
else
compute %a=0.0
if %a>1.0.or.%uniform(0.0,1.0)<%a {
compute accept=accept+1
compute logplast=logptest
}
else
compute %parmspoke(pset,parmslast)
*
infobox(current=draw) %strval(100.0*accept/(draw+nburn+1),"##.#")
if draw<=0
next
*
* Do the bookkeeping here.
*
compute bgibbs(draw)=%parmspeek(pset)
end do draw
infobox(action=remove)
*
@mcmcpostproc(ndraws=ndraws,mean=bmean,stderrs=bstderrs,cd=bcd) bgibbs
*
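* Posterior density estimates and graphs for each of the four parameters
*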
set alphas 1 ndraws = bgibbs(t)(1)
density(grid=automatic,maxgrid=100,band=.75) alphas 1 ndraws xalpha falpha
scatter(style=line,vmin=0.0,footer="Posterior for alpha")
# xalpha falpha
*
set lambdas 1 ndraws = bgibbs(t)(2)
density(grid=automatic,maxgrid=100,band=.10) lambdas 1 ndraws xlambda flambda
scatter(style=line,vmin=0.0,footer="Posterior for lambda")
# xlambda flambda
*
set sigetas 1 ndraws = bgibbs(t)(3)
density(grid=automatic,maxgrid=100,band=.05) sigetas 1 ndraws xsigeta fsigeta
scatter(style=line,vmin=0.0,footer="Posterior for sig_eta")
# xsigeta fsigeta
*
set sigepss 1 ndraws = bgibbs(t)(4)
density(grid=automatic,maxgrid=100,band=.003) sigepss 1 ndraws xsigeps fsigeps
scatter(style=line,vmin=0.0,footer="Posterior for sig_eps")
# xsigeps fsigeps
Data file:
Re: time varying
Posted: Thu Mar 18, 2010 2:03 pm
by ecrgap
Thanks a lot.
So I guess this should work in a DSGE model where I assume that the interest rate rule is nonlinear in the exchange rate, for example i = rho*i{1} + (1-rho)*(faip*dp{1}+faiy*y+faiq*(q*q{1})+faiqq*(q*q{1}^2)), or not?
Thank you
Re: time varying
Posted: Thu Mar 18, 2010 2:42 pm
by TomDoan
That should work fine. It will linearize that around the steady state. If Q is the only term that's involved in non-linearities, it will be the only one for which the expansion point matters, so you could just input that using Q<<q0 on the DSGE instruction.
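Something along these lines would be the general idea. This is a deliberately stripped-down sketch, not a complete model: just an interest rate i, an exchange rate q that follows an AR(1) around a level q0, and two shocks. The names and parameter values (rho, faiq, faiqq, phi, q0) are illustrative only, and a real application would keep the inflation and output terms from your rule. The point is the q<<q0 entry on the DSGE variable list, which supplies the expansion point for q.
Code:
*
* Illustrative sketch only: a policy rule that is nonlinear in q, fed to
* DSGE with the expansion point for q supplied explicitly. All names and
* values are made up for the example.
*
declare series i q eps_i eps_q
declare real rho faiq faiqq phi q0
*
* Policy rule with linear and squared terms in the exchange rate
frml(identity) fpolicy = i-(rho*i{1}+(1-rho)*(faiq*q+faiqq*q^2)+eps_i)
* AR(1) process for the exchange rate around the level q0
frml(identity) fq = q-((1-phi)*q0+phi*q{1}+eps_q)
* Shock equations
frml d1 = eps_i
frml d2 = eps_q
*
group toyrule fpolicy fq d1 d2
compute rho=0.8,faiq=0.5,faiqq=0.1,phi=0.9,q0=1.0
*
* Since q is the only variable entering nonlinearly, it is the only one
* whose expansion point matters; it is input as q<<q0.
*
dsge(model=toyrule,a=adlm,f=fdlm) i q<<q0 eps_i eps_q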
Re: time varying
Posted: Fri Mar 19, 2010 4:42 pm
by ecrgap
Thanks a lot.
Is there an example available? I have a particular paper in mind (attached); the authors apply a threshold interest rate rule in a DSGE model.
Thank you