Smoother/Filter with missing data

Discussion of State Space and Dynamic Stochastic General Equilibrium Models
comac
Posts: 25
Joined: Mon Jun 29, 2009 7:31 am

Smoother/Filter with missing data

Post by comac »

Hello everybody. I am trying to run a very simple model:

MEASUREMENT: infl(t) = e_infl(t) + v(t)
             m(t)    = m*(t) + u(t)

TRANSITION:  e_infl(t) = e_infl(t-1) + gamma m*(t-1) + n(t)
             m*(t)     = m*(t-1) + e(t)
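
In state-space terms, with state vector (e_infl(t), m*(t))', I read this as something like the following sketch, with gamma the free coefficient:

Code: Select all

* Sketch of the implied system matrices.
* Transition:  x(t) = A x(t-1) + w(t); measurement: y(t) = C'x(t) + v(t).
dec frml[rect] af cf
frml af = ||1.0,gamma|0.0,1.0||
frml cf = ||1.0,0.0|0.0,1.0||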

My calendar runs from 1975:1 to 2008:6 (monthly).
The problem is that for the observable m(t) I only have data from 1987 onward. How can I handle that?


2) I am not sure how to treat the variances. In a simplified model, when I input them via the NONLIN command, it returns the error message 'Rank of Prediction Error Variance < Number of Observables'. Can somebody see what's going on?

Code: Select all

nonlin sigmaw=1.0 sigmav=1.0
dec frml[rect] af
dec frml[rect] cf
dec frml[vect] yf
dec frml[vect] x0f
dec symm sw
compute sw = ||sigmaw||
dec symm sv
compute sv = ||sigmav||
frml x0f = ||ukinfl_hp||
frml cf = ||1.0||
frml af = ||1.0||
frml yf = ||ukinfl||
dlm(a=af,y=yf,c=cf,sw=sw,sv=sv,x0=x0f,type=filter,method=bfgs,print) / states

set f_ukinfl = states(t)(1)


Many thanks
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Smoother/Filter with missing data

Post by TomDoan »

comac wrote: Hello everybody. I am trying to run a very simple model:

MEASUREMENT: infl(t) = e_infl(t) + v(t)
             m(t)    = m*(t) + u(t)

TRANSITION:  e_infl(t) = e_infl(t-1) + gamma m*(t-1) + n(t)
             m*(t)     = m*(t-1) + e(t)

My calendar runs from 1975:1 to 2008:6 (monthly).
The problem is that for the observable m(t) I only have data from 1987 onward. How can I handle that?
Other than making sure that m(t) is NA for the period before 1987, you don't have to do anything. DLM will automatically adjust for the reduced number of observables.
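
For example, if the money series were read in as UKMONEY (a hypothetical name; use whatever your series is called), a minimal sketch of blanking out the early span:

Code: Select all

* Mark the pre-1987 span as missing; DLM skips NA observables entry by entry.
set ukmoney 1975:1 1986:12 = %NA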
comac wrote: 2) I am not sure how to treat the variances. In a simplified model, when I input them via the NONLIN command, it returns the error message 'Rank of Prediction Error Variance < Number of Observables'. Can somebody see what's going on?

Code: Select all

nonlin sigmaw=1.0 sigmav=1.0
dec frml[rect] af
dec frml[rect] cf
dec frml[vect] yf
dec frml[vect] x0f
dec symm sw
compute sw = ||sigmaw||
dec symm sv
compute sv = ||sigmav||
frml x0f = ||ukinfl_hp||
frml cf = ||1.0||
frml af = ||1.0||
frml yf = ||ukinfl||
dlm(a=af,y=yf,c=cf,sw=sw,sv=sv,x0=x0f,type=filter,method=bfgs,print) / states

set f_ukinfl = states(t)(1)


Many thanks
Use the EXACT option with that setup. That's not the source of the problem, however.

Code: Select all

nonlin sigmaw=1.0 sigmav=1.0


You may be thinking that this makes sigmaw and sigmav free parameters with guess values of 1.0. It doesn't. It fixes the two parameters at 1.0 for every function evaluation. Since you're not using a METHOD option to estimate the state space model, it never even looks at the parameter set, so sigmaw and sigmav aren't defined. What I think you mean is:

Code: Select all

nonlin sigmaw sigmav
compute sigmaw=1.0,sigmav=1.0
That should at least get DLM to solve out. However, in order to estimate the parameters, you need to use METHOD=BFGS or one of the other estimation methods. (The default is METHOD=SOLVE, which just takes the current values of anything entering the state space matrices.)
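
Pulling the pieces together, a sketch of the corrected setup. This reuses the AF/CF/YF/X0F formulas from your code; note that I've also made SW and SV FRMLs here, so they re-evaluate as the estimation moves sigmaw and sigmav:

Code: Select all

* Free parameters with guess values of 1.0, then estimate by BFGS.
nonlin sigmaw sigmav
compute sigmaw=1.0,sigmav=1.0
dec frml[symm] swf svf
frml swf = ||sigmaw||
frml svf = ||sigmav||
dlm(a=af,y=yf,c=cf,sw=swf,sv=svf,x0=x0f,exact,$
   type=filter,method=bfgs,print) / states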
comac
Posts: 25
Joined: Mon Jun 29, 2009 7:31 am

Re: Smoother/Filter with missing data

Post by comac »

Thank you, Tom. I've now solved the problem I had yesterday.

Sometimes the model does not converge, depending on the starting values, so I had to play around a bit.

Nonetheless, I wonder whether there is some criterion for choosing the priors. For coefficients to be estimated I normally use an OLS regression, but what about the variances and the elements of x0? For the latter I use HP-filtered series, and for the variances I start from the sample variances of the observables (via STATISTICS). Is this the most "dogmatic" criterion to choose?

Thanks a lot
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Smoother/Filter with missing data

Post by TomDoan »

If the estimation is stalling with very small parameter values, you can often get better behavior by multiplying the data by 100. That increases the variances by a factor of 10^4, which often pulls them out of the gray zone where it's not clear whether they're small because that's their natural scale, or small because they're "machine-zero". If guess values (which I assume you mean rather than "prior") are a problem, doing a handful of simplex iterations (adding PMETHOD=SIMPLEX,PITERS=5 for instance) can often help out, since simplex isn't as sensitive to guess values as BFGS.
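
For instance, using the formulas from the sketch above (UKINFL100 is just a hypothetical scaled copy of the data):

Code: Select all

* Rescale the data by 100, then take a few simplex iterations before BFGS.
set ukinfl100 = 100.0*ukinfl
frml yf = ||ukinfl100||
dlm(a=af,y=yf,c=cf,sw=swf,sv=svf,exact,type=filter,$
   pmethod=simplex,piters=5,method=bfgs,print) / states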