EM-Algorithm and DLM
Posted: Wed Apr 13, 2011 2:25 am
by jonasdovern
Among the example files shipping with RATS, is there one that uses the EM-algorithm to estimate a state-space model? (I would like to implement the estimation of a large dynamic factor model (with missing data points) by iterating over factor estimation by KF and estimation of the parameters of the model. Seeing how DLM is used in an EM-algorithm would help a lot.)
Re: EM-Algorithm and DLM
Posted: Wed Apr 13, 2011 10:34 am
by TomDoan
jonasdovern wrote:Among the example files shipping with RATS, is there one that uses the EM-algorithm to estimate a state-space model? (I would like to implement the estimation of a large dynamic factor model (with missing data points) by iterating over factor estimation by KF and estimation of the parameters of the model. Seeing how DLM is used in an EM-algorithm would help a lot.)
Do you actually need to do that? It sounds as if you can simply estimate directly with DLM, since it will adjust the likelihood for missing observables. The replication for Diebold, Rudebusch & Aruoba (2006), "The macroeconomy and the yield curve: a dynamic latent factor approach," would be an example.
If you really have a two-step process, you wouldn't be estimating the state-space model using EM; instead, you would be using the state-space model as the "E" step. I don't believe we have any examples of that using EM, but there are several that do something similar using Gibbs sampling. The difference between the two is that EM will use TYPE=SMOOTH on DLM, while Gibbs sampling will use TYPE=CSIM. The best example of that is FAVAR, which generates the unobservable factors and then runs them through a standard VAR. See http://www.estima.com/forum/viewtopic.php?f=8&t=1029.
Re: EM-Algorithm and DLM
Posted: Thu Apr 14, 2011 1:34 am
by jonasdovern
Thanks. The reference to Gibbs sampling helps. I suspect that estimating directly with DLM won't work because there are so many parameters involved in the model (a huge matrix of factor loadings, plus the coefficient matrices of the VAR for the factors).
Would EM and Gibbs sampling yield asymptotically equivalent estimates?
Would the EM approach simply boil down to iterating until convergence: using DLM to get the smoothed factors (E step), then regressing the observables on the new factors to update the loadings and re-estimating the factor VAR with the new factors (M step)? Would this ensure that the likelihood increases at each iteration?
Re: EM-Algorithm and DLM
Posted: Thu Apr 14, 2011 9:07 am
by TomDoan
jonasdovern wrote:Thanks. The reference to Gibbs sampling helps. I suspect that estimating directly with DLM won't work because there are so many parameters involved in the model (a huge matrix of factor loadings, plus the coefficient matrices of the VAR for the factors).
Would EM and Gibbs sampling yield asymptotically equivalent estimates?
Would the EM approach simply boil down to iterating until convergence: using DLM to get the smoothed factors (E step), then regressing the observables on the new factors to update the loadings and re-estimating the factor VAR with the new factors (M step)? Would this ensure that the likelihood increases at each iteration?
From the sound of it, yes, EM should eventually converge. EM should give the same result as patiently waiting for DLM to finish maximizing across a huge matrix of factor loadings. Gibbs sampling will give (asymptotically) the mean of the posterior distribution of the parameters, which won't necessarily be the same as the likelihood maximum. For most models the two are similar, but if the likelihood function has any pathologies, you're more likely to find them with Gibbs sampling than with ML.
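For readers who want to see the two-step iteration discussed above outside RATS, here is a minimal Python/NumPy sketch (not RATS code; the toy one-factor model and all names are made up for illustration). The model is y_t = λ f_t + e_t, f_t = a f_{t-1} + u_t. The E step runs a Kalman filter plus RTS smoother (the analogue of DLM with TYPE=SMOOTH, skipping missing observables), and the M step does exactly what the question describes: regress the observables on the smoothed factor to update the loadings, and re-fit the factor AR(1). Two caveats: exact EM would also carry the smoothed second moments E[f_t^2] and E[f_t f_{t-1}] into the M step, so this plain-regression shortcut does not strictly guarantee a monotone likelihood; and the factor's sign and scale are not identified, so estimates match the truth only up to scale.

```python
import numpy as np

def kalman_smoother(y, lam, a, r, q):
    """Kalman filter + RTS smoother for a one-factor model.
    y: (T, n) observables, np.nan = missing (skipped, mimicking how DLM
    adjusts the likelihood for missing observables).
    lam: (n,) loadings; a: factor AR(1); r: (n,) obs. variances; q: state variance."""
    T, n = y.shape
    fp, pp = np.zeros(T), np.zeros(T)    # one-step-ahead mean / variance
    ff, pf = np.zeros(T), np.zeros(T)    # filtered mean / variance
    f, p = 0.0, 10.0                     # vague initial state
    for t in range(T):
        f, p = a * f, a * a * p + q      # predict
        fp[t], pp[t] = f, p
        m = ~np.isnan(y[t])              # which series are observed at t
        if m.any():                      # update only on observed components
            H = lam[m]
            S = p * np.outer(H, H) + np.diag(r[m])
            K = p * H @ np.linalg.inv(S)
            f = f + K @ (y[t, m] - H * f)
            p = (1.0 - K @ H) * p
        ff[t], pf[t] = f, p
    fs, ps = ff.copy(), pf.copy()        # backward (RTS) smoothing pass
    for t in range(T - 2, -1, -1):
        J = pf[t] * a / pp[t + 1]
        fs[t] = ff[t] + J * (fs[t + 1] - fp[t + 1])
        ps[t] = pf[t] + J * J * (ps[t + 1] - pp[t + 1])
    return fs, ps

def m_step(y, fs):
    """The 'M' step as described in the post: plain regressions on the
    smoothed factor (exact EM would also use the smoothed second moments)."""
    T, n = y.shape
    lam, r = np.empty(n), np.empty(n)
    for i in range(n):
        m = ~np.isnan(y[:, i])
        lam[i] = (y[m, i] @ fs[m]) / (fs[m] @ fs[m])   # loading of series i
        r[i] = np.mean((y[m, i] - lam[i] * fs[m]) ** 2)
    a = (fs[1:] @ fs[:-1]) / (fs[:-1] @ fs[:-1])       # factor AR(1) coefficient
    q = np.mean((fs[1:] - a * fs[:-1]) ** 2)
    return lam, a, r, q

# Toy data: one AR(1) factor, four series, 10% of observations missing.
rng = np.random.default_rng(0)
T, n = 300, 4
a_true, lam_true = 0.8, np.array([1.0, 0.8, -0.5, 0.6])
f_true = np.zeros(T)
for t in range(1, T):
    f_true[t] = a_true * f_true[t - 1] + rng.normal()
y = f_true[:, None] * lam_true + 0.3 * rng.normal(size=(T, n))
y[rng.random((T, n)) < 0.1] = np.nan

lam, a, r, q = np.ones(n), 0.5, np.ones(n), 1.0   # crude starting values
for _ in range(25):                               # alternate E and M steps
    fs, ps = kalman_smoother(y, lam, a, r, q)
    lam, a, r, q = m_step(y, fs)
```

After the loop, the smoothed factor and the estimated loadings should line up with the simulated truth up to scale and sign. For a Gibbs-sampling variant along the lines of the FAVAR example, the smoother would be replaced by a draw from the conditional distribution of the states (the TYPE=CSIM analogue) and the regressions by draws from the parameters' posteriors.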