Dummy Implementation of Minnesota Prior

Questions and discussions on Vector Autoregressions
dareios82
Posts: 2
Joined: Fri Nov 16, 2012 1:34 pm

Dummy Implementation of Minnesota Prior

Post by dareios82 »

Dear Tom,

I am fairly new to RATS. I would like to estimate a VAR model using the dummy-observation implementation of the Minnesota prior described in "Bayesian Macroeconometrics" by Del Negro and Schorfheide (2010):

http://economics.sas.upenn.edu/~schorf/ ... _macro.pdf

Schorfheide posted MATLAB code that generates the dummy observations, which are then added to the existing dataset. The code can be found at:

http://economics.sas.upenn.edu/~schorf/ ... Matlab.zip

As I am not aware of any existing RATS routine that generates the same dummy observations, I tried to estimate the same VAR in RATS using the extended sample (data + prior) generated by the MATLAB code. When I do, RATS returns completely different OLS coefficient estimates from those obtained in MATLAB; even the inv(X'X) matrix used to compute the OLS estimates is different. By contrast, the results in MATLAB and RATS are identical when I use the data alone.

I did some basic checking and everything seems fine: the extended dataset is loaded correctly, and I use the same observations in MATLAB and RATS.

Any help on the matter would be greatly appreciated. If needed, I can construct an example with Schorfheide's data that replicates the problem.

Thanks,
Dario
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Dummy Implementation of Minnesota Prior

Post by TomDoan »

This is the description of an example from the Bayesian Econometrics course (http://www.estima.com/forum/viewtopic.php?f=24&t=483). The @BVARBuildPrior procedure can be used to create the mean and precision matrices for the prior without actually adding "dummy observations" to the data set.
BVARBuildPrior.pdf (description, 57.5 KiB)
bvarbuildprior.src (main procedure file, 1.98 KiB)
bvarbuildpriormn.src (Minnesota prior procedure file, 4.17 KiB)
6-8_Example_BVAR.rpf (example file, 2.65 KiB)
dareios82
Posts: 2
Joined: Fri Nov 16, 2012 1:34 pm

Re: Dummy Implementation of Minnesota Prior

Post by dareios82 »

Hi Tom,

Thanks a lot for the pointer. The reason I want to use the dummy observations is to avoid the Gibbs sampler. In the examples I have seen, RATS implements the Minnesota prior with the Gibbs sampler, and I would like to understand why, since the Gibbs sampler may not be needed and skipping it would save computational time.

Del Negro and Schorfheide use five hyperparameters to control their prior. Two of them govern the overall tightness and the lag decay, and they are easy to map into the RATS code. The other three set the prior for the covariance matrix, the prior for the constant (in RATS I can use contight to control the tightness, but not the mean, which in Del Negro and Schorfheide is based on a pre-sample), and the covariance between coefficients. It seems to me that I cannot exactly match the dummy-observation prior of Del Negro and Schorfheide with the RATS implementation; I just wanted to know whether I am missing something.

More generally, since Del Negro and Schorfheide are very prominent Bayesian macroeconometricians, I would find it useful to have code that replicates their implementation of the Minnesota prior without the Gibbs sampler. This is just my personal opinion, and I really appreciate all the existing tools RATS offers and the great feedback on the forum! :)

Thanks,
Dario
TomDoan
Posts: 7814
Joined: Wed Nov 01, 2006 4:36 pm

Re: Dummy Implementation of Minnesota Prior

Post by TomDoan »

Then why wouldn't you just use the built-in prior via ESTIMATE?
tclark
Posts: 99
Joined: Wed Nov 08, 2006 3:20 pm

Re: Dummy Implementation of Minnesota Prior

Post by tclark »

Just for clarification, keep in mind that the Minnesota prior can be implemented either with dummy observations or with an explicit prior specification. The Bayesian estimator is based on moments that are data + prior: inv(X'X + prior term)*(X'Y + prior term). The dummy-observations approach expands X and Y with fake observations such that, with X* and Y* denoting the augmented matrices, X*'X* = X'X + prior term and X*'Y* = X'Y + prior term.
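
To make the equivalence concrete, here is a minimal sketch in RATS matrix code (names are illustrative, not from any posted program: X and Y are assumed to hold the actual data, XD and YD the dummy observations, stacked conformably):

* Sketch: OLS on the augmented sample vs. data moments + prior terms.
* X, Y hold the actual data; XD, YD hold the dummy observations.
compute xs = x~~xd   ;* ~~ stacks matrices vertically
compute ys = y~~yd
* OLS on the augmented (data + dummy) sample
compute bdummy = inv(tr(xs)*xs)*(tr(xs)*ys)
* The same estimator from the data moments plus the prior terms
compute bmoments = inv(tr(x)*x+tr(xd)*xd)*(tr(x)*y+tr(xd)*yd)
* The two should agree up to numerical precision
display bdummy-bmoments

If the two disagree on an augmented dataset, the usual culprit is a mismatch in how the dummy rows were stacked or scaled before being passed between programs.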

As described in sources like Kadiyala and Karlsson (1997, Journal of Applied Econometrics) or a new survey on Bayesian VARs by Karlsson, forthcoming in vol. 2 of the Handbook of Economic Forecasting, the specification of the prior for the VAR coefficients and error variance matrix determines whether or not simulation is needed. As Tom noted, there are tools in RATS for estimating VARs without simulation, using SPECIFY and ESTIMATE. I believe these are based on the setup of Litterman (1986, Journal of Business and Economic Statistics), which treats the error variance matrix as fixed and diagonal. Under that assumption, the posterior mean of the VAR coefficients can be obtained without simulation (and with calculations that proceed equation by equation).
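
For reference, a SPECIFY/ESTIMATE setup along those lines looks roughly like this (a sketch only; the series names and hyperparameter values are placeholders, not a recommendation):

* Sketch: Litterman-style prior via SPECIFY, estimated without simulation.
* Series names and hyperparameter values are placeholders.
system(model=bvar)
variables gdp infl ffr
lags 1 to 4
det constant
specify(type=symmetric,tightness=0.1) 0.5
end(system)
estimate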

Gibbs sampling is most typically needed when the prior combines (1) Litterman's idea of a different degree of shrinkage on "other" lags versus "own" lags with (2) either a diffuse or Wishart prior on the error variance matrix. Tom's Bayesian econometrics course covers this case (and others).

The Del Negro-Schorfheide treatment is based on a Normal-Wishart prior, in which there isn't a different degree of shrinkage on "other" lags versus "own" lags. Under that specification, the posterior means of the coefficients and the error variance matrix can be obtained without simulation. They implement the prior with dummy observations, but as noted above, that isn't particularly material, except that the sums-of-coefficients and initial-observations priors suggested by Sims and by Doan, Litterman, and Sims (1984, Econometric Reviews) are only implemented with dummy observations.
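
As an illustration of what those dummy observations look like, here is a rough sketch of the sums-of-coefficients block under one common convention (scaling conventions differ across papers, so treat the exact form as an assumption; ybar, tau, nvar, and nlag are placeholder names):

* Hedged sketch: sums-of-coefficients dummy observations, one common convention.
* ybar(i) = pre-sample mean of variable i; tau = tightness hyperparameter.
compute ydsum = %zeros(nvar,nvar)
compute xdsum = %zeros(nvar,nvar*nlag+1)
do i=1,nvar
   * Dummy row i loads tau*ybar(i) on variable i and on each of its own
   * lags; the final column (the constant) stays zero.
   compute ydsum(i,i) = tau*ybar(i)
   do l=1,nlag
      compute xdsum(i,(l-1)*nvar+i) = tau*ybar(i)
   end do l
end do i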

With that long prelude: in a paper of mine (attached), forthcoming in the Journal of Applied Econometrics, I have put together and made available RATS code for estimating a VAR with a prior that is based on dummy observations. Note, however, that rather than augmenting the actual data matrix, I use the dummy observations to build the prior terms and add them to the data moments, which are easily computed with CMOM, without having to define data matrices. The data and code are now available at the journal's data archive (link below). The case you're interested in is reflected in the baseline VAR specification of my paper.
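
In rough outline (a sketch only, not the paper's actual code; series names, dimensions, and the prior-term matrices are placeholders), the moments-based route looks like this:

* Sketch: data moments via CMOM, prior terms added directly.
* Placeholder system: 3 variables, 4 lags, constant (k = 13 regressors).
cmom
# constant gdp{1 to 4} infl{1 to 4} ffr{1 to 4} gdp infl ffr
compute k = 13, n = 3
* %CMOM is the (k+n)x(k+n) cross-moment matrix of the listed entries;
* pull out the X'X and X'Y blocks:
compute xx = %xsubmat(%cmom,1,k,1,k)
compute xy = %xsubmat(%cmom,1,k,k+1,k+n)
* Add the prior terms built from the dummy observations (xdxd = XD'XD,
* xdyd = XD'YD, assumed computed earlier) and solve for the posterior mean:
compute bpost = inv(xx+xdxd)*(xy+xdyd)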

http://qed.econ.queensu.ca/jae/forthcom ... arcellino/
Attachments
CCM.JAEacceptedversion.pdf (418.91 KiB)
Todd Clark
Economic Research Dept.
Federal Reserve Bank of Cleveland