
Koop textbook example

Posted: Sat Apr 16, 2011 4:12 am
by iloverats
Dear all,
I ran the textbook example koop_bp200.rpf.
I find that the results estimated by maximum likelihood and by the Gibbs sampler are very different with respect to the state space curves (a1 in the program).
I don't think the two methods should give such different results.
Could you tell me the reason?
Thank you very much.

Re: Koop textbook example

Posted: Sat Apr 16, 2011 9:09 am
by TomDoan
iloverats wrote: Dear all,
I ran the textbook example koop_bp200.rpf.
I find that the results estimated by maximum likelihood and by the Gibbs sampler are very different with respect to the state space curves (a1 in the program).
I don't think the two methods should give such different results.
Could you tell me the reason?
Thank you very much.
There's quite a lengthy discussion in the comments near the end of that program about how the prior isn't really appropriate. Even though the priors are quite loose (1 degree of freedom), their scales are so far out of line with the appropriate values that they force way too much time variation.

Re: Koop textbook example

Posted: Mon Apr 18, 2011 12:19 pm
by iloverats
thank you

Re: Koop textbook example

Posted: Mon Apr 18, 2011 12:21 pm
by iloverats
Are nulambda and s2lambda the shape parameter and the scale parameter, respectively? Or what are they?

Re: Koop textbook example

Posted: Mon Apr 18, 2011 5:10 pm
by TomDoan
Shape and scale on the variance of the coefficient drift.
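
In case it helps to see the mechanics, here is a minimal sketch of the Gibbs step those two hyperparameters feed into. It's written in Python for illustration (it is not the RATS code from koop_bp200.rpf) and assumes the standard conjugate setup, where the prior says nulambda*s2lambda/sigma^2 is chi-squared with nulambda degrees of freedom.

Code:
import numpy as np

def draw_drift_variance(increments, nulambda, s2lambda, rng):
    """One Gibbs draw for the variance of the coefficient drift.

    Prior: nulambda * s2lambda / sigma^2 ~ chi-squared(nulambda), i.e. an
    inverse gamma with nulambda degrees of freedom and scale s2lambda.
    Given the currently sampled drift increments, the posterior is conjugate.
    """
    increments = np.asarray(increments)
    t = increments.size
    sse = np.sum(increments ** 2)
    # posterior d.o.f. = nulambda + t, posterior sum of squares = nulambda*s2lambda + sse
    return (nulambda * s2lambda + sse) / rng.chisquare(nulambda + t)

rng = np.random.default_rng(0)
# purely illustrative: a loose 1-d.o.f. prior and some made-up increments
sigma2_draw = draw_drift_variance(rng.normal(0.0, 0.05, 200), 1.0, 0.05**2, rng)

Note how nulambda*s2lambda enters the draw directly: if s2lambda is much larger than the actual size of the drift, that prior term keeps sigma^2 large on every sweep, which is exactly how an out-of-line scale forces too much time variation.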

Re: Koop textbook example

Posted: Tue Apr 19, 2011 2:39 am
by iloverats
In this model, I think it is hard, perhaps even impossible, to get prior information for the hyperparameters nulambda and s2lambda.
Can s2lambda = 1.0/(number of observations) be considered a general rule for such a case?

How can we choose proper prior information in such a case?

Re: Koop textbook example

Posted: Tue Apr 19, 2011 7:31 pm
by TomDoan
iloverats wrote: In this model, I think it is hard, perhaps even impossible, to get prior information for the hyperparameters nulambda and s2lambda.
Can s2lambda = 1.0/(number of observations) be considered a general rule for such a case?

How can we choose proper prior information in such a case?
There's a fairly lengthy discussion of TVP models at:

http://www.estima.com/forum/viewtopic.php?f=5&t=381

The value for s2lambda is appropriate for an autoregressive parameter, which has a natural scale on the order of 1 because it's scale-independent: with an increment variance of 1/T, the coefficient can drift by roughly one unit (one standard deviation of the cumulative drift) over the full sample, which is about the natural range of an autoregressive coefficient. It's not as clear with other regression coefficients.

Re: Koop textbook example

Posted: Mon Apr 25, 2011 11:51 pm
by iloverats
Dear Tom,

How can I set the unconditional prior information to do Bayesian Gibbs estimation in such a case?
Thank you.

Re: Koop textbook example

Posted: Thu Apr 28, 2011 10:06 am
by iloverats
TomDoan wrote: The value for s2lambda is appropriate for an autoregressive parameter, which has a natural scale on the order of 1 because it's scale-independent. It's not as clear with other regression coefficients.
Dear Tom,
Which textbook or papers should I refer to? I need more details.
Thank you very much.

Re: Koop textbook example

Posted: Thu Apr 28, 2011 10:14 am
by TomDoan
iloverats wrote: How can I set the unconditional prior information to do Bayesian Gibbs estimation in such a case?
Thank you.
That's a question for which I have no really good answer. There are some coefficients (autoregressive coefficients, stock betas) which have a natural scale. Many others don't. Using the scaling information from a full-sample linear regression will generally be the best course when you have nothing else. This is similar to what's done with time-varying parameter VARs, where the covariance matrix of the increments is a fraction (like .01) times a prior based upon empirical estimates.
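
As a rough illustration of that scaling idea (a Python sketch, not code from any RATS example; the function name and the .01 fraction are just placeholders): fit the model once by OLS over the full sample, take the estimated coefficient covariance, and use a small fraction of it as the prior scale for the drift increments.

Code:
import numpy as np

def drift_prior_from_ols(X, y, fraction=0.01):
    """Prior scale for the coefficient increments from a full-sample OLS fit.

    X is the T x k regressor matrix, y the length-T dependent variable.
    Returns fraction * Cov(beta_OLS); for a single coefficient, s2lambda could
    be taken from the corresponding diagonal element.
    """
    t, k = X.shape
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    sigma2 = resid @ resid / (t - k)            # OLS residual variance
    cov_beta = sigma2 * np.linalg.inv(X.T @ X)  # full-sample coefficient covariance
    return fraction * cov_beta

# made-up data, purely to show the call
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=200)
print(drift_prior_from_ols(X, y))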

Re: Koop textbook example

Posted: Thu Apr 28, 2011 10:54 am
by iloverats
Dear Tom,
What do you mean by the scaling information?

Re: Koop textbook example

Posted: Fri Apr 29, 2011 7:35 am
by TomDoan
iloverats wrote: Dear Tom,
What do you mean by the scaling information?
The prior for a variance is an inverse gamma with degrees of freedom and a scale. It's the scale that often isn't obvious from the context, and if it's way off (particularly on the high side) you can end up with too much time-variation.
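
In symbols (this is the standard parametrization for this kind of prior; check the RATS documentation for the exact convention the example uses), an inverse gamma prior with degrees of freedom \nu and scale s^2 for a variance \sigma^2 says

\[
\frac{\nu s^2}{\sigma^2} \sim \chi^2_{\nu},
\qquad
p(\sigma^2) \propto (\sigma^2)^{-(\nu/2+1)} \exp\!\left(-\frac{\nu s^2}{2\sigma^2}\right),
\]

so s^2 acts as a prior guess at the typical size of \sigma^2 (the prior mode is \nu s^2/(\nu+2)). If s^2 is set much larger than the true drift variance, the posterior gets pulled toward values that imply far too much time variation.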