Re: Laubach and Williams RESTAT 2003
Posted: Mon Jun 05, 2017 3:57 pm
by lali62
OK. I just wanted to know whether the gap estimated in stage 2 should be used in the stage 3 estimation. Currently the stage 1 gap is being used in the stage 3 estimation. I do not have access to the GAUSS code, so I wanted to confirm that this is correct.
Re: Laubach and Williams RESTAT 2003
Posted: Tue Jun 06, 2017 11:01 am
by TomDoan
It's "used" in the 3rd stage to get guess values for the parameters, nothing more.
Re: Laubach and Williams RESTAT 2003
Posted: Tue Jun 13, 2017 10:53 am
by asmith05
Hi Tom,
When I look at the smoothed state estimates from this code they look quite reasonable; however, the filtered output is incredibly volatile for the first few quarters of the sample before settling down.
Is this a problem unique to this model (For example, from what I understand, Laubach and Williams use a different method for initializing the states than the presample=ergodic option in RATS)?
Or, is this a common issue in non-stationary SS models? In other words, is there a reason in such models to discount the first few periods of filtered state estimates when the SS model is non-stationary (even if one uses the ergodic method in RATS to initialize the model)? Should we instead focus on the smoothed estimates in this case?
Also, a clarifying question from your 2010 technical paper on SS initialization with non-stationary dynamics: Is RATS automatically using the conditional likelihood (i.e. dropping the first few observations when forming the likelihood) in non-stationary models such as Laubach and Williams? If so, how does it determine how many to drop?
Thanks for any input!
Re: Laubach and Williams RESTAT 2003
Posted: Tue Jun 13, 2017 11:41 am
by TomDoan
asmith05 wrote:Hi Tom,
When I look at the smoothed state estimates from this code they look quite reasonable; however, the filtered output is incredibly volatile for the first few quarters of the sample before settling down.
Is this a problem unique to this model (For example, from what I understand, Laubach and Williams use a different method for initializing the states than the presample=ergodic option in RATS)?
Or, is this a common issue in non-stationary SS models? In other words, is there a reason in such models to discount the first few periods of filtered state estimates when the SS model is non-stationary (even if one uses the ergodic method in RATS to initialize the model)? Should we instead focus on the smoothed estimates in this case?
Almost any dynamic model will have really iffy estimates of the states in the first few periods of the Kalman filter. However, that's particularly acute in a model with unit roots in the states (the three models have 1, 2 and 3 unit roots, respectively) since there is no pre-sample mean for those. Theoretically, the two observables are enough to at least pin down the two unit roots in model #2, but since those two states are the potential GDP trend and its rate of growth, it doesn't seem likely that the model will be able to tease out the latter from just one value of GDP plus one value of inflation.
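The mechanics behind this can be seen outside RATS with a minimal scalar Kalman filter sketch in Python. This is a generic local-level model with one unit-root state, not the LW model itself, and the noise variances are made up for illustration: with a diffuse-style initialization (a huge prior variance), the first filtered estimate essentially just copies the first observation, measurement noise and all, and the filtered-state variance only settles toward its steady state after several observations.

```python
# Local-level model: state a_t = a_{t-1} + w_t (unit root), obs y_t = a_t + v_t.
# Approximate a diffuse prior with a huge initial variance and watch how the
# filtered-state variance (hence the noisiness of early filtered estimates)
# collapses over the first few observations. All variances are hypothetical.

def filtered_variances(n_obs, sw=0.1, sv=1.0, p0=1e8):
    """Return the sequence of post-update state variances P(t|t)."""
    p = p0
    out = []
    for _ in range(n_obs):
        p_pred = p + sw             # predict: variance grows by state noise
        k = p_pred / (p_pred + sv)  # Kalman gain (near 1 when p_pred is huge)
        p = (1.0 - k) * p_pred      # update: variance after seeing y_t
        out.append(p)
    return out

variances = filtered_variances(8)
# The first filtered variance is already about sv (gain near 1: the filter
# copies the first observation), then declines toward its steady-state value.
```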
asmith05 wrote:
Also, a clarifying question from your 2010 technical paper on SS initialization with non-stationary dynamics: Is RATS automatically using the conditional likelihood (i.e. dropping the first few observations when forming the likelihood) in non-stationary models such as Laubach and Williams? If so, how does it determine how many to drop?
You can use the CONDITION option if you want (which is recommended if the number of unit roots might not be known a priori, for instance if you have a freely estimated AR component). However, by default, as soon as it has enough information to give the predictive density a finite component (even if it's not full rank), it will use that in computing the log likelihood. For instance, in model 3, with 3 unit roots, the first observation is skipped in the likelihood calculation, the second has a rank-one component (3 unit roots minus 2 prior observables), and the third observation is fully included.
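That counting can be written down as a back-of-the-envelope Python sketch. This is not from the RATS source; it assumes the generic case in which each prior observable resolves one diffuse (unit-root) direction of the state vector.

```python
def finite_rank(t, d, m):
    """Rank of the finite part of period t's predictive density, assuming each
    prior observable resolves one diffuse direction (generic loadings).
    d = number of unit-root (diffuse) states, m = observables per period."""
    remaining_diffuse = max(0, d - m * (t - 1))
    return max(0, m - remaining_diffuse)

# Model 3 above: d = 3 unit roots, m = 2 observables per period.
ranks = [finite_rank(t, d=3, m=2) for t in (1, 2, 3, 4)]
# First observation skipped (rank 0), second contributes a rank-one
# component, third and later observations are fully included (rank 2).
```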
Laubach & Williams (2003)
Posted: Fri May 03, 2019 9:37 am
by hardmann
Dear Dr. Doan:
I have read the discussion of LW (2003) on the forum, but I have three questions.
1. ystar, potential output, is a state variable, and gap = y - ystar. Shouldn't the measurement equation use the gap instead of ystar, i.e. (y - ystar)?
2. I am confused about the usage of %eqnxvector(stage2eq,t), for example:
equation stage2eq *
# logrgdp{1 2} exanterr{1 2} pceinflation{1} pi3{2} pi5{5} pioilgap{1} piimpgap{0} constant
frml regf = ||a1,a2,a3/2,a3/2,0,0,0,0,0,a4|b3,0,0,0,b1,b2,1-b1-b2,b4,b5,0||
frml muf = regf*%eqnxvector(stage2eq,t)
I have also read the explanation in the State Space Models course, 2nd ed. (the %EQNXVECTOR function, pp. 16-17), and the manuals. Could you explain it in more detail?
3. In a pure trend-cycle decomposition, both the trend and the cycle are specified as states, and the cycle is generally given an AR(2) structure. In the paper, however, the cycle (the gap) is derived from the trend rather than being modeled explicitly. Is that right or suitable?
Best regards
hardmann
Re: Laubach & Williams (2003)
Posted: Tue May 07, 2019 6:47 am
by TomDoan
hardmann wrote:Dear Dr. Doan:
I have read the discussion of LW (2003) on the forum, but I have three questions.
1. ystar, potential output, is a state variable, and gap = y - ystar. Shouldn't the measurement equation use the gap instead of ystar, i.e. (y - ystar)?
The gap itself isn't directly observable. The measurement equation has Y as the dependent variable, with YSTAR entering (with a coefficient of 1) as part of the C'X(t).
hardmann wrote:
2. I am confused about the usage of %eqnxvector(stage2eq,t), for example:
equation stage2eq *
# logrgdp{1 2} exanterr{1 2} pceinflation{1} pi3{2} pi5{5} pioilgap{1} piimpgap{0} constant
frml regf = ||a1,a2,a3/2,a3/2,0,0,0,0,0,a4|b3,0,0,0,b1,b2,1-b1-b2,b4,b5,0||
frml muf = regf*%eqnxvector(stage2eq,t)
The MU is there to handle any observable variables in the gap regression function; the C part handles any part that is due to the states. So, for instance, you have a1*gap(t-1)+a2*gap(t-2), where gap is y (observable) minus ystar (state). The a1*y{1} goes into the MU, and the -a1*ystar{1} (which is state 2, since ystar{0} is state 1) goes into C.
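That split can be checked numerically. All the numbers in this Python sketch are hypothetical; only the decomposition of a1*gap(t-1) into a MU (observable) piece and a C (state) piece comes from the explanation above.

```python
# a1*gap(t-1) = a1*(y(t-1) - ystar(t-1)) splits into an observable piece
# a1*y(t-1), which goes into MU, and a state piece -a1*ystar(t-1), which
# goes into the C loadings. Hypothetical numbers for illustration:

a1 = 0.8
y_lag1 = 100.0     # observable: lagged log GDP
ystar_lag1 = 98.5  # state: lagged ystar

mu_part = a1 * y_lag1      # handled through MU (observables)
c_part = -a1 * ystar_lag1  # handled through C'X(t) (states)

gap_term = a1 * (y_lag1 - ystar_lag1)
# The two pieces together reproduce the gap term exactly.
assert abs((mu_part + c_part) - gap_term) < 1e-12
```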
hardmann wrote:
3. In a pure trend-cycle decomposition, both the trend and the cycle are specified as states, and the cycle is generally given an AR(2) structure. In the paper, however, the cycle (the gap) is derived from the trend rather than being modeled explicitly. Is that right or suitable?
The whole point of this is that the gap (i.e. the "cycle") is modeled based upon its economic effects rather than just pure time series effects. Now, the L&W model has a number of problems, not least of which is that even fairly modest changes to the data produce a completely different decomposition. As is typical of models like this, the likelihood is very flat across a wide range of different decompositions, so the output depends heavily on the parameters that are fixed by the user.
Re: Laubach and Williams RESTAT 2003
Posted: Sun Nov 03, 2019 7:58 am
by hardmann
Dear Tom:
I noticed that the RATS code describes potential GDP as I(2) (an I(1) drifting growth rate) in the second stage. However, a unit root test on logrgdp indicates it is I(1), so this may be a misspecification. Perron & Wada (2009) state that in trend-cycle decompositions the local trend model, i.e. a trend growth rate that follows a random walk, is a generalized trend function, but that this feature is not supported by the data. They instead model the trend as a random walk with drift, with the drift an intercept subject to a break.
I have tried this specification, but I am not sure about it.
Hardmann
Re: Laubach and Williams RESTAT 2003
Posted: Sun Jul 19, 2020 4:27 am
by hardmann
Dear Tom:
Could we estimate the natural rate of interest at a monthly frequency?
I imagine we would use monthly r and inflation together with quarterly real GDP to jointly estimate r* with an extended Kalman filter.
Best Regard
Hardmann
Re: Laubach and Williams RESTAT 2003
Posted: Fri Nov 27, 2020 11:03 am
by TomDoan
hardmann wrote:Dear Tom:
Could we estimate the natural rate of interest at a monthly frequency?
I imagine we would use monthly r and inflation together with quarterly real GDP to jointly estimate r* with an extended Kalman filter.
Best Regard
Hardmann
The LW model is rather fragile as it is, so I'm not sure how well it would work to extend it to monthly. However, yes. You would first need to see if anything about the dynamics of the model needs to change to adapt it to a monthly frequency, and the quarterly real GDP numbers would require a non-linear measurement equation since the non-logged RGDP would add up across the months of a quarter, while the model is producing predictions for monthly log RGDP.
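The aggregation constraint behind that non-linear measurement equation can be sketched as follows. This is a hypothetical Python illustration of the idea, not RATS code, and it follows Tom's description of the levels adding up across the three months of a quarter; the function and variable names are made up.

```python
import math

# The states would include monthly log RGDP, but the quarterly release is
# (per the answer above) the sum of the monthly levels, so the measurement
# equation is nonlinear in the states: y_q = exp(x1) + exp(x2) + exp(x3).
# An extended Kalman filter linearizes this around the predicted states.

def quarterly_gdp(log_monthly):
    """Nonlinear measurement: quarterly level implied by three monthly logs."""
    return sum(math.exp(x) for x in log_monthly)

def measurement_jacobian(log_monthly):
    """EKF linearization: the derivative of y_q w.r.t. each monthly log
    state is just that month's level, exp(x_i)."""
    return [math.exp(x) for x in log_monthly]
```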
Re: Laubach and Williams RESTAT 2003
Posted: Tue Jan 04, 2022 10:00 am
by hardmann
Dear Tom:
I have studied the WinRATS code for LW (2003) and found some specifications that differ slightly from the paper, although the final results are not materially different.
In LW (2003), the inflation equation uses the four-quarter average of inflation, PI(t-5) through PI(t-8), so I would guess the WinRATS code should be:
filter(type=lagging,span=4) pceinflation / pi5
In replication codes for LW2003:
filter(type=lagging,span=5) pceinflation / pi5
Should it be 4 rather than 5?
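For what it's worth, the arithmetic can be checked outside RATS. Assuming FILTER(TYPE=LAGGING,SPAN=n) produces the average of the current value and the previous n-1 values (my reading of the option; treat it as an assumption), a Python sketch of the two choices:

```python
def lagging_ma(x, t, span):
    """Average of x[t], x[t-1], ..., x[t-span+1] (0-based index t).
    Assumed equivalent of RATS FILTER(TYPE=LAGGING,SPAN=span)."""
    return sum(x[t - j] for j in range(span)) / span

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
pi_span4 = lagging_ma(x, 5, 4)  # average of x[2..5] = 4.5
pi_span5 = lagging_ma(x, 5, 5)  # average of x[1..5] = 4.0
```

If that reading is right, then a SPAN=4 filter on a series that enters the equation with lag {5} (as pi5{5} does in the stage 2 equation above) covers lags 5 through 8, while SPAN=5 covers lags 5 through 9.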
Best Regard
Hardmann
Re: Laubach and Williams RESTAT 2003
Posted: Sat Jan 15, 2022 12:16 am
by hardmann
Dear Tom:
Could we use an alternative pi3 as follows?
dec frml[series[real]] avgpi3
frml avgpi3 = (pceinflation{1}+pceinflation+pceinflation{-1})/3.0
However, it does not run; it seems the type of avgpi3 is wrong. How can I correct it?
Thanks.
Hardmann
Re: Laubach and Williams RESTAT 2003
Posted: Mon Jan 17, 2022 2:29 pm
by TomDoan
Why would that be a FRML[SERIES[REAL]]? That looks like a garden-variety FRML.
Measuring the Natural Rate of Interest After COVID-19
Posted: Thu Oct 17, 2024 9:49 am
by hardmann
Dear Tom:
Holston, Laubach, and Williams (2023) have updated the paper and R code of Holston, Laubach, and Williams (2017) and Laubach and Williams (2003). Estima has provided an example and code for Laubach and Williams (2003). Could Estima also update the RATS code?
The paper, code, and data are available here:
https://www.newyorkfed.org/research/policy/rstar
Best regard
Hardmann
Re: Measuring the Natural Rate of Interest After COVID-19
Posted: Thu Oct 17, 2024 2:47 pm
by TomDoan
I will take another look at it, but my experience (and others came to the same conclusion) was that the LW model doesn't actually do much; it's the variance pegs that produce the desired result.
I sent this to another user a few years back:
One of our other users has been also working with the LW model and found it difficult to reproduce their results. (LW post updated results on Laubach's web site). I went back to the original data set and found that, if one is *very* careful with the optimization, the results come out quite different from their published results. Radically different results (from trend = fixed rate to trend is effectively equal to the data) produce almost the same log likelihood, and minor differences in how to initialize the Kalman filter can change where on the spectrum the estimates fall.