Laubach and Williams RESTAT 2003
This estimates the two-observable models (real GDP and inflation) from Laubach and Williams (2003), "Measuring the Natural Rate of Interest", Review of Economics and Statistics, vol. 85, no. 4, 1063-1070. A similar model with multiple observables, combining regression equations and latent variables, is Fabiani and Mestre (2004).

Re: Laubach and Williams RESTAT 2003
Hi,
Can anyone tell me where the lamg estimate of 0.11 in the second stage comes from? It should be the ratio between two standard errors, but I don't find them.
[nonlin(parmset=peglam2) lamg=.11]
thanks
Re: Laubach and Williams RESTAT 2003
It's used here, so it's the ratio of the two standard deviations of the shocks.
frml swf = %diag(sig4^2,(lamg*sig4)^2)
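As a minimal sketch of that construction (in NumPy rather than RATS; `%diag` builds a diagonal matrix from its arguments), the shock covariance pins lamg down as the ratio of the two shock standard deviations:

```python
import numpy as np

def shock_cov(sig4, lamg):
    # RATS: frml swf = %diag(sig4^2, (lamg*sig4)^2)
    # Diagonal covariance matrix of the two state shocks.
    return np.diag([sig4**2, (lamg * sig4)**2])

W = shock_cov(sig4=0.5, lamg=0.11)
# The ratio of the two shock standard deviations recovers lamg.
ratio = np.sqrt(W[1, 1] / W[0, 0])
```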

Re: Laubach and Williams RESTAT 2003
Thanks
Is the value lamg=.11 used as an initial value for the estimation?
Re: Laubach and Williams RESTAT 2003
No. It looks like LAMG gets pegged to some value in each of the model estimations.
Re: Laubach and Williams RESTAT 2003
Hi Tom,
I have a question about this replication and the method used in the paper. After reading the paper, it seems the sequential estimation used in this paper is necessary to pin down values of lamg and lamz. The first stage, for example, estimates a simple model of potential GDP assuming trend growth (g) is constant and also omitting the real-rate gap from the IS equation. After getting this estimate of ystar (potential GDP), the authors use a result from Stock and Watson (1998, Median Unbiased Estimation of Coefficient Variance in a Time-Varying Parameter Model) which provides a way to estimate lamg. The Stock and Watson approach seems to amount to taking the estimate of ystar, regressing the growth rate of ystar on a constant, and then obtaining the Andrews and Ploberger (1994, Optimal Tests When a Nuisance Parameter is Present Only Under the Alternative) exponential Wald statistic for a structural break at an unknown date. This exponential Wald statistic is then transformed (via Stock and Watson's Table 3) into an estimate of lamg.
My question is as follows:
Stock and Watson (1998) provide in Table 3 Median Unbiased Estimators under a normalization that "D=1." It is not clear to me how to normalize the regression of ystar growth on a constant to generate the appropriately normalized exponential wald statistic. This is my approach to implementing this into the code (acknowledging the fact that I am not sure about the normalization):
One would then want to look up the "Andrews-Ploberger Test Statistic" in Stock and Watson's Table 3. Does this look like the right approach to implementing it? Also, do you have a suggestion on this normalization?
Thanks for any input!
Code: Select all
set ystar %regstart() %regend() = xstates(t)(1)
set g_ystar = ystar - ystar{1}
linreg g_ystar
# constant
@RegHBreak
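A rough sketch of the exp-Wald calculation itself (in Python, not RATS; this is an illustrative version of a mean-break test with 15% trimming, not necessarily what @RegHBreak does internally, and it does not address the D=1 normalization question):

```python
import numpy as np

def exp_wald_mean_break(dy, trim=0.15):
    """Andrews-Ploberger exponential Wald statistic for a one-time break
    in the mean of dy at an unknown date, with symmetric trimming.
    Illustrative sketch only."""
    T = len(dy)
    lo, hi = int(np.floor(trim * T)), int(np.ceil((1 - trim) * T))
    stats = []
    for tau in range(lo, hi):
        x1, x2 = dy[:tau], dy[tau:]
        # Chow/Wald statistic for equal means before and after tau
        ssr = np.sum((x1 - x1.mean())**2) + np.sum((x2 - x2.mean())**2)
        s2 = ssr / (T - 2)
        stats.append((x1.mean() - x2.mean())**2 / (s2 * (1/len(x1) + 1/len(x2))))
    stats = np.asarray(stats)
    # log of the average of exp(W/2), computed stably (log-sum-exp)
    m = stats.max()
    return m / 2 + np.log(np.mean(np.exp((stats - m) / 2)))
```

A series with a clear level shift in its mean should produce a much larger statistic than one without a break.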
Re: Laubach and Williams RESTAT 2003
Hello,
I have a problem with the Laubach and Williams model. Here is the direct link to the paper: http://www.frbsf.org/economic-research/ ... 01516.pdf
In fact, your RATS program only gives the result on page 12 (not the one on page 19).
Does anyone know a solution?
Also, I have a problem with this line on the code:
set trend=t
set btrend1=%max(t-1973:4,0)
set btrend2=%max(t-1995:2,0)
Many thanks in return
Here is the whole code :
 Attachments

 model1.rpf
Re: Laubach and Williams RESTAT 2003
You posted a newer Laubach-Williams paper. I assume you're talking about the original working paper. Page 19 is simply a graph of the filtered output against the smoothed output. The filtered output is obtained by running the DLM (I'm not sure which one is used there) with TYPE=FILTER rather than TYPE=SMOOTHED.
You're missing a space between the series name and the = in your SET instructions.
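For intuition on the FILTER/SMOOTHED distinction, here is a minimal local-level Kalman filter with a Rauch-Tung-Striebel smoothing pass (a generic Python sketch, not the LW state space): the filtered estimate at t uses data only through t, while the smoothed estimate revises it using the full sample.

```python
import numpy as np

def local_level_filter_smoother(y, q=0.1, r=1.0):
    """Local-level model: state a_t = a_{t-1} + w_t (var q), obs y_t = a_t + v_t (var r).
    Returns (filtered, smoothed) state estimates -- the analogue of running
    a DLM with TYPE=FILTER versus TYPE=SMOOTHED."""
    n = len(y)
    a_f = np.zeros(n); P_f = np.zeros(n)   # filtered mean/variance
    a_p = np.zeros(n); P_p = np.zeros(n)   # one-step-ahead predictions
    a, P = y[0], r                         # simple initialization
    for t in range(n):
        a_p[t], P_p[t] = a, P + q          # predict
        K = P_p[t] / (P_p[t] + r)          # Kalman gain
        a = a_p[t] + K * (y[t] - a_p[t])   # update with y_t
        P = (1 - K) * P_p[t]
        a_f[t], P_f[t] = a, P
    a_s = a_f.copy()                       # backward (smoothing) pass
    for t in range(n - 2, -1, -1):
        J = P_f[t] / P_p[t + 1]
        a_s[t] = a_f[t] + J * (a_s[t + 1] - a_p[t + 1])
    return a_f, a_s
```

At the end of the sample the two estimates coincide by construction; earlier in the sample the smoothed series uses later observations that the filtered series cannot see.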
Re: Laubach and Williams RESTAT 2003
Hello,
Many thanks for answering me, and sorry for the wrong link.
I still have three problems to resolve, and I would be grateful for your help!
1) On the new Laubach-Williams paper, I don't understand how we get the Standard LW curve with the RATS program (page 28): http://www.frbsf.org/economic-research/ ... 01516.pdf
2) In your RATS code, regarding the previous lines of code, I don't understand how we get the dates 1973 and 1995.
3) Finally, still in the code (about the Phillips curve), I don't understand how we get 0.42 and 0.58? Here are the relevant lines from the whole code:
Phillips curve
*
linreg pceinflation 1961:1 *
# gap{1} pceinflation{1} pi3{2} pi5{5} pioilgap{1} piimpgap{0}
compute b3=%beta(1),b1=%beta(2),b2=%beta(3),b4=%beta(5),b5=%beta(6),sig2=sqrt(%seesq)
*
nonlin(parmset=peglam3) lamg=.042 lamz=sqrt(2)*.058
Thank you a lot!!
Re: Laubach and Williams RESTAT 2003
GAM2016 wrote:
1) On the new Laubach-Williams paper, I don't understand how we get the Standard LW curve with the RATS program (page 28).
I'm not sure which of the models is the "Standard LW", but that's just the smoothed estimates from it, for instance, like the NRR graphed in the final graph of the model1.rpf program.
GAM2016 wrote:
2) In your RATS code, I don't understand how we get the dates 1973 and 1995.
That came out of one of the authors' Gauss programs. Since it's just used to generate a crude estimate of potential GDP in order to get guess values, the precise values really don't matter.
GAM2016 wrote:
3) Finally, still in the code (about the Phillips curve), I don't understand how we get 0.42 and 0.58?
You would have to ask the authors about that. They make clear that you can't freely estimate all the variances, but the source of the pegs seems to vary from model to model.
Re: Laubach and Williams RESTAT 2003
What are the FRED codes of the data in nr2_0902.dat? I would like to update the model, but want to make sure I am not mixing and matching series. Thank you.
Re: Laubach and Williams RESTAT 2003
This is the authors' Gauss code to create the working data file from the raw data. However, their zip didn't include the raw data itself. I don't know if their data appendix has enough specifics to figure out the source series.
Code: Select all
/* 
rst_data.g: Reads raw data from text file nr2_0902.q, and transforms
the data as described in the appendix to LaubachWilliams,
FEDS 2001-56. Last modified 09/03/02.
 */
output file = nr2_0902.dat reset;
screen off;
format /rd 9,4;
load daq[218,10] = nr2_0902.q;
daq[1:76,8] = daq[1:76,7]*daq[77,8]/daq[77,7];
mv = miss(0,0);
x = reshape(mv,218,7);
x[.,1:2] = ln(daq[.,2:3]);
x[.,3] = daq[1:45,4]|daq[46:218,5];
x[2:218,5] = 400*ln(daq[2:218,6]./daq[1:217,6]);
x[2:218,6] = 400*ln(daq[2:218,8]./daq[1:217,8]);
x[.,7] = 100*((1+daq[.,10]/36000)^365-1);
x[69:218,7] = daq[69:218,9];
yr = (x[10:218,3]+x[9:217,3]+x[8:216,3]+x[7:215,3])/4;
xr = x[6:218,3]~x[5:217,3]~x[4:216,3]~ones(213,1);
i = 1;
do while i <= 170;
beta = invpd(xr[i:i+39,.]'xr[i:i+39,.])*xr[i:i+39,.]'yr[i:i+39];
x[48+i,4] = xr[i+43,.]*beta;
i = i+1;
endo;
x;
end;
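One of those transformations worth a sanity check is the interest-rate line, which compounds a daily rate quote (in percent, divided by 36000 to give a daily decimal rate on a 360-day basis) into an effective annual percentage rate. A quick check of that compounding formula (a hypothetical Python helper, not part of the original code):

```python
def effective_annual(rate_pct):
    # Mirrors the Gauss line: x[.,7] = 100*((1+daq[.,10]/36000)^365-1)
    return 100.0 * ((1.0 + rate_pct / 36000.0) ** 365 - 1.0)

# A 3.6% quoted daily-basis rate compounds to a slightly higher annual rate.
r = effective_annual(3.6)
```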
Re: Laubach and Williams RESTAT 2003
aubide wrote:
What are the FRED codes of the data in nr2_0902.dat? I would like to update the model, but want to make sure I am not mixing and matching series. Thank you.
Hi,
Don't know if you've resolved your issue yet, but the authors publish the input data and the core results publicly. It's just a case of Googling "Laubach Williams natural rate" or something similar, and you should find a link to an Excel file on the first results page (current link is: http://www.frbsf.org/economic-research/ ... mates.xlsx). They update their estimates quarterly, so there will be further updates as you go along.
If you are looking to compile the data series yourself, I would recommend looking at the data appendix of their paper to get the names of the data series and the sources they tap: https://www.federalreserve.gov/pubs/fed ... 156pap.pdf
Most of it you can find on the BEA website: http://www.bea.gov/. Just go to the interactive tables in the gross domestic product and the personal income and outlays sections to get all the data on GDP and price indices. For the interest rate data, I would use the FRED database. Just input the relevant search terms for the Fed Funds and NY Fed discount rates.
Hope that helps.
All the best,
Rupert
Re: Laubach and Williams RESTAT 2003
Hi,
I have a question related to this.
When estimating the "third stage without hours", you use the gap from stage 1 instead of stage 2, and in the starting parameter guess values you also use the gap estimated from stage 1. So the IS and Phillips guess values for stage 3 are the same as for stage 2.
Is this correct, or should it be the gap estimated from stage 2?
Also, if I use the given values of lamg for stage 2, lamg and lamz for stage 3, and the lamg value for stage 3 (with z following an AR(2) stationary process): can I assume the same values for the current dataset (through 2017:Q1), or would this drastically change and affect the estimation?
Thanks
Re: Laubach and Williams RESTAT 2003
Please note that we had nothing to do with creating this model. If you check the comments above, you'll see that there are many things in the RATS code because they were in the Gauss code but may not have been described in the papers. We can answer questions about the RATS coding, but more detailed technical questions about how the model is constructed need to be addressed to the authors.
It's important to note that this model is quite unstable as you change the data range. It appears to need quite a bit of ongoing change to the variance pegs to keep the results within reasonable limits.