RATS 10.1
Stock Watson JASA 1998


Stock and Watson (1998) use an indirect method to estimate the variance of coefficient drift in a time-varying parameters model. When the drift variance is small relative to the observation equation variance, the maximum likelihood estimate is often zero (in fact, the "maximum" can be achieved at a negative value).

 

Most of the analysis concerns an unobserved components model with a (potentially) time-varying level and a serially correlated error process:

\begin{equation} \beta _t = \beta _{t - 1} + v_t \end{equation}

\begin{equation} y_t = \beta _t + u_t \end{equation}

\begin{equation} a(L)u_t = \varepsilon _t \end{equation}
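To make the model concrete, here is a short numpy simulation of the three equations above (random-walk level \(\beta_t\) plus an AR(4) error \(u_t\)); all parameter values are made up for the sketch, not taken from the paper.

```python
# Simulate the unobserved-components model:
#   beta_t = beta_{t-1} + v_t,  y_t = beta_t + u_t,  a(L) u_t = eps_t
# with hypothetical AR coefficients and standard deviations.
import numpy as np

rng = np.random.default_rng(0)
T = 200
phi = np.array([0.3, 0.1, 0.05, 0.02])   # hypothetical a(L) coefficients
sig_v, sig_eps = 0.05, 1.0               # hypothetical std devs

beta = np.cumsum(sig_v * rng.standard_normal(T))   # random-walk level
u = np.zeros(T)
eps = sig_eps * rng.standard_normal(T)
for t in range(4, T):
    u[t] = phi @ u[t - 4:t][::-1] + eps[t]         # AR(4) error process
y = beta + u                                       # observed series
```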

This includes two example programs:

 

TABLE4.RPF

This does the calculation of the indirect estimates.

 

TABLE5.RPF

This does the analysis of the formal state-space model.

 

These analyze per capita US Real GDP growth over the sample 1947:2 to 1995:4. The question is whether the mean growth rate is constant (drift variance zero) or time-varying (drift variance non-zero). \(a(L)\) is chosen to be a 4-lag autoregression.

 

In TABLE4.RPF, the test statistics all test for a break in a feasible GLS regression on a constant only:

 

compute p=4

linreg dypc
# constant
linreg(define=areq) %resids
# %resids{1 to p}
filter(equation=areq) dypc / fdypc
compute a1=1-%sum(%beta)
set fc = a1
linreg fdypc
# fc
compute scalef=sqrt(%seesq)/(a1*%nobs)

 

SCALEF is the scale factor for converting the estimate of \(\lambda\) into the corresponding standard deviation of \(v\).
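The same sequence of steps can be written out in numpy to show the arithmetic behind SCALEF. This is an illustrative translation run on a simulated stand-in series, not the actual GDP data, and the variable names mirror the RATS code above.

```python
# numpy translation of the SCALEF computation: constant-only regression,
# AR(p) on the residuals, filter by a(L), regress on the filtered
# constant, then scalef = sqrt(seesq)/(a1*nobs).
import numpy as np

rng = np.random.default_rng(1)
dypc = rng.standard_normal(300) + 2.0    # placeholder for growth series
p = 4

# residuals from a constant-only regression
resids = dypc - dypc.mean()

# AR(p) on the residuals (OLS, no constant)
X = np.column_stack([resids[p - k:len(resids) - k] for k in range(1, p + 1)])
phi = np.linalg.lstsq(X, resids[p:], rcond=None)[0]

# filter dypc by a(L) = 1 - phi_1 L - ... - phi_p L^p
lags = np.column_stack([dypc[p - k:len(dypc) - k] for k in range(1, p + 1)])
fdypc = dypc[p:] - lags @ phi

# regress the filtered series on the filtered constant a1 = 1 - sum(phi)
a1 = 1 - phi.sum()
bhat = fdypc.mean() / a1                 # slope on the constant regressor a1
u = fdypc - a1 * bhat
seesq = u @ u / (len(fdypc) - 1)         # squared regression standard error
scalef = np.sqrt(seesq) / (a1 * len(fdypc))
```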
 

@APBREAKTEST can compute the EW and QLR test statistics:

 

@apbreaktest fdypc
# fc
@StockWatsonMUE(lambda=ewlambda,ew=%%APTEST)
@StockWatsonMUE(lambda=qlrlambda,qlr=%%AQTEST)

 

The Nyblom test can be done using @STABTEST. The test on the first (and only) coefficient is in %HSTATS(1).

 

@stabtest fdypc
# fc
@StockWatsonMUE(lambda=llambda,l=%HSTATS(1))
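For reference, the Nyblom/Hansen stability statistic for the constant in a mean-only regression can be sketched directly. This uses one common form of the statistic, \(L = \sum_t S_t^2 /(T \sum_t e_t^2)\), with \(S_t\) the partial sums of the residuals; the data here are simulated, and this is not the @STABTEST implementation itself.

```python
# Nyblom/Hansen L statistic for a constant-only regression:
# partial sums of residuals, normalized by T times the residual
# sum of squares.
import numpy as np

def nyblom_constant(y):
    e = y - y.mean()          # residuals from the constant-only regression
    S = np.cumsum(e)          # partial sums of the scores
    return (S @ S) / (len(y) * (e @ e))

rng = np.random.default_rng(2)
stat = nyblom_constant(rng.standard_normal(500))
```

Under the null of a stable mean, large values of the statistic indicate instability.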

 

Note that the @StockWatsonMUE procedure, which does the table lookup, uses an expanded and re-estimated table of values, which differs quite a bit from the published table for the QLR in the vicinity of the value found in this example.
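The mechanics of the lookup can be sketched as follows: the median-unbiased estimate of \(\lambda\) inverts a tabulated median function of the test statistic by interpolation. The (lambda, median) pairs below are hypothetical placeholders for illustration, not the actual Stock-Watson table values.

```python
# Median-unbiased estimation by table lookup: find lambda such that
# the tabulated median of the test statistic equals the observed value.
import numpy as np

lam_grid    = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0, 30.0])
median_stat = np.array([0.12, 0.17, 0.25, 0.55, 1.20, 2.90, 4.80])  # made up

def mue_lookup(observed):
    """Interpolate lambda so that median(stat | lambda) = observed;
    clamp to 0 when the statistic is below the lambda=0 median."""
    if observed <= median_stat[0]:
        return 0.0
    return float(np.interp(observed, median_stat, lam_grid))

lam_hat = mue_lookup(0.9)
```

The clamp at zero is what produces the "pile-up at zero" behavior that motivates the median-unbiased approach in the first place.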

 

TABLE5.RPF is a formal analysis of the original unobserved components model. Under the null hypothesis that \({\mathop{\rm var}} (v) = 0\), the model can be estimated with BOXJENK.

 

compute p=4
*
boxjenk(ar=p,constant,maxl) dypc

 

 In general, it can be estimated by DLM as a state-space model. The model takes the form:

 

\begin{equation} \left[ {\begin{array}{*{20}c} {\beta _t } \\ {u_t } \\ {u_{t - 1} } \\ {u_{t - 2} } \\ {u_{t - 3} } \\ \end{array}} \right] = \left[ {\begin{array}{*{20}c} 1 \hfill & 0 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & {\varphi _1 } \hfill & {\varphi _2 } \hfill & {\varphi _3 } \hfill & {\varphi _4 } \hfill \\ 0 \hfill & 1 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 1 \hfill & 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill & 0 \hfill & 1 \hfill & 0 \hfill \\ \end{array}} \right]\left[ {\begin{array}{*{20}c} {\beta _{t - 1} } \\ {u_{t - 1} } \\ {u_{t - 2} } \\ {u_{t - 3} } \\ {u_{t - 4} } \\ \end{array}} \right] + \left[ {\begin{array}{*{20}c} 1 \hfill & 0 \hfill \\ 0 \hfill & 1 \hfill \\ 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill \\ 0 \hfill & 0 \hfill \\ \end{array}} \right]\left[ {\begin{array}{*{20}c} {v_t } \\ {\varepsilon _t } \\ \end{array}} \right] \end{equation}

\begin{equation} y_t = \left[ {\begin{array}{*{20}c} 1 \hfill & 1 \hfill & 0 \hfill & 0 \hfill & 0 \hfill \\ \end{array}} \right]\left[ {\begin{array}{*{20}c} {\beta _t } \\ {u_t } \\ {u_{t - 1} } \\ {u_{t - 2} } \\ {u_{t - 3} } \\ \end{array}} \right] \end{equation}
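The state-space matrices above can be written out in numpy to make the block structure explicit. The AR coefficients here are hypothetical placeholders; the shapes and zero patterns follow the displayed equations for \(p = 4\).

```python
# Build the 5x5 transition matrix A, the 5x2 shock loading F, and the
# 1x5 observation vector C for the state [beta_t, u_t, u_{t-1}, u_{t-2}, u_{t-3}].
import numpy as np

phi = np.array([0.3, 0.1, 0.05, 0.02])   # hypothetical phi_1..phi_4
p = len(phi)

A = np.zeros((p + 1, p + 1))
A[0, 0] = 1.0                    # random-walk block for beta_t
A[1, 1:] = phi                   # AR(4) row for u_t
A[2:, 1:-1] = np.eye(p - 1)      # shift u_{t-1}, u_{t-2}, u_{t-3} down

F = np.zeros((p + 1, 2))         # loads [v_t, eps_t] on the states
F[0, 0] = 1.0
F[1, 1] = 1.0

C = np.zeros(p + 1)              # y_t = beta_t + u_t
C[0] = C[1] = 1.0
```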

 

This sets up the model. The free parameters are the AR coefficients (the VECTOR R) and the standard deviations of \(\varepsilon\) (SIGEPS) and of the change in \(\beta\) (SIGB). The guess values for the first two come from the BOXJENK results, while the guess value for SIGB is a small multiple of SIGEPS.

 

dec vect r(p)
nonlin(parmset=dparms) sigb sigeps r
compute sigeps=sqrt(%sigmasq)
compute sigb=.01*sigeps
compute r=%xsubvec(%beta,2,p+1)

 

The %DLMAR function creates the \(4 \times 4\) lower right block of the transition matrix:

 

function %dlmar a
type rect %dlmar
type vect a
dim %dlmar(%size(a),%size(a))
ewise %dlmar(i,j)=%if(i==1,a(j),i==j+1)
end
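A numpy analogue of %DLMAR, for readers more familiar with that notation: the \(p \times p\) companion matrix has the AR coefficients in its first row and ones on the subdiagonal, exactly as the EWISE expression builds it.

```python
# Companion matrix of an AR(p): first row holds the coefficients,
# the subdiagonal is ones, everything else is zero.
import numpy as np

def dlmar(a):
    a = np.asarray(a, dtype=float)
    p = a.size
    m = np.zeros((p, p))
    m[0, :] = a                   # first row: AR coefficients
    m[1:, :-1] = np.eye(p - 1)    # ones on the subdiagonal
    return m
```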

 

What Stock and Watson call the MMLE (Maximum Marginal Likelihood Estimator) can be done using DLM with the option PRESAMPLE=ERGODIC.

 

dlm(y=dypc,save=dlm,title="MMLE Estimates",$
  a=1.0~\%dlmar(r),f=1.0~\%unitv(%size(r),1),sw=sigb^2~\sigeps^2,c=1.0~~%unitv(%size(r),1),$
  parmset=dparms,presample=ergodic,method=bfgs,type=smooth) / xstatesMMLE

 

This produces the Kalman smoothed estimates of the states in xstatesMMLE; the one of principal interest is \(\beta_t\), the first state in the model.
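The PRESAMPLE=ERGODIC option starts the Kalman filter at the unconditional distribution of the stationary states. For the AR(4) block, that covariance solves the discrete Lyapunov equation \(P = A P A' + Q\). A sketch with hypothetical coefficients (the random-walk \(\beta_t\) has no stationary distribution and is handled separately by RATS):

```python
# Ergodic (unconditional) covariance of the stationary AR(4) state
# block, via the discrete Lyapunov equation P = A P A' + Q.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

phi = np.array([0.3, 0.1, 0.05, 0.02])   # hypothetical AR coefficients
p = len(phi)
A = np.zeros((p, p))                      # companion matrix of the AR block
A[0, :] = phi
A[1:, :-1] = np.eye(p - 1)
sig_eps = 1.0
Q = np.zeros((p, p))
Q[0, 0] = sig_eps**2                      # eps_t enters only the first state

P0 = solve_discrete_lyapunov(A, Q)        # ergodic state covariance
```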

 

The remaining DLM instructions estimate the model with SIGB pegged. Pegging it to zero gives the MPLE (Maximum Profile Likelihood Estimator). The \(\beta_0\) estimates differ slightly from the paper's because, as described there, the paper reports the value at time 0, while these are the smoothed values at time 1 (everything else will be the same).

 

nonlin(parmset=peg00) sigb=0.0
dlm(y=dypc,get=dlm,title="MPLE Estimates",$
  parmset=dparms+peg00,presample=ergodic,method=bfgs,type=smooth) / xstatesMPLE vstatesMPLE
?"beta_0" xstatesMPLE(%regstart())(1) sqrt(vstatesMPLE(%regstart())(1,1))

 

These repeat the estimation with SIGB pegged to two different non-zero values (.13 and .62), both larger than the freely estimated value of .04.

 

nonlin(parmset=peg13) sigb=.13
dlm(y=dypc,get=dlm,title="Estimates with .13 sigma",$
  parmset=dparms+peg13,presample=ergodic,method=bfgs,type=smooth) / xstates13 vstates13
?"beta_0" xstates13(%regstart())(1) sqrt(vstates13(%regstart())(1,1))
nonlin(parmset=peg62) sigb=.62
dlm(y=dypc,get=dlm,title="Estimates with .62 sigma",$
  parmset=dparms+peg62,presample=ergodic,method=bfgs,type=smooth) / xstates62 vstates62
?"beta_0" xstates62(%regstart())(1) sqrt(vstates62(%regstart())(1,1))

 

This graphs the different estimates of growth rates, including the actual growth from the data. The scale and volatility of the actual data largely obscure the (much smoother) estimates from the model.

 

set betample = xstatesmple(t)(1)
set betammle = xstatesmmle(t)(1)
set beta13   = xstates13(t)(1)
set beta62   = xstates62(t)(1)
graph(footer="Figure 3. Growth Rate in US Real Per Capita GDP",key=below,$
   klabels=||"GY","MPLE","MMLE","Sigma=.13","Sigma=.62"||)  5
# dypc
# betample
# betammle
# beta13
# beta62

 

This does the same without the actual data:

 

graph(footer="Figure 4. Estimated Trends of US Real GDP Growth",key=below,$
   klabels=||"MPLE","MMLE","Sigma=.13","Sigma=.62"||)  4
# betample
# betammle
# beta13
# beta62

 

This builds the table of estimates over various subsamples, using SSTATS to compute the sample means of the data and the various estimates. Note that the values for the actual data in the paper don't appear to be correct for the subsamples.

 

dec vect[vect[int]] ranges
dec vect[int] range
compute ranges=|| ||1947:2,1995:4|| , ||1947:2,1970:4|| , ||1970:1,1995:4|| , ||1950:1,1960:4|| , $
                  ||1960:1,1970:4|| , ||1970:1,1980:4|| , ||1980:1,1990:4|| , ||1990:1,1995:4|| ||
report(use=rreport,action=define,title="Estimated average trend growths")
report(use=rreport,atrow=1,atcol=1,align=center) "Date" "$\bar{GY}$" "MPLE" "MMLE" "$\sigma_{\Delta\beta}=.13$" "$\sigma_{\Delta\beta}=.62$"
dofor range = ranges
   sstats(mean) range(1) range(2) dypc>>gbar xstatesmple(t)(1)>>mplebar xstatesmmle(t)(1)>>mmlebar $
      xstates13(t)(1)>>m13bar xstates62(t)(1)>>m62bar
   report(use=rreport,row=new,atcol=1) %string(%year(range(1)))+"-"+%string(%year(range(2))) gbar mplebar mmlebar m13bar m62bar
end dofor range
report(use=rreport,action=format,atrow=2,picture="*.##",align=decimal)
report(use=rreport,action=show)

 


Copyright © 2025 Thomas A. Doan