RATS 11.1

Much of applied macroeconometrics over the past thirty years has been devoted to analyzing the implications of “unit root” and related behaviors in time series data. The econometrics governing the tests is technically difficult, with a large body of published papers discussing the proper methods. Because of this, we have chosen not to code specific tests directly into RATS. Instead, we rely upon procedures, which can be adapted more easily to changes in current practice. Here we discuss procedures for the basic forms of the tests.

 

The Time Series>Unit Root Tests wizard offers a simple way to access the most important of these. Among the tests listed there are those that allow for structural breaks. There are also tests using panel data rather than just a single time series.

 

For additional technical details on unit root testing, see the RATS Programming Manual by Enders and Doan, which is included with RATS.

 

Before you run unit root tests on your data, you might want to ask yourself why you are doing it. Many published papers have included unit root tests even though nothing in the paper actually depended upon the result of the tests. Worse than pointless unit root tests, we have seen more than a few users who are almost paralyzed by the (incorrect) idea that you can’t run regressions when some of the variables are “I(1)”. It’s true that static regressions (ones without lags) on series involving unit roots have a good chance of being “spurious” (Granger and Newbold, 1974), but regressions that handle the dynamics properly (with lags), such as vector autoregressions and ARDLs, are generally fine. See “Spurious Regressions”.

 

Dickey-Fuller Test

The most common choice in published empirical work nowadays is the Dickey-Fuller test (from Fuller (1976) and Dickey and Fuller (1979)) with an empirically chosen augmenting lag length. This can be done using the procedure @DFUNIT:

 

@dfunit( options )   series  start  end

 

The main options are:

 

DET=NONE/[CONSTANT]/TREND

Choose what deterministic components to include.

 

LAGS=number of additional lags [0]

MAXLAGS=maximum number of additional lags to consider [number of observations^.25]

METHOD=[INPUT]/AIC/BIC/HQ/TTEST/GTOS/SBC/MAIC

SIGNIF=cutoff significance level for METHOD=TTEST or GTOS [.10]

 

These options select the method for deciding the number of additional lags for an “augmented Dickey-Fuller test”, that is, the number of lags on the difference included in the regression to handle the shorter-term dynamics. If METHOD=INPUT, the number of lags given by the MAXLAGS (or LAGS) option is used. If AIC, the AIC-minimizing value between 0 and MAXLAGS is used; if BIC (SBC is another name for the same criterion), it’s the BIC-minimizing value; HQ and MAIC work the same way using the Hannan-Quinn criterion and the modified AIC of Ng and Perron (2001), respectively. If TTEST or GTOS, the choice is the largest number of lags for which the last included lag has a marginal significance level less than the cutoff given by the SIGNIF option. Which you should use is largely a matter of taste, though AIC and TTEST are probably the most common choices.
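 

For example, a sketch of choosing the lags by testing down with a stricter 5% cutoff (the series name Y is purely illustrative):

 

* Test down from a maximum of 8 additional lags, stopping at the
* first lag significant at the .05 level (illustrative settings)
@dfunit(det=constant,maxlags=8,method=ttest,signif=0.05) y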

 

The augmented test has the same asymptotic distribution under the null as the standard Dickey-Fuller test does in the AR(1) case. Note that some authors describe the number of lags in a Dickey-Fuller test as the number of lags in the AR, not (as is done here) as the number of additional lags on the difference. In that case, their “3 lag” test would be done using LAGS=2.
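 

For example, to reproduce a test reported as “3 lags” under that AR convention (the series name Y again purely illustrative):

 

* 3 lags in AR form = 2 additional lags on the difference
@dfunit(det=constant,lags=2) y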

 

UNITROOT.RPF example

As examples, UNITROOT.RPF looks at the two series examined in Chapter 17 of Hamilton (1994). One is the U.S. 3-month Treasury bill rate, which is assumed not to be trending; the other is log GNP, where a trend is assumed. For illustration, the example demonstrates all the unit root tests that we cover in this section, but in practice you would run only one or two of them, unless they give conflicting results.

 

The Dickey-Fuller tests for T-bills with several choices for handling the augmenting lags are:

 

@dfunit(det=constant) tbill

@dfunit(det=constant,maxlags=6,method=gtos) tbill

@dfunit(det=constant,maxlags=6,method=aic) tbill

 

In this case, both GTOS and AIC pick the full set of six lags, which suggests that a somewhat longer maximum lag might be in order, as shown in the sketch below.
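 

As a sketch of a follow-up check, you could allow a larger maximum and let AIC choose again (the maximum of 12 is an illustrative choice):

 

* Illustrative re-run with a longer maximum lag
@dfunit(det=constant,maxlags=12,method=aic) tbill

 

The similar treatment for log GNP uses the DET=TREND option: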

 

@dfunit(det=trend) lgnp

@dfunit(det=trend,maxlags=6,method=gtos) lgnp

@dfunit(det=trend,maxlags=6,method=aic) lgnp

 

Here both end up picking 2 augmenting lags.

Other Tests

There are two obvious problems with the standard Dickey-Fuller tests:

1. The test depends upon the “nuisance” parameter of the extra lags needed to remove serial correlation.

2. The deterministic terms change their meaning as the model moves between the null (unit root) and the alternative (stationary). For instance, under the unit root the constant is a drift rate, while for a stationary process it determines the mean.
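 

To make the second point concrete: in the simple AR(1) regression y(t) = α + ρy(t-1) + ε(t), the null ρ = 1 reduces the model to Δy(t) = α + ε(t), so α is the per-period drift; under the alternative |ρ| < 1, the process is stationary with mean α/(1-ρ).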

 

The Phillips-Perron test (from Phillips (1987) and Phillips and Perron (1988)) is similar to the Dickey-Fuller test, but uses a non-parametric correction for the short-term serial correlation. The Dickey-Fuller and Phillips-Perron tests each tend to exhibit rather poor behavior in the presence of certain types of serial correlation; see the Monte Carlo analysis in Schwert (1989). However, the types of serial correlation for which the Phillips-Perron test does poorly are much more common than the ones for which it does well, so it is now used much less often than the Dickey-Fuller test. In RATS, the Phillips-Perron test can be done using the @PPUNIT procedure.

 

The examples in UNITROOT.RPF are:

 

@ppunit(det=constant,lags=12,table) tbill

@ppunit(det=trend,lags=12,table) lgnp

 

The TABLE option displays a sensitivity table showing how the test statistic depends upon the lag length in the window for the non-parametric correction. There is no comparable collection of methods for objectively choosing the non-parametric lag length, so the table lets you see whether the decision is sensitive to the choice. The statistics tend to stabilize once the lag window is long enough to handle the bulk of the serial correlation.

 

There have been several different approaches to dealing with the second issue. One is to replace the Dickey-Fuller “Wald” test with a Lagrange multiplier test. This is done with the Schmidt-Phillips test (Schmidt and Phillips, 1992), which is executed using the @SPUNIT procedure. As with the Phillips-Perron test, this deals with the short-term serial correlation using non-parametric methods. @SPUNIT uses the P option (for the power of time) rather than the DET option used by the other tests, because the test allows for higher powers of time than just a linear trend (though there is little call for quadratic and above). The examples are:

 

@spunit(p=0,lags=12) tbill

@spunit(p=1,lags=12) lgnp

 

A similar idea is to improve the estimate of the trend using GLS. Probably the most popular form is the test developed in Elliott, Rothenberg and Stock (1996). Since first differencing the data is inappropriate if the data are trend-stationary, they quasi-difference the data using a filter that is “local to unity” (close to a unit root, but not quite), and use that to estimate the trend. The detrended data are then subjected to a Dickey-Fuller test (without any deterministics). The procedure for doing this is @ERSTEST. There is little to be gained from using this for a non-trending series, so the example applies it only to GNP:

 

@erstest(det=trend,lags=12) lgnp

 

All of the tests described so far have had the unit root as the null. This makes sense, since the unit root is the “simple” hypothesis while the alternative of stationarity is composite. However, it is possible to construct a test with a null of stationarity; this is shown in Kwiatkowski et al. (1992). The variance of the deviations from trend is bounded for a stationary process, but unbounded for a non-stationary one, so if the process wanders too far to be compatible with stationarity, we conclude that it is non-stationary. This can be done using the @KPSS procedure. As with the Phillips-Perron and Schmidt-Phillips tests, this requires a lag window estimator. Note that the hypothesis is reversed, so if you do both KPSS and one of the other tests, you would hope to get opposite results regarding acceptance of the null.

 

@kpss(det=constant,lmax=12) tbill

@kpss(det=trend,lmax=12) lgnp

 

Bayesian Tests

The tests above are all “classical” tests. A very different alternative is the Bayesian odds ratio test proposed by Sims (1988), implemented by the procedure @BAYESTST. This is more of an intellectual curiosity: because it doesn’t allow for deterministic components, it has little practical use.
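 

If you want to experiment with it anyway, a minimal call would look like the following (a sketch using the procedure’s defaults with the T-bill series from the example):

 

@bayestst tbill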

 


Copyright © 2026 Thomas A. Doan