
RATS can produce generalized method of moments estimators (Hansen, 1982) for models whose orthogonality conditions can be expressed as

\begin{equation} E\left[ Z'_t \, f\left( y_t ,\, X_t ,\, \beta \right) \right] = 0 \label{eq:nonlin_gmmcondition} \end{equation}

for a closed-form function \(f\) and a list of instruments \(Z\) which does not depend upon \(\beta\). This is a very broad class, which includes two-stage least squares and other instrumental variables estimators. If \(f\) is linear, this can be done with the simpler LINREG, so here we will be dealing with models where \(f\) is non-linear. The extension to multiple equations is treated elsewhere. As with LINREG, you do instrumental variables by setting the instruments list using INSTRUMENTS and including the INST option when estimating. The following estimates the same basic model as NLLS.RPF (taken from a different textbook, so it uses different variable names), but by non-linear two-stage least squares, using lags of consumption and income as the instruments.

 

nonlin a b g
linreg realcons
# constant realdpi
compute a=%beta(1),b=%beta(2),g=1
frml cfrml realcons = a+b*realdpi^g
*
instruments constant realcons{1} realdpi{1 2}
nlls(frml=cfrml,inst) realcons 1950:3 *

 

For model \eqref{eq:nonlin_gmmcondition} and for instrument set \(Z\), nonlinear two-stage least squares minimizes
\begin{equation} \left( \sum\limits_{t = 1}^{T} u_t Z_t \right) \mathbf{W} \left( \sum\limits_{t = 1}^{T} Z'_t \, u_t \right) \label{eq:nonlin_uzwzu} \end{equation}

where

\begin{equation} \mathbf{W} = \left( \sum Z'_t \, Z_t \right)^{-1} \label{eq:nonlin_wmatrix} \end{equation}

This is what NLLS with INST does by default. Most of the remainder of the section describes other ways to compute \(\bf{W}\).
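
As an aside (standard GMM algebra rather than anything specific to RATS): when \(f\) is linear, so that \(u_t = y_t - X_t \beta\), the minimizer of \eqref{eq:nonlin_uzwzu} has the closed form

\begin{equation} \hat{\beta} = \left( \mathbf{X}'\mathbf{Z} \, \mathbf{W} \, \mathbf{Z}'\mathbf{X} \right)^{-1} \mathbf{X}'\mathbf{Z} \, \mathbf{W} \, \mathbf{Z}'\mathbf{y} \end{equation}

which, with \(\mathbf{W}\) from \eqref{eq:nonlin_wmatrix}, is exactly two-stage least squares; in the non-linear case the same quadratic form has to be minimized iteratively instead.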

 

The OPTIMALWEIGHTS and WMATRIX Options

NLLS with the INST option minimizes the quadratic form in \eqref{eq:nonlin_uzwzu} for some choice of \(\mathbf{W}\), which weights the orthogonality conditions. Hansen shows that a more efficient estimator can be obtained by replacing the two-stage least squares weight matrix \(\left( \mathbf{Z}'\mathbf{Z} \right)^{-1}\) by the inverse of

\begin{equation} \mathrm{mcov}\left( \mathbf{Zu} \right) = \sum\limits_{k = -L}^{L} \sum\limits_t Z'_t \, u_t \, u_{t-k} \, Z_{t-k} \label{eq:nonlin_woptimal} \end{equation}

This matrix plays a key role in the general covariance matrix calculations, and some of its numerical properties are discussed there.

 

RATS provides the option OPTIMALWEIGHTS on LINREG and NLLS for doing this estimation directly. For example:

 

instruments constant z1{1 to 6}
linreg(inst,optimalweights,lags=2,lwindow=newey) y1
# constant x1 x2 x3

 

For LINREG, OPTIMALWEIGHTS tells RATS to first compute the two-stage least squares estimator, use the residuals to compute the matrix shown in \eqref{eq:nonlin_woptimal}, and take its inverse as the weighting matrix. This is a “two-step” estimator. NLLS is similar, but not quite as simple: unlike the linear case, the first step can’t be done as a single matrix calculation, since it requires minimization over the parameters. Rather than stopping after two steps, NLLS resets the weight matrix at each iteration, using the current residuals in computing \eqref{eq:nonlin_woptimal}. With either instruction, you can retrieve the last weight matrix used as the SYMMETRIC array %WMATRIX.
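
As a small illustration (a sketch only, reusing the placeholder names from the LINREG example above), you can save a copy of the weight matrix for later use, for instance to pass to a subsequent estimation through the WMATRIX option described below:

linreg(inst,optimalweights,lags=2,lwindow=newey) y1
# constant x1 x2 x3
*  keep a copy of the weight matrix used on the final step
compute savedw=%wmatrix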

 

The example file GIV.RPF estimates the free parameters (the discount rate \(\beta\) and the coefficient of risk aversion \(\sigma\)) of a model in which the first-order condition for the allocation of consumption across time yields the condition:

 

\(E_{t - 1} \left[ {\beta R_t \left( {C_t /C_{t - 1} } \right)^\sigma   - 1} \right] = 0\)

 

Theoretically, anything that’s part of the information set at time \(t-1\) can be used as an instrument. The following uses the CONSTANT (which will almost always be an instrument) plus six lags of each of the two data series, and applies the OPTIMALWEIGHTS calculation with no lags in \eqref{eq:nonlin_woptimal}. (You wouldn’t expect serial correlation in the moment conditions for this model.)

 

nonlin discount riskaver
frml h = discount*realret(t)*consgrow(t)^riskaver-1
compute discount = .99,riskaver = -.95
*
instruments constant consgrow{1 to 6} realret{1 to 6}
nlls(inst,frml=h,optimal) *

 

If you wish to provide your own weighting matrix, you can use the WMATRIX option on LINREG or NLLS. The steps for computing and using your own weight matrix are as follows (a sketch in RATS code follows the list):

1. Estimate the model in the standard way, saving the residuals.

2. Use MCOV to compute the mcov(\(\mathbf{Zu}\)) matrix. To get the proper scale for the covariance matrix, the weight matrix needs to be the inverse of a matrix which is \(O(T)\).

3. Re-estimate the model with the option WMATRIX=new weighting matrix (if you’ve computed the inverse) or IWMATRIX=mcov matrix (the matrix before inversion). (IWMATRIX is short for Inverse Weight Matrix, which is often the easiest thing to compute). If this is an (intentionally) sub-optimal weight matrix, you can use ROBUSTERRORS, LAGS and LWINDOW to correct the covariance matrix.
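
A sketch of those steps might look like the following (the series names mirror the placeholder LINREG example above, the LAGS/LWINDOW choices are arbitrary, and it is assumed that MCOV's result can be retrieved as %CMOM):

*  Step 1: two-stage least squares, saving the residuals
instruments constant z1{1 to 4}
linreg(inst) y1 / resids
# constant x1 x2 x3
*  Step 2: mcov(Zu) computed from the residuals and the instrument list
mcov(lags=2,lwindow=newey) / resids
# constant z1{1 to 4}
*  Step 3: re-estimate, supplying the matrix before inversion via IWMATRIX
linreg(inst,iwmatrix=%cmom) y1
# constant x1 x2 x3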

J-Tests and the %UZWZU Variable

Another result from Hansen is that if optimal weights are used, the value of \(\mathbf{u'ZWZ'u}\) is asymptotically distributed \(\chi^2\) with degrees of freedom equal to the number of overidentifying restrictions. (If the number of instruments equals the number of parameters, the model is just identified and \(\mathbf{u'ZWZ'u}\) will be zero, at least to machine precision.) He proposes this as a test of the overidentifying restrictions, a test sometimes known as the J-test. LINREG and NLLS compute and print the J-statistic and its significance level using the weight matrix you supply, or the optimal weight matrix if you use the OPTIMALWEIGHTS option. This is included in the regression output, and is available afterwards in the variables %JSTAT, %JSIGNIF and %JDF (test statistic, significance level, and degrees of freedom, respectively).

 

In GIV.RPF, the following is used to run specification tests with several different lag lengths on the instruments. Note that this uses a common estimation range (determined by the six-lag instrument set, which was estimated first).

 

compute start=%regstart()
dofor nlag = 1 2 4 6
  instruments constant consgrow{1 to nlag} realret{1 to nlag}
  nlls(inst,noprint,frml=h,optimal) * start *
  cdf(title="Specification Test for "+nlag+" lags") $
     chisqr %jstat 2*nlag-1
end dofor

 

If the weight matrix used in estimation isn’t the “optimal” one, there are two ways to adjust the specification test. One uses an alternative form for the test statistic (Hansen’s Lemma 4.1). The other, proposed by Jagannathan and Wang (1996), keeps the standard J-statistic but computes the significance level from its (non-standard) asymptotic distribution. You choose between these with the option JROBUST=STATISTIC or JROBUST=DISTRIBUTION.
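
For example, using the GIV.RPF moment condition but deliberately leaving the weight matrix at its two-stage least squares default, the adjusted test might be requested like this (a sketch only, not taken from the example file):

*  J-test based on the alternative form of the statistic (Hansen's Lemma 4.1)
nlls(inst,frml=h,jrobust=statistic) *
*  or keep the standard statistic and adjust its distribution instead
nlls(inst,frml=h,jrobust=distribution) *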

 

In addition, the LINREG and NLLS instructions (and the systems estimators SUR and NLSYSTEM) define the variable %UZWZU. %UZWZU is the value of \({\bf{u'ZWZ'u}}\) for whatever weight matrix is used. In the case of simple 2SLS, it will be the J-statistic times the estimated variance. Note that if the model is just identified (that is, if the number of free coefficients is equal to the number of instruments), %UZWZU is zero, theoretically. It may be very slightly different due to a combination of computer roundoff error and (for non-linear models) the iterations stopping just short of the exact solution.
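
For instance, after any of the estimations above you can examine the value directly (the label strings are just for readability):

display "u'ZWZ'u =" %uzwzu "J-statistic =" %jstat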

 

The ZUMEAN option

The model \eqref{eq:nonlin_gmmcondition} can be a bit too restrictive in some applications. For instance, in financial econometrics, it’s not uncommon for a model to generate something similar to \eqref{eq:nonlin_gmmcondition}, but with a fixed, non-zero value for the expectation. The calculations done for GMM can easily be adjusted to allow for such a fixed mean for the moment conditions. To do this with RATS, use the option ZUMEAN=vector of expected values. The vector of expected values should have the same dimension as the set of instruments.

 

For example, this has 10 conditions with an expected value of 1 rather than 0:

 

nonlin delta gamma
frml fx = delta*cons^(-gamma)
instruments r1 to r10
compute [vect] zumean=%ones(10,1)
nlls(frml=fx,inst,zumean=zumean,optimalweights)

 

 

The CENTER option

If the model is just identified

\begin{equation} \sum\limits_t {Z'_t } u_t \end{equation} 

will be equal to zero. If the model is overidentified, at least some elements would be expected to be non-zero. Hall (2000) recommends subtracting off the sample mean from each \(Z'_t u_t\) term in \eqref{eq:nonlin_woptimal}, that is, computing

\begin{equation} \sum\limits_{k = -L}^{L} \sum\limits_t \left( Z'_t \, u_t - \mu_{zu} \right) \left( Z'_{t-k} \, u_{t-k} - \mu_{zu} \right)^\prime \end{equation}

This will have no effect if the model is just-identified (the weight matrix has no effect on the estimates in that case anyway), but he shows that it improves the performance of specification tests for the overidentifying restrictions.
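
As a sketch (assuming, as the section heading suggests, that CENTER is given as an option alongside OPTIMALWEIGHTS, since it only affects the weight-matrix calculation), the GIV.RPF estimation with centered moments would look something like:

nlls(inst,frml=h,optimal,center) *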

 


Copyright © 2025 Thomas A. Doan