*
* Enders, Applied Econometric Time Series, 4th edition
* Example from Section 7.9, pp 449-452
* STAR models
*
* This line (if uncommented) would generate 500 data points starting
* with 0.0 as the initial value. Only the last 250 would be used.
*
*set(first=0.0) yg 1 500 = 1.0+.9*yg{1}+(-3.0-1.7*yg{1})*%logistic(10.0*(yg{1}-5.0),1.0)+%ran(1.0)
*
open data lstar.xls
data(format=xls,org=columns) 1 250 entry y
graph(footer="Figure 7.10 The Simulated LSTAR Process")
# y
*
stats y
@bjident y
linreg y
# constant y{1}
@regcorrs(number=12)
*
* This does the RESET test on the most recent regression.
*
@regreset(h=4)
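*
* @REGRESET(h=4) augments the regression with the powers 2 through 4
* of the fitted values and F-tests their joint significance. Done by
* hand, it would look roughly like this (a sketch, not the procedure's
* exact code):
*
*prj fitted
*set f2 = fitted**2
*set f3 = fitted**3
*set f4 = fitted**4
*linreg y
*# constant y{1} f2 f3 f4
*exclude
*# f2 f3 f4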
*
* The STAR test can be done using the @STARTEST procedure. This runs
* the auxiliary regression described in the text and produces several
* test statistics. The non-linearity test is an F-test on the three
* non-linear terms. H12 tests the joint significance of the squared
* and cubic terms, and H03 tests the significance of the fourth-degree
* term. If the model were ESTAR rather than LSTAR, H03 would be
* expected to be insignificant.
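*
* For p=1 and delay d=1, the auxiliary regression behind @STARTEST
* takes roughly the following form (a sketch of the standard
* Terasvirta test; the actual procedure may include additional terms):
*
*set z1 = y{1}*y{1}
*set z2 = y{1}*y{1}**2
*set z3 = y{1}*y{1}**3
*linreg y
*# constant y{1} z1 z2 z3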
*
@startest(p=1,d=1) y
@startest(p=1,d=2) y
*
* Estimate the LSTAR model
*
nonlin(parmset=starparms) gamma c
linreg y
# constant y{1}
frml(vector=phi1,lastreg) phi1f
frml(vector=phi2,lastreg) phi2f
*
frml glstar = %logistic(gamma*(y{1}-c),1.0)
frml star y = g=glstar,phi1f+g*phi2f
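*
* The model being estimated is
*   y(t) = phi1'x(t) + G(y(t-1);gamma,c)*phi2'x(t) + e(t)
* where x(t)=(1,y(t-1)) and G is the logistic function, so GLSTAR
* computes the transition function and STAR combines the two linear
* branches.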
*
* Use the sample mean and 2/sigma as the guess values for the STAR
* parameters.
*
stats y
compute c0=c=%mean
compute gamma0=gamma=2.0/sqrt(%variance)
*
nonlin(parmset=regparms) phi1 phi2
*
* First, estimate the model with the STAR parameters held fixed at
* their guess values; only the regression parameters are free.
*
nlls(frml=star,parmset=regparms) y
*
* Now, re-estimate with both the regression parameters and the STAR
* parameters.
*
nlls(frml=star,parmset=regparms+starparms) y
*
* The above process doesn't always work for finding the optimum
* because the sum of squares can be nearly discontinuous in C when
* gamma is large. One recommended strategy is to do a preliminary grid
* search over C (estimating the other parameters at each grid value),
* then estimate a final model starting from the best value found by
* the grid search.
*
stats(fractiles) y
compute grid=%seqa(%fract01,(%fract99-%fract01)/99.0,100)
dec real cgrid
*
* The parameters are all equal to the least squares estimates, which,
* in this case, put C at the opposite end of the grid from the first
* value that we will try. So it makes sense to restart the estimation
* by doing the two-step optimization with C pegged at the first grid
* value. Once we have estimates for that value, we can feed each set
* of estimates into the next NLLS, since successive values of C differ
* only slightly.
*
* Peg gamma at the guess value used above, and C at the first element of
* the grid. Estimate just the regression coefficients.
*
compute gamma=gamma0
compute c=grid(1)
*
nlls(frml=star,parmset=regparms,print) y
*
compute minrss=1.e+100
nonlin(parmset=starparms) gamma c=cgrid
dofor cgrid = grid
nlls(frml=star,parmset=regparms+starparms,noprint) y
   if %rss<minrss
      compute minrss=%rss,cbest=cgrid
end dofor
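*
* Finish with a final estimation starting from the best grid value
* (a sketch of the last step described above; CBEST holds the grid
* value with the smallest sum of squares):
*
compute c=cbest,gamma=gamma0
nonlin(parmset=starparms) gamma c
nlls(frml=star,parmset=regparms+starparms) y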