NLPAR Instruction
NLPAR( options )

NLPAR has no parameters; its behavior is set entirely through the options described below.
NLPAR allows you to change the parameter settings that govern the inner workings of the iterative estimation processes. Changes made with NLPAR apply to all subsequent non-linear estimations, until you change them again with another NLPAR or quit RATS. The instructions BOXJENK and ITERATE (ARMA estimation), NLLS and NLSYSTEM (non-linear least squares and systems estimation), DDV and LDV (discrete and limited dependent variables), ESMOOTH (exponential smoothing), MAXIMIZE, FIND, DLM (dynamic linear models), and CVMODEL all involve non-linear optimization.
Because of this, estimation is an iterative process, and certain models may prove difficult to fit. Most of the time, you can get your model estimated by altering the method used, the initial guess values, and the number of iterations. The options to control those are provided on each estimation instruction.
However, some models can be difficult or impossible to estimate without further adjustment to the estimation process. The options on NLPAR provide control over these settings.
In general, RATS uses five main methods of optimization: hill-climbing, simplex, genetic, simulated annealing, and genetic annealing. Of these, the ones most likely to benefit from “tuning” are the last three, which are all designed to deal with surfaces that may have multiple peaks. If you do have multiple peaks, it takes quite a bit more effort (bigger populations, slower convergence) to allow all of them to be explored properly. The genetic algorithm has four control parameters and the annealing algorithm has four as well; genetic annealing uses both sets.
Options
CRITERION=[COEFFS]/VALUE
This chooses whether RATS determines convergence by looking at the change from one iteration to the next in the coefficients, or the change in the value of the function being optimized. Convergence of coefficients is the default. Convergence on value should be chosen only if the coefficients themselves are unimportant.
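For instance, if all you need is the optimized function value itself (say, a log likelihood to be used in a likelihood-ratio test), you can switch to convergence on value:
nlpar(criterion=value)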
EXACTLINESEARCH/[NOEXACTLINESEARCH]
In the hill-climbing methods, where a direction is chosen and a search made along that direction, this option controls whether the estimation routine attempts to find the “exact” optimum in that direction, or only looks for a new set of points which meet certain criteria. This choice rarely affects whether you will get convergence—it merely alters how quickly you get there. In general, EXACTLINESEARCH is slower for smaller models, as the extra function evaluations provide little improvement for the effort. It can speed up models with many free parameters, where the “cost” of each new iteration, particularly the need to compute the gradient, is higher.
DERIVE=[FIRST]/SECOND/FOURTH
This determines the method used for computing numerical derivatives. The default is a right arc derivative. DERIVE=SECOND and DERIVE=FOURTH use more accurate but slower formulas. In most cases, you won't need those.
ALPHA=value for the \(\alpha\) control parameter [0.0]
DELTA=value for the \(\delta\) control parameter [0.1]
These parameters control details of the hill-climbing methods (BHHH and BFGS). There are few situations in which changing these is likely to help.
MUTATE=SIMPLE/[SHRINK]/BEST/RANDOM
CROSSOVER=probability of taking mutated parameter [.5]
SCALEFACTOR=scale factor for mutations [.7]
POPULATE=scale for population size [6]
These parameters control details of the GENETIC estimation method.
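As an illustrative sketch (the values shown are not recommendations, and myparms and logl stand in for a previously defined PARMSET and FRML), you might enlarge the population and adjust the mutation settings when the surface seems to have several peaks:
nlpar(populate=12,crossover=.8,scalefactor=.9)
maximize(parmset=myparms,method=genetic,iters=1000) logl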
ASCHEDULE=fraction by which temperature is reduced on each iteration [.85]
AINNER=number of inner repetitions before V is adjusted [20]
AOUTER=scale of the number of parameters for the number of outer repetitions [3]
AVADJUST=adjustment control for the V values [2.0]
These parameters control details of the ANNEALING estimation method (added with version 9.2).
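For example, to cool the “temperature” more slowly, which explores the surface more thoroughly at the cost of speed, you could move ASCHEDULE closer to 1.0 before an estimation with METHOD=ANNEALING (the values here are purely illustrative):
nlpar(aschedule=.95,aouter=5)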
MARQUARDT/[NOMARQUARDT]
For Gauss-Newton estimation, you can use MARQUARDT to select the Levenberg–Marquardt subiteration algorithm (Marquardt, 1963). We don’t generally recommend this, as we have usually obtained better results with the default method.
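If you do want to try it, the setting applies to any subsequent Gauss-Newton estimation (NLLS, for instance):
nlpar(marquardt)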
JIGGLE=iterations between perturbations of simplex [30]
When using SIMPLEX, RATS normally perturbs the simplex vertices every 30 iterations. If the TRACE output suggests that the simplex estimation is not moving much, you may find some benefit to using a smaller value for JIGGLE.
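For instance, to perturb the vertices every 10 iterations rather than every 30:
nlpar(jiggle=10)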
CVCRIT=convergence limit [0.00001]
SUBITERATIONS=subiteration limit [30]
Sets the default convergence limits and subiteration limits. Since the estimation instructions to which these apply have their own CVCRIT and SUBITERS options, you should use those instead.
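For example, rather than changing the global default, you would typically tighten the convergence criterion on the estimation instruction itself (the value here is illustrative, and logl stands in for a previously defined FRML):
maximize(cvcrit=1.e-8,iters=500) logl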
Example
This switches to exact line searches.
nlpar(exactline)
maximize(start=%uuinit(),onlyif=(d>=0.0.and.d<1.0),$
parmset=meanparms+figarchparms,iters=2500,trace,hessian=%xx) logl gstart gend
This changes the numerical derivatives to use the slower but more accurate second-order approximation.
frml logl = q=qf,rho=%dot(summer,%cvtocorr(qx)),hx=DECOUpdate(t),-.5*($
n*log(2*%pi)+%sum(%log(hx))+(n-1)*log(1-rho)+log(1+(n-1)*rho)+$
1.0/(1-rho)*(sumsq-rho/(1+(n-1)*rho)*sqsum))
nonlin(parmset=decoparms) a b
compute b=.80,a=.10
nonlin(parmset=uniparms) g
*
nlpar(derive=second)
maximize(parmset=decoparms+uniparms,pmethod=simplex,piters=5,iters=500) logl 2 *
Copyright © 2025 Thomas A. Doan