RATS 10.1

The examples below use some of the "Bootstrapping and Simulation Tools" for generating the random draws.

Using PARMSETS

Even if you’re not actually doing non-linear estimation, which is what requires a PARMSET, you can benefit from using one to organize your statistics. For instance, suppose you’re running a loop and you need to keep track of M1, M2, M3 and M4. If you do

 

nonlin(parmset=simstats) m1 m2 m3 m4

 

then the function %PARMSPEEK(SIMSTATS) will return a 4-vector with the current values of the four variables. You can thus do the calculations using statistics on the VECTOR rather than having to program up bookkeeping on the separate variables. When you’re done, you can also use %PARMSLABELS(SIMSTATS) to get the labels for output.

 

Note that while, in this case, these are all scalars, they could be a mix of scalars and matrices, just like any other PARMSET.
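The vector-plus-labels bookkeeping can be pictured in a short language-neutral sketch (Python rather than RATS; the dictionary, names and values are all made up for illustration):

```python
# Python analogy (not RATS) for the PARMSET bookkeeping: an ordered mapping
# of named statistics can be flattened to a vector of values for computation
# (like %PARMSPEEK) and its labels recovered for output (like %PARMSLABELS).
stats = {"m1": 1.5, "m2": 2.0, "m3": 0.7, "m4": 3.1}   # made-up values

values = list(stats.values())   # analogous to %PARMSPEEK(SIMSTATS)
labels = list(stats.keys())     # analogous to %PARMSLABELS(SIMSTATS)

mean = sum(values) / len(values)
print(labels, mean)
```

The point is the same as in RATS: compute on the vector, and keep the labels only for the final output step.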

 

Progress Indicators

Before you do 10,000 replications on a model, it’s a good idea to start with a more modest number and make sure that the program is doing what you want. Write your code using a variable like NDRAWS to represent the number of desired draws so you can change it easily. When you’re ready for a full run, keep in mind that these methods are called, with good reason, computationally intensive. Even with a fast computer, it can sometimes take minutes, or even hours, to run enough replications to get results at the desired level of accuracy. (In general, accuracy improves with the square root of the number of draws.) We use the INFOBOX instruction in these situations to give an idea of how things are going.

 

infobox(action=define,progress,lower=1,upper=ndraws) "Progress"

do draw=1,ndraws

   .... instructions ...

   infobox(current=draw)

end do draw

infobox(action=remove)

...

 

The Pseudo-Code Generator—Monte Carlo/Gibbs Loop wizard can be used to set up this type of loop so all you have to do is fill in the details of the calculation itself.
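The square-root accuracy rule mentioned above can be checked directly. This is a sketch in Python rather than RATS, purely as a language-neutral illustration; the distribution, seed and replication counts are made-up choices:

```python
import random
import statistics

# Sketch of the square-root accuracy rule: the standard error of a Monte
# Carlo mean estimate shrinks like 1/sqrt(ndraws), so using 4 times as
# many draws roughly halves the error.
random.seed(7)

def mc_mean(ndraws):
    """Monte Carlo estimate of E[X] for X ~ Uniform(0,1); true mean is 0.5."""
    return sum(random.random() for _ in range(ndraws)) / ndraws

def stderr(ndraws, reps=200):
    """Empirical standard error of the estimator over independent runs."""
    return statistics.stdev([mc_mean(ndraws) for _ in range(reps)])

ratio = stderr(1000) / stderr(4000)   # theory says roughly 2
print(ratio)
```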

 

Computing a Probability

To compute a probability, you need to keep track of the number of times an event occurred, and divide by the number of draws that you made. You can use an INTEGER variable, or entries of a series or array, to hold this information. For instance, the following uses the SIMULATE instruction to generate simulations of an (already estimated) GDP model, and uses the simulated data to compute the probability distribution for the onset of the first recession (defined as two consecutive quarters of decline in GDP). The series HIST counts the number of draws in which the first recession begins at each period. Outside the loop, those counts are converted to probabilities by dividing by the number of draws.

 

smpl 2018:3 2020:4

set hist = 0.0

do draw=1,ndraws

  simulate(model=gdpmodel)

  do date=2018:3,2020:4

    if gdp(date)<gdp(date-1).and.gdp(date-1)<gdp(date-2) {

       compute hist(date) = hist(date)+1

       break

       }

  end do date

end do draw

set hist = hist/ndraws
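The same count-and-divide bookkeeping can be sketched in a language-neutral way (Python here, with a toy random-walk "GDP" in place of the estimated model; all settings are made up):

```python
import random

# For each draw, find the first period with two consecutive declines, bump
# that period's counter, and convert counts to probabilities at the end.
random.seed(3)
ndraws, horizon = 2000, 10

hist = [0] * horizon
for _ in range(ndraws):
    gdp = [100.0]
    for t in range(horizon + 2):
        gdp.append(gdp[-1] + random.gauss(0.0, 1.0))
    for t in range(2, horizon + 2):
        if gdp[t] < gdp[t - 1] and gdp[t - 1] < gdp[t - 2]:
            hist[t - 2] += 1    # first recession begins at this period
            break               # like the BREAK in the RATS loop above

probs = [h / ndraws for h in hist]
print(probs)
```

The BREAK matters in both versions: without it, a single draw could be counted at several periods and the entries would no longer be a probability distribution over the onset date.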

 

Computing Moments

You compute these using the same formulas you would for any other random sample. To estimate the mean, sum the generated values and divide by the number of draws. To estimate a variance, add up the squares, divide by the number of draws and subtract the squared estimate of the mean. For example, the following computes the expected value of the maximum and minimum values achieved for a log Normal diffusion approximated by simulation over a grid. In this, MU is the expected return per year, SIGMA the volatility, PERIOD the fractional part of a year being analyzed and BASEPRICE the original price. GRID is the number of equally-spaced time periods used in the simulation.

 

compute hdrift = (mu-.5*sigma^2)*(period/grid)

compute hsigma = sigma*sqrt(period/grid)

 

set price 1 grid = 0.0

compute totalmax=0.0,totalmin=0.0

do draw=1,ndraws

   compute pcond=baseprice

   do i=1,grid

      compute pcond = price(i) = pcond*exp(hdrift+%ran(hsigma))

   end do i

   compute totalmax = totalmax+%maxvalue(price),$

           totalmin = totalmin+%minvalue(price)

end do draw

disp "Maximum" totalmax/ndraws "Minimum" totalmin/ndraws
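The running-sum formulas themselves are easy to verify on a toy data set (Python, made-up values):

```python
# Moments by running sums: mean = (sum of x)/N and
# variance = (sum of x^2)/N - mean^2.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)

total = sum(data)
total_sq = sum(x * x for x in data)

mean = total / n
variance = total_sq / n - mean ** 2
print(mean, variance)
```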

 

This is the same thing using PARMSETs for bookkeeping (which doesn’t make much sense for just two variables, but illustrates the idea): 

 

nonlin(parmset=minmax) minv maxv

compute total=%zeros(%size(%parmspeek(minmax)),1)

do draw=1,ndraws

   compute pcond=baseprice

   do i=1,grid

      compute pcond = price(i) = pcond*exp(hdrift+%ran(hsigma))

   end do i

   compute maxv=%maxvalue(price),minv=%minvalue(price)

   compute total=total+%parmspeek(minmax)

end do draw

compute total=total/ndraws

report(action=define)

report(atrow=1,atcol=1,fillby=cols) %parmslabels(minmax)

report(atrow=1,atcol=2,fillby=cols) total

report(action=show)

 

Computing Fractiles (Quantiles)

Fractiles (quantiles) are needed when you are computing critical values for a test. They are also useful when there is some concern that a distribution is asymmetric—if so, standard error bands computed using the mean and variance may give a misleading picture of the distribution. Unfortunately, there are no sufficient statistics for the quantiles of a distribution—you can’t summarize the data by keeping a running sum as was done in the two examples above. To compute these, you have to save the entire set of draws and do the calculations later. Set up a vector, or possibly a matrix of vectors if you’re keeping track of more than one random variable, and dimension the vectors to the number of draws that you’re making.

The instruction STATISTICS with the FRACTILES option computes a standard collection of quantiles for a data series. You can also use the %FRACTILES(array,fractiles) function, which takes as parameters an array of data and a vector of desired quantiles, and returns those quantile points in the data set. Just as a warning: if you do a very large number of draws (100,000 and up), computing the sample quantiles can actually take more time than computing the statistics in the first place.

 

This example produces simulations of TBILLS (using a model already estimated). %FRACTILES is used to find the 25th and 75th percentiles.

 

dec vect sims(ndraws)

do draw=1,ndraws

   simulate(model=ratemodel)

   compute sims(draw) = tbills(2018:12)

end do draw

compute ratefract = %fractiles(sims,||.25,.75||)

disp "25-75%iles of T-Bills" ##.### ratefract(1) "to" ratefract(2)

 

Computing a Density Function

Just as with quantiles, you need to save all the simulated values. Here, they must be saved in a data series, which you process using the instruction DENSITY. For instance, the following computes an estimated density function for the autoregressive coefficient when the data actually follow a random walk. The coefficient estimates go into the series BETAS. The density is estimated, then graphed. The default bandwidth for DENSITY has certain general optimality properties, but we’ve found that it tends to be too narrow for densities generated from Monte Carlo, so it helps to use the SMOOTHING option on DENSITY to slightly increase the bandwidth above the default.

 

set betas 1 ndraws = 0.0

do i=1,ndraws

   set(first=0.0) y = y{1}+%ran(1.0)

   linreg(noprint) y

   # y{1}

   compute betas(i) = %beta(1)

end do i

density(smoothing=1.50) betas / xb fb

scatter(style=lines)

# xb fb
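What DENSITY computes can be illustrated with a minimal Gaussian kernel sketch (Python, made-up draws; the function name is hypothetical). The estimate at each grid point averages Gaussian "bumps" centered at the draws, and the SMOOTHING option corresponds to scaling the bandwidth upward from its default:

```python
import math

def kde(points, x, h):
    """Kernel density estimate at x using a Gaussian kernel with bandwidth h."""
    n = len(points)
    k = sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points)
    return k / (n * h * math.sqrt(2.0 * math.pi))

draws = [-0.2, 0.0, 0.1, 0.3]     # made-up simulated values
at_zero = kde(draws, 0.0, h=0.25)
print(at_zero)
```

A larger h spreads each bump out, which is exactly the effect of SMOOTHING values above 1.0.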

 

Computing a Covariance Matrix of Coefficients

A covariance matrix of coefficients is a special case of a moment estimator. There are, however, some special tools available in RATS to help you with this. The following setup can be used with no modifications other than replacing the code for generating draws.

 

compute [vector] betamean = %zeros(number of regressors,1)

compute [symm] betacmom = %zeros(number of regressors,number of regressors)

do i=1,ndraws

   ... code to compute draw for coefficients %BETA  ...

   compute betamean = betamean+%beta

   compute betacmom = betacmom+%outerxx(%beta)

end do i

compute betamean = betamean/ndraws

compute betacmom = betacmom/ndraws-%outerxx(betamean)
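The matrix bookkeeping is the same running-sum idea as before, applied to outer products: cov = (sum of b b')/ndraws - mean mean'. A quick check with made-up two-coefficient draws (Python, plain lists to keep it self-contained):

```python
# Covariance via running sums of outer products (2-coefficient example).
draws = [(1.0, 2.0), (3.0, 0.0), (2.0, 4.0)]   # made-up coefficient draws
n = len(draws)
k = 2

mean = [sum(b[i] for b in draws) / n for i in range(k)]
cmom = [[sum(b[i] * b[j] for b in draws) / n for j in range(k)]
        for i in range(k)]
cov = [[cmom[i][j] - mean[i] * mean[j] for j in range(k)] for i in range(k)]
print(cov)
```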

 

If this is for a linear regression, you can print the new coefficient vector and covariance matrix by using LINREG with the CREATE, COEFFS, and COVMAT options. You must also use the option FORM=CHISQUARED, because BETACMOM is a direct estimate of the covariance matrix:

 

linreg(create,covmat=betacmom,coeffs=betamean,form=chisq)  .....

Weighted Draws

The examples examined so far assumed that all draws receive equal weight. There are techniques described later which require placing unequal weights on the draws. The adjustment required for this is fairly simple when you’re computing means and other moments: just sum the weight of a draw times the value obtained on that draw, and, at the end, divide by the sum of the weights.
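The weighted-moment adjustment looks like this in a language-neutral sketch (Python, with made-up draws and weights):

```python
# Weighted moments by running sums: accumulate w*x and w*x^2 along with
# the weights themselves, then divide by the total weight at the end.
draws = [1.0, 2.0, 3.0, 4.0]       # made-up draw values
weights = [0.1, 0.4, 0.4, 0.1]     # made-up (unequal) weights

sum_w = sum(weights)
sum_wx = sum(w * x for w, x in zip(weights, draws))
sum_wx2 = sum(w * x * x for w, x in zip(weights, draws))

wmean = sum_wx / sum_w                # weighted mean
wvar = sum_wx2 / sum_w - wmean ** 2   # weighted variance
print(wmean, wvar)
```

With equal weights this reduces to the unweighted formulas used earlier in the chapter.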

 

It’s not so simple with the quantiles and the density function. In addition to saving all the simulated values (do not multiply by the weights), you need to save the weights for each draw. On DENSITY, include the option WEIGHTS=series of weights. This will do the appropriate weighting of observations. To compute quantiles, use the %WFRACTILES function instead of %FRACTILES. This adds a third parameter which is the vector or series of weights. Note that the computation of quantiles for weighted draws can be quite time-consuming. If you do many draws, it may even take longer than generating the draws in the first place.

 

density(weights=weights,smpl=weights>0.00,smoothing=1.5) persist $

   1 draws xx dx

scatter(style=line,$

  header="Density of Persistence Measure (alpha+beta)")

# xx dx
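The idea behind %WFRACTILES can be sketched in a few lines (Python; the function name and data here are made up for illustration): sort the draws, accumulate normalized weights, and return the first value whose cumulative weight reaches the requested fraction.

```python
def weighted_fractile(values, weights, frac):
    """Return the frac-quantile of values under the given weights."""
    pairs = sorted(zip(values, weights))   # sort draws, carrying weights along
    total = sum(weights)
    cum = 0.0
    for v, w in pairs:
        cum += w / total
        if cum >= frac:
            return v
    return pairs[-1][0]

vals = [5.0, 1.0, 3.0, 2.0, 4.0]
wts = [1.0, 1.0, 1.0, 1.0, 1.0]
median = weighted_fractile(vals, wts, 0.5)
print(median)
```

The sort is what makes this expensive: it is O(N log N) in the number of draws, which is why the computation can dominate the run time for very large N.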

 


Copyright © 2025 Thomas A. Doan