RATS 10.1

Standard Normal Linear Model—Coefficients

The natural prior for the coefficients in the model

\begin{equation} {\bf{y}} = {\bf{X}}\beta + {\bf{u}},\quad {\bf{u}}\,|\,{\bf{X}} \sim N\left( {0,h^{ - 1} {\bf{I}}} \right) \label{eq:boot_snlm} \end{equation}

where \(h\) is the precision (the reciprocal of the variance), takes the form

\begin{equation} \beta \sim N\left( {\bar \beta ,{\bf{\bar H}}^\dagger } \right) \label{eq:boot_snlmprior} \end{equation}

These priors are usually stated in terms of their precision (\(\bf{\bar H}\)) rather than the variance, in part because precisions combine additively in the posterior. \({\bf{\bar H}}^\dagger \) is the generalized inverse of \(\bf{\bar H}\), which allows the prior to be uninformative (have “infinite” variance) in some dimensions or directions.
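
For example, a two-coefficient prior that pins the first coefficient near 1.0 (prior precision 4, i.e. a prior standard deviation of 0.5) while remaining completely flat on the second can be written directly as a singular precision matrix. This is a minimal sketch; HPRIOR and BPRIOR are illustrative names, not fixed by the text:

compute [symm] hprior   = %zeros(2,2)
compute hprior(1,1)     = 4.0
compute [vector] bprior = ||1.0,0.0||

The data evidence from \eqref{eq:boot_snlm} is summarized as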

\begin{equation} \beta \sim N\left( {\hat \beta ,h^{ - 1} \left( {{\bf{X'X}}} \right)^{ - 1} } \right) \label{eq:boot_snlmdatadensity} \end{equation}

where the precision is

\begin{equation} {\bf{\hat H}} = h{\bf{X'X}} \end{equation} 

The posterior from combining \eqref{eq:boot_snlmprior} and \eqref{eq:boot_snlmdatadensity} is

\begin{equation} \beta \sim N\left( {\left( {{\bf{\bar H + \hat H}}} \right)^{ - 1} \left( {{\bf{\hat H}}\hat \beta + {\bf{\bar H}}\bar \beta } \right),\left( {{\bf{\bar H + \hat H}}} \right)^{ - 1} } \right) \label{eq:boot_snlmposterior} \end{equation}

RATS offers three convenience functions for generating draws from the distribution in \eqref{eq:boot_snlmposterior}:

%RANMVPOST(H1,B1,H2,B2) takes the precision matrices (H1 and H2) and their corresponding means (B1 and B2).

%RANMVPOSTHB(H,HB) takes the combined precision matrix and the combined \(\bf{H}\beta\) vector (which is often easier to compute than \(\beta\) itself).

%RANMVPOSTCMOM(CMOM,h,H2,B2) assumes that CMOM is the cross moment matrix of \(\bf{X},\bf{y}\) (\(\bf{X}\) variables first), h is the residual precision \(h\) (the reciprocal of the residual variance) from \eqref{eq:boot_snlm}, and H2 and B2 are the prior precision and mean from \eqref{eq:boot_snlmprior}.
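
For example (a minimal sketch, not from the text: it assumes a preceding LINREG has defined %XX, the inverse of \({\bf{X'X}}\), and the OLS coefficient vector %BETA, and that HU, HPRIOR and BPRIOR hold the residual precision \(h\) and the prior precision and mean), a draw from \eqref{eq:boot_snlmposterior} using the first of these functions would be

compute hdata = hu*inv(%xx)
compute bdraw = %ranmvpost(hdata,%beta,hprior,bprior)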

Standard Normal Linear Model—Variance

The natural prior for the precision of \(\bf{u}\) in \eqref{eq:boot_snlm} is

\begin{equation} \nu s^2 h \sim \chi _\nu ^2 \label{eq:boot_snlmresidualprior} \end{equation}

(Note: \(s^2\) is a scale parameter for the prior). The data evidence from \eqref{eq:boot_snlm} for \(h\) is summarized as

\begin{equation} \left( {\left( {{\bf{y}} - {\bf{X}}\beta } \right)^\prime \left( {{\bf{y}} - {\bf{X}}\beta } \right)} \right)h\sim \chi _T^2 \label{eq:boot_snlmresidualdata} \end{equation}          

\eqref{eq:boot_snlmresidualprior} and \eqref{eq:boot_snlmresidualdata} combine to produce the posterior

\begin{equation} \left( {\left( {{\bf{y}} - {\bf{X}}\beta } \right)^\prime \left( {{\bf{y}} - {\bf{X}}\beta } \right) + \nu s^2 } \right)h\sim \chi _{T + \nu }^2 \end{equation}

If we call the term in parentheses RSSPLUS, we can get a draw for \(h\) with 

 

compute hu = %ranchisqr(nu+%nobs)/rssplus

 

Note that this draws the reciprocal of the variance. That’s what %RANMVPOSTCMOM (above) wants as its second parameter.

 

The sum of squared residuals can be computed using SSTATS if you have the residuals already computed (into, say, the series U):

 

sstats / u^2>>rssbeta

 

In most cases, however, it will be easier to use the convenience function %RSSCMOM(CMOM,Beta). Like %RANMVPOSTCMOM, %RSSCMOM works with the cross product matrix of \(\bf{X},\bf{y}\) (\(\bf{X}\) variables first). A complete sequence of drawing \(h\) given \(\beta\) and drawing \(\beta\) given \(h\) is:

 

compute rssplus = nu*s2 + %rsscmom(%cmom,beta)

compute hu      = %ranchisqr(nu+%nobs)/rssplus

compute beta    = %ranmvpostcmom(%cmom,hu,priorh,priorb)
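
Embedded in a loop, these three lines form a complete Gibbs sampler. The following is a minimal sketch, assuming the CMOM instruction and an initializing LINREG have already been run; NDRAWS is an illustrative setting, and in practice the early “burn-in” draws would be discarded:

compute beta = %beta
do draw=1,ndraws
   compute rssplus = nu*s2 + %rsscmom(%cmom,beta)
   compute hu      = %ranchisqr(nu+%nobs)/rssplus
   compute beta    = %ranmvpostcmom(%cmom,hu,priorh,priorb)
end do draw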

 

Multivariate Normal Regression (VAR)—Jeffreys Prior

Write a vector autoregression (or any multivariate regression with identical explanatory variables) as

\begin{equation} {\bf{y}}_t = \beta '{\bf{X}}_t + {\bf{u}}_t ,\quad t = 1, \ldots ,T \end{equation}

\({\bf{X}}_t \) is a \(p\)-vector of explanatory variables, \({\bf{y}}_t \) and \({\bf{u}}_t \) are \(n\)-vectors (all of these are column vectors), and \(\bf{\beta}\) is the \(p \times n\) matrix of coefficients. We use a matrix for \(\bf{\beta}\) in this discussion as much as possible, since that is the arrangement used by the functions %MODELGETCOEFFS and %MODELSETCOEFFS. The \({\bf{u}}_t \) are assumed to be i.i.d. over time with distribution \(N(0,\Sigma)\). Let the OLS estimates for \(\bf{\beta}\) and \(\Sigma\) be \(\bf{B}\) and \(\bf{S}\). With the standard diffuse (Jeffreys) prior of

\begin{equation} f\left( {\beta ,\Sigma } \right) \propto \left| \Sigma \right|^{ - \left( {n + 1} \right)/2} \end{equation}

the posterior is

\begin{equation} \Sigma \sim IW\left( {T{\bf{S}},\,T - p} \right) \label{eq:boot_iwposterior} \end{equation}

and, given \(\Sigma\),

\begin{equation} vec\left( \beta \right) \sim N\left[ {vec\left( {\bf{B}} \right),\,\Sigma \otimes \left( {{\bf{X'X}}} \right)^{ - 1} } \right] \label{eq:boot_varpostwithjeffreys} \end{equation}

\(\Sigma\) has an inverse Wishart distribution, which can be drawn using %RANWISHARTI. This takes as its arguments a factor of \(\left( {T\,{\bf{S}}} \right)^{ - 1} \) and the degrees of freedom. If the VAR is estimated using ESTIMATE, the following will generate a draw for \(\Sigma\).   

 

compute fwish   = %decomp(inv(%nobs*%sigma))

compute wishdof = %nobs - %nreg

compute sigmad  = %ranwisharti(fwish,wishdof)

 

To draw from \eqref{eq:boot_varpostwithjeffreys}, note that

\begin{equation} \left( {{\bf{F}}_\Sigma \otimes {\bf{F}}_{{\bf{XX}}} } \right)\left( {{\bf{F}}_\Sigma \otimes {\bf{F}}_{{\bf{XX}}} } \right)^\prime = \Sigma \otimes {\bf{X'X}} \end{equation}

where \({\bf{F}}_\Sigma  {\bf{F'}}_\Sigma   = \Sigma \) and \({\bf{F}}_{{\bf{XX}}} {\bf{F'}}_{{\bf{XX}}}  = \left( {{\bf{X'X}}} \right)^{ - 1} \), so a factor of the (possibly) very large covariance matrix can be obtained from factors of the component matrices. Since \({\bf{F}}_{{\bf{XX}}} \) is a function only of the observable data, it can be computed just once. Given the component factors, the function %RANMVKRON(FSIGMA,FXX) will draw a matrix which is the “unvec”ed draw from \eqref{eq:boot_varpostwithjeffreys} with a mean of zero. Adding the \(\bf{B}\) matrix will give a draw for \(\bf{\beta}\). The following sequence draws \(\Sigma\) (as SIGMAD) and \(\bf{\beta}\) (as BETADRAW). The first five lines depend only upon the data and can (and should) be done outside the simulation loop. Only the last three need to be done within it.

 

estimate

compute fwish   = %decomp(inv(%nobs*%sigma))

compute wishdof = %nobs - %nreg

compute fxx     = %decomp(%xx)

compute betaols = %modelgetcoeffs(varmodel)

 

compute sigmad  = %ranwisharti(fwish,wishdof)

compute fsigma  = %decomp(sigmad)

compute betadraw = betaols + %ranmvkron(fsigma,fxx)
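
If a draw is to be used for further calculations on the VAR (impulse responses, for example), %MODELSETCOEFFS (mentioned above) pushes the drawn coefficient matrix back into the model:

compute %modelsetcoeffs(varmodel,betadraw)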

 

Multivariate Normal Regression (VAR)—General Priors

The most time-consuming part of drawing a multivariate Normal vector of high dimension is factoring the covariance matrix. If we have a six-variable VAR with 20 coefficients per equation, factoring the \(120 \times 120\) covariance matrix and generating a draw based upon that takes roughly 30 times as long as using %RANMVKRON with the \(20 \times 20\) component already factored. And this gap gets larger as the size of the covariance matrix increases. The Kronecker product form saves time both by avoiding the factoring of a full-sized matrix and by taking advantage of the structure of the matrix in multiplying out to get the final draw.

 

However, unless the prior on the coefficients takes a very specific form, the posterior covariance matrix will not have a convenient structure (see Kadiyala and Karlsson, 1997). And \(\Sigma\) will not have an unconditional distribution as it does in \eqref{eq:boot_iwposterior}.

 

If we keep the standard diffuse prior for \(\Sigma\) but now use an informative multivariate Normal prior for \(\bf{\beta}\), so that the joint prior is

\begin{equation} f\left( {\beta ,\Sigma } \right) \propto \left| \Sigma \right|^{ - \left( {n + 1} \right)/2} \exp \left( { - \frac{1}{2}vec\left( {\beta - {\bf{\bar B}}} \right)^\prime {\bf{\bar H}}\,vec\left( {\beta - {\bf{\bar B}}} \right)} \right) \end{equation}

the posterior for \(\Sigma\) is given by

\begin{equation} \Sigma |\beta \sim IW\left( {\left( {{\bf{Y}} - {\bf{X}}\beta } \right)^\prime \left( {{\bf{Y}} - {\bf{X}}\beta } \right),\,T} \right) \label{eq:boot_vargeneralsigma} \end{equation}

and the posterior for \(vec\left( \beta  \right)|\Sigma \) is a multivariate Normal with precision

\begin{equation} \Sigma ^{ - 1} \otimes {\bf{X'X}} + {\bf{\bar H}} \label{eq:boot_vargeneralprecision} \end{equation}

and mean

\begin{equation} \left( {\Sigma ^{ - 1} \otimes {\bf{X'X}} + {\bf{\bar H}}} \right)^{ - 1} \left( {\left( {\Sigma ^{ - 1} \otimes {\bf{X'X}}} \right)vec\left( {\bf{B}} \right) + {\bf{\bar H}}vec\left( {{\bf{\bar B}}} \right)} \right) \label{eq:boot_vargeneralbeta} \end{equation}

While it’s possible to apply %RANMVPOST to this, there is an even greater advantage in using a pair of cross-moment-based functions to avoid unnecessary calculations. These are

%SIGMACMOM(CMOM,B), which computes \(\left( {{\bf{Y}} - {\bf{X}}\beta } \right)^\prime \left( {{\bf{Y}} - {\bf{X}}\beta } \right)\) from \eqref{eq:boot_vargeneralsigma}.

%RANMVKRONCMOM(CMOM,SINV,H,B), which, given \(\Sigma^{-1}\) (in the SINV parameter), generates a draw from the posterior given by \eqref{eq:boot_vargeneralprecision} and \eqref{eq:boot_vargeneralbeta}.

Note that if the B argument (the prior mean) in %RANMVKRONCMOM is \(p \times n\), the draw will be as well; if B is stacked in vector form, so will be the draw. The cross product matrix in this case should have the \(\bf{X}\) variables first, then all the \(\bf{Y}\)’s. This is the order that CMOM(MODEL=varmodel) will create.

 

The following will create BDRAW given SIGMAD and SIGMAD given BDRAW. The CMOM instruction goes outside the simulation loop; the other lines go inside it. Note that this is an example of Gibbs sampling.

 

cmom(model=varmodel)

 

compute bdraw  = %ranmvkroncmom(%cmom,inv(sigmad),hprior,bprior)

compute rssmat = %sigmacmom(%cmom,bdraw)

compute sigmad = %ranwisharti(%decomp(inv(rssmat)),%nobs)
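
Note that SIGMAD needs a starting value before the first pass through the loop. One natural (though not the only) choice is the OLS estimate %SIGMA from a preceding ESTIMATE:

compute sigmad = %sigma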

 

Probabilities

The natural prior for the probability \(p\) in a binomial random variable is the beta. If you have data with \(n\) observations and \(k\) successes, and a beta prior with parameters \(a,b\), the posterior is beta with parameters \(a + k,\,b + n - k\). In most cases, the easiest way to get the count value \(k\) is with the instruction SSTATS applied to a logical or relational expression. For instance, the following counts (and puts into the variable COUNT) the number of times that \(S_{t-1}\) was 1 and \(S_t\) was 2. The number of observations in the sample, that is, the number of times \(S_{t-1}=1\), will be put in %NOBS. The COMPUTE then makes a draw from the posterior (here for the probability of staying in regime 1), combining this with a beta prior with parameters \(\gamma _{11} ,\gamma _{12} \).

 

sstats(smpl=(s{1}==1)) 2 * s==2>>count

compute p11 = %ranbeta(gamma11+%nobs-count,gamma12+count)

 

If there are more than two alternatives, you would use the Dirichlet distribution. %RANDIRICHLET returns a vector with the draw for the probabilities of each of the states. The same basic setup with three states is done with

 

sstats(smpl=(s{1}==1)) 2 * s==2>>count12 s==3>>count13

compute p1x = %randirichlet(||gamma11+%nobs-count12-count13,$

   gamma12+count12,gamma13+count13||)


Copyright © 2025 Thomas A. Doan