RATS 10.1

There are quite a few matrix decomposition algorithms, which take a matrix (or pair of matrices) and represent it (or them) as a product of matrices that are easier to handle. Three specific forms are usually the aim of the decomposition. These, with their advantages, are:

Diagonal

These can be inverted by merely taking reciprocals of the diagonal elements. The determinant is just the product of the elements. And the rank can be determined by counting the number of non-zero elements.

Lower or Upper Triangular

Sometimes abbreviated as L (for Lower or Left) and U (for Upper) or R (for Right). The determinant is the product of the diagonal elements, and the eigenvalues are the diagonal elements themselves. The class of L matrices (or the class of U/R matrices) is closed under multiplication, addition and (for non-singular matrices) inversion. Solving a system of equations of the form \({\bf{Lx}} = {\bf{y}}\) is simple (by forward substitution) and doesn't require actually inverting \({\bf{L}}\).
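
For example, in a 2 × 2 lower triangular system, forward substitution produces the solution row by row without ever forming \({{\bf{L}}^{ - 1}}\):

\({l_{11}}{x_1} = {y_1} \Rightarrow {x_1} = {y_1}/{l_{11}}\), then \({l_{21}}{x_1} + {l_{22}}{x_2} = {y_2} \Rightarrow {x_2} = \left( {{y_2} - {l_{21}}{x_1}} \right)/{l_{22}}\)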

Orthogonal/unitary

This means \({\bf{QQ'}} = {\bf{I}}\), that is, the inverse is the transpose. (For complex-valued matrices, the adjoint (conjugate transpose) is used rather than the transpose.) These must be full rank, and their determinant is ±1. Solving \({\bf{Qx}} = {\bf{y}}\) requires just computing \({\bf{x}} = {\bf{Q'y}}\).

 

The following are the principal matrix decompositions provided within RATS.

 

Cholesky Factorization

This is heavily used in time series work, particularly in Vector Autoregressions. For a positive semi-definite symmetric matrix \(\Sigma \), the Cholesky factorization is \(\Sigma {\bf{ = FF'}}\) where \({\bf{F}}\) is lower triangular (and thus \({{\bf{F'}}}\) is upper triangular). \({\bf{F}}\) is computed with a straightforward constructive algorithm, while many other decompositions require some form of iteration.


The Cholesky decomposition can be done in RATS using the %CHOL(sigma) or %DECOMP(sigma) functions (they're synonyms). A related factorization, which is a Cholesky factorization with the variables reordered, can be done with %PSDFACTOR(sigma,||row order||) (Positive Semi-Definite FACTOR).
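
As a minimal sketch of the calling sequence (the 2 × 2 matrix and the variable names here are purely illustrative), the factor can be checked by multiplying it back together:

* illustrative covariance matrix
compute sigma = ||4.0,2.0|2.0,3.0||
* F is the lower triangular Cholesky factor
compute f = %decomp(sigma)
* F*F' should reproduce SIGMA (up to rounding); TR() is the transpose
compute check = f*tr(f)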
 

Eigenvalue Decomposition (symmetric matrices)

A symmetric matrix \({\bf{A}}\) can always be written \({\bf{A = PDP'}}\) where \({\bf{P}}\) is orthogonal and \({\bf{D}}\) is diagonal. The diagonal of \({\bf{D}}\) forms the vector of eigenvalues, and the columns of \({\bf{P}}\) are the eigenvectors. Powers of \({\bf{A}}\) can be computed easily using manipulations like

 

\({\bf{AA = PD(P'P)DP' = PDDP'}}\)

 

where \({\bf{DD}}\) will just be the diagonal matrix whose elements are the squares of those of \({\bf{D}}\). This extends to any power of \({\bf{A}}\) and, in fact, to any function with the proper behavior; for instance, \(\exp ({\bf{A}})\) can be defined as \({\bf{P}}\exp ({\bf{D}}){\bf{P'}}\) where \(\exp ({\bf{D}})\) is the diagonal matrix formed by taking exp of the diagonal of \({\bf{D}}\).
 

The eigenvalue decomposition can be done in RATS using either the EIGEN instruction, or the %EIGDECOMP function. The latter returns a 2-VECTOR of arrays, where the first element is the diagonal of \({\bf{D}}\), and the second is \({\bf{P}}\).
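
As an illustrative sketch using the EIGEN instruction form (the 2 × 2 matrix is purely for illustration, and %ZEROS is assumed to be available for creating a zero matrix), this computes \(\exp ({\bf{A}})\) as \({\bf{P}}\exp ({\bf{D}}){\bf{P'}}\):

* illustrative symmetric matrix
compute a = ||3.0,1.0|1.0,2.0||
* eigenvalues into D, eigenvectors into the columns of P
eigen a d p
* build exp(D) as a diagonal matrix
compute expd = %zeros(2,2)
do i=1,2
   compute expd(i,i) = exp(d(i))
end do i
* exp(A) = P exp(D) P'
compute expa = p*expd*tr(p)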
 

Eigenvalue Decomposition (general matrices)

Ideally, a general matrix \({\bf{A}}\) can be written \({\bf{A}} = {\bf{PD}}{{\bf{P}}^{ - 1}}\). However, there is no guarantee that this is possible; all that is guaranteed is that the center matrix is in "Jordan" form. As long as there are no repeated eigenvalues (which will usually be true if \({\bf{A}}\) is based upon sample information or estimates), the diagonal decomposition is possible. Note, however, that \({\bf{P}}\) is not orthogonal, so this isn't quite as convenient as the symmetric case. Also, it's quite possible for \({\bf{D}}\) and \({\bf{P}}\) to be made up of complex numbers, even when \({\bf{A}}\) is all real-valued.

 

Again, you can do the eigen decomposition of general matrices with the EIGEN instruction, but unless you know that the eigenvalues have to be real-valued, you should use the CVALUES and CVECTORS options.
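
A minimal sketch of that call (the matrix A and the names CVALS and CVECS are placeholders, and this assumes the options can be used in place of the positional eigenvalue/eigenvector parameters):

* CVALS and CVECS receive the (possibly complex-valued) eigenvalues and eigenvectors
eigen(cvalues=cvals,cvectors=cvecs) a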

 

QR Decomposition

Any real square matrix \({\bf{A}}\) can be written \({\bf{A}}{\rm{ = }}{\bf{QR}}\), where \({\bf{Q}}\) is orthogonal and \({\bf{R}}\) is right (upper) triangular. Because of the properties of the matrices, the determinant of \({\bf{A}}\) is (up to sign) the determinant of \({\bf{R}}\), which is the product of the diagonal elements of \({\bf{R}}\). This is (in effect) the Gram-Schmidt orthonormalization of the columns of \({\bf{A}}\): for any p, the first p columns of \({\bf{Q}}\) form an orthonormal basis for the space spanned by the first p columns of \({\bf{A}}\).
 

The QR decomposition can be done in RATS using the %QRDECOMP(A) function. This returns a 2-VECTOR of matrices, where the first element is \({\bf{Q}}\) and the second is \({\bf{R}}\).
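
An illustrative sketch (A is assumed to be an existing square matrix; the variable names are arbitrary):

compute qrfac = %qrdecomp(a)
* first element = Q, second = R
compute q = qrfac(1), r = qrfac(2)
* Q'Q should be the identity and Q*R should reproduce A (up to rounding)
compute orthcheck = tr(q)*q
compute acheck = q*r
* the determinant of R is the product of its diagonal elements
compute detr = %det(r)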

 

QR can be generalized to an m × n matrix (m ≥ n) where \({\bf{Q}}\) will be m × m and \({\bf{R}}\) is m × n with zeros in the extra rows.

 

Singular Value Decomposition

Any m × n matrix \({\bf{A}}\) (m ≥ n) can be written in the form \({\bf{A}} = {\bf{UWV'}}\) where \({\bf{U}}\) is m × n column orthonormal (\({\bf{U'U}} = {\bf{I}}\)), \({\bf{V}}\) is n × n orthonormal (\({\bf{V'V}} = {\bf{VV'}} = {\bf{I}}\)) and \({\bf{W}}\) is diagonal with non-negative values. The diagonal values of \({\bf{W}}\) are known as the singular values of \({\bf{A}}\). As RATS generates these, they will be in decreasing order. If \({\bf{A}}\) isn't of full column rank, then the number of (effectively) non-zero values of \({\bf{W}}\) is the column rank of \({\bf{A}}\). What "effectively" means isn't always clear; in practice, none of the singular values is likely to be exactly zero.

RATS uses the singular value decomposition internally in doing the %NULLSPACE, %PERP and %GINV functions. Where it needs to determine whether a singular value is machine-zero, it compares it with the largest of the singular values and treats it as zero if it has no precision relative to that (that is, if adding it to the largest value doesn't change the largest value given machine precision).

You can compute the SVD directly using the %SVDECOMP function, which returns a VECT[RECT] whose first element is \({\bf{U}}\), whose second holds the diagonal elements of \({\bf{W}}\), and whose third is \({\bf{V}}\).
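
An illustrative sketch (A is assumed to be an existing matrix; the variable names are arbitrary):

compute svdfac = %svdecomp(a)
* first element = U, second = the singular values, third = V
compute u = svdfac(1), w = svdfac(2), v = svdfac(3)
* column orthonormality check: U'U should be the identity
compute ucheck = tr(u)*u
* %GINV computes the generalized inverse using the same decomposition internally
compute aplus = %ginv(a)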

Schur Decomposition

Any real square matrix \({\bf{A}}\) can be written \({\bf{A}} = {\bf{Q\Lambda Q'}}\), where \({\bf{Q}}\) is orthogonal and \({\bf{\Lambda}}\) is block upper triangular. It's strictly upper triangular if \({\bf{A}}\) has all real eigenvalues. If \({\bf{A}}\) has complex eigenvalues, there's a 2 × 2 block on the diagonal for each complex conjugate pair of eigenvalues. (There's also a strictly upper triangular representation if the matrices are allowed to be complex-valued.) Compared with the eigenvalue decomposition for a general square matrix, this has the advantage of using an orthogonal matrix. It's also more stable numerically: computing eigenvectors for a general matrix with repeated or near-repeated eigenvalues is a very difficult process.

 

The Schur decomposition has some similarities to QR, but is especially useful for handling dynamic models. For instance, if we replace the transition matrix \({\bf{A}}\) in a state-space model with its Schur decomposition, we get the following transformation:


\({{\bf{x}}_t} = {\bf{Q\Lambda Q'}}{{\bf{x}}_{t - 1}} + {{\bf{w}}_t} \Rightarrow {\bf{Q'}}{{\bf{x}}_t} = {\bf{\Lambda Q'}}{{\bf{x}}_{t - 1}} + {\bf{Q'}}{{\bf{w}}_t} \Rightarrow {{\bf{\tilde x}}_t} = {\bf{\Lambda }}{{\bf{\tilde x}}_{t - 1}} + {{\bf{\tilde w}}_t}\)


which rotates the states (\({{\bf{\tilde x}}_t} = {\bf{Q'}}{{\bf{x}}_t}\)) to replace a general transition matrix with an easier-to-handle block upper triangular matrix.


The Schur decomposition can be done with RATS using the QZ instruction with \({\bf{B}}\) set to the identity matrix.

 

QZ (Generalized Schur) Decomposition

Any pair of real square matrices \({\bf{A}}\) and \({\bf{B}}\) can be written \({\bf{A}}{\rm{ = }}{\bf{Q\Lambda Z'}}\) and \({\bf{B}}{\rm{ = }}{\bf{Q\Omega Z'}}\), where \({\bf{Q}}\) and \({\bf{Z}}\) are each orthogonal and where \({\bf{\Omega }}\) is upper triangular and \({\bf{\Lambda }}\) is block upper triangular. \({\bf{\Lambda }}\) is strictly upper triangular if the \(({\bf{A}},{\bf{B}})\) combination has all real generalized eigenvalues.

 

As with the regular Schur decomposition, this is useful in manipulating dynamic models, though here it applies when the "left side" of the equation isn't necessarily full rank.

 

The QZ decomposition is done using the QZ instruction. Note that, unlike the eigen decompositions, where it's relatively easy to rearrange the eigenvalues/vectors, in QZ it isn't, because the center matrices aren't diagonal: reordering the generalized eigenvalues (the ratios of the corresponding diagonal elements of \({\bf{\Lambda }}\) and \({\bf{\Omega }}\)) is computationally more demanding than computing the decomposition in the first place. Instead, if necessary, you block the matrices based upon eigenvalues above and below a certain value, which is not as difficult.


 


Copyright © 2025 Thomas A. Doan