PCA
Without knowing the paper, I can't give you a definitive answer.
However, if you multiply each eigenvector by the square root of its respective eigenvalue, that 'weights' each eigenvector by its importance in explaining the system. Perhaps that is what the paper refers to.
For example, if you are using the first k principal components of an NxN correlation matrix to simulate correlated standard normal random numbers, you would multiply a 1xk vector of independent standard normal random numbers by the kxN matrix of weighted eigenvectors to get a 1xN vector of correlated random numbers.
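For concreteness, here is a minimal numpy sketch of that calculation. The 3x3 correlation matrix and all the variable names are made up purely for illustration:

# Minimal sketch, assuming a made-up 3x3 correlation matrix R
import numpy as np

rng = np.random.default_rng(0)

R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# np.linalg.eigh returns eigenvalues in ascending order, so reverse them
eigvals, eigvecs = np.linalg.eigh(R)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

k = 2                                # keep the first k principal components
# weight each eigenvector by the square root of its eigenvalue -> kxN matrix
W = (eigvecs[:, :k] * np.sqrt(eigvals[:k])).T

z = rng.standard_normal(k)           # 1xk vector of independent N(0,1) draws
x = z @ W                            # 1xN vector of correlated draws

With k = N this reproduces the correlation structure exactly; with k < N it is the best rank-k approximation, which is the usual point of using only the leading components.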
I hope that this helps.
Ted
The examples from the second edition of Tsay (tsayp408 through tsayp437) all do various forms of factor analysis. We've added or updated quite a few procedures for implementing these.
Regarding the "loadings", the basic factor model is X (the observed data) = L * F + U
F are the factors and U are the idiosyncratic shocks; L contains the loadings from the factors to the observed variables. If the X's are on roughly the same scale (factor analysis is often done on correlation matrices to make that true), then a relatively high value of a loading indicates that a factor is relatively important for that variable. What you generally hope to see (in a multi-factor model) is one factor loading heavily on one set of variables and another loading heavily on a different set. Since only the space spanned by the factors, not the factors themselves, is determined by the estimation procedure, there are ways to "rotate" the factors to try to achieve such a simple interpretation.
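A small Python sketch of what that looks like in practice. The two-block loading structure below is invented purely for illustration, and the varimax rotation is one common choice for seeking the "simple" pattern described above:

# Sketch of the factor model X = L * F + U with made-up data
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
T = 500

# two latent factors, each driving a different block of observed variables
F = rng.standard_normal((T, 2))
L = np.array([[0.9, 0.8, 0.7, 0.0, 0.0, 0.0],   # factor 1 loads on vars 1-3
              [0.0, 0.0, 0.0, 0.9, 0.8, 0.7]])  # factor 2 loads on vars 4-6
U = 0.4 * rng.standard_normal((T, 6))           # idiosyncratic shocks
X = F @ L + U

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(X)
print(fa.components_)   # estimated loadings: each row (factor) should load
                        # heavily on one block of variables and near zero
                        # on the other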
Re: PCA
It's PC (Principal Components), not PCA (Principal Components Analysis, which is the name of the technique).
Although @PRINCOMP can be applied to non-stationary series, it's a bad idea. Non-stationary series don't have a constant mean, much less a constant covariance matrix. If you try to extract PC's from a set of I(1) series, you are likely to get what amounts to a "spurious regression" component. Both the cointegration and common trends approaches use calculations closely related to PCA, but deal more carefully with the dynamics of the process.
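A quick way to see the problem (generic Python rather than @PRINCOMP; the simulation is just illustrative): generate independent random walks, which by construction share no common component, and a PCA of their sample correlation matrix will still typically report a dominant first component.

# Independent I(1) series: any "common component" found here is spurious
import numpy as np

rng = np.random.default_rng(0)
T, N = 500, 5

walks = np.cumsum(rng.standard_normal((T, N)), axis=0)

# PCA on the (meaningless) sample correlation matrix
R = np.corrcoef(walks, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]
print(eigvals / eigvals.sum())   # the first "component" will often appear to
                                 # explain a large share of variance, even
                                 # though the series are independent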