Uncertain about Forecast Uncertainty
Posted: Thu Nov 18, 2010 3:54 pm
Hi
I'm looking for some guidance on incorporating uncertainty into VAR forecasts, for a system that contains both economic (inflation) and financial (S&P 500) variables.
Let's assume my VAR uses 1 lag and 3 variables y, x, z, and that I am producing monthly forecasts 1, 2 and 3 periods ahead based on a monthly history of 4 years. Let's also assume that all historical and forecast values are positive. The first thing I want to find out is:
"what is the lowest value y OR x takes between 1 and 3 months into the future 95% of the time ?"
The second question is:
"what % of the time does y < a AND x < b?"
If we only had one variable y, following an AR(1) process, then we could just look at that variable's forecast variance h steps (1, 2, 3) ahead and, assuming a normal distribution, take the value it stays above 95% of the time (the 5th percentile). That variance would come from the MA(∞) representation of the process.
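Just to spell out the univariate case I have in mind, here is a small Python sketch; the AR(1) coefficient, intercept, shock standard deviation and last observed value are only placeholders:

import numpy as np
from scipy.stats import norm

phi, sigma = 0.8, 0.02   # illustrative AR(1) coefficient and shock std dev
c, y_T = 0.1, 1.5        # illustrative intercept and last observed value

for h in (1, 2, 3):
    # point forecast: iterate the AR(1) recursion h times
    mean_h = y_T
    for _ in range(h):
        mean_h = c + phi * mean_h
    # h-step forecast error variance from the MA weights: sigma^2 * sum_{j=0}^{h-1} phi^(2j)
    var_h = sigma**2 * sum(phi**(2 * j) for j in range(h))
    # value exceeded 95% of the time under normality (the 5th percentile)
    low_h = mean_h + norm.ppf(0.05) * np.sqrt(var_h)
    print(h, mean_h, var_h, low_h)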
For 3 variables, am I correct in saying that if we are interested in y OR x then it suffices to look at the variance of y h steps ahead and derive the 95% limit, and then separately look at the h-step-ahead variance of x? These variances could be the output of the 'errors' function - note that I'm interested in the total variance of each variable.
But when we look at the joint event for y and x, we obviously have to take the correlations into account.
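For the VAR(1) case, my understanding is that the h-step forecast error covariance (ignoring coefficient uncertainty) is Sigma_h = Sigma + A Sigma A' + ... + A^(h-1) Sigma (A^(h-1))', and the marginal 95% limits for y and x would come from its diagonal. A sketch in Python, where the coefficient matrix A and residual covariance Sigma are made up:

import numpy as np
from scipy.stats import norm

# placeholder VAR(1) coefficient matrix and residual covariance for (y, x, z)
A = np.array([[0.5, 0.1, 0.0],
              [0.2, 0.4, 0.1],
              [0.0, 0.1, 0.6]])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])

def forecast_cov(A, Sigma, h):
    # Sigma_h = sum_{j=0}^{h-1} A^j Sigma (A^j)'
    cov = np.zeros_like(Sigma)
    Aj = np.eye(A.shape[0])
    for _ in range(h):
        cov += Aj @ Sigma @ Aj.T
        Aj = Aj @ A
    return cov

for h in (1, 2, 3):
    S_h = forecast_cov(A, Sigma, h)
    # how far below the point forecasts the 5% limits sit for y and x (first two variables)
    print(h, norm.ppf(0.05) * np.sqrt(np.diag(S_h))[:2])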
1. Can we just assume a normal distribution for all 3 variables h steps ahead, based on their calculated covariance matrix (described in the middle of page 15 of the VAR workbook), and then do a Monte Carlo to calculate the joint event frequency? (The first sketch after question 3 below shows roughly what I mean.)
2. Can we do Monte Carlo integration, drawing coefficients from the posterior and producing forecasts based on those draws? Then repeat the draws 1000 times to calculate the joint event frequency? (The second sketch below is what I picture.)
A couple of naive questions here. Why do we draw the coefficient covariance matrix based on the residual covariance matrix? Couldn't we draw from the coefficient covariance matrix directly? And why do we need a posterior covariance matrix rather than drawing based on the prior? Is that methodology the same as the I and P steps of data augmentation for incomplete multivariate normal data, and is there a reference to understand more about it?
3. I read about another method where, one scenario at a time during a Monte Carlo simulation, random shocks are added to the beta coefficients with a magnitude inversely related to their t-statistics. Then, say, 2000 sets of forecast paths are generated for all 3 variables. Have you heard of such a method, and is there a name under which I can read about it? (My rough reading of it is the last sketch below.)
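To make question 1 concrete, this is the kind of Monte Carlo I have in mind for the joint event at a single horizon, treating the 3-step-ahead forecast as multivariate normal; the means, covariance and thresholds a, b are all placeholders. (For the "lowest value between 1 and 3 months" question I suppose I would need draws of the whole path rather than one horizon.)

import numpy as np

rng = np.random.default_rng(0)

# placeholder 3-step-ahead forecast means for (y, x, z) and their forecast covariance
mean_3 = np.array([1.8, 2.5, 0.9])
Sigma_3 = np.array([[0.06, 0.02, 0.01],
                    [0.02, 0.12, 0.03],
                    [0.01, 0.03, 0.20]])
a, b = 1.6, 2.3   # thresholds in "y < a AND x < b"

draws = rng.multivariate_normal(mean_3, Sigma_3, size=100_000)
joint_freq = np.mean((draws[:, 0] < a) & (draws[:, 1] < b))
print(joint_freq)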
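And this is roughly what I picture for question 2, assuming a diffuse prior so that the posterior is normal-inverse-Wishart: draw the residual covariance from an inverse Wishart, then the coefficients from a conditional normal, then simulate a forecast path with future shocks. Everything here - the fake data, the thresholds, the 1000 draws - is just a placeholder for my real set-up.

import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# fake monthly data for (y, x, z) standing in for my real 4-year history
T, n, p = 48, 3, 1
data = 2.0 + 0.1 * rng.standard_normal((T, n)).cumsum(axis=0)

# VAR(1) regression Y = X B + U, with X = [constant, lagged values]
Y = data[p:]
X = np.hstack([np.ones((T - p, 1)), data[:-p]])
k = X.shape[1]

XtX_inv = np.linalg.inv(X.T @ X)
B_hat = XtX_inv @ X.T @ Y            # k x n OLS coefficients (columns = equations)
resid = Y - X @ B_hat
S = resid.T @ resid                  # scale matrix for the inverse Wishart

a, b, H, n_draws = 1.9, 2.1, 3, 1000
hits = 0
x_last = np.hstack([1.0, data[-1]])  # regressors for the first forecast step

for _ in range(n_draws):
    # 1) draw the residual covariance from its posterior
    Sigma = invwishart.rvs(df=T - p - k, scale=S)
    # 2) draw coefficients given Sigma: vec(B) ~ N(vec(B_hat), Sigma kron (X'X)^-1)
    cov_B = np.kron(Sigma, XtX_inv)
    B = B_hat + rng.multivariate_normal(np.zeros(k * n), cov_B).reshape(n, k).T
    # 3) simulate one path H steps ahead, adding future shocks drawn from Sigma
    x_t = x_last.copy()
    for _h in range(H):
        y_next = x_t @ B + rng.multivariate_normal(np.zeros(n), Sigma)
        x_t = np.hstack([1.0, y_next])
    # 4) record whether the joint event holds at the final horizon
    hits += (y_next[0] < a) and (y_next[1] < b)

print(hits / n_draws)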
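Finally, my (possibly wrong) reading of the method in question 3: each scenario shakes the coefficients with a normal shock whose standard deviation is inversely related to the t-statistic - which, if taken literally, is just the coefficient's standard error |beta| / |t|. A minimal sketch for one equation, with made-up coefficients and t-statistics:

import numpy as np

rng = np.random.default_rng(1)

# placeholder OLS coefficients and t-statistics for one equation of the VAR
beta_hat = np.array([0.10, 0.55, 0.08, -0.03])
t_stats = np.array([1.2, 6.0, 0.9, -0.4])

n_scenarios = 2000
# shock std dev inversely related to |t|: here simply the standard error |beta| / |t|
shock_sd = np.abs(beta_hat / t_stats)
beta_scenarios = beta_hat + rng.standard_normal((n_scenarios, beta_hat.size)) * shock_sd
# each row of beta_scenarios would then drive one simulated forecast path
print(beta_scenarios.shape)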
If you are still reading this extra-long topic, then I would appreciate a reply!