Markov-Switching: Time-Varying Transition Probabilities
Hi everyone,
I am working on a Markov-switching model and would like to use time-varying (rather than constant) transition probabilities, as in Filardo (1994) and Diebold, Lee and Weinbach (1994). I have just started using RATS, and am now able to estimate Markov-switching models by following the RATS User's Guide.
Could you offer some suggestions on time-varying transition probabilities? How did you implement them in RATS, and would it be possible to share your RATS code with me? I would like to start by learning from your code how to do this in RATS. I sincerely appreciate your help.
Best wishes,
TL
Re: Markov-Switching: Time-Varying Transition Probabilities
See Filardo JBES 1994.
Re: Markov-Switching: Time-Varying Transition Probabilities
Dear Tom,
I am not sure whether my previous post was clear enough; please let me know if I can clarify anything. I would appreciate any suggestions on how to implement the procedure in RATS (or where I should start).
To obtain parameter estimates and estimated standard errors from the time-varying transition probability Markov-switching model, I have used the attached code that you and Mr Maycock kindly helped me with.
Currently, I would like to construct residual-based diagnostics such as the Ljung-Box portmanteau test,
(1) applied to the residuals at j autocorrelations, to test for residual autocorrelation up to order j [Q(j)]
(2) applied to the squared residuals at j autocorrelations, to test for autoregressive conditional heteroscedasticity up to order j [Q^2(j)]
However, since the state variable St is unobservable, the residuals from the fitted model are also unobservable.
Following Maheu and McCurdy (2000), I understand that I should construct the standardized expected residuals as a weighted average of the residuals obtained under each regime, with weights equal to the smoothed probabilities of being in each regime.
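In case it helps to make the question concrete, here is a rough sketch of what I have in mind, written in Python for illustration only (the two-regime setup, the function names, and the hand-rolled Ljung-Box statistic are my own assumptions; the RATS version is what I am asking about):

```python
import numpy as np

def expected_std_residuals(y, mu, sigma, smoothed):
    """Standardized expected residuals for a 2-regime model:
    a weighted average of the per-regime standardized residuals,
    with weights equal to the smoothed regime probabilities."""
    # per-regime standardized residuals, shape (T, 2)
    z = (y[:, None] - mu[None, :]) / sigma[None, :]
    # weight by smoothed P(S_t = regime | full sample), shape (T, 2)
    return (smoothed * z).sum(axis=1)

def ljung_box(e, j):
    """Ljung-Box Q(j) = T(T+2) * sum_{k=1..j} rho_k^2 / (T-k)."""
    T = len(e)
    e = e - e.mean()
    denom = (e ** 2).sum()
    q = 0.0
    for k in range(1, j + 1):
        rho_k = (e[k:] * e[:-k]).sum() / denom  # lag-k autocorrelation
        q += rho_k ** 2 / (T - k)
    return T * (T + 2) * q
```

Q(j) would then be `ljung_box(e, j)` on the expected residuals, and Q^2(j) the same statistic on `e**2`.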
Could you please suggest how to implement this procedure in RATS?
Thank you very much for your help and kindness.
Tim
Re: Markov-Switching: Time-Varying Transition Probabilities
Dear Tom,
I have estimated the time-varying transition probability Markov-switching model. I find that, in the transition probability function, the constant term (B10 or B20) is large compared to the estimated coefficient on the explanatory variable (B11 or B21). Is this unusual?
Thank you very much for your help and kindness.
Sincerely,
Tim
Re: Markov-Switching: Time-Varying Transition Probabilities
Won't the scale of the coefficient on the explanatory variable depend on the scale of the explanatory variable itself? In addition, the value of the intercept will (as in any regression) depend on whether the explanatory variable has a mean near zero. In your case, the explanatory variable has a mean of -8.5 and a standard deviation fairly small relative to that, so it is roughly an order of magnitude larger than 1, with a non-zero mean.
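To illustrate the point, with the logistic link typically used for the transition probabilities, rescaling the explanatory variable rescales its coefficient by the inverse factor while leaving the fitted probability unchanged (the coefficient values below are made up, not your estimates):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# Filardo-style time-varying transition probability:
#   p11(t) = logistic(B10 + B11 * z[t])
b10, b11 = 2.0, 0.1   # hypothetical estimates
z = -8.5              # explanatory variable at its sample mean

p11 = logistic(b10 + b11 * z)

# Dividing z by 10 multiplies the implied coefficient by 10,
# but the transition probability itself is unchanged:
z_scaled = z / 10.0
p11_scaled = logistic(b10 + (b11 * 10.0) * z_scaled)
assert abs(p11 - p11_scaled) < 1e-12
```

So a "big" intercept relative to the slope mostly reflects the units and mean of z, not a problem with the model.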
Re: Markov-Switching: Time-Varying Transition Probabilities
Dear Tom,
I have read a message you posted about EM and ML estimation under the topic ‘Filardo JBES 1994 Time-Varying MS Model’.
I have used ML to estimate a time-varying transition probability Markov-switching model, as is done in the existing literature.
What is the difference between using EM and ML? Could you please advise me on this?
Thank you very much for your help and kindness.
Sincerely,
Tim
Re: Markov-Switching: Time-Varying Transition Probabilities
EM tends to get close to the optimum more quickly because it eliminates the function evaluations required to compute derivatives with respect to the linear regression parameters. However, because it handles the parameter set in pieces, it doesn't produce a full-system covariance matrix. In general, starting with EM and then switching to ML gives the best of both worlds.
In the case of the time-varying probability model, however, the two aren't doing exactly the same optimization. EM estimates the pre-sample probabilities as part of the smoothing algorithm; ML can only duplicate that by adding the pre-sample probabilities to the parameter set. Unlike the fixed transition probabilities case, there is no "ergodic" solution for the pre-sample probabilities. In the Filardo code, we used the ergodic probabilities evaluated at the mean of the z variables to initialize ML.
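Under the usual two-state logistic specification, that initialization can be sketched as follows (Python for illustration only; the coefficient vectors here are placeholders, not values from the Filardo code):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def ergodic_init(b1, b2, zbar):
    """Pre-sample probabilities for a 2-state TVTP model:
    evaluate the transition probabilities at the mean of z,
    then take the ergodic distribution of that fixed matrix."""
    p11 = logistic(b1[0] + b1[1] * zbar)  # P(S_t=1 | S_{t-1}=1) at zbar
    p22 = logistic(b2[0] + b2[1] * zbar)  # P(S_t=2 | S_{t-1}=2) at zbar
    # ergodic probability of state 1 for a fixed 2x2 transition matrix
    pi1 = (1.0 - p22) / (2.0 - p11 - p22)
    return pi1, 1.0 - pi1
```

With the actual time-varying probabilities there is no matrix whose ergodic distribution is "the" answer, which is why this is only a starting value for ML rather than a substitute for estimating the pre-sample probabilities.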