I have noticed that repeatedly executing the MAXIMIZE command, using the last obtained parameter estimates as initial values for the next execution, yields unreasonably low standard errors for the GARCH parameters.
I am running the Koutmos (1996) code you posted here: http://www.estima.com/forum/viewtopic.p ... tmos#p1587
The only change I have made to the code is to duplicate the last line so as to re-run MAXIMIZE after the code has already reached convergence. This of course has no practical purpose in this context, other than letting me ask my question in a clean setting. So, the only deviation from the code you posted is that it ends with
Code:
nonlin b a d g rr
maximize(pmethod=simplex,piters=2,method=bfgs,trace,iters=200) Lt start+1 end
* duplicated line: re-runs the estimation starting from the converged values
maximize(pmethod=simplex,piters=2,method=bfgs,trace,iters=200) Lt start+1 end
Notice how the standard errors decrease substantially from the first run to the second, even though the likelihood does not change.
Can you please explain this effect? What should be done?
I am worried that restarting MAXIMIZE when the first run fails to converge would produce incorrect standard errors.
Thanks,
Marin