Code:
nonlin landa gamma
frml eq1 = landa**2*var(gt)+gamma**2*var(ft)
compute landa=0, gamma=0
nlls(frml=eq1) var(yt)
Hi Tom. Thanks for the quick reply. f and g are two independent risk factors estimated from a state-space model. In my model, an asset return is explained by g, f and its own idiosyncratic error (Ri,t = b1*Gt + b2*Ft + Et). Since the two risk factors are orthogonal to each other, the variance of each asset return can be decomposed into a linear combination of the variances of the two factors and of the idiosyncratic error (VAR(Ri,t) = a1*VAR(Gt) + a2*VAR(Ft) + VAR(Et), where a1 = b1**2 and a2 = b2**2). Now I have VAR(Ri,t), VAR(Gt) and VAR(Ft), but I am not sure how to ensure that a1 and a2 come out positive, given that a1 and a2 are squared coefficients. Once again, thank you for the quick reply.

TomDoan wrote:
What are f and g? There's nothing in this that would distinguish them. As you have this written, both f and g have to somehow be normalized or you don't have identification between their variance and the lambda and gamma multipliers.
In general, there's nothing you can do to ensure that the three effects are all positive; it's quite possible (in some cases likely) that at least one of the components is zero. After all, you're decomposing the variance of a scalar process into three components based solely upon the values of that scalar process.
I am so sorry for the confusion here. Actually, f and g are both estimated state factors from a state-space model. So the variance decomposition can be done by first running a linear regression of y on f and g and squaring the regression coefficients; then getting the variances of y, f and g from "statistics"; finally, the proportion of the variance of y explained by factor f equals the variance of f times the corresponding squared coefficient, divided by the variance of y, and similarly for factor g. The remaining part of the variance of y is due to the variance of the idiosyncratic errors. I am sorry for asking this naïve question. Please kindly correct me if the procedure is wrong.

TomDoan wrote:
I'm confused. If y, f and g are "data" and y is linearly related to f and g, why wouldn't you just do a linear regression and square the regression coefficients to get the variance decomposition?
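In RATS that procedure takes only a few lines. A minimal sketch, with hypothetical series names y, f and g standing in for the return and the two estimated factors:

Code:
* a1 and a2 are nonnegative by construction (squared OLS coefficients).
linreg(noprint) y
# constant f g
compute b1=%beta(2), b2=%beta(3)
compute a1=b1**2, a2=b2**2, vare=%seesq
stats(noprint) y
compute vary=%variance
stats(noprint) f
compute varf=%variance
stats(noprint) g
compute varg=%variance
disp "share of f =" a1*varf/vary
disp "share of g =" a2*varg/vary
disp "idiosyncratic share =" vare/vary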
Thank you, Tom. Can I ask one more question? I thought the first two components plus the idiosyncratic error should sum to about 100% or less, since I am omitting the cross-variance term. I do not understand why the first two components could sum to more than 100%, as you suggested. In addition, I found that factor f can, in some cases, explain more than 100% of the variance of y. Is that normal?

TomDoan wrote:
That would be correct. Note, however, that there is no guarantee that the first two components won't sum to more than 100%. You're making the assumption that the three components are orthogonal, so your variance calculations are omitting the cross terms which, in practice, won't be zero.
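For reference, the omitted piece is the cross term 2*b1*b2*COV(f,g)/VAR(y): in sample, VAR(y) = b1**2*VAR(f) + b2**2*VAR(g) + 2*b1*b2*COV(f,g) + VAR(e), so when the cross term is negative the other shares must sum to more than 100%. Continuing the sketch above, it can be checked directly (the covariance is computed as the mean of the demeaned product, ignoring the degrees-of-freedom correction):

Code:
* Sample covariance of f and g via the mean of the demeaned product.
stats(noprint) f
compute fm=%mean
stats(noprint) g
compute gm=%mean
set fg = (f-fm)*(g-gm)
stats(noprint) fg
compute covfg=%mean
disp "cross-term share =" 2*b1*b2*covfg/vary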
Hi Tom, I really appreciate your great patience and explanation. However, I do not understand how such a big covariance is possible if g and f are estimated as orthogonal factors. One possible reason I can think of is that the estimation procedure is incorrect, so that the two factors are not really orthogonal to each other. I checked my code many times but could not find the mistake. Could you please kindly take a look at my code? In my model, I am trying to estimate three orthogonal factors (mkt, opp, and sent) from a state-space model.

TomDoan wrote:
Covariances can be negative; in this case, apparently they are, and not just barely.
Code:
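* Data setup: define yr34, rescale the dividend yield, and build
* excess returns on the six portfolio series (sl through bh) over
* the T-bill rate.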
set yr34 = hlf2
set divyield = divyield*100
set tbill = rf
set slrf = sl-tbill
set smrf = sm-tbill
set shrf = sh-tbill
set blrf = bl-tbill
set bmrf = bm-tbill
set bhrf = bh-tbill
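* Demean each observable series (STATS leaves the sample mean in
* %MEAN); the state-space model below is written for demeaned data.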
stats(noprint) slrf
compute slm=%mean
set sl1 = slrf-slm
stats(noprint) smrf
compute smm=%mean
set sm1 = smrf-smm
stats(noprint) shrf
compute shm=%mean
set sh1 = shrf-shm
stats(noprint) blrf
compute blm=%mean
set bl1 = blrf-blm
stats(noprint) bmrf
compute bmm=%mean
set bm1 = bmrf-bmm
stats(noprint) bhrf
compute bhm=%mean
set bh1 = bhrf-bhm
stats(noprint) term
compute termm=%mean
set term1 = term-termm
stats(noprint) default
compute defaultm=%mean
set default1 = default-defaultm
stats(noprint) divyield
compute divyieldm=%mean
set divyield1 = divyield-divyieldm
stats(noprint) tbill
compute tbillm=%mean
set tbill1 = tbill-tbillm
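* Demean mktrf, II and osent as well, taking half of each sample
* variance as a rough starting guess for its measurement error variance.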
stats(noprint) mktrf
compute mktrfm=%mean
set mktrf1 = mktrf-mktrfm
stats(noprint) mktrf1
compute sigmktrf=%variance*0.5
stats(noprint) II
compute IIm=%mean
set II1 = II-IIm
stats(noprint) II1
compute sigII1=%variance*0.5
stats(noprint) osent
compute osentm=%mean
set osent1 = osent-osentm
stats(noprint) osent1
compute sigosent1=%variance*0.5
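* One-lag regressions for the four predictor variables; the fitted
* FRMLs (eq1-eq4) become time-varying measurement intercepts in MUF,
* with half of each residual variance as a variance starting guess.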
linreg(noprint) divyield1
# mktrf1{1} divyield1{1} term1{1} default1{1} tbill1{1}
frml(lastreg,vector=x1) eq1
compute sigdiv=%seesq*0.5
linreg(noprint) term1
# mktrf1{1} divyield1{1} term1{1} default1{1} tbill1{1}
frml(lastreg,vector=x2) eq2
compute sigterm=%seesq*0.5
linreg(noprint) default1
# mktrf1{1} divyield1{1} term1{1} default1{1} tbill1{1}
frml(lastreg,vector=x3) eq3
compute sigdf=%seesq*0.5
linreg(noprint) tbill1
# mktrf1{1} divyield1{1} term1{1} default1{1} tbill1{1}
frml(lastreg,vector=x4) eq4
compute sigtbill=%seesq*0.5
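* Free parameters: predictor equation coefficients (x1-x4), factor
* loadings (a1-a18 on the portfolios, g0-g6 on the other observables),
* measurement error variances, the AR(1) coefficients of the three
* states (psi1-psi3), and the deterministic terms in the third state.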
nonlin x1 x2 x3 x4 $
a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 a15 a16 a17 a18 $
g0 g1 g2 g3 g4 g5 g6 $
sigsl sigsm sigsh sigbl sigbm sigbh sigmktrf sigdiv sigterm sigdf sigtbill sigII1 sigosent1 $
psi1 psi2 psi3 dummy1 dummy2 dc
dec frml[rect] cf
frml cf = ||a1,a4,a7,a10,a13,a16,g0,0.0,0.0,0.0,0.0,0.0,0.0|$
a2,a5,a8,a11,a14,a17,0.0,g1,g2,g3,g4,0.0,0.0|$
a3,a6,a9,a12,a15,a18,0.0,0.0,0.0,0.0,0.0,g5,g6||
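* OLS starting values for the portfolio loadings on the market and
* sentiment proxies (mktrf1 and II1).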
linreg(noprint) sl1
# mktrf1 II1
compute a1=%beta(1),a3=%beta(2),sigsl=%seesq*0.5
linreg(noprint) sm1
# mktrf1 II1
compute a4=%beta(1),a6=%beta(2),sigsm=%seesq*0.5
linreg(noprint) sh1
# mktrf1 II1
compute a7=%beta(1),a9=%beta(2),sigsh=%seesq*0.5
linreg(noprint) bl1
# mktrf1 II1
compute a10=%beta(1),a12=%beta(2),sigbl=%seesq*0.5
linreg(noprint) bm1
# mktrf1 II1
compute a13=%beta(1),a15=%beta(2),sigbm=%seesq*0.5
linreg(noprint) bh1
# mktrf1 II1
compute a16=%beta(1),a18=%beta(2),sigbh=%seesq*0.5
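* System matrices: diagonal measurement error variances (SVF),
* deterministic shifts in the third state equation (ZF), measurement
* intercepts (MUF), and the diagonal AR(1) transition matrix (AF).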
dec frml[symm] svf
frml svf = %diag(||sigsl,sigsm,sigsh,sigbl,sigbm,sigbh,sigmktrf,sigdiv,sigterm,sigdf,sigtbill,sigosent1,sigII1||)
dec frml[vect] zf
frml zf = ||0.0,0.0,dummy1*yr34-psi3*dummy1*yr34{1}+dummy2*demo-psi3*dummy2*demo{1}+dc*crisis||
dec frml[vect] muf
frml muf = ||0.0,0.0,0.0,0.0,0.0,0.0,0.0,eq1,eq2,eq3,eq4,0.0,0.0||
dec frml[rect] af
frml af = ||psi1,0.0,0.0|$
0.0,psi2,0.0|$
0.0,0.0,psi3||
linreg(noprint) mktrf1
# mktrf1{1}
compute psi1=%beta(1)
linreg(noprint) osent1
# osent1{1} yr34 demo crisis
compute psi3=%beta(1),dummy1=%beta(2),dummy2=%beta(3),dc=%beta(4)
compute [symm] swf =%diag(||1.0,1.0,1.0||)
compute [rect] f=%identity(3)
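* Estimate by maximum likelihood with Kalman filtering. The state
* shock variances (SWF) are fixed at 1.0, which normalizes the scale
* of the three factors.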
dlm(presample=ergodic,a=af,c=cf,z=zf,mu=muf,f=f,sv=svf,sw=swf,$
   y=||sl1,sm1,sh1,bl1,bm1,bh1,mktrf1,divyield1,term1,default1,tbill1,osent1,II1||,$
   method=bfgs,type=filter,swhat=swhat,pmethod=simplex,piters=10,iters=100) / states
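* Save the filtered states as the three estimated factors.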
set mkt = states(t)(1)
set opp = states(t)(2)
set senti = states(t)(3)
Thank you for the reply, Tom. So you are suggesting that the "orthogonal" factors are not actually orthogonal in my model, right? Therefore, I should think about other factors that are likely to be less correlated, right?

TomDoan wrote:
In state space models, assuming that components are "orthogonal" is merely an assumption; there is nothing in the estimation process that forces the filtered or smoothed estimates of the components to actually meet those assumptions. (Think of how OLS is based upon the assumption that the residuals are uncorrelated, while the actual residuals from OLS might be very highly correlated.)
Since your factors are now "data", you can check whether they are nearly orthogonal yourself; the results you got from the last exercise seem to suggest that they aren't. As to what you might do instead, that's not really my call.
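For reference, one way to run that check in RATS on the factors extracted above (the mkt/opp pair is shown; the other pairs are analogous):

Code:
* Sample correlation of two extracted factors via demeaned products.
stats(noprint) mkt
compute mktm=%mean, mktv=%variance
stats(noprint) opp
compute oppm=%mean, oppv=%variance
set mo = (mkt-mktm)*(opp-oppm)
stats(noprint) mo
disp "corr(mkt,opp) =" %mean/sqrt(mktv*oppv)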
Hi Tom. Thank you for the reply. I checked the factors and they are not nearly orthogonal. Is there any process I can use to orthogonalize the factors, such as the Gram-Schmidt process, in RATS? Once again, thank you for your great patience and help.
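For what it's worth, a Gram-Schmidt pass over data series amounts to successive projections, which regression residuals deliver directly. A sketch on the three extracted factors, keeping mkt first (the residual series names opport and sentort are placeholders); note that the result depends on the ordering, and the orthogonalized factors are linear combinations of the originals, so their interpretation changes:

Code:
* Sweep mkt out of opp, then mkt and the orthogonalized opp out of
* senti; the residual series are the orthogonalized factors.
linreg(noprint) opp / opport
# constant mkt
linreg(noprint) senti / sentort
# constant mkt opport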