Supplementary material for the paper Identifiability and bias reduction in the skew-probit model for a binary response
DongHyuk Lee and Samiran Sinha
Department of Statistics, Texas A&M University, College Station, TX, USA
Emails: dhyuklee@stat.tamu.edu, sinha@stat.tamu.edu

S.1 Figures and tables of the simulation study

In this section we provide the simulation results for scenarios 5-12. Figures S.1-S.8 present boxplots for each parameter corresponding to simulation scenarios 5-12, respectively. Tables S.1-S.3 contain the mean and the standard deviation of the computation time for simulation scenarios 5-16. A detailed discussion of the simulation results can be found in Section 4 of the manuscript.
S.2 Comparison between the optimization algorithms

Here we compare different optimization algorithms for estimating the parameters under the five methods. In particular, we compare the algorithms Nelder-Mead, BFGS, L-BFGS-B, nlm, nlminb, ucminf, newuoa, bobyqa, and nmkb, all available through the R package optimx, in terms of the mean and standard deviation of the estimators (Tables S.4, S.5, S.6), the computation time (Table S.7), and the number of non-converging datasets among the replications (Table S.8). We present the results only for scenario 6 (X ~ Normal(, (4/3)^2), β = , δ = 4, β = .42, p_m = 4%). However, based on our short and limited simulation study, the results for the other scenarios follow the same trend as scenario 6. The simulation results can be summarized as follows. When the sample size is large, the resulting estimates of β and δ are quite close regardless of the algorithm and the method of estimation. With large sample sizes, ucminf computes the estimates much faster than the others. For methods J and C, all algorithms except BFGS produce very close results for β and δ; moreover, the mean bias from BFGS is slightly larger than that from the other algorithms when the sample size is small. For method N, the estimates appear to differ across algorithms, and Nelder-Mead, BFGS, L-BFGS-B, nlm, and nlminb suffer from non-convergence, especially under method N. Based on these findings, ucminf appears to be by far the best algorithm among those we consider.
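The comparison above was run through the optimx wrapper; the same head-to-head pattern can be sketched in base R alone by looping optim()'s built-in methods over one negative log-likelihood. The ordinary probit model below is only a stand-in for the skew-probit likelihood of Section S.3, and all data values are illustrative.

```r
## Compare several optimizers on one negative log-likelihood,
## recording estimates, final objective value, and elapsed time.
set.seed(123)
n <- 500
x <- runif(n, -2, 2)
y <- rbinom(n, 1, pnorm(0.5 + x))   # probit data, true beta = (0.5, 1)
X <- cbind(1, x)

negll <- function(paras, X, y) {
  mu <- pnorm(as.vector(X %*% paras))
  mu <- pmin(pmax(mu, 1e-10), 1 - 1e-10)  # keep log() finite
  -sum(y * log(mu) + (1 - y) * log(1 - mu))
}

methods <- c("Nelder-Mead", "BFGS", "L-BFGS-B")
res <- t(sapply(methods, function(m) {
  t0 <- proc.time()["elapsed"]
  fit <- optim(c(0, 0), negll, X = X, y = y, method = m)
  c(b0 = fit$par[1], b1 = fit$par[2], value = fit$value,
    seconds = unname(proc.time()["elapsed"] - t0))
}))
print(round(res, 4))
```

optimx::optimx() extends this idea by also dispatching to nlm, nlminb, ucminf, newuoa, bobyqa, and nmkb, and by collecting estimates, timings, and convergence codes in a single data frame.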
S.3 R codes

## Necessary libraries
library(sn)       # provides psn(), the skew-normal CDF
library(ucminf)

## Method N: naive maximum likelihood.
## The last element of paras is the skewness parameter delta; fitted
## probabilities are clamped away from 0 and 1 so log() stays finite.
loglik <- function(paras, X, y) {
  delta <- paras[length(paras)]
  eta <- as.vector(X %*% paras[-length(paras)])
  mu <- psn(as.vector(eta), alpha = delta)
  mu[which(mu == 0)] <- min(mu[mu != 0])
  mu[which(mu == 1)] <- max(mu[mu != 1])
  re <- sum(y * log(mu) + (1 - y) * log(1 - mu))
  return(-re)
}

## Method B: Fisher information matrix of the skew-probit model, used
## for the standard errors of the bootstrap bias-corrected estimator.
imat <- function(y, X, paras) {
  inf.mat <- matrix(0, nrow = length(paras), ncol = length(paras))
  delta <- paras[length(paras)]
  eta <- as.vector(X %*% paras[-length(paras)])
  mu <- psn(as.vector(eta), alpha = delta)
  mu[which(mu == 0)] <- 1e-10
  mu[which(mu == 1)] <- 1 - 1e-10
  term  <- dnorm(eta) * pnorm(delta * eta)
  term1 <- mu * (1 - mu)
  term2 <- term * term / term1
  term3 <- exp(-0.5 * eta^2 * (1 + delta^2))
  term4 <- term3 * term3
  inf.mat.b  <- 4 * t(term2 * X) %*% X
  inf.mat.bd <- -2 * colSums((term * term3 / term1) * X) / (pi * (1 + delta^2))
  inf.mat.d  <- sum(term4 / term1) / (pi * (1 + delta^2))^2
  inf.mat[-length(paras), -length(paras)] <- inf.mat.b
  inf.mat[length(paras), length(paras)]   <- inf.mat.d
  inf.mat[length(paras), -length(paras)]  <- inf.mat.bd
  inf.mat[-length(paras), length(paras)]  <- inf.mat.bd
  return(inf.mat)
}

## Method J: penalized log-likelihood with Jeffreys prior, i.e. the
## log-likelihood plus 0.5 * log det of the information matrix.
Jloglikp <- function(paras, X, y) {
  inf.mat <- matrix(0, nrow = length(paras), ncol = length(paras))
  delta <- paras[length(paras)]
  eta <- as.vector(X %*% paras[-length(paras)])
  mu <- psn(as.vector(eta), alpha = delta)
  mu[which(mu == 0)] <- 1e-10
  mu[which(mu == 1)] <- 1 - 1e-10
  term  <- dnorm(eta) * pnorm(delta * eta)
  term1 <- mu * (1 - mu)
  term2 <- term * term / term1
  term3 <- exp(-0.5 * eta^2 * (1 + delta^2))
  term4 <- term3 * term3
  inf.mat.b  <- 4 * t(term2 * X) %*% X
  inf.mat.bd <- -2 * colSums((term * term3 / term1) * X) / (pi * (1 + delta^2))
  inf.mat.d  <- sum(term4 / term1) / (pi * (1 + delta^2))^2
  inf.mat[-length(paras), -length(paras)] <- inf.mat.b
  inf.mat[length(paras), length(paras)]   <- inf.mat.d
  inf.mat[length(paras), -length(paras)]  <- inf.mat.bd
  inf.mat[-length(paras), length(paras)]  <- inf.mat.bd
  if (det(inf.mat) < 0) qnty <- 0 else qnty <- 0.5 * log(det(inf.mat))
  re <- sum(y * log(mu) + (1 - y) * log(1 - mu)) + qnty
  return(-re)
}

## Method C: penalized log-likelihood with a Cauchy prior (scale 2.5)
Cloglikp <- function(paras, X, y) {
  delta <- paras[length(paras)]
  eta <- as.vector(X %*% paras[-length(paras)])
  mu <- psn(as.vector(eta), alpha = delta)
  mu[which(mu == 0)] <- min(mu[mu != 0])
  mu[which(mu == 1)] <- max(mu[mu != 1])
  re <- sum(y * log(mu) + (1 - y) * log(1 - mu)) -
    sum(log(1 + paras^2 / 2.5^2))
  return(-re)
}

## Method G: penalized log-likelihood with the generalized information
## matrix (the Jeffreys term plus a quadratic penalty in paras)
GJloglikp <- function(paras, X, y) {
  inf.mat <- matrix(0, nrow = length(paras), ncol = length(paras))
  delta <- paras[length(paras)]
  eta <- as.vector(X %*% paras[-length(paras)])
  mu <- psn(as.vector(eta), alpha = delta)
  mu[which(mu == 0)] <- 1e-10
  mu[which(mu == 1)] <- 1 - 1e-10
  term  <- dnorm(eta) * pnorm(delta * eta)
  term1 <- mu * (1 - mu)
  term2 <- term * term / term1
  term3 <- exp(-0.5 * eta^2 * (1 + delta^2))
  term4 <- term3 * term3
  inf.mat.b  <- 4 * t(term2 * X) %*% X
  inf.mat.bd <- -2 * colSums((term * term3 / term1) * X) / (pi * (1 + delta^2))
  inf.mat.d  <- sum(term4 / term1) / (pi * (1 + delta^2))^2
  inf.mat[-length(paras), -length(paras)] <- inf.mat.b
  inf.mat[length(paras), length(paras)]   <- inf.mat.d
  inf.mat[length(paras), -length(paras)]  <- inf.mat.bd
  inf.mat[-length(paras), length(paras)]  <- inf.mat.bd
  if (det(inf.mat) < 0) qnty <- 0 else qnty <- 0.5 * log(det(inf.mat))
  re <- sum(y * log(mu) + (1 - y) * log(1 - mu)) + qnty -
    as.numeric(0.5 * t(paras) %*% inf.mat %*% paras)
  return(-re)
}

## #########################################################################
## Data generation (several numeric constants were lost in extraction;
## the values marked below are illustrative stand-ins)
set.seed(100)   # seed value garbled in the source
n <- 1000       # sample size garbled in the source
b0 <- 1         # true intercept; value garbled in the source
b1 <- 1         # true slope; value garbled in the source
delta <- 4
x <- runif(n, -2, 2)
X <- cbind(1, x)
eta <- as.numeric(b0 + b1 * x)
p <- psn(eta, alpha = delta)
y <- rbinom(n, 1, p)

## Probit regression
PR <- glm(y ~ x, family = binomial(link = "probit"))

## Initial values: probit coefficients for beta, a random draw for delta
beta <- coef(PR)
delta <- runif(1, 0, 1)

## Method N
fit_naive <- ucminf(c(beta, delta), fn = loglik, X = X, y = y, hessian = 2)
Nest <- fit_naive$par
Nse <- sqrt(diag(fit_naive$invhessian))
coef_naive <- cbind(Nest, Nse, Nest / Nse,
                    2 * (1 - pnorm(abs(Nest / Nse))),
                    Nest + qnorm(.025) * Nse,
                    Nest + qnorm(.975) * Nse)

## Method B: bootstrap bias correction
B <- 1000   # number of bootstrap replicates; value garbled in the source
store_boot <- matrix(0, nrow = B, ncol = 3)
k <- 0
total.boot <- 0
n <- nrow(X)
while (1) {
  total.boot <- total.boot + 1
  cat(k, " ")
  if (!(k %% 10)) cat("\n")
  idx.boot <- sample(1:n, n, replace = TRUE)
  beta.boot <- coef(glm(y[idx.boot] ~ x[idx.boot],
                        family = binomial(link = "probit")))
  fit_boot <- ucminf(c(beta.boot, delta), fn = loglik,
                     X = X[idx.boot, ], y = y[idx.boot], hessian = 0)
  k <- k + 1
  store_boot[k, ] <- fit_boot$par
  if (k == B) {
    cat("\n")
    break
  }
}
Bmle <- 2 * Nest - apply(store_boot, 2, mean)
se <- sqrt(diag(solve(imat(y, X, Bmle))))
coef_BC <- cbind(Bmle, se, Bmle / se,
                 2 * (1 - pnorm(abs(Bmle / se))),
                 Bmle + qnorm(.025) * se,
                 Bmle + qnorm(.975) * se)

## Method J
fit_Jeff <- ucminf(c(beta, delta), fn = Jloglikp, X = X, y = y, hessian = 2)
Jest <- fit_Jeff$par
Jse <- sqrt(diag(fit_Jeff$invhessian))
coef_Jeff <- cbind(Jest, Jse, Jest / Jse,
                   2 * (1 - pnorm(abs(Jest / Jse))),
                   Jest + qnorm(.025) * Jse,
                   Jest + qnorm(.975) * Jse)

## Method G
fit_GJ <- ucminf(c(beta, delta), fn = GJloglikp, X = X, y = y, hessian = 2)
Gest <- fit_GJ$par
Gse <- sqrt(diag(fit_GJ$invhessian))
coef_GJ <- cbind(Gest, Gse, Gest / Gse,
                 2 * (1 - pnorm(abs(Gest / Gse))),
                 Gest + qnorm(.025) * Gse,
                 Gest + qnorm(.975) * Gse)

## Method C
fit_Cauchy <- ucminf(c(beta, delta), fn = Cloglikp, X = X, y = y, hessian = 2)
Cest <- fit_Cauchy$par
Cse <- sqrt(diag(fit_Cauchy$invhessian))
coef_Cauchy <- cbind(Cest, Cse, Cest / Cse,
                     2 * (1 - pnorm(abs(Cest / Cse))),
                     Cest + qnorm(.025) * Cse,
                     Cest + qnorm(.975) * Cse)
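The estimator Bmle above is the standard bootstrap bias correction: twice the original estimate minus the mean of the bootstrap estimates. A self-contained base-R illustration of the same formula on the (downward-biased) maximum-likelihood variance estimator, with made-up data:

```r
## Bootstrap bias correction of the MLE of a variance:
## corrected = 2 * estimate - mean(bootstrap estimates),
## the same formula used for Bmle in Method B above.
set.seed(42)
z <- rnorm(50)
sig2_hat <- mean((z - mean(z))^2)   # biased MLE (divides by n)
boot_est <- replicate(2000, {
  zb <- sample(z, replace = TRUE)
  mean((zb - mean(zb))^2)
})
sig2_bc <- 2 * sig2_hat - mean(boot_est)   # bias-corrected estimate
c(mle = sig2_hat, corrected = sig2_bc)
```

Because the bootstrap mean of the variance MLE sits below the original estimate (the bias factor is roughly (n-1)/n), the corrected value moves upward, toward the unbiased divide-by-(n-1) estimator.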
Table S.1: Mean and standard deviation of the computation time in seconds for simulation scenarios 5-8.
Table S.2: Mean and standard deviation of the computation time in seconds for simulation scenarios 9-12.
Table S.3: Mean and standard deviation of the computation time in seconds for simulation scenarios 13-16.
Table S.4: The mean (standard deviation) of different estimators of the intercept parameter for scenario 6. The true value of β was .42.
Table S.5: The mean (standard deviation) of different estimators of the slope parameter for scenario 6.
Table S.6: The mean (standard deviation) of different estimators of the skewness parameter for scenario 6. The true value of δ was 4.
Table S.7: The mean (standard deviation) of the computation time for different algorithms for scenario 6.
Table S.8: The number of non-convergent datasets for different algorithms for scenario 6.
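Non-convergence counts like those in Table S.8 come from the status code each optimizer reports. As a minimal base-R sketch (the quadratic toy objective is hypothetical, not the skew-probit likelihood), optim() returns a $convergence component, with 0 meaning successful convergence:

```r
## Tally non-convergent fits over replicated datasets using
## optim()'s convergence code (0 indicates success).
set.seed(7)
n_nonconv <- 0
for (r in 1:20) {
  y <- rnorm(30, mean = 2)
  fit <- optim(par = 0, fn = function(m) sum((y - m)^2), method = "BFGS")
  if (fit$convergence != 0) n_nonconv <- n_nonconv + 1
}
n_nonconv   # number of replicates flagged as non-convergent
```

ucminf() reports a similar convergence flag, so the same tallying pattern applies across the five estimation methods.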
Figure S.1: Simulation results based on replications when X ~ Normal(, (4/3)^2), δ = 4, β = .77, β = , and p_m = 2%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
Figure S.2: Simulation results based on replications when X ~ Normal(, (4/3)^2), δ = 4, β = .42, β = , and p_m = 4%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
Figure S.3: Simulation results based on replications when X ~ Normal(, (4/3)^2), δ = 8, β = .73, β = , and p_m = 2%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
Figure S.4: Simulation results based on replications when X ~ Normal(, (4/3)^2), δ = 8, β = .44, β = , and p_m = 4%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
Figure S.5: Simulation results based on replications when X ~ .5 Normal(, (/3)^2) + .5 Normal(, (/3)^2), δ = 4, β = .85, β = , and p_m = 2%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
Figure S.6: Simulation results based on replications when X ~ .5 Normal(, (/3)^2) + .5 Normal(, (/3)^2), δ = 4, β = .35, β = , and p_m = 4%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
Figure S.7: Simulation results based on replications when X ~ .5 Normal(, (/3)^2) + .5 Normal(, (/3)^2), δ = 8, β = .82, β = , and p_m = 2%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
Figure S.8: Simulation results based on replications when X ~ .5 Normal(, (/3)^2) + .5 Normal(, (/3)^2), δ = 8, β = .37, β = , and p_m = 4%. The numbers in the boxplots are the empirical coverage probabilities for the nominal level, based on the standard error derived from the Fisher information matrix. The horizontal line in each figure indicates the true value of the parameter. N: naive MLE; B: bootstrap bias correction; J: penalized likelihood estimation with Jeffreys prior; G: penalized likelihood estimation with the generalized information matrix; C: penalized likelihood estimation with the Cauchy distribution.
More informationPackage MixedPoisson
Type Package Title Mixed Poisson Models Version 2.0 Date 2016-11-24 Package MixedPoisson December 9, 2016 Author Alicja Wolny-Dominiak and Maintainer Alicja Wolny-Dominiak
More informationPASS Sample Size Software
Chapter 850 Introduction Cox proportional hazards regression models the relationship between the hazard function λ( t X ) time and k covariates using the following formula λ log λ ( t X ) ( t) 0 = β1 X1
More informationFitting parametric distributions using R: the fitdistrplus package
Fitting parametric distributions using R: the fitdistrplus package M. L. Delignette-Muller - CNRS UMR 5558 R. Pouillot J.-B. Denis - INRA MIAJ user! 2009,10/07/2009 Background Specifying the probability
More informationSimulated Multivariate Random Effects Probit Models for Unbalanced Panels
Simulated Multivariate Random Effects Probit Models for Unbalanced Panels Alexander Plum 2013 German Stata Users Group Meeting June 7, 2013 Overview Introduction Random Effects Model Illustration Simulated
More informationRobust portfolio optimization under multiperiod mean-standard deviation criterion
Robust portfolio optimization under multiperiod mean-standard deviation criterion Spiridon Penev 1 Pavel Shevchenko 2 Wei Wu 1 1 The University of New South Wales, Sydney Australia 2 Macquarie University
More informationMath 239 Homework 1 solutions
Math 239 Homework 1 solutions Question 1. Delta hedging simulation. (a) Means, standard deviations and histograms are found using HW1Q1a.m with 100,000 paths. In the case of weekly rebalancing: mean =
More informationStatistical Computing (36-350)
Statistical Computing (36-350) Lecture 14: Simulation I: Generating Random Variables Cosma Shalizi 14 October 2013 Agenda Base R commands The basic random-variable commands Transforming uniform random
More informationBoosting Actuarial Regression Models in R
Carryl Oberson Faculty of Business and Economics University of Basel R in Insurance 2015 Build regression models (GLMs) for car insurance data. 3 types of response variables: claim incidence: y i = 0,
More informationMachine Learning for Quantitative Finance
Machine Learning for Quantitative Finance Fast derivative pricing Sofie Reyners Joint work with Jan De Spiegeleer, Dilip Madan and Wim Schoutens Derivative pricing is time-consuming... Vanilla option pricing
More informationStochastic Processes and Advanced Mathematical Finance. A Stochastic Process Model of Cash Management
Steven R. Dunbar Department of Mathematics 203 Avery Hall University of Nebraska-Lincoln Lincoln, NE 68588-0130 http://www.math.unl.edu Voice: 402-472-3731 Fax: 402-472-8466 Stochastic Processes and Advanced
More informationSolving dynamic portfolio choice problems by recursing on optimized portfolio weights or on the value function?
DOI 0.007/s064-006-9073-z ORIGINAL PAPER Solving dynamic portfolio choice problems by recursing on optimized portfolio weights or on the value function? Jules H. van Binsbergen Michael W. Brandt Received:
More informationEconometric Computing Issues with Logit Regression Models: The Case of Observation-Specific and Group Dummy Variables
Journal of Computations & Modelling, vol.3, no.3, 2013, 75-86 ISSN: 1792-7625 (print), 1792-8850 (online) Scienpress Ltd, 2013 Econometric Computing Issues with Logit Regression Models: The Case of Observation-Specific
More informationPredicting the Success of a Retirement Plan Based on Early Performance of Investments
Predicting the Success of a Retirement Plan Based on Early Performance of Investments CS229 Autumn 2010 Final Project Darrell Cain, AJ Minich Abstract Using historical data on the stock market, it is possible
More informationClustered Binary Logistic Regression in Teratology Data
Clustered Binary Logistic Regression in Teratology Data Jorge G. Morel, Ph.D. Adjunct Professor University of Maryland Baltimore County Division of Biostatistics and Epidemiology Cincinnati Children s
More information9.1 Principal Component Analysis for Portfolios
Chapter 9 Alpha Trading By the name of the strategies, an alpha trading strategy is to select and trade portfolios so the alpha is maximized. Two important mathematical objects are factor analysis and
More informationAppendix to Dividend yields, dividend growth, and return predictability in the cross-section of. stocks
Appendix to Dividend yields, dividend growth, and return predictability in the cross-section of stocks Paulo Maio 1 Pedro Santa-Clara 2 This version: February 2015 1 Hanken School of Economics. E-mail:
More informationMeasuring Efficiency of Exchange Traded Funds 1
Measuring Efficiency of Exchange Traded Funds 1 An Issue of Performance, Tracking Error and Liquidity Thierry Roncalli Evry University & Lyxor Asset Management, France Joint work with Marlène Hassine The
More informationInternet Appendix: High Frequency Trading and Extreme Price Movements
Internet Appendix: High Frequency Trading and Extreme Price Movements This appendix includes two parts. First, it reports the results from the sample of EPMs defined as the 99.9 th percentile of raw returns.
More informationMathematics of Finance Final Preparation December 19. To be thoroughly prepared for the final exam, you should
Mathematics of Finance Final Preparation December 19 To be thoroughly prepared for the final exam, you should 1. know how to do the homework problems. 2. be able to provide (correct and complete!) definitions
More informationGamma Distribution Fitting
Chapter 552 Gamma Distribution Fitting Introduction This module fits the gamma probability distributions to a complete or censored set of individual or grouped data values. It outputs various statistics
More informationMissing Data. EM Algorithm and Multiple Imputation. Aaron Molstad, Dootika Vats, Li Zhong. University of Minnesota School of Statistics
Missing Data EM Algorithm and Multiple Imputation Aaron Molstad, Dootika Vats, Li Zhong University of Minnesota School of Statistics December 4, 2013 Overview 1 EM Algorithm 2 Multiple Imputation Incomplete
More informationWeight Smoothing with Laplace Prior and Its Application in GLM Model
Weight Smoothing with Laplace Prior and Its Application in GLM Model Xi Xia 1 Michael Elliott 1,2 1 Department of Biostatistics, 2 Survey Methodology Program, University of Michigan National Cancer Institute
More informationAggregated Fractional Regression Estimation: Some Monte Carlo Evidence
Aggregated Fractional Regression Estimation: Some Monte Carlo Evidence Jingyu Song song173@purdue.edu Michael S. Delgado delgado2@purdue.edu Paul V. Preckel preckel@purdue.edu Department of Agricultural
More informationLecture Note of Bus 41202, Spring 2017: More Volatility Models. Mr. Ruey Tsay
Lecture Note of Bus 41202, Spring 2017: More Volatility Models. Mr. Ruey Tsay Package Note: We use fgarch to estimate most volatility models, but will discuss the package rugarch later, which can be used
More informationFNCE 4030 Fall 2012 Roberto Caccia, Ph.D. Midterm_2a (2-Nov-2012) Your name:
Answer the questions in the space below. Written answers require no more than few compact sentences to show you understood and master the concept. Show your work to receive partial credit. Points are as
More informationLogit Models for Binary Data
Chapter 3 Logit Models for Binary Data We now turn our attention to regression models for dichotomous data, including logistic regression and probit analysis These models are appropriate when the response
More informationFinancial Risk Forecasting Chapter 5 Implementing Risk Forecasts
Financial Risk Forecasting Chapter 5 Implementing Risk Forecasts Jon Danielsson 2017 London School of Economics To accompany Financial Risk Forecasting www.financialriskforecasting.com Published by Wiley
More informationOutline. Review Continuation of exercises from last time
Bayesian Models II Outline Review Continuation of exercises from last time 2 Review of terms from last time Probability density function aka pdf or density Likelihood function aka likelihood Conditional
More informationEstimating log models: to transform or not to transform?
Journal of Health Economics 20 (2001) 461 494 Estimating log models: to transform or not to transform? Willard G. Manning a,, John Mullahy b a Department of Health Studies, Biological Sciences Division,
More informationSensitivity Analysis for Unmeasured Confounding: Formulation, Implementation, Interpretation
Sensitivity Analysis for Unmeasured Confounding: Formulation, Implementation, Interpretation Joseph W Hogan Department of Biostatistics Brown University School of Public Health CIMPOD, February 2016 Hogan
More informationVisual fixations and the computation and comparison of value in simple choice SUPPLEMENTARY MATERIALS
Visual fixations and the computation and comparison of value in simple choice SUPPLEMENTARY MATERIALS Ian Krajbich 1 Carrie Armel 2 Antonio Rangel 1,3 1. Division of Humanities and Social Sciences, California
More information9. Appendixes. Page 73 of 95
9. Appendixes Appendix A: Construction cost... 74 Appendix B: Cost of capital... 75 Appendix B.1: Beta... 75 Appendix B.2: Cost of equity... 77 Appendix C: Geometric Brownian motion... 78 Appendix D: Static
More informationPackage ald. February 1, 2018
Type Package Title The Asymmetric Laplace Distribution Version 1.2 Date 2018-01-31 Package ald February 1, 2018 Author Christian E. Galarza and Victor H. Lachos
More informationEstimating LGD Correlation
Estimating LGD Correlation Jiří Witzany University of Economics, Prague Abstract: The paper proposes a new method to estimate correlation of account level Basle II Loss Given Default (LGD). The correlation
More informationThe Duration Derby: A Comparison of Duration Based Strategies in Asset Liability Management
The Duration Derby: A Comparison of Duration Based Strategies in Asset Liability Management H. Zheng Department of Mathematics, Imperial College London SW7 2BZ, UK h.zheng@ic.ac.uk L. C. Thomas School
More informationTHE EQUIVALENCE OF THREE LATENT CLASS MODELS AND ML ESTIMATORS
THE EQUIVALENCE OF THREE LATENT CLASS MODELS AND ML ESTIMATORS Vidhura S. Tennekoon, Department of Economics, Indiana University Purdue University Indianapolis (IUPUI), School of Liberal Arts, Cavanaugh
More informationOn the economic significance of stock return predictability: Evidence from macroeconomic state variables
On the economic significance of stock return predictability: Evidence from macroeconomic state variables Huacheng Zhang * University of Arizona This draft: 8/31/2012 First draft: 2/28/2012 Abstract We
More informationLecture 21: Logit Models for Multinomial Responses Continued
Lecture 21: Logit Models for Multinomial Responses Continued Dipankar Bandyopadhyay, Ph.D. BMTRY 711: Analysis of Categorical Data Spring 2011 Division of Biostatistics and Epidemiology Medical University
More informationSYSM 6304 Risk and Decision Analysis Lecture 2: Fitting Distributions to Data
SYSM 6304 Risk and Decision Analysis Lecture 2: Fitting Distributions to Data M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu September 5, 2015
More informationBoundary conditions for options
Boundary conditions for options Boundary conditions for options can refer to the non-arbitrage conditions that option prices has to satisfy. If these conditions are broken, arbitrage can exist. to the
More informationStochastic Frontier Models with Binary Type of Output
Chapter 6 Stochastic Frontier Models with Binary Type of Output 6.1 Introduction In all the previous chapters, we have considered stochastic frontier models with continuous dependent (or output) variable.
More informationUser s Guide for the Matlab Library Implementing Closed Form MLE for Diffusions
User s Guide for the Matlab Library Implementing Closed Form MLE for Diffusions Yacine Aït-Sahalia Department of Economics and Bendheim Center for Finance Princeton University and NBER This Version: July
More informationChapter 7: Point Estimation and Sampling Distributions
Chapter 7: Point Estimation and Sampling Distributions Seungchul Baek Department of Statistics, University of South Carolina STAT 509: Statistics for Engineers 1 / 20 Motivation In chapter 3, we learned
More informationStudy 2: data analysis. Example analysis using R
Study 2: data analysis Example analysis using R Steps for data analysis Install software on your computer or locate computer with software (e.g., R, systat, SPSS) Prepare data for analysis Subjects (rows)
More informationTests for Two ROC Curves
Chapter 65 Tests for Two ROC Curves Introduction Receiver operating characteristic (ROC) curves are used to summarize the accuracy of diagnostic tests. The technique is used when a criterion variable is
More informationIntroduction to R (2)
Introduction to R (2) Boxplots Boxplots are highly efficient tools for the representation of the data distributions. The five number summary can be located in boxplots. Additionally, we can distinguish
More informationTwo-term Edgeworth expansions of the distributions of fit indexes under fixed alternatives in covariance structure models
Economic Review (Otaru University of Commerce), Vo.59, No.4, 4-48, March, 009 Two-term Edgeworth expansions of the distributions of fit indexes under fixed alternatives in covariance structure models Haruhiko
More informationAnalytics on pension valuations
Analytics on pension valuations Research Paper Business Analytics Author: Arno Hendriksen November 4, 2017 Abstract EY Actuaries performs pension calculations for several companies where both the the assets
More informationChapter 6 Part 3 October 21, Bootstrapping
Chapter 6 Part 3 October 21, 2008 Bootstrapping From the internet: The bootstrap involves repeated re-estimation of a parameter using random samples with replacement from the original data. Because the
More informationThreshold cointegration and nonlinear adjustment between stock prices and dividends
Applied Economics Letters, 2010, 17, 405 410 Threshold cointegration and nonlinear adjustment between stock prices and dividends Vicente Esteve a, * and Marı a A. Prats b a Departmento de Economia Aplicada
More informationAn EM-Algorithm for Maximum-Likelihood Estimation of Mixed Frequency VARs
An EM-Algorithm for Maximum-Likelihood Estimation of Mixed Frequency VARs Jürgen Antony, Pforzheim Business School and Torben Klarl, Augsburg University EEA 2016, Geneva Introduction frequent problem in
More informationChristopher Meaney * and Rahim Moineddin
Meaney and Moineddin BMC Medical Research Methodology 2014, 14:14 RESEARCH ARTICLE Open Access A Monte Carlo simulation study comparing linear regression, beta regression, variable-dispersion beta regression
More informationStochastic Processes and Advanced Mathematical Finance. Hitting Times and Ruin Probabilities
Steven R. Dunbar Department of Mathematics 203 Avery Hall University of Nebraska-Lincoln Lincoln, NE 68588-0130 http://www.math.unl.edu Voice: 402-472-3731 Fax: 402-472-8466 Stochastic Processes and Advanced
More informationExperience with the Weighted Bootstrap in Testing for Unobserved Heterogeneity in Exponential and Weibull Duration Models
Experience with the Weighted Bootstrap in Testing for Unobserved Heterogeneity in Exponential and Weibull Duration Models Jin Seo Cho, Ta Ul Cheong, Halbert White Abstract We study the properties of the
More informationA Two-Step Estimator for Missing Values in Probit Model Covariates
WORKING PAPER 3/2015 A Two-Step Estimator for Missing Values in Probit Model Covariates Lisha Wang and Thomas Laitila Statistics ISSN 1403-0586 http://www.oru.se/institutioner/handelshogskolan-vid-orebro-universitet/forskning/publikationer/working-papers/
More informationboxcox() returns the values of α and their loglikelihoods,
Solutions to Selected Computer Lab Problems and Exercises in Chapter 11 of Statistics and Data Analysis for Financial Engineering, 2nd ed. by David Ruppert and David S. Matteson c 2016 David Ruppert and
More informationA Comprehensive, Non-Aggregated, Stochastic Approach to. Loss Development
A Comprehensive, Non-Aggregated, Stochastic Approach to Loss Development By Uri Korn Abstract In this paper, we present a stochastic loss development approach that models all the core components of the
More informationParallel Multilevel Monte Carlo Simulation
Parallel Simulation Mathematisches Institut Goethe-Universität Frankfurt am Main Advances in Financial Mathematics Paris January 7-10, 2014 Simulation Outline 1 Monte Carlo 2 3 4 Algorithm Numerical Results
More informationWeb Appendix. Are the effects of monetary policy shocks big or small? Olivier Coibion
Web Appendix Are the effects of monetary policy shocks big or small? Olivier Coibion Appendix 1: Description of the Model-Averaging Procedure This section describes the model-averaging procedure used in
More informationStatistics/BioSci 141, Fall 2006 Lab 2: Probability and Probability Distributions October 13, 2006
Statistics/BioSci 141, Fall 2006 Lab 2: Probability and Probability Distributions October 13, 2006 1 Using random samples to estimate a probability Suppose that you are stuck on the following problem:
More information