Test Volume 12, Number 1. June 2003
Sociedad Española de Estadística e Investigación Operativa
Test (2003) Vol. 12, No. 1

Power and Sample Size Calculation for 2x2 Tables under Multinomial Sampling with Random Loss

Kung-Jong Lui
Department of Mathematics and Statistics, San Diego State University

William G. Cumberland
Department of Biostatistics, University of California, Los Angeles
Abstract

Multinomial sampling, in which the total number of sampled subjects is fixed, is probably one of the most commonly used sampling schemes in categorical data analysis. When we apply multinomial sampling to collect subjects, each of whom is subject to random exclusion from the data analysis, the number of subjects falling into each comparison group is random and can be small with positive probability. Thus, applying the traditional statistics derived from large-sample theory for testing equality between two independent proportions can sometimes be theoretically invalid. On the other hand, Fisher's exact test always assures that the true type I error is less than or equal to a nominal α-level. We therefore discuss power and sample size calculation based on this exact test. For a desired power at a given α-level, we develop an exact sample size calculation procedure, accounting for random loss of sampled subjects, for testing equality between two independent proportions under multinomial sampling. Because the exact procedure requires intensive computation when the required sample size is large, we also present an approximate sample size formula based on large-sample theory. On the basis of Monte Carlo simulation, we note that the power attained with this approximate sample size formula generally agrees well with the desired power under the exact test. Finally, we propose a trial-and-error procedure that uses the approximate sample size as an initial estimate together with Monte Carlo simulation to expedite the search for the minimum required sample size.
Key Words: Sample size determination, Fisher's exact test, multinomial sampling, power. AMS subject classification: 62F03, 62A05. Correspondence to: K.-J. Lui, Department of Mathematics and Statistics, San Diego State University, San Diego, CA 92182, USA. kjl@rohan.sdsu.edu Received: November 2001; Accepted: May 2002
1 Introduction

Multinomial sampling, in which the total number of studied subjects is fixed, is probably one of the most commonly considered sampling designs in categorical data analysis (Bishop et al., 1975). Consider an epidemiological prevalence study in which we take a random sample from a general population and want to compare the prevalence of disease between the subpopulations exposed and unexposed to a risk factor of interest. Or consider a clinical trial in which we randomly assign each patient to receive treatment A or B with fixed probabilities and wish to compare the response rates of the two treatments. In either case, the number of subjects falling into the comparison groups is random. Furthermore, it is common to need to exclude some sampled subjects from the data because of missing information. As noted elsewhere (Skalski, 1992; Lui, 1994), sample size determination that fails to take into account the potential loss of subjects can result in studies with inadequate power. Because, under multinomial sampling with a random loss of sampled subjects, the number of subjects falling into the two comparison groups can be small with positive probability, traditional statistics based on large-sample theory for testing equality between two independent proportions (Fleiss, 1981) can sometimes be theoretically invalid. However, Fisher's exact test (Fisher, 1935; Irwin, 1935; Yates, 1934; Fleiss, 1981) always assures that the true type I error is less than or equal to a nominal α-level, regardless of the number of subjects from the subpopulations. This leads us to concentrate our discussion on power and sample size calculation on the basis of the exact test.
Note that numerous publications on power and sample size calculation based on the exact test under product binomial sampling appear elsewhere (Bennett and Hsu, 1960; Haseman, 1978; Gail and Gart, 1973; Casagrande et al., 1978a; Gordon, 1994). An excellent and systematic review of sample size determination for testing differences in proportions under the two-sample design also appears in Sahai and Khurshid (1996). However, none of these papers focuses on sample size calculation under multinomial sampling with random exclusion of sampled subjects from the data analysis, as is done here. The purpose of this paper is to extend the sample size calculation procedure proposed elsewhere (Bennett and Hsu, 1960) to accommodate multinomial sampling with a random loss of sampled subjects. To give readers insight into the effects of the different parameters, this paper calculates the power based on the exact multinomial distribution in a variety of situations. Because the exact sample size calculation procedure involves intensive computation when the required sample size is large, this paper also presents an approximate sample size formula based on large-sample theory. Using Monte Carlo simulation, this paper finds that the power attained with an approximate sample size formula, derived by a method analogous to that proposed elsewhere (Casagrande et al., 1978b; Fleiss et al., 1980; Fleiss, 1981), can be quite accurate. Finally, this paper suggests a trial-and-error procedure that uses the approximate sample size as an initial estimate and Monte Carlo simulation to expedite the search for the minimum required sample size for a desired power at a nominal α-level.

2 Notation, Power, and Sample Size Determination

Suppose that we take a random sample of n subjects, each having probability p_e of falling into one comparison group and probability 1 - p_e of falling into the other. For example, p_e may denote the population proportion of exposure in an epidemiological prevalence study, or the probability of assigning a subject to the experimental treatment in a clinical trial. Because information on the exposure status or the outcome can be missing in prevalence studies, or studied subjects can be lost to follow-up in clinical trials, we assume that each sampled subject has a positive probability p_m of being excluded from the data analysis. For simplicity, we focus on the situation where the exclusion of a sampled subject is independent of both the exposure (or treatment assignment) and the outcome status. Because the following discussion applies generally to testing equality between the proportions of two comparison groups, we use the numbers 1 and 2 to designate these groups.
Let N_1, N_2, and N_3 denote the random frequencies corresponding to groups 1 and 2 and to the group of subjects who will be excluded from the comparison. The random vector N = (N_1, N_2, N_3) then follows a trinomial distribution:

$$f_N(\mathbf{n} \mid p_e, p_m) = \frac{n!}{n_1!\, n_2!\, n_3!}\, \pi_1^{n_1} \pi_2^{n_2} \pi_3^{n_3}, \qquad (2.1)$$

where $\mathbf{n} = (n_1, n_2, n_3)$, $\pi_1 = (1-p_m)p_e$, $\pi_2 = (1-p_m)(1-p_e)$, $\pi_3 = p_m$, $0 \le n_i \le n$, and $\sum_i n_i = n$.
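As a quick check of (2.1), the trinomial probability mass function can be evaluated directly. The following is a small stdlib-Python sketch (the function name is ours, not the authors'); summing it over all compositions of n recovers total probability 1.

```python
from math import factorial

def trinomial_pmf(n1, n2, n3, p_e, p_m):
    """f_N(n | p_e, p_m) in (2.1), with cell probabilities
    pi1 = (1 - p_m) p_e, pi2 = (1 - p_m)(1 - p_e), pi3 = p_m."""
    pi1 = (1 - p_m) * p_e
    pi2 = (1 - p_m) * (1 - p_e)
    pi3 = p_m
    # multinomial coefficient n! / (n1! n2! n3!) -- exact integer arithmetic
    coef = factorial(n1 + n2 + n3) // (factorial(n1) * factorial(n2) * factorial(n3))
    return coef * pi1 ** n1 * pi2 ** n2 * pi3 ** n3

# sanity check: the pmf sums to 1 over all (n1, n2, n3) with n1 + n2 + n3 = n
n = 6
total = sum(trinomial_pmf(n1, n2, n - n1 - n2, 0.3, 0.1)
            for n1 in range(n + 1) for n2 in range(n - n1 + 1))
```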
Let p_1 and p_2 denote the probabilities that a randomly selected subject from groups 1 and 2, respectively, has the outcome of interest. We consider first a one-sided test. Suppose that we want to test the null hypothesis H_0: p_1 = p_2 versus the alternative hypothesis H_a: p_1 > p_2. Let X_i denote the number of subjects with the outcome of interest among the N_i subjects from group i (i = 1, 2), and let T = X_1 + X_2 denote the total number of subjects with the outcome of interest in the sample. Then, given N_1 = n_1, N_2 = n_2, and T = t, the conditional distribution of X_1 is well known to be

$$P(X_1 = x_1 \mid t, n_1, n_2, p_1, p_2) = \frac{\binom{n_1}{x_1}\binom{n_2}{t-x_1}\varphi^{x_1}}{\sum_{x=a}^{b}\binom{n_1}{x}\binom{n_2}{t-x}\varphi^{x}}, \qquad (2.2)$$

where $a \le x_1 \le b$, $a = \max(0, t - n_2)$, $b = \min(t, n_1)$, and $\varphi = p_1(1-p_2)/[(1-p_1)p_2]$ is the odds ratio of possessing the outcome of interest between groups 1 and 2. When the null hypothesis H_0: p_1 = p_2 (i.e., $\varphi = 1$) is true, the conditional distribution (2.2) of X_1 reduces to the hypergeometric distribution:

$$P(X_1 = x_1 \mid t, n_1, n_2, p_1 = p_2) = \frac{\binom{n_1}{x_1}\binom{n_2}{t-x_1}}{\binom{n_1+n_2}{t}}. \qquad (2.3)$$

Under the alternative hypothesis H_a: p_1 > p_2, we expect the value of X_1 to be large. Thus, the critical region C(α) of a nominal α-level (one-sided test) consists of $\{X_1 : X_1 \ge x_1^*\}$, where $x_1^*$ is the smallest integer such that $\sum_{x_1 \ge x_1^*} P(X_1 = x_1 \mid t, n_1, n_2, p_1 = p_2) \le \alpha$. The conditional power, given n_1, n_2, and t, is then

$$q(\alpha, p_1, p_2 \mid n_1, n_2, t) = \sum_{x_1 \in C(\alpha)} P(X_1 = x_1 \mid t, n_1, n_2, p_1, p_2), \qquad (2.4)$$

where $P(X_1 = x_1 \mid t, n_1, n_2, p_1, p_2)$ is given by (2.2). Thus, the conditional power given n_1 and n_2 is

$$q(\alpha, p_1, p_2 \mid n_1, n_2) = \sum_{t=0}^{n_1+n_2} q(\alpha, p_1, p_2 \mid n_1, n_2, t)\, f_T(t \mid n_1, n_2), \qquad (2.5)$$

where $f_T(t \mid n_1, n_2) = \sum_{x=a}^{b} \binom{n_1}{x}\binom{n_2}{t-x}\, p_1^{x}(1-p_1)^{n_1-x}\, p_2^{t-x}(1-p_2)^{n_2-(t-x)}$, with $a = \max(0, t - n_2)$ and $b = \min(t, n_1)$.
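The critical value and conditional power in (2.2)–(2.4) can be sketched in stdlib Python as follows. This is a minimal illustration, not the authors' code; the function names are ours.

```python
from math import comb

def critical_value(t, n1, n2, alpha):
    """Smallest x1* with P(X1 >= x1* | t, n1, n2) <= alpha under the
    hypergeometric null (2.3)."""
    a, b = max(0, t - n2), min(t, n1)
    denom = comb(n1 + n2, t)
    tail = 0.0
    xstar = b + 1                      # empty critical region by default
    for x in range(b, a - 1, -1):      # accumulate the upper tail
        tail += comb(n1, x) * comb(n2, t - x) / denom
        if tail <= alpha:
            xstar = x
        else:
            break
    return xstar

def conditional_power(t, n1, n2, p1, p2, alpha):
    """Conditional power (2.4): mass of the noncentral hypergeometric (2.2)
    with odds ratio phi = p1(1-p2)/[(1-p1)p2] on {X1 >= x1*}."""
    phi = p1 * (1 - p2) / ((1 - p1) * p2)
    a, b = max(0, t - n2), min(t, n1)
    weights = {x: comb(n1, x) * comb(n2, t - x) * phi ** x
               for x in range(a, b + 1)}
    xstar = critical_value(t, n1, n2, alpha)
    return sum(w for x, w in weights.items() if x >= xstar) / sum(weights.values())
```

Summing `conditional_power` over t with the binomial convolution f_T of (2.5), and then over the trinomial distribution (2.1), gives the expected power (2.6) discussed below.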
Bennett and Hsu (1960) base their sample size calculation on (2.5) for studies in which the number of studied
subjects from each comparison group is fixed. We cannot directly apply (2.5) to calculate power for the situation in which n_1 and n_2 are random, nor when some n_3 subjects are randomly excluded from the data. Instead, we consider the expected power for a total sample size n with given probabilities p_e and p_m:

$$q(\alpha, p_1, p_2, n, p_e, p_m) = \sum_{\mathbf{n}} q(\alpha, p_1, p_2 \mid n_1, n_2)\, f_N(\mathbf{n} \mid p_e, p_m), \qquad (2.6)$$

where the summation is over all possible vectors $\mathbf{n} = (n_1, n_2, n_3)$, and $f_N(\mathbf{n} \mid p_e, p_m)$ is given in (2.1). For a desired power 1 − β, we may use a trial-and-error procedure to find the minimum required sample size n such that the expected power q(α, p_1, p_2, n, p_e, p_m) is at least 1 − β. However, calculating this expected power (2.6) can be very computationally intensive when the minimum required sample size n is large. Hence we need an approximate sample size formula for n. If n were very large, we would expect each $n_i\, (\doteq n\pi_i)$ to be large as well, and the ratio n_2/n_1 between groups 2 and 1 to be approximately $r = \pi_2/\pi_1$. Therefore, an approximation to the expected required sample size E(n_1) from group 1, for a desired power 1 − β of rejecting the null hypothesis H_0: p_1 = p_2 at α-level (one-sided test) when the alternative hypothesis H_a: p_1 > p_2 is true, is given by (Fleiss et al., 1980; Fleiss, 1981, p. 45)

$$n_{1a} = \mathrm{ceiling}\left\{ \frac{\left[ Z_\alpha \sqrt{\bar p(1-\bar p)(r+1)} + Z_\beta \sqrt{r\, p_1(1-p_1) + p_2(1-p_2)} \right]^2}{r\,(p_1 - p_2)^2} \right\}, \qquad (2.7)$$

where $Z_\alpha$ is the upper 100α-th percentile of the standard normal distribution, $\bar p = (p_1 + r p_2)/(1+r)$, and ceiling{x} denotes the least integer greater than or equal to x. Note that formula (2.7) does not account for the continuity correction, and hence using (2.7) tends to underestimate the expected required sample size from group 1 under the exact test (Casagrande et al., 1978b; Gordon, 1994).
To alleviate this underestimation, we may apply the following adjustment, which incorporates the continuity correction into the sample size determination (Fleiss et al., 1980; Fleiss, 1981; Casagrande et al., 1978b):

$$n'_{1a} = \mathrm{ceiling}\left\{ \frac{n_{1a}}{4}\left( 1 + \sqrt{1 + \frac{2(r+1)}{n_{1a}\, r\, |p_1 - p_2|}} \right)^{\!2} \right\}. \qquad (2.8)$$
These results suggest that an approximate minimum required total sample size n with the continuity correction is given by

$$n_a = \mathrm{ceiling}\{[\,n'_{1a} + \mathrm{ceiling}\{n'_{1a}\, r\}\,]/(1 - p_m)\}, \qquad (2.9)$$

where $n'_{1a}$ is the continuity-corrected group 1 sample size from (2.8). The above discussion is easily extended to a two-sided test. Consider testing the null hypothesis H_0: p_1 = p_2 versus the alternative hypothesis H_a: p_1 ≠ p_2. We reject H_0 when X_1 is too large or too small. Thus, a critical region C(α) of a nominal α-level (two-sided test) consists of $\{X_1 : X_1 \ge x_1^U \text{ or } X_1 \le x_1^L\}$, where $x_1^U$ is the smallest integer such that $\sum_{x_1 \ge x_1^U} P(X_1 = x_1 \mid t, n_1, n_2, p_1 = p_2) \le \alpha/2$ and $x_1^L$ is the largest integer such that $\sum_{x_1 \le x_1^L} P(X_1 = x_1 \mid t, n_1, n_2, p_1 = p_2) \le \alpha/2$. With this critical region C(α), we can calculate the expected power q(α, p_1, p_2, n, p_e, p_m) (2.6) through use of (2.4) and (2.5), and further find the minimum required sample size n for a desired power 1 − β at a nominal α-level using (2.6). Similarly, we can substitute $Z_{\alpha/2}$ for $Z_\alpha$ in (2.7) and apply (2.8) for the continuity correction to obtain an approximate sample size $n_a$ (2.9) for a two-sided test.

3 Power and Sample Size Calculation

To illustrate the use of formula (2.6), we first calculate the expected power for the situations in which p_e = 0.10, 0.30; p_m = 0.10, 0.20; p_1 = 0.40; p_2 = 0.10, 0.20, 0.30; and n = 20 to 100 by 10 and 120 to 200 by 20, at the 0.05 level for one-sided and two-sided tests using the exact multinomial distribution (2.1). For example, the powers for p_e = 0.10, p_m = 0.10, p_1 = 0.40, p_2 = 0.10, and n = 180 are given in Table 1 for the one-sided and two-sided tests, respectively. As expected, the power increases as either the total sample size n or the difference between p_1 and p_2 increases, but decreases as the probability p_m of excluding a randomly selected subject increases.
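The approximation chain (2.7)–(2.9) can be sketched in Python as follows. One judgment call is hedged here: the text applies a ceiling inside (2.7), but carrying the unrounded n_1a into the continuity correction (2.8) reproduces the value n_a = 167 quoted for the worst case in Section 3 (p_e = 0.10, p_m = 0.10, p_1 = 0.40, p_2 = 0.10), so this sketch defers rounding to (2.8). The function name is ours.

```python
from math import ceil, sqrt
from statistics import NormalDist

def approx_sample_size(p1, p2, p_e, p_m, alpha=0.05, beta=0.20):
    """Approximate total sample size n_a per (2.7)-(2.9), one-sided test."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha), z(1 - beta)
    r = (1 - p_e) / p_e                       # r = pi2/pi1, the limiting n2/n1
    p_bar = (p1 + r * p2) / (1 + r)
    # (2.7): uncorrected group 1 sample size (kept unrounded, see lead-in)
    num = (z_a * sqrt(p_bar * (1 - p_bar) * (r + 1))
           + z_b * sqrt(r * p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    n1 = num / (r * (p1 - p2) ** 2)
    # (2.8): continuity correction
    n1c = ceil(n1 / 4 * (1 + sqrt(1 + 2 * (r + 1) / (n1 * r * abs(p1 - p2)))) ** 2)
    # (2.9): add group 2 and inflate for the expected random loss p_m
    return ceil((n1c + ceil(n1c * r)) / (1 - p_m))
```

For a two-sided test, substitute `z(1 - alpha / 2)` for `z(1 - alpha)`, as noted in the text.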
When the minimum required sample size n with expected power q(α, p_1, p_2, n, p_e, p_m) (2.6) greater than or equal to a desired power 1 − β at a nominal α-level is large, the number of combinations of (n_1, n_2, n_3) under the multinomial distribution (2.1) can be very large, and searching for the minimum required number n of subjects by trial and error becomes extremely time consuming. To alleviate this problem, we propose
[Table 1 values lost in extraction.]
Table 1: The exact power for testing the null hypothesis H_0: p_1 = p_2 versus H_a: p_1 > p_2 (one-sided test) or H_a: p_1 ≠ p_2 (two-sided test) at the 5% level, where p_1 = 0.40 and p_2 = 0.10, 0.20, 0.30; the probability of a subject falling into group 1 is p_e = 0.10, 0.30; the probability of excluding a randomly selected subject is p_m = 0.10, 0.20; and the total sample size is n = 20 to 100 by 10 and 120 to 200 by 20.

using Monte Carlo simulation to generate 1000 repeated samples from the desired multinomial distribution (2.1). We then use the resulting empirical density for the random vector n, rather than (2.1), when calculating the expected power (2.6). To further expedite the search, we use the approximate sample size n_a (2.9) as an initial estimate. If the power corresponding to n_a (2.9) is less than the desired power, we calculate powers at increasing sample sizes n{k} = n_a + k·max(int{n_a/100}, 1), where max(v_1, v_2) denotes the maximum of v_1 and v_2 and int{x} denotes the greatest integer ≤ x, for k = 1, 2, … until we first observe power greater than or equal to 1 − β; we denote this sample size by n{k*}. Similarly, if the power corresponding to n_a (2.9) is larger than the desired power, we calculate powers at decreasing sample sizes n{k} = n_a − k·max(int{n_a/100}, 1) for k = 1, 2, … until we obtain the first k, say k* + 1, such that the observed power q(α, p_1, p_2, n{k* + 1}, p_e, p_m) < 1 − β. The minimum required sample size is then again set equal to n{k*}. Tables 2 and 3 summarize the approximate required sample size n_a (2.9), its corresponding power, and the final minimum required sample size estimate n{k*} for one-sided and two-sided tests, respectively, for a desired power of 80% for rejecting the null hypothesis H_0: p_1 = p_2 at the 0.05 level in the situations in which p_1 = 0.20, 0.30, 0.40, 0.50; p_2 ranges from 0.10 to values below p_1; p_e = 0.10, 0.30, 0.50, 0.70; and p_m = 0.10, 0.20. As shown in Tables 2 and 3, the power attained with the approximate sample size formula n_a (2.9) agrees reasonably well with the desired power 0.80 in almost all the situations considered here. For example, consider one of the worst cases for the one-sided test: p_e = 0.10, p_m = 0.10, p_1 = 0.40, p_2 = 0.10 in Table 2. Here the power corresponding to the approximate sample size n_a = 167 subjects (2.9) at the 0.05 level (one-sided test) is 77.6%, which falls short of the desired 80% by only 2.4 percentage points. In this case, the final estimate of the minimum required sample size n{k*} is 178 (Table 2).
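The trial-and-error search can be sketched as follows. Note an assumption: rather than the authors' empirical-density computation of (2.6), this sketch estimates the expected power by full simulation, drawing (N_1, N_2, N_3) from (2.1), then outcomes, then applying the one-sided exact test; function names are illustrative.

```python
import random
from math import comb

def fisher_pvalue(x1, n1, n2, t):
    """One-sided p-value P(X1 >= x1 | t) under the hypergeometric null (2.3)."""
    b = min(t, n1)
    denom = comb(n1 + n2, t)
    return sum(comb(n1, x) * comb(n2, t - x) for x in range(x1, b + 1)) / denom

def mc_power(n, p_e, p_m, p1, p2, alpha=0.05, reps=1000, seed=7):
    """Monte Carlo estimate of the expected power (2.6)."""
    rng = random.Random(seed)
    pi1, pi2 = (1 - p_m) * p_e, (1 - p_m) * (1 - p_e)
    reject = 0
    for _ in range(reps):
        n1 = n2 = 0
        for _ in range(n):                      # draw (N1, N2, N3) from (2.1)
            u = rng.random()
            if u < pi1:
                n1 += 1
            elif u < pi1 + pi2:
                n2 += 1
        x1 = sum(rng.random() < p1 for _ in range(n1))
        x2 = sum(rng.random() < p2 for _ in range(n2))
        if n1 and n2 and fisher_pvalue(x1, n1, n2, x1 + x2) <= alpha:
            reject += 1
    return reject / reps

def search_n(n_a, p_e, p_m, p1, p2, alpha=0.05, power=0.80, reps=1000):
    """Trial-and-error search from the initial estimate n_a, stepping by
    max(int{n_a / 100}, 1) as in the text."""
    step = max(n_a // 100, 1)
    n = n_a
    if mc_power(n, p_e, p_m, p1, p2, alpha, reps) >= power:
        while n - step > 0 and mc_power(n - step, p_e, p_m, p1, p2, alpha, reps) >= power:
            n -= step
    else:
        while mc_power(n, p_e, p_m, p1, p2, alpha, reps) < power:
            n += step
    return n
```

With reps = 1000 this mirrors the 1000 repeated samples used in the paper's search; smaller reps values speed up exploratory runs at the cost of noisier power estimates.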
Similarly, for the same case with a two-sided test (Table 3): p_e = 0.10, p_m = 0.10, p_1 = 0.40, p_2 = 0.10, the approximate sample size is n_a = 200, while the final estimate n{k*} is given in Table 3.

4 Discussion

For comparing disease rates between subpopulations in sample surveys, or response rates between treatments in clinical trials, this paper develops a sample size calculation procedure based on the exact test for multinomial sampling with a random loss of subjects. If the required sample size is not large (less than 200), we can calculate the exact power (2.6), as presented in Table 1, without any practical difficulty. These results not only provide insight into the effects of the different parameters on the power, but also allow us to find the minimum required sample size for a desired power of 80% at a nominal 0.05 level.

[Table 2 values lost in extraction.]
Table 2: The approximate sample size n_a (2.9), its corresponding power (in parentheses), and the final estimate of the minimum required sample size n{k*} for a desired power of 0.80 for rejecting the null hypothesis H_0: p_1 = p_2 at the 5% level (one-sided test).

For example, for the case p_e = 0.10, p_m = 0.10, p_1 = 0.40, and p_2 = 0.10, Table 1 shows that the required sample size for a desired power of 0.80 for the one-sided test is, by linear interpolation, approximately 178. This is identical to the minimum required sample size estimate n{k*} found by Monte Carlo simulation in Table 2. We also see that the estimated required sample size using either n_a (2.9) or n{k*} (Table 2) tends to reach its minimum at p_e = 0.50. This is consistent with the well-known fact that, given a fixed total sample size, equal sample allocation is generally optimal for maximizing power in comparison studies.
[Table 3 values lost in extraction.]
Table 3: The approximate sample size n_a (2.9), its corresponding power (in parentheses), and the final estimate of the minimum required sample size n{k*} for a desired power of 0.80 for rejecting the null hypothesis H_0: p_1 = p_2 at the 5% level (two-sided test).

Tables 2 and 3 demonstrate that the approximate sample size formula n_a (2.9) agrees well with the minimum required sample size estimate n{k*} needed for a desired power in most situations. Thus, we can expedite the search for the minimum required sample size by using the approximate sample size n_a as an initial estimate in the trial-and-error procedure. In summary, this paper has developed a sample size calculation procedure for a desired power 1 − β at a given α-level for comparing two independent proportions under multinomial sampling in the presence of random loss. This paper has also presented an approximate sample size formula and found that this approximation can be quite accurate in almost all the situations considered here. The results and discussion presented here should be of use to biostatisticians, epidemiologists, and clinicians who wish to employ multinomial sampling to collect subjects, each of whom is subject to random exclusion from the study.

Acknowledgements

The authors wish to thank the anonymous referee for many valuable comments that improved the clarity and scope of this paper, especially the suggestion of the approximate sample size formula considered here, and Ms. Ying Ying Ma for computational assistance in estimating the required sample size.

References

Bennett, B. and Hsu, P. (1960). On the power function of the exact test for the 2x2 contingency table. Biometrika, 47.

Bishop, Y., Fienberg, S., and Holland, P. (1975). Discrete Multivariate Analysis: Theory and Practice. MIT Press, Cambridge.

Casagrande, J., Pike, M., and Smith, P. (1978a). The power function of the exact test for comparing two binomial distributions. Applied Statistics, 27.

Casagrande, J., Pike, M., and Smith, P. (1978b). An improved approximate formula for comparing two binomial distributions. Biometrics, 34.

Fisher, R. (1935). The logic of inductive inference. Journal of the Royal Statistical Society, Series A, 98.

Fleiss, J. (1981). Statistical Methods for Rates and Proportions, 2nd edn. Wiley and Sons, New York.

Fleiss, J. L., Tytun, A., and Ury, H. K. (1980). A simple approximation for calculating sample sizes for comparing independent proportions. Biometrics, 36.
Gail, M. and Gart, J. (1973). The determination of sample sizes for use with the exact conditional test in 2x2 comparative trials. Biometrics, 29.

Gordon, I. (1994). Sample size for two independent proportions: a review. Australian Journal of Statistics, 36.

Haseman, J. (1978). Exact sample sizes for use with the Fisher-Irwin test for 2x2 tables. Biometrics, 34.

Irwin, J. D. (1935). Test of significance for differences between percentages based on small numbers. Metron, 12.

Lui, K.-J. (1994). The effect of retaining probability variation on sample size calculations for normal variates. Biometrics, 50.

Sahai, H. and Khurshid, A. (1996). Formulas and tables for determination of sample sizes and power in clinical trials for testing differences in proportions for the two-sample design: a review. Statistics in Medicine, 15:1-21.

Skalski, J. (1992). Sample size calculations for normal variates under binomial censoring. Biometrics, 48.

Yates, F. (1934). Contingency tables involving small numbers and the χ2 test. Journal of the Royal Statistical Society, Supplement 1.
Chapter 165 Equivalence Tests for Two Correlated Proportions Introduction The two procedures described in this chapter compute power and sample size for testing equivalence using differences or ratios
More informationTests for Two ROC Curves
Chapter 65 Tests for Two ROC Curves Introduction Receiver operating characteristic (ROC) curves are used to summarize the accuracy of diagnostic tests. The technique is used when a criterion variable is
More informationImplementing Personalized Medicine: Estimating Optimal Treatment Regimes
Implementing Personalized Medicine: Estimating Optimal Treatment Regimes Baqun Zhang, Phillip Schulte, Anastasios Tsiatis, Eric Laber, and Marie Davidian Department of Statistics North Carolina State University
More informationTests for the Difference Between Two Poisson Rates in a Cluster-Randomized Design
Chapter 439 Tests for the Difference Between Two Poisson Rates in a Cluster-Randomized Design Introduction Cluster-randomized designs are those in which whole clusters of subjects (classes, hospitals,
More informationA New Multivariate Kurtosis and Its Asymptotic Distribution
A ew Multivariate Kurtosis and Its Asymptotic Distribution Chiaki Miyagawa 1 and Takashi Seo 1 Department of Mathematical Information Science, Graduate School of Science, Tokyo University of Science, Tokyo,
More informationChapter 3 Discrete Random Variables and Probability Distributions
Chapter 3 Discrete Random Variables and Probability Distributions Part 4: Special Discrete Random Variable Distributions Sections 3.7 & 3.8 Geometric, Negative Binomial, Hypergeometric NOTE: The discrete
More informationThe normal distribution is a theoretical model derived mathematically and not empirically.
Sociology 541 The Normal Distribution Probability and An Introduction to Inferential Statistics Normal Approximation The normal distribution is a theoretical model derived mathematically and not empirically.
More informationUniversity of California Berkeley
University of California Berkeley Improving the Asmussen-Kroese Type Simulation Estimators Samim Ghamami and Sheldon M. Ross May 25, 2012 Abstract Asmussen-Kroese [1] Monte Carlo estimators of P (S n >
More informationReview: Population, sample, and sampling distributions
Review: Population, sample, and sampling distributions A population with mean µ and standard deviation σ For instance, µ = 0, σ = 1 0 1 Sample 1, N=30 Sample 2, N=30 Sample 100000000000 InterquartileRange
More informationImpact of Weekdays on the Return Rate of Stock Price Index: Evidence from the Stock Exchange of Thailand
Journal of Finance and Accounting 2018; 6(1): 35-41 http://www.sciencepublishinggroup.com/j/jfa doi: 10.11648/j.jfa.20180601.15 ISSN: 2330-7331 (Print); ISSN: 2330-7323 (Online) Impact of Weekdays on the
More informationStatistics 431 Spring 2007 P. Shaman. Preliminaries
Statistics 4 Spring 007 P. Shaman The Binomial Distribution Preliminaries A binomial experiment is defined by the following conditions: A sequence of n trials is conducted, with each trial having two possible
More informationOmitted Variables Bias in Regime-Switching Models with Slope-Constrained Estimators: Evidence from Monte Carlo Simulations
Journal of Statistical and Econometric Methods, vol. 2, no.3, 2013, 49-55 ISSN: 2051-5057 (print version), 2051-5065(online) Scienpress Ltd, 2013 Omitted Variables Bias in Regime-Switching Models with
More informationProbabilistic Analysis of the Economic Impact of Earthquake Prediction Systems
The Minnesota Journal of Undergraduate Mathematics Probabilistic Analysis of the Economic Impact of Earthquake Prediction Systems Tiffany Kolba and Ruyue Yuan Valparaiso University The Minnesota Journal
More informationAn Information Based Methodology for the Change Point Problem Under the Non-central Skew t Distribution with Applications.
An Information Based Methodology for the Change Point Problem Under the Non-central Skew t Distribution with Applications. Joint with Prof. W. Ning & Prof. A. K. Gupta. Department of Mathematics and Statistics
More informationTests for the Matched-Pair Difference of Two Event Rates in a Cluster- Randomized Design
Chapter 487 Tests for the Matched-Pair Difference of Two Event Rates in a Cluster- Randomized Design Introduction Cluster-randomized designs are those in which whole clusters of subjects (classes, hospitals,
More informationEuropean Journal of Economic Studies, 2016, Vol.(17), Is. 3
Copyright 2016 by Academic Publishing House Researcher Published in the Russian Federation European Journal of Economic Studies Has been issued since 2012. ISSN: 2304-9669 E-ISSN: 2305-6282 Vol. 17, Is.
More informationPower of t-test for Simple Linear Regression Model with Non-normal Error Distribution: A Quantile Function Distribution Approach
Available Online Publications J. Sci. Res. 4 (3), 609-622 (2012) JOURNAL OF SCIENTIFIC RESEARCH www.banglajol.info/index.php/jsr of t-test for Simple Linear Regression Model with Non-normal Error Distribution:
More informationContents Part I Descriptive Statistics 1 Introduction and Framework Population, Sample, and Observations Variables Quali
Part I Descriptive Statistics 1 Introduction and Framework... 3 1.1 Population, Sample, and Observations... 3 1.2 Variables.... 4 1.2.1 Qualitative and Quantitative Variables.... 5 1.2.2 Discrete and Continuous
More informationNon-Inferiority Tests for the Ratio of Two Means
Chapter 455 Non-Inferiority Tests for the Ratio of Two Means Introduction This procedure calculates power and sample size for non-inferiority t-tests from a parallel-groups design in which the logarithm
More information3 Arbitrage pricing theory in discrete time.
3 Arbitrage pricing theory in discrete time. Orientation. In the examples studied in Chapter 1, we worked with a single period model and Gaussian returns; in this Chapter, we shall drop these assumptions
More informationOperational Risk Aggregation
Operational Risk Aggregation Professor Carol Alexander Chair of Risk Management and Director of Research, ISMA Centre, University of Reading, UK. Loss model approaches are currently a focus of operational
More informationOn the Distribution and Its Properties of the Sum of a Normal and a Doubly Truncated Normal
The Korean Communications in Statistics Vol. 13 No. 2, 2006, pp. 255-266 On the Distribution and Its Properties of the Sum of a Normal and a Doubly Truncated Normal Hea-Jung Kim 1) Abstract This paper
More informationAnalysis of truncated data with application to the operational risk estimation
Analysis of truncated data with application to the operational risk estimation Petr Volf 1 Abstract. Researchers interested in the estimation of operational risk often face problems arising from the structure
More informationE509A: Principle of Biostatistics. GY Zou
E509A: Principle of Biostatistics (Week 2: Probability and Distributions) GY Zou gzou@robarts.ca Reporting of continuous data If approximately symmetric, use mean (SD), e.g., Antibody titers ranged from
More informationLogarithmic-Normal Model of Income Distribution in the Czech Republic
AUSTRIAN JOURNAL OF STATISTICS Volume 35 (2006), Number 2&3, 215 221 Logarithmic-Normal Model of Income Distribution in the Czech Republic Jitka Bartošová University of Economics, Praque, Czech Republic
More informationTolerance Intervals for Any Data (Nonparametric)
Chapter 831 Tolerance Intervals for Any Data (Nonparametric) Introduction This routine calculates the sample size needed to obtain a specified coverage of a β-content tolerance interval at a stated confidence
More informationRules and Models 1 investigates the internal measurement approach for operational risk capital
Carol Alexander 2 Rules and Models Rules and Models 1 investigates the internal measurement approach for operational risk capital 1 There is a view that the new Basel Accord is being defined by a committee
More informationProbability Distributions: Discrete
Probability Distributions: Discrete INFO-2301: Quantitative Reasoning 2 Michael Paul and Jordan Boyd-Graber FEBRUARY 19, 2017 INFO-2301: Quantitative Reasoning 2 Paul and Boyd-Graber Probability Distributions:
More informationPoint Estimation. Some General Concepts of Point Estimation. Example. Estimator quality
Point Estimation Some General Concepts of Point Estimation Statistical inference = conclusions about parameters Parameters == population characteristics A point estimate of a parameter is a value (based
More informationCopyright 2005 Pearson Education, Inc. Slide 6-1
Copyright 2005 Pearson Education, Inc. Slide 6-1 Chapter 6 Copyright 2005 Pearson Education, Inc. Measures of Center in a Distribution 6-A The mean is what we most commonly call the average value. It is
More informationThe Vasicek Distribution
The Vasicek Distribution Dirk Tasche Lloyds TSB Bank Corporate Markets Rating Systems dirk.tasche@gmx.net Bristol / London, August 2008 The opinions expressed in this presentation are those of the author
More informationLogit Models for Binary Data
Chapter 3 Logit Models for Binary Data We now turn our attention to regression models for dichotomous data, including logistic regression and probit analysis These models are appropriate when the response
More informationTwo-Sample Z-Tests Assuming Equal Variance
Chapter 426 Two-Sample Z-Tests Assuming Equal Variance Introduction This procedure provides sample size and power calculations for one- or two-sided two-sample z-tests when the variances of the two groups
More informationBIO5312 Biostatistics Lecture 5: Estimations
BIO5312 Biostatistics Lecture 5: Estimations Yujin Chung September 27th, 2016 Fall 2016 Yujin Chung Lec5: Estimations Fall 2016 1/34 Recap Yujin Chung Lec5: Estimations Fall 2016 2/34 Today s lecture and
More informationThe Cost of Capital for the Closely-held, Family- Controlled Firm
USASBE_2009_Proceedings-Page0113 The Cost of Capital for the Closely-held, Family- Controlled Firm Presented at the Family Firm Institute London By Daniel L. McConaughy, PhD California State University,
More informationThe Two Sample T-test with One Variance Unknown
The Two Sample T-test with One Variance Unknown Arnab Maity Department of Statistics, Texas A&M University, College Station TX 77843-343, U.S.A. amaity@stat.tamu.edu Michael Sherman Department of Statistics,
More informationOperational Risk Aggregation
Operational Risk Aggregation Professor Carol Alexander Chair of Risk Management and Director of Research, ISMA Centre, University of Reading, UK. Loss model approaches are currently a focus of operational
More informationVolume 30, Issue 1. Samih A Azar Haigazian University
Volume 30, Issue Random risk aversion and the cost of eliminating the foreign exchange risk of the Euro Samih A Azar Haigazian University Abstract This paper answers the following questions. If the Euro
More informationELEMENTS OF MONTE CARLO SIMULATION
APPENDIX B ELEMENTS OF MONTE CARLO SIMULATION B. GENERAL CONCEPT The basic idea of Monte Carlo simulation is to create a series of experimental samples using a random number sequence. According to the
More informationKeywords Akiake Information criterion, Automobile, Bonus-Malus, Exponential family, Linear regression, Residuals, Scaled deviance. I.
Application of the Generalized Linear Models in Actuarial Framework BY MURWAN H. M. A. SIDDIG School of Mathematics, Faculty of Engineering Physical Science, The University of Manchester, Oxford Road,
More informationNBER WORKING PAPER SERIES A REHABILITATION OF STOCHASTIC DISCOUNT FACTOR METHODOLOGY. John H. Cochrane
NBER WORKING PAPER SERIES A REHABILIAION OF SOCHASIC DISCOUN FACOR MEHODOLOGY John H. Cochrane Working Paper 8533 http://www.nber.org/papers/w8533 NAIONAL BUREAU OF ECONOMIC RESEARCH 1050 Massachusetts
More informationGamma Distribution Fitting
Chapter 552 Gamma Distribution Fitting Introduction This module fits the gamma probability distributions to a complete or censored set of individual or grouped data values. It outputs various statistics
More information**BEGINNING OF EXAMINATION** A random sample of five observations from a population is:
**BEGINNING OF EXAMINATION** 1. You are given: (i) A random sample of five observations from a population is: 0.2 0.7 0.9 1.1 1.3 (ii) You use the Kolmogorov-Smirnov test for testing the null hypothesis,
More informationMath 361. Day 8 Binomial Random Variables pages 27 and 28 Inv Do you have ESP? Inv. 1.3 Tim or Bob?
Math 361 Day 8 Binomial Random Variables pages 27 and 28 Inv. 1.2 - Do you have ESP? Inv. 1.3 Tim or Bob? Inv. 1.1: Friend or Foe Review Is a particular study result consistent with the null model? Learning
More informationMendelian Randomization with a Binary Outcome
Chapter 851 Mendelian Randomization with a Binary Outcome Introduction This module computes the sample size and power of the causal effect in Mendelian randomization studies with a binary outcome. This
More informationBootstrap Inference for Multiple Imputation Under Uncongeniality
Bootstrap Inference for Multiple Imputation Under Uncongeniality Jonathan Bartlett www.thestatsgeek.com www.missingdata.org.uk Department of Mathematical Sciences University of Bath, UK Joint Statistical
More informationBinomial distribution
Binomial distribution Jon Michael Gran Department of Biostatistics, UiO MF9130 Introductory course in statistics Tuesday 24.05.2010 1 / 28 Overview Binomial distribution (Aalen chapter 4, Kirkwood and
More informationMuch of what appears here comes from ideas presented in the book:
Chapter 11 Robust statistical methods Much of what appears here comes from ideas presented in the book: Huber, Peter J. (1981), Robust statistics, John Wiley & Sons (New York; Chichester). There are many
More informationGame Theory-based Model for Insurance Pricing in Public-Private-Partnership Project
Game Theory-based Model for Insurance Pricing in Public-Private-Partnership Project Lei Zhu 1 and David K. H. Chua Abstract In recent years, Public-Private Partnership (PPP) as a project financial method
More informationEX-POST VERIFICATION OF PREDICTION MODELS OF WAGE DISTRIBUTIONS
EX-POST VERIFICATION OF PREDICTION MODELS OF WAGE DISTRIBUTIONS LUBOŠ MAREK, MICHAL VRABEC University of Economics, Prague, Faculty of Informatics and Statistics, Department of Statistics and Probability,
More informationMonitoring Processes with Highly Censored Data
Monitoring Processes with Highly Censored Data Stefan H. Steiner and R. Jock MacKay Dept. of Statistics and Actuarial Sciences University of Waterloo Waterloo, N2L 3G1 Canada The need for process monitoring
More informationMODELLING OF INCOME AND WAGE DISTRIBUTION USING THE METHOD OF L-MOMENTS OF PARAMETER ESTIMATION
International Days of Statistics and Economics, Prague, September -3, MODELLING OF INCOME AND WAGE DISTRIBUTION USING THE METHOD OF L-MOMENTS OF PARAMETER ESTIMATION Diana Bílková Abstract Using L-moments
More informationConsistent estimators for multilevel generalised linear models using an iterated bootstrap
Multilevel Models Project Working Paper December, 98 Consistent estimators for multilevel generalised linear models using an iterated bootstrap by Harvey Goldstein hgoldstn@ioe.ac.uk Introduction Several
More information8: Economic Criteria
8.1 Economic Criteria Capital Budgeting 1 8: Economic Criteria The preceding chapters show how to discount and compound a variety of different types of cash flows. This chapter explains the use of those
More informationADVANCED OPERATIONAL RISK MODELLING IN BANKS AND INSURANCE COMPANIES
Small business banking and financing: a global perspective Cagliari, 25-26 May 2007 ADVANCED OPERATIONAL RISK MODELLING IN BANKS AND INSURANCE COMPANIES C. Angela, R. Bisignani, G. Masala, M. Micocci 1
More informationSampling & Confidence Intervals
Sampling & Confidence Intervals Mark Lunt Arthritis Research UK Epidemiology Unit University of Manchester 24/10/2017 Principles of Sampling Often, it is not practical to measure every subject in a population.
More informationChapter 5 Univariate time-series analysis. () Chapter 5 Univariate time-series analysis 1 / 29
Chapter 5 Univariate time-series analysis () Chapter 5 Univariate time-series analysis 1 / 29 Time-Series Time-series is a sequence fx 1, x 2,..., x T g or fx t g, t = 1,..., T, where t is an index denoting
More informationA NEW POINT ESTIMATOR FOR THE MEDIAN OF GAMMA DISTRIBUTION
Banneheka, B.M.S.G., Ekanayake, G.E.M.U.P.D. Viyodaya Journal of Science, 009. Vol 4. pp. 95-03 A NEW POINT ESTIMATOR FOR THE MEDIAN OF GAMMA DISTRIBUTION B.M.S.G. Banneheka Department of Statistics and
More informationThe risk/return trade-off has been a
Efficient Risk/Return Frontiers for Credit Risk HELMUT MAUSSER AND DAN ROSEN HELMUT MAUSSER is a mathematician at Algorithmics Inc. in Toronto, Canada. DAN ROSEN is the director of research at Algorithmics
More informationModelling strategies for bivariate circular data
Modelling strategies for bivariate circular data John T. Kent*, Kanti V. Mardia, & Charles C. Taylor Department of Statistics, University of Leeds 1 Introduction On the torus there are two common approaches
More informationOn Maximizing Annualized Option Returns
Digital Commons@ Loyola Marymount University and Loyola Law School Finance & CIS Faculty Works Finance & Computer Information Systems 10-1-2014 On Maximizing Annualized Option Returns Charles J. Higgins
More informationConover Test of Variances (Simulation)
Chapter 561 Conover Test of Variances (Simulation) Introduction This procedure analyzes the power and significance level of the Conover homogeneity test. This test is used to test whether two or more population
More informationUsing New SAS 9.4 Features for Cumulative Logit Models with Partial Proportional Odds Paul J. Hilliard, Educational Testing Service (ETS)
Using New SAS 9.4 Features for Cumulative Logit Models with Partial Proportional Odds Using New SAS 9.4 Features for Cumulative Logit Models with Partial Proportional Odds INTRODUCTION Multicategory Logit
More informationA Simple Utility Approach to Private Equity Sales
The Journal of Entrepreneurial Finance Volume 8 Issue 1 Spring 2003 Article 7 12-2003 A Simple Utility Approach to Private Equity Sales Robert Dubil San Jose State University Follow this and additional
More informationMEASURING TRADED MARKET RISK: VALUE-AT-RISK AND BACKTESTING TECHNIQUES
MEASURING TRADED MARKET RISK: VALUE-AT-RISK AND BACKTESTING TECHNIQUES Colleen Cassidy and Marianne Gizycki Research Discussion Paper 9708 November 1997 Bank Supervision Department Reserve Bank of Australia
More informationcontinuous rv Note for a legitimate pdf, we have f (x) 0 and f (x)dx = 1. For a continuous rv, P(X = c) = c f (x)dx = 0, hence
continuous rv Let X be a continuous rv. Then a probability distribution or probability density function (pdf) of X is a function f(x) such that for any two numbers a and b with a b, P(a X b) = b a f (x)dx.
More information