STRESS-STRENGTH RELIABILITY ESTIMATION


CHAPTER 5

STRESS-STRENGTH RELIABILITY ESTIMATION

5.1 Introduction

Every physical component possesses an inherent strength, and an appliance survives as long as that strength can sustain the stress applied to it. If a level of stress higher than the component can withstand is applied, the component breaks down. Suppose Y represents the stress applied to a certain appliance and X represents its strength; if X and Y are regarded as random variables, the stress-strength reliability is R = P(Y < X).

The term stress denotes the failure-inducing variable: the stress (load) which tends to produce failure of a component, a device or a material. The load may be a mechanical load, the environment, temperature, electric current, and so on. The term strength denotes the ability of a component, a device or a material to accomplish its required function (mission) satisfactorily, without failure, when subjected to external loading and environment; strength is therefore the failure-resisting variable.

Stress-strength model: The variation in stress and strength results in a statistical distribution for each variable, and natural scatter occurs in these variables when the two distributions interfere. When the stress becomes higher than the strength, failure results. In other words, when the probability density functions of both stress and strength are known, the component reliability may be determined analytically. Thus R = P(Y < X) is the reliability parameter R, and it is a characteristic of the joint distribution of X and Y.

Let X and Y be two random variables such that X represents strength and Y represents stress, and let X, Y follow a joint pdf f(x, y). The reliability of the component is

R = P(Y < X) = \int_{-\infty}^{\infty} \int_{-\infty}^{x} f(x, y) \, dy \, dx,    (5.1.1)

where P(Y < X) is the probability that the strength exceeds the stress and f(x, y) is the joint pdf of X and Y. If the random variables are statistically independent, then f(x, y) = f(x) g(y), so that

R = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{x} g(y) \, dy \right] f(x) \, dx.    (5.1.2)

Writing G_Y(x) = \int_{-\infty}^{x} g(y) \, dy, this becomes

R = \int_{-\infty}^{\infty} G_Y(x) \, f(x) \, dx,    (5.1.3)

where f(x) and g(y) are the pdfs of X and Y respectively.
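As a quick illustration of (5.1.3), the reliability can be evaluated by one-dimensional numerical integration once the stress and strength densities are specified. The following sketch uses SciPy; the exponential stress and strength distributions chosen here are purely illustrative and are not the model studied in this chapter.

```python
# Numerical evaluation of R = P(Y < X) via equation (5.1.3):
# R = integral of G_Y(x) * f_X(x) over x, for independent stress Y and strength X.
import numpy as np
from scipy import integrate, stats

stress = stats.expon(scale=1.0)    # Y: illustrative stress distribution
strength = stats.expon(scale=2.0)  # X: illustrative strength distribution

R, _ = integrate.quad(lambda x: stress.cdf(x) * strength.pdf(x), 0.0, np.inf)
print(R)   # ~ 0.667 = 2/(1+2) for these two exponentials
```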

The concept of stress-strength has long been one of the deciding factors in the failure of engineering devices. It has been customary to define safety factors for longer system life in terms of the inherent strength of the system and the external stress it experiences. If X_0 is the fixed strength and Y_0 is the fixed stress that a system experiences, then the ratio X_0 / Y_0 is called the safety factor and the difference X_0 - Y_0 is called the safety margin. Thus, in the deterministic stress-strength situation, the system survives only if the safety factor is greater than 1 or, equivalently, the safety margin is positive. In the traditional approach to the design of a system, the safety factor or margin is made large enough to compensate for uncertainties in the values of the stress and strength variates. These uncertainties make it natural to view the stress and strength of a system as random variables. In engineering situations the calculations are often deterministic; probabilistic analysis, however, demands that stress and strength be treated as random variables when evaluating the survival probability of such systems. This analysis is particularly useful in situations in which no fixed bound can be placed on the stress. For example, with earthquakes, floods and other natural phenomena, shortcomings may result in failures of systems with unusually small strengths. Similarly, when economics rather than safety is the primary criterion, the comparison of survival performance is better studied by knowing how the failure probability increases as the stress and strength approach one another.

The foregoing lines indicate that the stress-strength variates are more reasonably modelled as random variables than as purely deterministic quantities. Let the random variables Y, X_1, X_2, ..., X_k be independent, let G(y) be the continuous cdf of Y and let F(x) be the common continuous cdf of X_1, X_2, ..., X_k. The reliability in a multicomponent stress-strength model developed by Bhattacharyya and Johnson (1974) is given by

R_{s,k} = P[at least s of (X_1, X_2, ..., X_k) exceed Y]
        = \sum_{i=s}^{k} \binom{k}{i} \int_{-\infty}^{\infty} [1 - F(y)]^{i} \, [F(y)]^{k-i} \, dG(y),    (5.1.4)

where X_1, X_2, ..., X_k are iid with common cdf F(x) and the system is subjected to the common random stress Y. The right-hand side of equation (5.1.3) is the survival probability of a single-component system having a random strength X and experiencing a random stress Y. Now let a system consist of k components whose strengths are given by independent, identically distributed random variables with cumulative distribution function F(.), each experiencing a random stress governed by a random variable Y with cumulative distribution function G(.). The probability given by (5.1.4) is then called the reliability in a multi-component stress-strength model (Bhattacharyya and Johnson, 1974).

A salient feature of the probability in (5.1.2) is that the probability density functions of the strength and stress variates must have overlapping ranges. In some cases the failure probability may depend strongly on the lower tail of the strength distribution. When the stress or the strength is not determined by the sum or the product of many small components, it may be the extremes of these small components that decide the stress or the strength. Extreme ordered variates have proved very useful in the analysis of reliability problems of this nature. Suppose that a number of random stresses (Y_1, Y_2, Y_3, ..., Y_m) with cumulative distribution function G(.) act on a system whose strengths are given by n random variables (X_1, X_2, X_3, ..., X_n) with cumulative distribution function F(.). If V is the maximum of Y_1, Y_2, Y_3, ..., Y_m and U is the minimum of X_1, X_2, X_3, ..., X_n, the survival of the system depends on whether or not U exceeds V. That is, the survival probability of such a system is

explained with the help of the distributions of extreme order statistics in random samples of finite size.

The above introductory lines indicate that, whether through extreme values or through independent variates like Y and X in a single-component or multicomponent system, the reliability in any situation ultimately turns out to be a parametric function of the parameters of the probability distributions of the variates concerned. If some or all of the parameters are not known, evaluation of the survival probability of a stress-strength model leads to the estimation of a parametric function. As mentioned in the introduction, several authors have taken up the problem of estimating the survival probability in stress-strength relationships assuming various lifetime distributions for the stress-strength variates. Some recent works in this direction are Kantam et al. (2000), Kantam and Srinivasa Rao (2007), Srinivasa Rao et al. (2010b) and the references therein.

In this chapter, we present the estimation of R using maximum likelihood (ML) estimates and moment (MOM) estimates of the parameters in Section 5.2, together with the asymptotic distribution of the estimates and the resulting confidence intervals for R. Estimation of reliability in the multicomponent stress-strength model is taken up in Section 5.3. Simulation studies are carried out in Section 5.4 to investigate the bias and mean squared error (MSE) of the MLE and MOM estimates of R, as well as the lengths and coverage probabilities of the confidence intervals for R, and two real data sets are analysed there. The conclusions and comments are provided in Section 5.5.

5.2 Estimation of stress-strength reliability

5.2.1 Estimation of stress-strength reliability using ML and moment estimates

The purpose of this section is to study the inference of R = P(Y < X), where X ~ IRD(σ₁), Y ~ IRD(σ₂) and the two variables are independently distributed. The estimation of this reliability is very common in the statistical literature. The problem arises in the context of the reliability of a component of strength X subject to a stress Y; thus R = P(Y < X) is a measure of the reliability of the system, and the system fails if and only if the applied stress is greater than its strength. We obtain the maximum likelihood estimator (MLE) and the moment estimator (MOM) of R and derive the asymptotic distribution of the MLE. The asymptotic distribution is used to construct an asymptotic confidence interval; an exact confidence interval and a bootstrap confidence interval for R are also proposed. Assuming that the two scale parameters are unknown, we obtain the MLE and MOM of R.

Suppose X and Y are independent random variables with X ~ IRD(σ₁) and Y ~ IRD(σ₂), so that X has pdf f(x) = (2σ₁²/x³) e^{-σ₁²/x²} and Y has cdf G(y) = e^{-σ₂²/y²}, for x, y > 0. Therefore

R = P(Y < X) = \int_{0}^{\infty} \left[ \int_{0}^{x} \frac{2\sigma_2^2}{y^3} e^{-\sigma_2^2/y^2} \, dy \right] \frac{2\sigma_1^2}{x^3} e^{-\sigma_1^2/x^2} \, dx = \int_{0}^{\infty} \frac{2\sigma_1^2}{x^3} e^{-\sigma_1^2/x^2} e^{-\sigma_2^2/x^2} \, dx = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}.    (5.2.1)

Note that, writing λ = σ₁²/σ₂²,

R = \frac{\lambda}{1 + \lambda}.    (5.2.2)

Therefore ∂R/∂λ = 1/(1 + λ)² > 0, so R is an increasing function of λ.
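A short Monte Carlo check of (5.2.1) can be useful. The sketch below assumes the inverse Rayleigh parameterisation F(x; σ) = exp(-σ²/x²) used above and draws variates by inverting that cdf; the particular parameter values and the helper name rird are arbitrary choices of this illustration.

```python
# Monte Carlo check of R = sigma1^2/(sigma1^2 + sigma2^2) from (5.2.1),
# assuming the inverse Rayleigh cdf F(x; sigma) = exp(-sigma^2/x^2).
import numpy as np

rng = np.random.default_rng(1)
sigma1, sigma2 = 2.0, 1.5                      # strength and stress scales (arbitrary)

def rird(sigma, size, rng):
    """Inverse Rayleigh variates by inversion: X = sigma / sqrt(-log U)."""
    return sigma / np.sqrt(-np.log(rng.uniform(size=size)))

x = rird(sigma1, 1_000_000, rng)               # strength sample
y = rird(sigma2, 1_000_000, rng)               # stress sample
print(np.mean(y < x))                          # simulated P(Y < X), about 0.64
print(sigma1**2 / (sigma1**2 + sigma2**2))     # exact value 4/6.25 = 0.64
```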

To compute R we need to estimate the parameters σ₁ and σ₂; they are estimated here by maximum likelihood and by the method of moments.

5.2.2 Method of Maximum Likelihood Estimation (MLE)

Suppose X₁, X₂, X₃, ..., Xₙ is a random sample from the inverse Rayleigh distribution (IRD) with scale parameter σ₁ and Y₁, Y₂, Y₃, ..., Yₘ is a random sample from the IRD with scale parameter σ₂. The log-likelihood function of the observed samples is

\log L(\sigma_1, \sigma_2) = (m+n)\log 2 + 2n \log \sigma_1 + 2m \log \sigma_2 - \sigma_1^2 \sum_{i=1}^{n} \frac{1}{x_i^2} - \sigma_2^2 \sum_{j=1}^{m} \frac{1}{y_j^2} - 3\sum_{i=1}^{n} \log x_i - 3\sum_{j=1}^{m} \log y_j.    (5.2.3)

The MLEs of σ₁² and σ₂², say σ̂₁² and σ̂₂² respectively, are obtained as

\hat{\sigma}_1^2 = \frac{n}{\sum_{i=1}^{n} 1/x_i^2},    (5.2.4)

\hat{\sigma}_2^2 = \frac{m}{\sum_{j=1}^{m} 1/y_j^2}.    (5.2.5)

The MLE of the stress-strength reliability R becomes

\hat{R} = \frac{\hat{\sigma}_1^2}{\hat{\sigma}_1^2 + \hat{\sigma}_2^2}.    (5.2.6)
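A minimal sketch of (5.2.4)-(5.2.6) in code, under the same IRD parameterisation as above; the function and variable names are my own.

```python
# MLEs of sigma1^2, sigma2^2 and of R from equations (5.2.4)-(5.2.6).
import numpy as np

def mle_R(x, y):
    """Return (sigma1_hat^2, sigma2_hat^2, R_hat) from strength sample x and stress sample y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    s1_sq = len(x) / np.sum(1.0 / x**2)            # equation (5.2.4)
    s2_sq = len(y) / np.sum(1.0 / y**2)            # equation (5.2.5)
    return s1_sq, s2_sq, s1_sq / (s1_sq + s2_sq)   # equation (5.2.6)
```

With the simulated samples generated in the earlier sketch, mle_R(x, y) returns an estimate of R close to the true value 0.64.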

5.2.3 Method of Moment (MOM) Estimation

If x̄ and ȳ are the sample means of the strength and stress samples respectively, then, since E(X) = σ₁√π and E(Y) = σ₂√π for the inverse Rayleigh distribution, the moment estimators of σ₁ and σ₂ are σ̃₁ = x̄/√π and σ̃₂ = ȳ/√π respectively. Adopting the invariance property for moment estimation, the MOM estimate of R is

\tilde{R} = \frac{\tilde{\sigma}_1^2}{\tilde{\sigma}_1^2 + \tilde{\sigma}_2^2}.    (5.2.7)
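A corresponding sketch of the moment estimates of Section 5.2.3; the helper name is again my own.

```python
# Moment estimates: sigma_tilde = xbar/sqrt(pi), with R estimated by
# substituting sigma1_tilde, sigma2_tilde into (5.2.1), as in (5.2.7).
import numpy as np

def mom_R(x, y):
    """Return (sigma1_tilde, sigma2_tilde, R_tilde) from strength sample x and stress sample y."""
    s1 = np.mean(x) / np.sqrt(np.pi)
    s2 = np.mean(y) / np.sqrt(np.pi)
    return s1, s2, s1**2 / (s1**2 + s2**2)   # equation (5.2.7)
```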

5.2.4 Asymptotic distribution and confidence intervals

In this section we first obtain the asymptotic distribution of θ̂ = (σ̂₁, σ̂₂) and then derive the asymptotic distribution of R̂. Based on the asymptotic distribution of R̂, we obtain an asymptotic confidence interval for R. Denote the Fisher information matrix of θ = (σ₁, σ₂) by I(θ) = (I_{ij}(θ); i, j = 1, 2), so that

I(\theta) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix},    (5.2.8)

where

I_{11} = -E\left[\frac{\partial^2 \log L}{\partial \sigma_1^2}\right] = \frac{4n}{\sigma_1^2}, \qquad
I_{22} = -E\left[\frac{\partial^2 \log L}{\partial \sigma_2^2}\right] = \frac{4m}{\sigma_2^2}, \qquad
I_{12} = I_{21} = -E\left[\frac{\partial^2 \log L}{\partial \sigma_1 \, \partial \sigma_2}\right] = 0.

As n, m → ∞,

\left( \sqrt{n}(\hat{\sigma}_1 - \sigma_1), \ \sqrt{m}(\hat{\sigma}_2 - \sigma_2) \right) \xrightarrow{d} N\big(0, \, A(\sigma_1, \sigma_2)\big), \qquad
A(\sigma_1, \sigma_2) = \begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix},

with a_{11} = σ₁²/4 and a_{22} = σ₂²/4, so that var(σ̂₁) ≈ σ₁²/(4n) = I_{11}^{-1} and var(σ̂₂) ≈ σ₂²/(4m) = I_{22}^{-1}.

To obtain an asymptotic confidence interval for R, we proceed as follows (Rao, 1973). Write

d_1(\sigma_1, \sigma_2) = \frac{\partial R}{\partial \sigma_1} = \frac{2\sigma_1 \sigma_2^2}{(\sigma_1^2 + \sigma_2^2)^2}, \qquad
d_2(\sigma_1, \sigma_2) = \frac{\partial R}{\partial \sigma_2} = -\frac{2\sigma_1^2 \sigma_2}{(\sigma_1^2 + \sigma_2^2)^2}.

This gives

Var(\hat{R}) \approx var(\hat{\sigma}_1) \, d_1^2(\sigma_1, \sigma_2) + var(\hat{\sigma}_2) \, d_2^2(\sigma_1, \sigma_2)
= \frac{\sigma_1^2}{4n} d_1^2 + \frac{\sigma_2^2}{4m} d_2^2
= \frac{\sigma_1^4 \sigma_2^4}{(\sigma_1^2 + \sigma_2^2)^4} \left( \frac{1}{n} + \frac{1}{m} \right)
= R^2 (1-R)^2 \left( \frac{1}{n} + \frac{1}{m} \right).    (5.2.9)

Thus we have the following result: as n, m → ∞,

\frac{\hat{R} - R}{\sqrt{R^2 (1-R)^2 \left( \frac{1}{n} + \frac{1}{m} \right)}} \xrightarrow{d} N(0, 1).

Hence an asymptotic 100(1-α)% confidence interval for R is (L₁, U₁), where

L_1 = \hat{R} - Z_{\alpha/2} \, \hat{R}(1-\hat{R}) \sqrt{\frac{1}{n} + \frac{1}{m}},    (5.2.10)

U_1 = \hat{R} + Z_{\alpha/2} \, \hat{R}(1-\hat{R}) \sqrt{\frac{1}{n} + \frac{1}{m}},    (5.2.11)

where Z_{α/2} is the upper (α/2)th percentile of the standard normal distribution and R̂ is given by equation (5.2.6).
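A small sketch of the interval (5.2.10)-(5.2.11); the helper name is my own.

```python
# Asymptotic 100(1-alpha)% confidence interval (5.2.10)-(5.2.11) for R:
# R_hat -/+ z_{alpha/2} * R_hat*(1 - R_hat) * sqrt(1/n + 1/m).
import numpy as np
from scipy import stats

def asymptotic_ci(R_hat, n, m, alpha=0.05):
    z = stats.norm.ppf(1.0 - alpha / 2.0)
    half = z * R_hat * (1.0 - R_hat) * np.sqrt(1.0 / n + 1.0 / m)
    return R_hat - half, R_hat + half
```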

5.2.5 Exact confidence interval

Let X₁, X₂, X₃, ..., Xₙ and Y₁, Y₂, Y₃, ..., Yₘ be two independent random samples of sizes n and m respectively, drawn from inverse Rayleigh distributions with scale parameters σ₁ and σ₂. Then \sum_{i=1}^{n} 1/x_i^2 and \sum_{j=1}^{m} 1/y_j^2 are independent gamma random variables with parameters (n, σ₁²) and (m, σ₂²) respectively, so that 2\sigma_1^2 \sum_{i=1}^{n} 1/x_i^2 and 2\sigma_2^2 \sum_{j=1}^{m} 1/y_j^2 are two independent chi-square random variables with 2n and 2m degrees of freedom respectively. Thus R̂ in equation (5.2.6) can be rewritten, using (5.2.4) and (5.2.5), as

\hat{R} = \left[ 1 + \frac{\hat{\sigma}_2^2}{\hat{\sigma}_1^2} \right]^{-1} = \left[ 1 + \frac{\sigma_2^2}{\sigma_1^2} F \right]^{-1},    (5.2.12)

where

F = \frac{m \, \sigma_1^2 \sum_{i=1}^{n} 1/x_i^2}{n \, \sigma_2^2 \sum_{j=1}^{m} 1/y_j^2}

is an F-distributed random variable with (2n, 2m) degrees of freedom. From equations (5.2.1) and (5.2.12), F can be written in terms of R and R̂ as

F = \frac{R}{1 - R} \cdot \frac{1 - \hat{R}}{\hat{R}}.

Using F as a pivotal quantity, we obtain a 100(1-α)% confidence interval for R as (L₂, U₂), where

L_2 = \left[ 1 + \frac{1 - \hat{R}}{\hat{R}} \, F_{1-\alpha/2}(2m, 2n) \right]^{-1},    (5.2.13)

U_2 = \left[ 1 + \frac{1 - \hat{R}}{\hat{R}} \, F_{\alpha/2}(2m, 2n) \right]^{-1},    (5.2.14)

where F_{α/2}(2m, 2n) and F_{1-α/2}(2m, 2n) are the lower and upper (α/2)th percentiles of the F distribution with (2m, 2n) degrees of freedom.
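A sketch of the exact interval (5.2.13)-(5.2.14); the helper name is my own.

```python
# Exact interval (5.2.13)-(5.2.14) based on the F pivot of Section 5.2.5.
from scipy import stats

def exact_ci(R_hat, n, m, alpha=0.05):
    ratio = (1.0 - R_hat) / R_hat
    lower = 1.0 / (1.0 + ratio * stats.f.ppf(1.0 - alpha / 2.0, 2 * m, 2 * n))
    upper = 1.0 / (1.0 + ratio * stats.f.ppf(alpha / 2.0, 2 * m, 2 * n))
    return lower, upper
```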

5.2.6 Bootstrap confidence intervals

In this subsection we propose confidence intervals based on the percentile bootstrap method (referred to from now on as Boot-p), following the idea of Efron (1982). We illustrate briefly how a confidence interval for R is obtained by this method.

Step 1: Draw random samples X₁, X₂, ..., Xₙ and Y₁, Y₂, ..., Yₘ from the populations of X and Y respectively.

Step 2: Using the samples of Step 1, generate bootstrap samples x₁*, x₂*, ..., xₙ* and y₁*, y₂*, ..., yₘ*. Compute the bootstrap estimates of σ₁² and σ₂², say σ̂₁*² and σ̂₂*², and, using these and equation (5.2.6), compute the bootstrap estimate of R, say R̂*.

Step 3: Repeat Step 2 NBOOT times.

Step 4: Let G(x) = P(R̂* ≤ x) be the empirical distribution function of R̂*. The Boot-p limits L₃ and U₃ are the 100(α/2)th and 100(1-α/2)th empirical percentiles of the R̂* values respectively.

The small-sample comparisons are studied through simulation in Section 5.4.
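A minimal sketch of Steps 1-4; the choice nboot = 1000 and the helper name are illustrative only.

```python
# Percentile-bootstrap (Boot-p) interval for R following Steps 1-4 of Section 5.2.6.
import numpy as np

def boot_p_ci(x, y, nboot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r_boot = np.empty(nboot)
    for b in range(nboot):
        xb = rng.choice(x, size=len(x), replace=True)    # bootstrap strength sample
        yb = rng.choice(y, size=len(y), replace=True)    # bootstrap stress sample
        s1_sq = len(xb) / np.sum(1.0 / xb**2)            # bootstrap MLE of sigma1^2
        s2_sq = len(yb) / np.sum(1.0 / yb**2)            # bootstrap MLE of sigma2^2
        r_boot[b] = s1_sq / (s1_sq + s2_sq)              # bootstrap estimate of R
    return tuple(np.quantile(r_boot, [alpha / 2.0, 1.0 - alpha / 2.0]))
```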

5.3 Estimation of reliability in the multi-component stress-strength model

Assume that F(.) and G(.) are inverse Rayleigh distributions with unknown scale parameters σ₁ and σ₂, and that independent random samples X₁ < X₂ < ... < Xₙ and Y₁ < Y₂ < ... < Yₘ are available from F(.) and G(.) respectively. Using (5.1.4), the reliability in the multicomponent stress-strength model for the inverse Rayleigh distribution is

R_{s,k} = \sum_{i=s}^{k} \binom{k}{i} \int_{0}^{\infty} \left[ 1 - e^{-(\sigma_1/y)^2} \right]^{i} \left[ e^{-(\sigma_1/y)^2} \right]^{k-i} \frac{2\sigma_2^2}{y^3} e^{-(\sigma_2/y)^2} \, dy
        = \sum_{i=s}^{k} \binom{k}{i} \int_{0}^{1} \left[ 1 - t^{1/\lambda} \right]^{i} \left[ t^{1/\lambda} \right]^{k-i} dt,  where t = e^{-(\sigma_2/y)^2} and λ = σ₂²/σ₁²,
        = \sum_{i=s}^{k} \binom{k}{i} \, \lambda \int_{0}^{1} (1-z)^{i} \, z^{k-i+\lambda-1} \, dz,  putting z = t^{1/\lambda},
        = \sum_{i=s}^{k} \binom{k}{i} \, \lambda \, \beta(k-i+\lambda, \, i+1)
        = \lambda \sum_{i=s}^{k} \frac{k!}{(k-i)!} \prod_{j=0}^{i} (k+\lambda-j)^{-1},  since k and i are integers.    (5.3.1)

The probability given in (5.3.1) is called the reliability in a multicomponent stress-strength model (Bhattacharyya and Johnson, 1974). Suppose a system with k identical components functions if s (1 ≤ s ≤ k) or more of the components operate simultaneously. In its operating environment the system is subjected to a stress Y, which is a random variable with cdf G(.). The strengths of the components, that is, the minimum stresses needed to cause failure, are independent and identically distributed random variables with cdf F(.). Then the system reliability, which is the probability that the system does not fail, is the function R_{s,k} given in (5.3.1).

If σ₁ and σ₂ are not known, it is necessary to estimate them in order to estimate R_{s,k}. In this section we estimate σ₁ and σ₂ by the ML method and by the method of moments, giving rise to two estimates. The estimates are substituted in λ to obtain an estimate of R_{s,k} from equation (5.3.1). The theory of the methods of estimation is explained below.
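The closed form (5.3.1) can be checked against direct numerical integration of the defining sum of integrals. The sketch below does this under the inverse Rayleigh parameterisation of this chapter, with λ = σ₂²/σ₁² as in (5.3.1); the helper names and the printed parameter values are my own choices.

```python
# R_{s,k} for the IRD model: closed form (5.3.1), checked against direct
# numerical integration of sum_i C(k,i) * int [1-F(y)]^i F(y)^(k-i) dG(y).
import numpy as np
from math import comb, factorial, prod
from scipy import integrate

def r_sk_closed(s, k, sigma1, sigma2):
    lam = sigma2**2 / sigma1**2
    return lam * sum((factorial(k) / factorial(k - i))
                     / prod(k + lam - j for j in range(i + 1))
                     for i in range(s, k + 1))

def r_sk_numeric(s, k, sigma1, sigma2):
    F = lambda y: np.exp(-(sigma1 / y) ** 2)                               # strength cdf
    g = lambda y: (2.0 * sigma2**2 / y**3) * np.exp(-(sigma2 / y) ** 2)    # stress pdf
    total = 0.0
    for i in range(s, k + 1):
        val, _ = integrate.quad(
            lambda y, i=i: comb(k, i) * (1.0 - F(y))**i * F(y)**(k - i) * g(y),
            0.0, np.inf)
        total += val
    return total

print(r_sk_closed(1, 3, 2.0, 1.0), r_sk_numeric(1, 3, 2.0, 1.0))   # both about 0.923
```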

It is well known that the method of maximum likelihood estimation has the invariance property: when the method of estimation of a parameter is changed from ML to some other traditional method, this invariance principle no longer holds automatically for estimating a parametric function. Nevertheless, such an adoption of the invariance property for other optimal estimators of the parameters, in order to estimate a parametric function, has been attempted in different situations by different authors; Travadi and Ratani (1990), Kantam and Srinivasa Rao (2002) and the references therein are a few such instances. In this direction, we propose estimators of the reliability of the multicomponent stress-strength model obtained by estimating the parameters of the stress and strength distributions of the inverse Rayleigh model by standard methods of estimation.

The MLEs of σ₁ and σ₂ are denoted by σ̂₁ and σ̂₂. The asymptotic variance of the MLE is given by

AV(\hat{\sigma}_i) = \left[ E\left( -\frac{\partial^2 \log L}{\partial \sigma_i^2} \right) \right]^{-1} = \frac{\sigma_i^2}{4n}, \quad i = 1, 2, \text{ when } m = n.    (5.3.2)

The MLE of the survival probability of the multicomponent stress-strength model is R̂_{s,k}, obtained by replacing λ in (5.3.1) by λ̂ = σ̂₂²/σ̂₁². The second estimator proposed here is R̃_{s,k}, obtained by replacing λ in (5.3.1) by λ̃ = σ̃₂²/σ̃₁². Thus, for a given pair of samples on the stress and strength variates, we obtain two estimates of R_{s,k} by these two methods. The asymptotic variance (AV) of an estimate of R_{s,k} which is a function of two independent statistics t₁ and t₂ is given by (Rao, 1973)

AV(\hat{R}_{s,k}) = AV(t_1) \left( \frac{\partial R_{s,k}}{\partial \sigma_1} \right)^2 + AV(t_2) \left( \frac{\partial R_{s,k}}{\partial \sigma_2} \right)^2,    (5.3.3)

where t₁ and t₂ are to be taken in two different ways, namely as the exact ML estimators or as the method of moment estimators.

Unfortunately, the variances of the moment estimators are not available in closed form for the inverse Rayleigh distribution, so the asymptotic variance of R̂_{s,k} is obtained using the MLEs only. From the asymptotic optimum properties of MLEs (Kendall and Stuart, 1979) and of linear unbiased estimators (David, 1981), we know that MLEs are asymptotically efficient, attaining the Cramér-Rao lower bound as their asymptotic variance, as given in (5.3.2). Thus, from equation (5.3.3), the asymptotic variance of R̂_{s,k} is obtained when (t₁, t₂) are replaced by the MLEs. To avoid the difficulty of differentiating R_{s,k} in general, we obtain the derivatives for (s, k) = (1, 3) and (2, 4) separately. For these two choices, (5.3.1) reduces to R_{1,3} = 3/(3+λ) and R_{2,4} = 12/[(3+λ)(4+λ)], and the derivatives are

\frac{\partial R_{1,3}}{\partial \sigma_1} = \frac{6\lambda}{\sigma_1 (3+\lambda)^2}, \qquad
\frac{\partial R_{1,3}}{\partial \sigma_2} = -\frac{6\lambda}{\sigma_2 (3+\lambda)^2},

\frac{\partial R_{2,4}}{\partial \sigma_1} = \frac{24\lambda(2\lambda+7)}{\sigma_1 (3+\lambda)^2 (4+\lambda)^2}, \qquad
\frac{\partial R_{2,4}}{\partial \sigma_2} = -\frac{24\lambda(2\lambda+7)}{\sigma_2 (3+\lambda)^2 (4+\lambda)^2},

where λ = σ₂²/σ₁². Thus

AV(\hat{R}_{1,3}) = \frac{9\lambda^2}{(3+\lambda)^4} \left( \frac{1}{n} + \frac{1}{m} \right),

AV(\hat{R}_{2,4}) = \frac{144 \, \lambda^2 (2\lambda+7)^2}{\left[ (3+\lambda)(4+\lambda) \right]^4} \left( \frac{1}{n} + \frac{1}{m} \right).

As n → ∞ and m → ∞,

\frac{\hat{R}_{s,k} - R_{s,k}}{\sqrt{AV(\hat{R}_{s,k})}} \xrightarrow{d} N(0, 1),

and the asymptotic 95% confidence interval for R_{s,k} is given by \hat{R}_{s,k} \mp 1.96 \sqrt{AV(\hat{R}_{s,k})}.

The asymptotic 95% confidence interval for R_{1,3} is given by

\hat{R}_{1,3} \mp 1.96 \sqrt{ \frac{9\hat{\lambda}^2}{(3+\hat{\lambda})^4} \left( \frac{1}{n} + \frac{1}{m} \right) },

and the asymptotic 95% confidence interval for R_{2,4} is given by

\hat{R}_{2,4} \mp 1.96 \sqrt{ \frac{144 \, \hat{\lambda}^2 (2\hat{\lambda}+7)^2}{\left[ (3+\hat{\lambda})(4+\hat{\lambda}) \right]^4} \left( \frac{1}{n} + \frac{1}{m} \right) },

where λ̂ = σ̂₂²/σ̂₁². The small-sample comparisons are studied through simulation in Section 5.4.
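A sketch that puts the pieces together for the two multicomponent cases, with λ estimated by λ̂ = σ̂₂²/σ̂₁² as above; the function name is my own.

```python
# MLE-based estimates and asymptotic 95% intervals for R_{1,3} and R_{2,4},
# using the AV formulas of Section 5.3.
import numpy as np

def multicomponent_ci(x, y, s, k):
    n, m = len(x), len(y)
    s1_sq = n / np.sum(1.0 / np.asarray(x, dtype=float)**2)   # sigma1_hat^2 (strength)
    s2_sq = m / np.sum(1.0 / np.asarray(y, dtype=float)**2)   # sigma2_hat^2 (stress)
    lam = s2_sq / s1_sq
    if (s, k) == (1, 3):
        r = 3.0 / (3.0 + lam)
        av = 9.0 * lam**2 / (3.0 + lam)**4 * (1.0 / n + 1.0 / m)
    elif (s, k) == (2, 4):
        r = 12.0 / ((3.0 + lam) * (4.0 + lam))
        av = (144.0 * lam**2 * (2.0 * lam + 7.0)**2
              / ((3.0 + lam) * (4.0 + lam))**4 * (1.0 / n + 1.0 / m))
    else:
        raise ValueError("only (s, k) = (1, 3) or (2, 4) are handled here")
    half = 1.96 * np.sqrt(av)
    return r, (r - half, r + half)
```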

5.4 Simulation study and data analysis

5.4.1 Simulation study

In this section we present some results based on Monte Carlo simulation in order to compare the performance of the estimates of R and of R_{s,k} obtained by the ML and MOM estimators for several sample sizes. We consider the sample sizes (n, m) = (5, 5), (10, 10), (15, 15), (20, 20) and (25, 25), and in both estimations we take (σ₁, σ₂) = (1, 3), (1, 2.5), (1, 2), (1, 1.5), (1, 1), (1.5, 1), (2, 1), (2.5, 1) and (3, 1). All the results are based on 3,000 replications. From each pair of samples we compute the estimates of (σ₁, σ₂) by ML and by the method of moments, and obtain the corresponding estimates of R by substitution into (5.2.6) and (5.2.7) respectively; the estimates of R_{s,k} are obtained by substitution into (5.3.1) for (s, k) = (1, 3) and (2, 4). We report the average biases of the estimates of R in Table 5.4.1 and of R_{s,k} in Table 5.4.5, and the mean squared errors (MSEs) of the estimates of R in Table 5.4.2 and of R_{s,k} in Table 5.4.6, over the replications. We also compute 95% confidence intervals for R based on the asymptotic distribution, the exact distribution and the Boot-p method; the average confidence lengths are reported in Table 5.4.3 and the coverage probabilities in Table 5.4.4. The average confidence length and coverage probability of the simulated 95% confidence intervals of R_{s,k} are given in Table 5.4.7.

Some points are quite clear from this simulation. Even for small sample sizes, the performance of the estimate of R using the MLEs is quite satisfactory in terms of bias and MSE as compared with the MOM estimates. When σ₁ < σ₂ the bias is positive, and when σ₁ > σ₂ the bias is negative; the absolute bias decreases as the sample size increases for both methods, which is the natural trend. It is observed that when m = n and m, n increase, the MSEs decrease as expected, which reflects the consistency of the MLE of R. As expected, the MSE is symmetric with respect to σ₁ and σ₂: for a given (n, m), the MSE for (σ₁, σ₂) = (3, 1) and for (σ₁, σ₂) = (1, 3) is the same in the case of the MLE, whereas the MOM estimates are only approximately symmetric. The length of the confidence interval is also symmetric with respect to (σ₁, σ₂) and decreases as the sample size increases. For (n, m) = (5, 5) the bootstrap intervals (L₃, U₃) have the largest average length for all combinations of (σ₁, σ₂); for the other choices of (n, m), the exact intervals (L₂, U₂) are generally the shortest, with the ordering between the asymptotic and exact intervals depending on whether σ₁ ≤ σ₂ or σ₁ > σ₂. For certain combinations of (σ₁, σ₂), the average lengths of the exact and Boot-p confidence intervals are almost the same. Comparing the average coverage probabilities, it is observed that for most sample sizes the coverage probabilities of the confidence intervals based on the asymptotic results come closest to the nominal level as compared with the other two types of interval, although the coverage remains slightly below 0.95.

The performance of the bootstrap confidence intervals is quite good for small samples. For all combinations of (n, m) and (σ₁, σ₂), the coverage probability of the exact confidence intervals is moderate and lies away from the nominal level 0.95. The overall performance is best for the asymptotic confidence intervals. The simulation results also indicate that the MLE performs better than the MOM estimator in terms of average bias and average MSE for the different choices of the parameters. The exact confidence intervals are preferable when short average confidence lengths are desired, whereas the asymptotic confidence intervals are advisable with respect to coverage probability.

The following points are observed from the simulation study for R_{s,k}. The true value of the multicomponent stress-strength reliability is large when the strength scale dominates the stress scale and small otherwise; equivalently, the true reliability increases as λ decreases and vice versa. Both the bias and the MSE decrease as the sample size increases for both methods of estimation. With respect to bias, the moment estimator is close to the exact MLE in most of the parameter and sample-size combinations; the bias is negative when the true reliability is large and positive otherwise, for both choices of (s, k). With respect to MSE, the MLE is preferred over the method of moments estimator. The length of the confidence interval also decreases as the sample size increases, and the coverage probability is close to the nominal value in all cases for the MLE; the overall performance of the MLE-based confidence interval is quite good. The simulation results also show that there is no considerable difference in average bias and average MSE across the different choices of the parameters, whereas there is a considerable difference between the MLE and the MOM estimator. The same behaviour is observed for the average lengths and coverage probabilities of the confidence intervals based on the MLE.
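A minimal sketch of one cell of the kind of simulation described above (average bias of the ML and moment estimates of R for a single combination of n, m, σ₁, σ₂); the parameter values and replication count are illustrative only.

```python
# One cell of the simulation study: average bias of the ML and MOM estimates
# of R over repeated inverse Rayleigh samples.
import numpy as np

rng = np.random.default_rng(7)
sigma1, sigma2, n, m, reps = 1.5, 1.0, 20, 20, 3000
R_true = sigma1**2 / (sigma1**2 + sigma2**2)

def rird(sigma, size):
    return sigma / np.sqrt(-np.log(rng.uniform(size=size)))   # IRD variates by inversion

bias_ml = bias_mom = 0.0
for _ in range(reps):
    x, y = rird(sigma1, n), rird(sigma2, m)
    s1_ml, s2_ml = n / np.sum(1 / x**2), m / np.sum(1 / y**2)                   # MLEs of sigma^2
    s1_mo, s2_mo = (x.mean() / np.sqrt(np.pi))**2, (y.mean() / np.sqrt(np.pi))**2  # MOM
    bias_ml += s1_ml / (s1_ml + s2_ml) - R_true
    bias_mom += s1_mo / (s1_mo + s2_mo) - R_true
print(bias_ml / reps, bias_mom / reps)
```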

5.4.2 Data analysis

We present a data analysis for two data sets, reported by Lawless (1982) and Proschan (1963). The first data set, taken from Lawless (1982), gives the number of revolutions before failure for each of 23 ball bearings in a life test:

Data Set I: 17.88, 28.92, 33.00, 41.52, 42.12, 45.60, 48.80, 51.84, 51.96, 54.12, 55.56, 67.80, 68.44, 68.64, 68.88, 84.12, 93.12, 98.64, 105.12, 105.84, 127.92, 128.04, 173.40.

Gupta and Kundu (2001) fitted the gamma, Weibull and generalized exponential distributions to this data set. The second data set, taken from Proschan (1963), gives the times between successive failures of the air conditioning (AC) equipment of a Boeing 720 airplane:

Data Set II: 12, 21, 26, 27, 29, 29, 48, 57, 59, 70, 74, 153, 326, 386, 502.

We fit the inverse Rayleigh distribution to the two data sets separately and use the Kolmogorov-Smirnov (K-S) test to check the fit of the inverse Rayleigh model to each data set. For both data sets the K-S distances are small and the corresponding p-values are large, and the chi-square goodness-of-fit values lead to the same conclusion; it is therefore clear that the inverse Rayleigh model fits both data sets quite well. We plot the empirical and fitted survival functions in Figures 5.4.1 and 5.4.2, which show that the empirical and fitted models are very close for each data set.
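A sketch of this fit check in code. The short data array below is a placeholder rather than either of the actual data sets, and the K-S p-value returned with parameters estimated from the same data should be treated as approximate.

```python
# Fitting the inverse Rayleigh model by ML and computing the K-S statistic,
# as done for Data Sets I and II.
import numpy as np
from scipy import stats

data = np.array([30.0, 45.0, 60.0, 75.0, 90.0])   # placeholder; substitute Data Set I or II
sigma_sq = len(data) / np.sum(1.0 / data**2)       # MLE of sigma^2 for the IRD
ird_cdf = lambda x: np.exp(-sigma_sq / np.asarray(x)**2)
ks_stat, p_value = stats.kstest(data, ird_cdf)
print(ks_stat, p_value)
```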

Based on the estimates of σ₁ and σ₂ obtained from the two data sets, we compute the MLE of R together with the corresponding 95% asymptotic and Boot-p confidence intervals, and the MLEs of R_{s,k} together with the corresponding 95% confidence intervals for (s, k) = (1, 3) and (2, 4).

Figure 5.4.1: The empirical and fitted survival functions for Data Set I.

Figure 5.4.2: The empirical and fitted survival functions for Data Set II.

5.5 Conclusions

We compare two methods of estimating R = P(Y < X) when Y and X follow independent inverse Rayleigh distributions with different scale parameters. We provide MLE and MOM procedures to estimate the unknown scale parameters and use them to estimate R. We also obtain the asymptotic distribution of the estimate of R, which is used to construct asymptotic confidence intervals. The simulation results indicate that the MLE performs better than the MOM estimator in terms of average bias and average MSE for the different choices of the parameters. The exact confidence intervals are preferable when short average confidence lengths are desired, whereas the asymptotic confidence intervals are advisable with respect to coverage probability. We also propose bootstrap confidence intervals, whose performance is quite satisfactory.

To estimate the multi-component stress-strength reliability, we provide ML and MOM estimators of σ₁ and σ₂ when the stress and strength variates both follow inverse Rayleigh distributions, and we obtain an asymptotic confidence interval for the multicomponent stress-strength reliability. The simulation results indicate that, for estimating the multicomponent stress-strength reliability under the inverse Rayleigh distribution, the ML method of estimation is preferable to the method of moments. The length of the confidence interval decreases as the sample size increases, and the coverage probability is close to the nominal value in all cases for the MLE.

Table 5.4.1: Average bias of the simulated estimates of R.

[Rows: (n, m) = (5, 5), (10, 10), (15, 15), (20, 20), (25, 25). Columns: (σ₁, σ₂) = (1, 3), (1, 2.5), (1, 2), (1, 1.5), (1, 1), (1.5, 1), (2, 1), (2.5, 1), (3, 1), with true values R = 0.10, 0.14, 0.20, 0.31, 0.50, 0.69, 0.80, 0.86, 0.90. In each cell the first row gives the average bias of the estimate of R using the MOM and the second row the average bias using the MLE. Simulated values omitted.]

Table 5.4.2: Average MSE of the simulated estimates of R.

[Rows: (n, m) = (5, 5), (10, 10), (15, 15), (20, 20), (25, 25). Columns: (σ₁, σ₂) = (1, 3), (1, 2.5), (1, 2), (1, 1.5), (1, 1), (1.5, 1), (2, 1), (2.5, 1), (3, 1), with true values R = 0.10, 0.14, 0.20, 0.31, 0.50, 0.69, 0.80, 0.86, 0.90. In each cell the first row gives the average MSE of the estimate of R using the MOM and the second row the average MSE using the MLE. Simulated values omitted.]

Table 5.4.3: Average confidence length of the simulated 95% confidence intervals of R.

[Rows: (n, m) = (5, 5), (10, 10), (15, 15), (20, 20), (25, 25), each with three interval types: A, asymptotic confidence interval; B, exact confidence interval; C, bootstrap confidence interval. Columns: the nine (σ₁, σ₂) combinations with true values of R as in Table 5.4.1. Simulated values omitted.]

Table 5.4.4: Average coverage probability of the simulated 95% confidence intervals of R.

[Rows: (n, m) = (5, 5), (10, 10), (15, 15), (20, 20), (25, 25), each with three interval types: A, asymptotic confidence interval; B, exact confidence interval; C, bootstrap confidence interval. Columns: the nine (σ₁, σ₂) combinations with true values of R as in Table 5.4.1. Simulated values omitted.]

Table 5.4.5: Average bias of the simulated estimates of R_{s,k}.

[Two panels, for (s, k) = (1, 3) and (2, 4). Rows: (n, m) = (5, 5), (10, 10), (15, 15), (20, 20), (25, 25). Columns: the nine (σ₁, σ₂) combinations of Table 5.4.1, with the corresponding true values of R_{s,k} shown in the header. In each cell the first row gives the average bias using the MLE and the second row the average bias using the MOM. Simulated values omitted.]

Table 5.4.6: Average MSE of the simulated estimates of R_{s,k}.

[Two panels, for (s, k) = (1, 3) and (2, 4). Rows: (n, m) = (5, 5), (10, 10), (15, 15), (20, 20), (25, 25). Columns: the nine (σ₁, σ₂) combinations of Table 5.4.1, with the corresponding true values of R_{s,k} shown in the header. In each cell the first row gives the average MSE using the MLE and the second row the average MSE using the MOM. Simulated values omitted.]

Table 5.4.7: Average confidence length and coverage probability of the simulated 95% confidence intervals of R_{s,k} using the MLE.

[Two panels, for (s, k) = (1, 3) and (2, 4). Rows: (n, m) = (5, 5), (10, 10), (15, 15), (20, 20), (25, 25). Columns: the nine (σ₁, σ₂) combinations of Table 5.4.1. In each cell, A gives the average confidence length and B the coverage probability. Simulated values omitted.]


More information

Statistics and Probability

Statistics and Probability Statistics and Probability Continuous RVs (Normal); Confidence Intervals Outline Continuous random variables Normal distribution CLT Point estimation Confidence intervals http://www.isrec.isb-sib.ch/~darlene/geneve/

More information

8.1 Estimation of the Mean and Proportion

8.1 Estimation of the Mean and Proportion 8.1 Estimation of the Mean and Proportion Statistical inference enables us to make judgments about a population on the basis of sample information. The mean, standard deviation, and proportions of a population

More information

ECE 295: Lecture 03 Estimation and Confidence Interval

ECE 295: Lecture 03 Estimation and Confidence Interval ECE 295: Lecture 03 Estimation and Confidence Interval Spring 2018 Prof Stanley Chan School of Electrical and Computer Engineering Purdue University 1 / 23 Theme of this Lecture What is Estimation? You

More information

Subject CS1 Actuarial Statistics 1 Core Principles. Syllabus. for the 2019 exams. 1 June 2018

Subject CS1 Actuarial Statistics 1 Core Principles. Syllabus. for the 2019 exams. 1 June 2018 ` Subject CS1 Actuarial Statistics 1 Core Principles Syllabus for the 2019 exams 1 June 2018 Copyright in this Core Reading is the property of the Institute and Faculty of Actuaries who are the sole distributors.

More information

Chapter 7: Estimation Sections

Chapter 7: Estimation Sections 1 / 40 Chapter 7: Estimation Sections 7.1 Statistical Inference Bayesian Methods: Chapter 7 7.2 Prior and Posterior Distributions 7.3 Conjugate Prior Distributions 7.4 Bayes Estimators Frequentist Methods:

More information

Consistent estimators for multilevel generalised linear models using an iterated bootstrap

Consistent estimators for multilevel generalised linear models using an iterated bootstrap Multilevel Models Project Working Paper December, 98 Consistent estimators for multilevel generalised linear models using an iterated bootstrap by Harvey Goldstein hgoldstn@ioe.ac.uk Introduction Several

More information

Strategies for Improving the Efficiency of Monte-Carlo Methods

Strategies for Improving the Efficiency of Monte-Carlo Methods Strategies for Improving the Efficiency of Monte-Carlo Methods Paul J. Atzberger General comments or corrections should be sent to: paulatz@cims.nyu.edu Introduction The Monte-Carlo method is a useful

More information

Budget Setting Strategies for the Company s Divisions

Budget Setting Strategies for the Company s Divisions Budget Setting Strategies for the Company s Divisions Menachem Berg Ruud Brekelmans Anja De Waegenaere November 14, 1997 Abstract The paper deals with the issue of budget setting to the divisions of a

More information

Cambridge University Press Risk Modelling in General Insurance: From Principles to Practice Roger J. Gray and Susan M.

Cambridge University Press Risk Modelling in General Insurance: From Principles to Practice Roger J. Gray and Susan M. adjustment coefficient, 272 and Cramér Lundberg approximation, 302 existence, 279 and Lundberg s inequality, 272 numerical methods for, 303 properties, 272 and reinsurance (case study), 348 statistical

More information

2 Control variates. λe λti λe e λt i where R(t) = t Y 1 Y N(t) is the time from the last event to t. L t = e λr(t) e e λt(t) Exercises

2 Control variates. λe λti λe e λt i where R(t) = t Y 1 Y N(t) is the time from the last event to t. L t = e λr(t) e e λt(t) Exercises 96 ChapterVI. Variance Reduction Methods stochastic volatility ISExSoren5.9 Example.5 (compound poisson processes) Let X(t) = Y + + Y N(t) where {N(t)},Y, Y,... are independent, {N(t)} is Poisson(λ) with

More information

1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by. Cov(X, Y ) = E(X E(X))(Y E(Y ))

1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by. Cov(X, Y ) = E(X E(X))(Y E(Y )) Correlation & Estimation - Class 7 January 28, 2014 Debdeep Pati Association between two variables 1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by Cov(X, Y ) = E(X E(X))(Y

More information

Basic notions of probability theory: continuous probability distributions. Piero Baraldi

Basic notions of probability theory: continuous probability distributions. Piero Baraldi Basic notions of probability theory: continuous probability distributions Piero Baraldi Probability distributions for reliability, safety and risk analysis: discrete probability distributions continuous

More information

Module 4: Point Estimation Statistics (OA3102)

Module 4: Point Estimation Statistics (OA3102) Module 4: Point Estimation Statistics (OA3102) Professor Ron Fricker Naval Postgraduate School Monterey, California Reading assignment: WM&S chapter 8.1-8.4 Revision: 1-12 1 Goals for this Module Define

More information

Jackknife Empirical Likelihood Inferences for the Skewness and Kurtosis

Jackknife Empirical Likelihood Inferences for the Skewness and Kurtosis Georgia State University ScholarWorks @ Georgia State University Mathematics Theses Department of Mathematics and Statistics 5-10-2014 Jackknife Empirical Likelihood Inferences for the Skewness and Kurtosis

More information