STRESS-STRENGTH RELIABILITY ESTIMATION


CHAPTER 5

STRESS-STRENGTH RELIABILITY ESTIMATION

5.1 Introduction

Every physical component possesses an inherent strength, and many appliances survive in service because of that strength. Such an appliance receives a certain level of stress and sustains it; if a higher level of stress is applied, its strength can no longer sustain the load and the appliance breaks down. If Y represents the stress applied to a certain appliance and X represents its strength to sustain that stress, then, treating X and Y as random variables, the stress-strength reliability is defined as R = P(Y < X).

The term stress denotes the failure-inducing variable: the stress (load) that tends to produce failure of a component, a device or a material. The load may be a mechanical load, the environment, temperature, electric current, and so on. The term strength denotes the failure-resisting variable: the ability of a component, a device or a material to accomplish its required function (mission) satisfactorily, without failure, when subjected to external loading and environment.

Stress-strength model: The variation in stress and strength gives each variable a statistical distribution, and natural scatter occurs in these variables when the two distributions interfere. When the stress becomes higher than the strength, failure results. In other words, when the probability density functions of both stress and strength are known, the component reliability may be determined analytically. Therefore R = P(Y < X) is the reliability parameter R, and it is a characteristic of the joint distribution of X and Y.

Let X and Y be two random variables such that X represents strength and Y represents stress, and suppose X, Y have the joint pdf f(x, y). The reliability of the component is then

R = P(Y < X) = \int_{-\infty}^{+\infty} \int_{-\infty}^{x} f(x, y)\, dy\, dx,    (5.1.1)

where P(Y < X) represents the probability that the strength exceeds the stress and f(x, y) is the joint pdf of X and Y. If the random variables are statistically independent, then f(x, y) = f(x) g(y), so that

R = \int_{-\infty}^{+\infty} \left[ \int_{-\infty}^{x} g(y)\, dy \right] f(x)\, dx.    (5.1.2)

Writing G_Y(x) = \int_{-\infty}^{x} g(y)\, dy, this becomes

R = \int_{-\infty}^{+\infty} G_Y(x)\, f(x)\, dx,    (5.1.3)

where f(x) and g(y) are the pdfs of X and Y respectively.
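As a minimal numerical illustration of (5.1.3), the sketch below evaluates R = P(Y < X) for two assumed independent stress and strength distributions, once by quadrature and once by crude Monte Carlo. The normal distributions and their parameters are illustrative assumptions only and do not come from this chapter.

```python
# A minimal sketch of (5.1.3): R = integral of G_Y(x) * f_X(x) dx for
# independent stress Y and strength X.  The normal parameters are assumptions.
import numpy as np
from scipy import integrate, stats

strength = stats.norm(loc=10.0, scale=1.5)   # X: strength distribution (assumed)
stress = stats.norm(loc=7.0, scale=2.0)      # Y: stress distribution (assumed)

# R = P(Y < X) by numerical integration of G_Y(x) f_X(x).
R_quad, _ = integrate.quad(lambda x: stress.cdf(x) * strength.pdf(x),
                           -np.inf, np.inf)

# Cross-check with a crude Monte Carlo estimate of P(Y < X).
rng = np.random.default_rng(0)
x = strength.rvs(size=200_000, random_state=rng)
y = stress.rvs(size=200_000, random_state=rng)
R_mc = np.mean(y < x)

print(f"R by quadrature : {R_quad:.4f}")
print(f"R by Monte Carlo: {R_mc:.4f}")
```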

The concept of stress and strength in engineering devices has been one of the deciding factors in the failure of such devices. It has been customary to define safety factors for longer lives of systems in terms of the inherent strength they possess and the external stress they experience. If X_0 is the fixed strength and Y_0 is the fixed stress that a system is experiencing, then the ratio X_0 / Y_0 is called the safety factor and the difference X_0 - Y_0 is called the safety margin. Thus, in the deterministic stress-strength situation, the system survives only if the safety factor is greater than 1 or, equivalently, the safety margin is positive. In the traditional approach to the design of a system, the safety factor or margin is made large enough to compensate for uncertainties in the values of the stress and strength variates. These uncertainties make it natural to view the stress and strength of a system as random variables. In engineering practice the calculations are often deterministic; however, probabilistic analysis demands the use of random variables for stress and strength in evaluating the survival probabilities of such systems. This analysis is particularly useful in situations in which no fixed bound can be placed on the stress. For example, with earthquakes, floods and other natural phenomena, unusually large stresses may result in failures of systems with unusually small strengths. Similarly, when economics rather than safety is the primary criterion, the comparison of survival performance can be better studied by knowing the increase in failure probability as the stress and strength approach one another. The foregoing remarks indicate that it is more reasonable to treat the stress-strength variates as random variables than as purely deterministic quantities.

Let the random variables Y, X_1, X_2, ..., X_k be independent, let G(y) be the continuous cdf of Y, and let F(x) be the common continuous cdf of X_1, X_2, ..., X_k. The reliability in a multicomponent stress-strength model, developed by Bhattacharyya and Johnson (1974), is given by

R_{s,k} = P[\text{at least } s \text{ of } (X_1, X_2, \ldots, X_k) \text{ exceed } Y]
        = \sum_{i=s}^{k} \binom{k}{i} \int_{-\infty}^{\infty} \left[ 1 - F(y) \right]^{i} \left[ F(y) \right]^{k-i} dG(y),    (5.1.4)

where X_1, X_2, ..., X_k are iid with common cdf F(x) and the system is subjected to the common random stress Y.
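The sketch below evaluates (5.1.4) by quadrature for arbitrary strength cdf F and stress pdf g. The function name and the normal strength/stress distributions used in the example are illustrative assumptions, not part of the chapter.

```python
# A sketch of the multicomponent reliability (5.1.4) of Bhattacharyya and
# Johnson (1974): R_{s,k} = sum_{i=s}^{k} C(k,i) * int [1-F(y)]^i [F(y)]^(k-i) dG(y).
import numpy as np
from math import comb
from scipy import integrate, stats

def multicomponent_reliability(s, k, F_cdf, g_pdf, lower=-np.inf, upper=np.inf):
    """P(at least s of k iid strengths with cdf F exceed the stress Y with pdf g)."""
    def integrand(y):
        Fy = F_cdf(y)
        return sum(comb(k, i) * (1.0 - Fy) ** i * Fy ** (k - i)
                   for i in range(s, k + 1)) * g_pdf(y)
    value, _ = integrate.quad(integrand, lower, upper)
    return value

# Illustrative check with assumed normal strength/stress distributions.
strength = stats.norm(10.0, 1.5)
stress = stats.norm(7.0, 2.0)
print(multicomponent_reliability(1, 3, strength.cdf, stress.pdf))
print(multicomponent_reliability(2, 4, strength.cdf, stress.pdf))
```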

The right-hand side of equation (5.1.3) is the survival probability of a system with a single component having a random strength X and experiencing a random stress Y. Now let a system consist of k components whose strengths are given by independent, identically distributed random variables with cumulative distribution function F(.), each experiencing a random stress governed by a random variable Y with cumulative distribution function G(.). The probability given by (5.1.4) is then called the reliability in a multicomponent stress-strength model (Bhattacharyya and Johnson, 1974).

A salient feature of the probability in (5.1.1) is that the probability density functions of the strength and stress variates must have non-empty overlapping ranges. In some cases the failure probability may depend strongly on the lower tail of the strength distribution. When the stress or strength is not determined by either the sum or the product of many small components, it may be the extremes of these small components that decide the stress or the strength. Extreme ordered variates have proved to be very useful in the analysis of reliability problems of this nature. Suppose that a number of random stresses, say (Y_1, Y_2, Y_3, ..., Y_m), with cumulative distribution function G(.), act on a system whose strengths are given by n random variables (X_1, X_2, X_3, ..., X_n) with cumulative distribution function F(.). If V is the maximum of Y_1, Y_2, Y_3, ..., Y_m and U is the minimum of X_1, X_2, X_3, ..., X_n, the survival of the system depends on whether or not U exceeds V. That is, the survival probability of such a system is

explained with the help of the distributions of extreme order statistics in random samples of finite sizes.

The above introductory remarks indicate that, whether it is through extreme values or through independent variates like Y and X in a single-component or multicomponent system with stress and strength factors, the reliability in any situation ultimately turns out to be a parametric function of the parameters of the probability distributions of the concerned variates. If some or all of the parameters are not known, evaluation of the survival probabilities of a stress-strength model leads to the estimation of a parametric function. As mentioned in the introduction, several authors have taken up the problem of estimating the survival probability in stress-strength relationships assuming various lifetime distributions for the stress-strength random variates. Some recent works in this direction are Kantam et al. (2000), Kantam and Srinivasa Rao (2007), Srinivasa Rao et al. (00b) and the references therein.

In this chapter, we present the estimation of R using maximum likelihood (ML) estimates and moment (MOM) estimates of the parameters in Section 5.2. The asymptotic distribution and confidence intervals for R are given in Section 5.3. Simulation studies are carried out in Section 5.4 to investigate the bias and mean squared errors (MSE) of the MLE and MOM of R, as well as the lengths of the confidence intervals for R. Conclusions and comments are provided in Section 5.5.

5.2 Estimation of stress-strength reliability

5.2.1 Estimation of stress-strength reliability using ML and moment estimates

The purpose of this section is to study inference about R = P(Y < X), where X ~ IRD(σ₁) and Y ~ IRD(σ₂) are independently distributed. The estimation of reliability is very common in the statistical literature. The problem arises in the context of the reliability of a component of strength X subject to a stress Y; thus R = P(Y < X) is a measure of the reliability of the system, and the system fails if and only if the applied stress is greater than its strength. We obtain the maximum likelihood estimator (MLE) and moment estimator (MOM) of R and derive the asymptotic distribution of the MLE, which is used to construct an asymptotic confidence interval. A bootstrap confidence interval for R is also proposed. Assuming that the two scale parameters are unknown, we obtain the MLE and MOM of R.

Suppose X and Y are independent random variables with X ~ IRD(σ₁) and Y ~ IRD(σ₂), so that X has pdf (2σ₁²/x³) e^{-(σ₁/x)²} and Y has pdf (2σ₂²/y³) e^{-(σ₂/y)²}, for x, y > 0. Therefore,

R = P(Y < X) = \int_{0}^{\infty} \int_{0}^{x} \frac{2\sigma_1^2}{x^3} e^{-(\sigma_1/x)^2}\, \frac{2\sigma_2^2}{y^3} e^{-(\sigma_2/y)^2}\, dy\, dx
  = \int_{0}^{\infty} \frac{2\sigma_1^2}{x^3} e^{-(\sigma_1/x)^2} e^{-(\sigma_2/x)^2}\, dx
  = \frac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}.    (5.2.1)

Note that, with λ = σ₁²/σ₂²,

R = \frac{\lambda}{1 + \lambda}.    (5.2.2)

Therefore ∂R/∂λ = 1/(1 + λ)² > 0, so R is an increasing function of λ.
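The sketch below checks the closed form (5.2.1) against a Monte Carlo estimate, using inverse-cdf sampling for the inverse Rayleigh distribution with cdf F(x) = exp(-(σ/x)²). The particular parameter values and the helper name rvs_ird are illustrative assumptions.

```python
# A minimal numerical check of (5.2.1): for X ~ IRD(sigma1), Y ~ IRD(sigma2),
# R = P(Y < X) = sigma1^2 / (sigma1^2 + sigma2^2).
import numpy as np

def rvs_ird(sigma, size, rng):
    # Inverse-cdf sampling: if U ~ Uniform(0,1), then X = sigma / sqrt(-log U)
    # has cdf F(x) = exp(-(sigma/x)^2).
    u = rng.uniform(size=size)
    return sigma / np.sqrt(-np.log(u))

rng = np.random.default_rng(1)
sigma1, sigma2 = 2.0, 1.0                       # strength and stress scales (assumed)
x = rvs_ird(sigma1, 500_000, rng)
y = rvs_ird(sigma2, 500_000, rng)

R_closed = sigma1**2 / (sigma1**2 + sigma2**2)  # 0.8 for these values
R_mc = np.mean(y < x)
print(R_closed, R_mc)
```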

Now, to compute R, we need to estimate the parameters σ₁ and σ₂; these are estimated using the ML and MOM methods.

5.2.2 Method of Maximum Likelihood Estimation (MLE)

Suppose X₁, X₂, X₃, ..., X_n is a random sample from the inverse Rayleigh distribution (IRD) with scale parameter σ₁, and Y₁, Y₂, Y₃, ..., Y_m is a random sample from the IRD with scale parameter σ₂. The log-likelihood function of the observed samples is

\log L(\sigma_1, \sigma_2) = (m + n)\log 2 + 2n \log \sigma_1 + 2m \log \sigma_2 - \sigma_1^2 \sum_{i=1}^{n} \frac{1}{x_i^2} - \sigma_2^2 \sum_{j=1}^{m} \frac{1}{y_j^2} - 3\sum_{i=1}^{n} \log x_i - 3\sum_{j=1}^{m} \log y_j.    (5.2.3)

The MLEs of σ₁ and σ₂, say σ̂₁ and σ̂₂ respectively, are obtained as

\hat{\sigma}_1^2 = \frac{n}{\sum_{i=1}^{n} 1/x_i^2},    (5.2.4)

\hat{\sigma}_2^2 = \frac{m}{\sum_{j=1}^{m} 1/y_j^2}.    (5.2.5)

The MLE of the stress-strength reliability R becomes

\hat{R} = \frac{\hat{\sigma}_1^2}{\hat{\sigma}_1^2 + \hat{\sigma}_2^2}.    (5.2.6)
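A minimal sketch of (5.2.4)-(5.2.6) is given below; the function name is illustrative, and the inputs are assumed to be samples on the strength and stress variates.

```python
# A sketch of the ML estimates (5.2.4)-(5.2.6):
# sigma1_hat^2 = n / sum(1/x_i^2), sigma2_hat^2 = m / sum(1/y_j^2),
# R_hat = sigma1_hat^2 / (sigma1_hat^2 + sigma2_hat^2).
import numpy as np

def ird_mle_reliability(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sigma1_sq_hat = len(x) / np.sum(1.0 / x**2)   # MLE of sigma1^2
    sigma2_sq_hat = len(y) / np.sum(1.0 / y**2)   # MLE of sigma2^2
    r_hat = sigma1_sq_hat / (sigma1_sq_hat + sigma2_sq_hat)
    return sigma1_sq_hat, sigma2_sq_hat, r_hat
```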

5.2.3 Method of Moments (MOM) Estimation

If x̄ and ȳ are the sample means of the samples on the strength and stress variates respectively, then, since E(X) = σ₁√π and E(Y) = σ₂√π, the moment estimators of σ₁ and σ₂ are σ̃₁ = x̄/√π and σ̃₂ = ȳ/√π respectively. Adopting the invariance property for moment estimation, the estimate of R by the MOM method is

\tilde{R} = \frac{\tilde{\sigma}_1^2}{\tilde{\sigma}_1^2 + \tilde{\sigma}_2^2}.    (5.2.7)

5.2.4 Asymptotic distribution and confidence intervals

In this section, we first obtain the asymptotic distribution of θ̂ = (σ̂₁, σ̂₂) and then derive the asymptotic distribution of R̂. Based on the asymptotic distribution of R̂, we obtain the asymptotic confidence interval for R. Let us denote the Fisher information matrix of θ = (σ₁, σ₂) by I(θ) = (I_{ij}(θ); i, j = 1, 2). Therefore,

I(\theta) = \begin{pmatrix} I_{11} & I_{12} \\ I_{21} & I_{22} \end{pmatrix},    (5.2.8)

where

I_{11} = -E\left( \frac{\partial^2 \log L}{\partial \sigma_1^2} \right) = \frac{4n}{\sigma_1^2}, \qquad I_{22} = -E\left( \frac{\partial^2 \log L}{\partial \sigma_2^2} \right) = \frac{4m}{\sigma_2^2}.

I_{12} = I_{21} = -E\left( \frac{\partial^2 \log L}{\partial \sigma_1 \partial \sigma_2} \right) = 0.

As n → ∞, m → ∞,

\left( \sqrt{n}(\hat{\sigma}_1 - \sigma_1),\; \sqrt{m}(\hat{\sigma}_2 - \sigma_2) \right) \xrightarrow{d} N_2\!\left( \mathbf{0},\, A(\sigma_1, \sigma_2) \right),

where

A(\sigma_1, \sigma_2) = \begin{pmatrix} a_{11} & 0 \\ 0 & a_{22} \end{pmatrix}, \qquad a_{11} = \frac{n}{I_{11}} = \frac{\sigma_1^2}{4}, \qquad a_{22} = \frac{m}{I_{22}} = \frac{\sigma_2^2}{4}.

To obtain the asymptotic confidence interval for R, we proceed as follows (Rao, 1973):

d_1(\sigma_1, \sigma_2) = \frac{\partial R}{\partial \sigma_1} = \frac{2\sigma_1 \sigma_2^2}{(\sigma_1^2 + \sigma_2^2)^2}, \qquad d_2(\sigma_1, \sigma_2) = \frac{\partial R}{\partial \sigma_2} = -\frac{2\sigma_1^2 \sigma_2}{(\sigma_1^2 + \sigma_2^2)^2}.

This gives

\mathrm{Var}(\hat{R}) \approx \mathrm{var}(\hat{\sigma}_1)\, d_1^2(\sigma_1, \sigma_2) + \mathrm{var}(\hat{\sigma}_2)\, d_2^2(\sigma_1, \sigma_2)
 = \frac{\sigma_1^2}{4n}\, d_1^2(\sigma_1, \sigma_2) + \frac{\sigma_2^2}{4m}\, d_2^2(\sigma_1, \sigma_2)
 = \frac{\sigma_1^4 \sigma_2^4}{n(\sigma_1^2 + \sigma_2^2)^4} + \frac{\sigma_1^4 \sigma_2^4}{m(\sigma_1^2 + \sigma_2^2)^4}

 = \left[ \frac{1}{n} + \frac{1}{m} \right] R^2 (1 - R)^2.    (5.2.9)

Thus we have the following result: as n → ∞, m → ∞,

\frac{\hat{R} - R}{R(1 - R)\sqrt{\frac{1}{n} + \frac{1}{m}}} \xrightarrow{d} N(0, 1).

Hence the asymptotic 100(1 − α)% confidence interval for R is (L_1, U_1), where

L_1 = \hat{R} - Z_{(\alpha/2)}\, \hat{R}(1 - \hat{R}) \sqrt{\frac{1}{n} + \frac{1}{m}},    (5.2.10)

U_1 = \hat{R} + Z_{(\alpha/2)}\, \hat{R}(1 - \hat{R}) \sqrt{\frac{1}{n} + \frac{1}{m}},    (5.2.11)

where Z_{(α/2)} is the upper (α/2)th percentile of the standard normal distribution and R̂ is given by equation (5.2.6).
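The following is a minimal sketch of the asymptotic interval (5.2.10)-(5.2.11); the function name is illustrative and r_hat is assumed to be the MLE from (5.2.6).

```python
# A sketch of the asymptotic interval (5.2.10)-(5.2.11):
# R_hat -/+ z_{alpha/2} * R_hat * (1 - R_hat) * sqrt(1/n + 1/m).
import numpy as np
from scipy import stats

def asymptotic_ci(r_hat, n, m, alpha=0.05):
    z = stats.norm.ppf(1.0 - alpha / 2.0)           # upper alpha/2 normal point
    half_width = z * r_hat * (1.0 - r_hat) * np.sqrt(1.0 / n + 1.0 / m)
    return r_hat - half_width, r_hat + half_width
```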

5.2.5 Exact confidence interval

Let X₁, X₂, X₃, ..., X_n and Y₁, Y₂, Y₃, ..., Y_m be two independent random samples of sizes n and m respectively, drawn from inverse Rayleigh distributions with scale parameters σ₁ and σ₂. Then \sum_{i=1}^{n} 1/x_i^2 and \sum_{j=1}^{m} 1/y_j^2 are independent gamma random variables with parameters (n, σ₁²) and (m, σ₂²) respectively, so that 2\sigma_1^2 \sum_{i=1}^{n} 1/x_i^2 and 2\sigma_2^2 \sum_{j=1}^{m} 1/y_j^2 are two independent chi-square random variables with 2n and 2m degrees of freedom respectively. Thus, using (5.2.4) and (5.2.5), R̂ in equation (5.2.6) can be rewritten as

\frac{1}{\hat{R}} = 1 + \frac{\sigma_2^2}{\sigma_1^2}\, F,    (5.2.12)

where

F = \frac{m\, \sigma_1^2 \sum_{i=1}^{n} 1/x_i^2}{n\, \sigma_2^2 \sum_{j=1}^{m} 1/y_j^2}

is an F-distributed random variable with (2n, 2m) degrees of freedom. From equations (5.2.1) and (5.2.12), we see that F can be written in terms of R and R̂ as

F = \frac{R\,(1 - \hat{R})}{\hat{R}\,(1 - R)}.

Using F as a pivotal quantity, we obtain a 100(1 − α)% confidence interval for R as (L_2, U_2), where

L_2 = \left[ 1 + \frac{1 - \hat{R}}{\hat{R}}\, F_{(1-\alpha/2)}(2m, 2n) \right]^{-1},    (5.2.13)

U_2 = \left[ 1 + \frac{1 - \hat{R}}{\hat{R}}\, F_{(\alpha/2)}(2m, 2n) \right]^{-1},    (5.2.14)

where F_{(α/2)}(2m, 2n) and F_{(1−α/2)}(2m, 2n) are the lower and upper (α/2)th percentage points of the F distribution with (2m, 2n) degrees of freedom.
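A minimal sketch of (5.2.13)-(5.2.14), under the percentile convention reconstructed above (the lower confidence limit uses the upper F point of F(2m, 2n) and vice versa), is given below; the function name is illustrative.

```python
# A sketch of the exact interval (5.2.13)-(5.2.14) built from the F pivot
# F = [(1 - R_hat)/R_hat] * [R/(1 - R)] ~ F(2n, 2m), expressed with F(2m, 2n)
# quantiles as in the text.
from scipy import stats

def exact_ci(r_hat, n, m, alpha=0.05):
    odds = (1.0 - r_hat) / r_hat
    f_lo = stats.f.ppf(alpha / 2.0, 2 * m, 2 * n)        # lower alpha/2 point of F(2m, 2n)
    f_hi = stats.f.ppf(1.0 - alpha / 2.0, 2 * m, 2 * n)  # upper alpha/2 point of F(2m, 2n)
    lower = 1.0 / (1.0 + odds * f_hi)
    upper = 1.0 / (1.0 + odds * f_lo)
    return lower, upper
```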

5.2.6 Bootstrap confidence intervals

In this subsection, we propose confidence intervals based on the percentile bootstrap method (referred to from now on as Boot-p), based on the idea of Efron (1982). We briefly illustrate how to estimate a confidence interval for R using this method.

Step 1: Draw random samples X₁, X₂, ..., X_n and Y₁, Y₂, ..., Y_m from the populations of X and Y respectively.

Step 2: Using the random samples X₁, X₂, ..., X_n and Y₁, Y₂, ..., Y_m, generate bootstrap samples x₁*, x₂*, ..., x_n* and y₁*, y₂*, ..., y_m* respectively. Compute the bootstrap estimates of σ₁ and σ₂, say σ̂₁* and σ̂₂* respectively. Using σ̂₁*, σ̂₂* and equation (5.2.6), compute the bootstrap estimate of R, say R̂*.

Step 3: Repeat Step 2 NBOOT times, where NBOOT = 1000.

Step 4: Let G(x) = P(R̂* ≤ x) be the empirical distribution function of R̂*. Let L₃ = R̂_{boot-p}(α/2) and U₃ = R̂_{boot-p}(1 − α/2) be the 100(α/2)th and 100(1 − α/2)th empirical percentiles of the R̂* values respectively.

The small-sample comparisons are studied through simulation in Section 5.4.
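The sketch below implements Steps 1-4 of the Boot-p procedure as described above; the function names and the default number of bootstrap replications are illustrative.

```python
# A sketch of the Boot-p procedure of Section 5.2.6: resample x and y with
# replacement, recompute the MLE of R each time, and take the empirical
# alpha/2 and 1 - alpha/2 percentiles of the bootstrap replicates.
import numpy as np

def mle_R(x, y):
    s1 = len(x) / np.sum(1.0 / np.asarray(x, float) ** 2)
    s2 = len(y) / np.sum(1.0 / np.asarray(y, float) ** 2)
    return s1 / (s1 + s2)

def boot_p_ci(x, y, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    reps = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)   # bootstrap strength sample
        yb = rng.choice(y, size=len(y), replace=True)   # bootstrap stress sample
        reps[b] = mle_R(xb, yb)
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```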

5.3 Estimation of reliability in a multicomponent stress-strength model

Assume that F(.) and G(.) are inverse Rayleigh distributions with unknown scale parameters σ₁ and σ₂, and that independent ordered random samples X₁ < X₂ < ... < X_n and Y₁ < Y₂ < ... < Y_m are available from F(.) and G(.) respectively. The reliability in the multicomponent stress-strength model for the inverse Rayleigh distribution, using (5.1.4), is

R_{s,k} = \sum_{i=s}^{k} \binom{k}{i} \int_{0}^{\infty} \left[ 1 - e^{-(\sigma_1/y)^2} \right]^{i} \left[ e^{-(\sigma_1/y)^2} \right]^{k-i} \frac{2\sigma_2^2}{y^3}\, e^{-(\sigma_2/y)^2}\, dy
        = \sum_{i=s}^{k} \binom{k}{i} \int_{0}^{1} \left[ 1 - t^{1/\lambda} \right]^{i} \left[ t^{1/\lambda} \right]^{k-i} dt, \quad \text{where } t = e^{-(\sigma_2/y)^2} \text{ and } \lambda = \sigma_2^2/\sigma_1^2,
        = \lambda \sum_{i=s}^{k} \binom{k}{i} \int_{0}^{1} (1 - z)^{i}\, z^{\,k-i+\lambda-1}\, dz, \quad \text{if } z = t^{1/\lambda},
        = \lambda \sum_{i=s}^{k} \binom{k}{i}\, \beta(k - i + \lambda,\; i + 1).

After simplification, since k and i are integers, we get

R_{s,k} = \lambda \sum_{i=s}^{k} \frac{k!}{(k-i)!} \prod_{j=0}^{i} (k + \lambda - j)^{-1}.    (5.3.1)

The probability given in (5.3.1) is called the reliability in a multicomponent stress-strength model (Bhattacharyya and Johnson, 1974). Suppose a system with k identical components functions if s (s ≤ k) or more of the components operate simultaneously. In its operating environment, the system is subjected to a stress Y, which is a random variable with cdf G(.). The strengths of the components, that is, the minimum stresses required to cause failure, are independent and identically distributed random variables with cdf F(.). Then the system reliability, which is the probability that the system does not fail, is the function R_{s,k} given in (5.3.1). If σ₁ and σ₂ are not known, it is necessary to estimate them in order to estimate R_{s,k}. In this section we estimate σ₁ and σ₂ by the ML method and by the method of moments, thus giving rise to two estimates; the estimates are substituted into λ to obtain an estimate of R_{s,k} using equation (5.3.1). The theory of the methods of estimation is explained below.
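The sketch below evaluates the beta-function form of (5.3.1), with λ = σ₂²/σ₁² as in the reconstruction above; the function name and parameter values are illustrative.

```python
# A sketch of (5.3.1): for inverse Rayleigh strength/stress,
# R_{s,k} = sum_{i=s}^{k} C(k,i) * lambda * B(k - i + lambda, i + 1),
# with lambda = sigma2^2 / sigma1^2.
from math import comb
from scipy.special import beta

def r_sk_inverse_rayleigh(s, k, sigma1, sigma2):
    lam = sigma2**2 / sigma1**2
    return sum(comb(k, i) * lam * beta(k - i + lam, i + 1)
               for i in range(s, k + 1))

# For example, R_{1,3} reduces to 3/(3 + lambda) and R_{2,4} to 12/((3+lambda)(4+lambda)).
print(r_sk_inverse_rayleigh(1, 3, 1.0, 1.0))   # 0.75 when sigma1 = sigma2
print(r_sk_inverse_rayleigh(2, 4, 1.0, 1.0))   # 0.60 when sigma1 = sigma2
```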

It is well known that the method of maximum likelihood estimation (MLE) has the invariance property. When the method of estimation of the parameter is changed from ML to any other traditional method, this invariance principle does not hold good for estimating the parametric function. However, such an adoption of the invariance property for other optimal estimators of the parameters, in order to estimate a parametric function, has been attempted in different situations by different authors; Travadi and Ratani (1990), Kantam and Srinivasa Rao (00) and the references therein are a few such instances. In this direction, we propose estimators for the reliability of the multicomponent stress-strength model by considering estimators of the parameters of the stress and strength distributions obtained by standard methods of estimation for the inverse Rayleigh distribution.

The MLEs of σ₁ and σ₂ are denoted by σ̂₁ and σ̂₂. The asymptotic variance of the MLE is given by

V(\hat{\sigma}_i) = \left[ E\!\left( -\partial^2 \log L / \partial \sigma_i^2 \right) \right]^{-1} = \frac{\sigma_i^2}{4n}; \quad i = 1, 2, \text{ when } m = n.    (5.3.2)

The MLE of the survival probability of the multicomponent stress-strength model, R̂_{s,k}, is obtained by replacing λ with λ̂ = σ̂₂²/σ̂₁² in (5.3.1). The second estimator we propose here, R̃_{s,k}, is obtained by replacing λ with λ̃ = σ̃₂²/σ̃₁² in (5.3.1). Thus, for a given pair of samples on the stress and strength variates, we get two estimates of R_{s,k} by the above two methods. The asymptotic variance (AV) of an estimate of R_{s,k} that is a function of two independent statistics t₁ and t₂ is given by (Rao, 1973)

AV(\hat{R}_{s,k}) = AV(t_1) \left( \frac{\partial R_{s,k}}{\partial \sigma_1} \right)^{2} + AV(t_2) \left( \frac{\partial R_{s,k}}{\partial \sigma_2} \right)^{2},    (5.3.3)

where t₁ and t₂ are to be taken in two different ways, namely as the exact ML estimators or as the method of moments estimators. Unfortunately, the variance of the moment estimators cannot be obtained, since the variance of the

inverse Rayleigh distribution does not exist; the asymptotic variance of R̂_{s,k} is therefore obtained using the MLE only. From the asymptotic optimality properties of MLEs (Kendall and Stuart, 1979) and of linear unbiased estimators (David, 1981), we know that MLEs are asymptotically equally efficient, having the Cramer-Rao lower bound as their asymptotic variance, as given in (5.3.2). Thus, from equation (5.3.3), the asymptotic variance of R̂_{s,k} is obtained when (t₁, t₂) are replaced by the MLEs. To avoid the difficulty of deriving the derivatives of R_{s,k} in general, we obtain them for (s, k) = (1, 3) and (2, 4) separately; they are given by

\frac{\partial R_{1,3}}{\partial \sigma_1} = \frac{6\lambda}{\sigma_1 (3 + \lambda)^2} \quad \text{and} \quad \frac{\partial R_{1,3}}{\partial \sigma_2} = -\frac{6\lambda}{\sigma_2 (3 + \lambda)^2},

\frac{\partial R_{2,4}}{\partial \sigma_1} = \frac{24\lambda(2\lambda + 7)}{\sigma_1 (3 + \lambda)^2 (4 + \lambda)^2} \quad \text{and} \quad \frac{\partial R_{2,4}}{\partial \sigma_2} = -\frac{24\lambda(2\lambda + 7)}{\sigma_2 (3 + \lambda)^2 (4 + \lambda)^2},

where λ = σ₂²/σ₁². Thus

AV(\hat{R}_{1,3}) = \frac{9\lambda^2}{(3 + \lambda)^4} \left[ \frac{1}{n} + \frac{1}{m} \right],

AV(\hat{R}_{2,4}) = \frac{144\lambda^2 (2\lambda + 7)^2}{(3 + \lambda)^4 (4 + \lambda)^4} \left[ \frac{1}{n} + \frac{1}{m} \right].

As n → ∞, m → ∞,

\frac{\hat{R}_{s,k} - R_{s,k}}{\sqrt{AV(\hat{R}_{s,k})}} \xrightarrow{d} N(0, 1),

and the asymptotic 95% confidence interval for R_{s,k} is given by \hat{R}_{s,k} \mp 1.96 \sqrt{AV(\hat{R}_{s,k})}.
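A minimal sketch of these intervals, using the variance expressions reconstructed above with λ̂ = σ̂₂²/σ̂₁² plugged in, is shown below; the function names are illustrative.

```python
# A sketch of the asymptotic 95% intervals for R_{1,3} and R_{2,4}:
# R_hat_{s,k} -/+ 1.96 * sqrt(AV), with lambda_hat = sigma2_hat^2 / sigma1_hat^2.
import numpy as np

def av_r13(lam, n, m):
    return (9.0 * lam**2 / (3.0 + lam) ** 4) * (1.0 / n + 1.0 / m)

def av_r24(lam, n, m):
    return (144.0 * lam**2 * (2.0 * lam + 7.0) ** 2
            / ((3.0 + lam) ** 4 * (4.0 + lam) ** 4)) * (1.0 / n + 1.0 / m)

def asymptotic_ci_multicomponent(r_hat, lam_hat, n, m, which="13"):
    av = av_r13(lam_hat, n, m) if which == "13" else av_r24(lam_hat, n, m)
    half = 1.96 * np.sqrt(av)
    return r_hat - half, r_hat + half
```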

The asymptotic 95% confidence interval for R_{1,3} is given by

\hat{R}_{1,3} \mp 1.96\, \frac{3\hat{\lambda}}{(3 + \hat{\lambda})^2} \sqrt{\frac{1}{n} + \frac{1}{m}}.

The asymptotic 95% confidence interval for R_{2,4} is given by

\hat{R}_{2,4} \mp 1.96\, \frac{12\hat{\lambda}(2\hat{\lambda} + 7)}{(3 + \hat{\lambda})^2 (4 + \hat{\lambda})^2} \sqrt{\frac{1}{n} + \frac{1}{m}}.

The small-sample comparisons are studied through simulation in Section 5.4.

5.4 Simulation study and data analysis

5.4.1 Simulation study

In this section, we present some results based on Monte Carlo simulation to compare the performance of the estimates of R and R_{s,k} obtained using the ML and MOM estimators, mainly for small sample sizes. We consider the sample sizes (n, m) = (5, 5), (10, 10), (15, 15), (20, 20) and (25, 25). In both estimation problems we take (σ₁, σ₂) = (1, 3), (1, 2.5), (1, 2), (1, 1.5), (1, 1), (1.5, 1), (2, 1), (2.5, 1) and (3, 1). All the results are based on 3000 replications. From each sample, we compute the estimates of (σ₁, σ₂) by ML and by the method of moments. Once we estimate (σ₁, σ₂), we obtain the estimates of R by substituting them into (5.2.1) and (5.2.2) respectively. We also obtain the estimates of R_{s,k} by substituting them into (5.3.1) for (s, k) = (1, 3) and (2, 4) respectively. We report the average biases of the estimates of R in Table 5.4.1 and of R_{s,k} in Table 5.4.5, and the mean squared errors (MSEs) of the estimates of R in Table 5.4.2 and of R_{s,k} in Table 5.4.6, over 3000 replications. We also compute the 95% confidence

intervals for R based on the asymptotic distribution, the exact distribution and the Boot-p method. We report the average confidence lengths in Table 5.4.3 and the coverage probabilities in Table 5.4.4, based on 1000 replications. The average confidence length and coverage probability of the simulated 95% confidence intervals of R_{s,k} are given in Table 5.4.7.

Some points are quite clear from this simulation. Even for small sample sizes, the performance of the estimate of R using the MLEs is quite satisfactory in terms of bias and MSE as compared with the MOM estimates. When σ₁ < σ₂ the bias is positive, and when σ₁ > σ₂ the bias is negative. Also, the absolute bias decreases as the sample size increases for both methods, which is a natural trend. It is observed that when m = n and m, n increase, the MSEs decrease as expected, which reflects the consistency of the MLE of R. As expected, the MSE is symmetric with respect to σ₁ and σ₂; for example, if (n, m) = (10, 10), the MSE for (σ₁, σ₂) = (3, 1) and the MSE for (σ₁, σ₂) = (1, 3) are the same in the case of the MLE, whereas the MOM estimates are approximately symmetric. The length of the confidence interval is also symmetric with respect to (σ₁, σ₂) and decreases as the sample size increases. For (n, m) = (5, 5), we find that the average length of (L₂, U₂) is less than that of (L₁, U₁), which in turn is less than that of (L₃, U₃), in all combinations of (σ₁, σ₂). For the other combinations of (n, m), the average length of (L₂, U₂) is less than that of (L₃, U₃) in the case σ₁ ≤ σ₂, whereas the ordering (L₂, U₂) < (L₁, U₁) < (L₃, U₃) holds when σ₁ > σ₂. In particular, for a fixed σ₁ and larger values of σ₂, the average lengths of the exact and Boot-p confidence intervals are almost the same. Comparing the average coverage probabilities, it is observed that for most sample sizes the coverage probabilities of the confidence intervals based on the asymptotic results come closest to the nominal level as compared with the other two confidence

intervals, although it is slightly less than 0.95. The performance of the bootstrap confidence intervals is quite good for small samples. For all combinations of (n, m) and (σ₁, σ₂), the coverage probability of the exact confidence intervals is moderate and away from the nominal level 0.95. The overall performance is best for the asymptotic confidence intervals. The simulation results also indicate that the MLE performs better than the MOM in terms of average bias and average MSE for different choices of the parameters. The exact confidence intervals are preferable with respect to average confidence length, whereas the asymptotic confidence intervals are advisable with respect to coverage probability, for different choices of the parameters.

The following points are observed for R_{s,k} in the simulation study. The true value of the reliability in the multicomponent stress-strength model increases when σ₁ ≥ σ₂ and decreases otherwise; that is, the true value of the reliability increases as λ decreases, and vice versa. Both the bias and the MSE decrease as the sample size increases for both methods of estimation. With respect to bias, the moment estimator is close to the exact MLE in most of the parameter and sample-size combinations. Also, the bias is negative when σ₁ ≥ σ₂ and positive in the other cases, for both choices of (s, k). With respect to MSE, the MLE is preferred to the method of moments estimator. The length of the confidence interval also decreases as the sample size increases. The coverage probability is close to the nominal value in all cases for the MLE, and the overall performance of the confidence interval is quite good for the MLE. The simulation results also show that there is no considerable difference in the average bias and average MSE across different choices of the parameters, whereas there is a considerable difference between the MLE and the MOM. The same behaviour is observed for the

average lengths and coverage probabilities of the confidence intervals using the MLE.

5.4.2 Data analysis

We present a data analysis for two data sets reported by Lawless (1982) and Proschan (1963). The first data set, taken from Lawless (1982), represents the number of revolutions before failure for each of 23 ball bearings in a life test; the observations are as follows:

Data Set I: 7.88, 8.9, 33.00, 4.5, 4., 45.60, 48.80, 5.84, 5.96, 54., 55.56, 67.80, 68.44, 68.64, 68.88, 84., 93., 98.64, 05., 05.84, 7.9, 8.04, 73.40.

Gupta and Kundu (2001) fitted the gamma, Weibull and generalized exponential distributions to these data. The second data set, taken from Proschan (1963), represents the times between successive failures of the air conditioning (AC) equipment in a Boeing 720 airplane; the 15 observations are as follows:

Data Set II: ,, 6, 7, 9, 9, 48, 57, 59, 70, 74, 53, 36, 386, 50.

We fit the inverse Rayleigh distribution to the two data sets separately and use the Kolmogorov-Smirnov (K-S) test for each data set to assess the fit of the inverse Rayleigh model. It is observed that for Data Sets I and II the K-S distances are 0.09 and 0.378, with corresponding p-values 0.8508 and 0.43879 respectively. For Data Sets I and II, the chi-square values are 0.305 and .6383 respectively. Therefore, it is clear that the inverse Rayleigh model fits both data sets quite well. We plot the empirical survival functions and the fitted survival functions in Figures 5.4.1 and 5.4.2; these figures show that the empirical and fitted models are very close for each data set.
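The sketch below illustrates this kind of model check: it fits the inverse Rayleigh distribution to a sample by maximum likelihood and computes the K-S distance against the fitted cdf. The function name is illustrative, and the simulated sample stands in for the chapter's ball-bearing and air-conditioning data, which are not reproduced here.

```python
# A sketch of the fit check of Section 5.4.2: MLE of the IRD scale,
# sigma_hat^2 = n / sum(1/t_i^2), followed by a K-S test against the fitted
# cdf F(t) = exp(-(sigma_hat/t)^2).
import numpy as np
from scipy import stats

def ird_ks_fit(data):
    data = np.asarray(data, float)
    sigma_hat = np.sqrt(len(data) / np.sum(1.0 / data**2))   # MLE of the scale
    ks_stat, p_value = stats.kstest(data, lambda t: np.exp(-(sigma_hat / t) ** 2))
    return sigma_hat, ks_stat, p_value

# Example with a simulated IRD(sigma = 50) sample in place of the real data.
rng = np.random.default_rng(3)
data = 50.0 / np.sqrt(-np.log(rng.uniform(size=23)))
print(ird_ks_fit(data))
```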

Based on the estimates of σ₁ and σ₂, the MLE of R becomes 0.7043 and the corresponding 95% confidence interval is (0.5688, 0.83978). We also obtain the 95% Boot-p confidence interval (0.54543, 0.8669). The MLE of R_{s,k} for (s, k) = (1, 3) becomes 0.5573, with corresponding 95% confidence interval (0.39684, 0.7776). The MLE of R_{s,k} for (s, k) = (2, 4) becomes 0.349, with corresponding 95% confidence interval (0.6380, 0.5347).

Figure 5.4.1: The empirical and fitted survival functions for Data Set I.

Figure 5.4.2: The empirical and fitted survival functions for Data Set II.

5.5 Conclusions

We compare two methods of estimating R = P(Y < X) when Y and X both follow inverse Rayleigh distributions with different scale parameters. We provide the MLE and MOM procedures to estimate the unknown scale parameters and use them to estimate R. We also obtain the asymptotic distribution of the estimate of R, which is used to compute asymptotic confidence intervals. The simulation results indicate that the MLE performs better than the MOM in terms of average bias and average MSE for different choices of the parameters. The exact confidence intervals are preferable with respect to average confidence length, whereas the asymptotic confidence intervals are advisable with respect to coverage probability, for different choices of the parameters. We also proposed bootstrap confidence intervals, whose performance is quite satisfactory.

To estimate the multicomponent stress-strength reliability, we provided the ML and MOM estimators of σ₁ and σ₂ when both the stress and strength variates follow the same family of distributions. We also constructed an asymptotic confidence interval for the multicomponent stress-strength reliability. The simulation results indicate that, for estimating the multicomponent stress-strength reliability under the inverse Rayleigh distribution, the ML method of estimation is preferable to the method of moments. The length of the confidence interval decreases as the sample size increases, and the coverage probability is close to the nominal value in all cases for the MLE.

Table 5.4.: Average bias of the simulated estimates of R. ( σ, σ ) (n, m) (5,5) (0,0) (5,5) (0,0) (5,5) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) R= 0.0 R= 0.4 R= 0.0 R= 0.3 R= 0.50 R= 0.69 R= 0.80 R= 0.86 R= 0.90 0.0384 0.044 0.0405 0.086-0.0066-0.0403-0.0500-0.049-0.0449 0.049 0.070 0.075 0.09-0.0036-0.093-0.04-0.007-0.079 0.046 0.07 0.069 0.087-0.0070-0.035-0.0375-0.0357-0.037 0.006 0.0069 0.007 0.0046-0.0038-0.0-0.09-0.006-0.0088 0.03 0.05 0.063 0.0 0.00-0.09-0.049-0.04-0.06 0.0058 0.0069 0.0078 0.0070 0.006-0.0043-0.0057-0.0054-0.0046 0.098 0.05 0.035 0.09 0.008-0.059-0.008-0.00-0.080 0.0035 0.004 0.0044 0.0034-0.000-0.0050-0.0056-0.0050-0.004 0.07 0.04 0.039 0.0089-0.0068-0.009-0.034-0.04-0.085 0.005 0.009 0.003 0.00-0.003-0.0044-0.0047-0.004-0.0034 In each cell the first row represents the average bias of R using the MOM and second row represents average bias of R using the MLE. 74

Table 5.4.: Average MSE of the simulated estimates of R. (n, m) (5,5) (0,0) (5,5) (0,0) (5,5) ( σ, σ ) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) R= 0.0 R= 0.4 R= 0.0 R= 0.3 R= 0.50 R= 0.69 R= 0.80 R= 0.86 R= 0.90 0.09 0.055 0.0338 0.043 0.0490 0.0443 0.0354 0.074 0.0 0.0053 0.008 0.07 0.090 0.036 0.093 0.030 0.0084 0.0055 0.009 0.054 0.08 0.030 0.036 0.030 0.040 0.07 0.03 0.000 0.0033 0.0056 0.0090 0.08 0.009 0.0056 0.0033 0.000 0.0084 0.0 0.076 0.046 0.094 0.047 0.078 0.03 0.0086 0.004 0.003 0.0039 0.0065 0.0085 0.0064 0.0038 0.00 0.003 0.0074 0.007 0.055 0.06 0.056 0.0 0.049 0.00 0.0069 0.000 0.006 0.008 0.0047 0.0063 0.0047 0.008 0.006 0.0009 0.0053 0.0079 0.08 0.074 0.08 0.084 0.09 0.0086 0.0058 0.0007 0.00 0.00 0.0037 0.0050 0.0037 0.00 0.00 0.0007 In each cell the first row represents the average MSE of R using the MOM and second row represents average MSE of R using the MLE. 75

Table 5.4.3: Average confidence length of the simulated 95% confidence intervals of R. ( σ, σ ) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) (n, m) Interval R= 0.0 R= 0.4 R= 0.0 R= 0.3 R= 0.50 R= 0.69 R= 0.80 R= 0.86 R= 0.90 (5,5) A 0.395 0.3049 0.39 0.4933 0.56 0.4985 0.3977 0.3 0.450 B 0.8 0.758 0.34 0.46 0.467 0.466 0.3430 0.766 0.35 C 0.705 0.3397 0.47 0.54 0.589 0.500 0.44 0.3338 0.650 (0,0) A 0.67 0.4 0.78 0.3607 0.476 0.3650 0.83 0.60 0.666 B 0.46 0.867 0.403 0.3037 0.3447 0.3040 0.407 0.870 0.464 C 0.94 0.467 0.340 0.388 0.4 0.357 0.676 0.0 0.549 (5,5) A 0.335 0.74 0.30 0.995 0.3457 0.98 0.84 0.76 0.3 B 0.58 0.496 0.953 0.506 0.87 0.507 0.954 0.497 0.59 C 0.53 0.968 0.543 0.30 0.355 0.934 0. 0.655 0.6 (0,0) A 0.38 0.490 0.98 0.598 0.30 0.606 0.990 0.499 0.45 B 0.0988 0.83 0.688 0.84 0.54 0.8 0.685 0.8 0.0986 C 0.564 0.966 0.463 0.968 0.305 0.48 0.777 0.308 0.0986 (5,5) A 0.0 0.38 0.770 0.330 0.76 0.339 0.78 0.338 0.00 B 0.0875 0.40 0.506 0.959 0.63 0.957 0.504 0.38 0.0873 C 0.99 0.645 0.08 0.538 0.659 0.05 0.539 0.3 0.085 A: Asymptotic confidence interval B: Exact confidence interval C: Bootstrap confidence interval 76

Table 5.4.4: Average coverage probability of the simulated 95% confidence intervals of R. ( σ, σ ) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) (n, m) Interval R= 0.0 R= 0.4 R= 0.0 R= 0.3 R= 0.50 R= 0.69 R= 0.80 R= 0.86 R= 0.90 (5,5) A 0.9337 0.9357 0.9387 0.943 0.9330 0.9400 0.9397 0.9367 0.9353 B 0.8796 0.8784 0.8765 0.8744 0.873 0.8765 0.8774 0.8797 0.8803 C 0.9590 0.9590 0.9590 0.9590 0.9590 0.9590 0.9590 0.9590 0.9590 (0,0) A 0.9440 0.9447 0.9463 0.9470 0.9497 0.9533 0.9557 0.9547 0.9540 B 0.889 0.888 0.8878 0.8847 0.889 0.885 0.8840 0.8864 0.887 C 0.9500 0.9490 0.9490 0.9490 0.9480 0.9460 0.9460 0.9460 0.9460 (5,5) A 0.9430 0.9407 0.943 0.9390 0.9387 0.9403 0.943 0.9440 0.9440 B 0.8886 0.8885 0.8873 0.8863 0.8864 0.8876 0.8883 0.8888 0.8893 C 0.9340 0.9340 0.9340 0.9340 0.9300 0.9300 0.9300 0.9300 0.930 (0,0) A 0.9440 0.9447 0.9467 0.9463 0.9440 0.947 0.9443 0.9463 0.9487 B 0.8966 0.896 0.8954 0.8957 0.896 0.894 0.8954 0.8949 0.8963 C 0.8300 0.830 0.830 0.8350 0.8360 0.8380 0.8450 0.8430 0.8430 (5,5) A 0.9473 0.9463 0.9473 0.9467 0.9470 0.9473 0.9507 0.957 0.953 B 0.894 0.8938 0.8933 0.893 0.893 0.8930 0.8938 0.8948 0.895 C 0.8080 0.8060 0.8040 0.8000 0.8070 0.8090 0.830 0.840 0.830 A: Asymptotic confidence interval, B: Exact confidence interval and C: Bootstrap confidence interval. 77

Table 5.4.5: Average bias of the simulated estimates of R. (n, m) (s,k) (,3) (5,5) (0,0) (5,5) (0,0) (5,5) ( σ, σ ) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) R =0.964 R =0.949 R =0.93 R =0.87 R =0.750 R =0.57 R =0.49 R =0.34 R =0.50-0.0063-0.008-0.008-0.036-0.00 0.00 0.037 0.009 0.036-0.039-0.096-0.0367-0.0437-0.040-0.07 0.048 0.038 0.043-0.0036-0.0048-0.0065-0.0087-0.009-0.006 0.0044 0.0087 0.005-0.08-0.065-0.05-0.07-0.06-0.0075 0.09 0.044 0.0306-0.00-0.009-0.0039-0.0053-0.0055-0.00 0.0033 0.0060 0.0070-0.006-0.033-0.066-0.098-0.066 0.0006 0.066 0.060 0.030-0.00-0.006-0.00-0.008-0.00 0.006 0.005 0.0069 0.0074-0.0087-0.03-0.049-0.09-0.09-0.0060 0.0079 0.069 0.0-0.005-0.000-0.008-0.0040-0.0048-0.008 0.000 0.006 0.005-0.0073-0.0097-0.08-0.066-0.064-0.0044 0.008 0.058 0.093 (n, m) R =0.938 (,4) R =0.93 R =0.869 R =0.784 R =0.600 R =0.366 R =0.4 R =0.7 R =0.077 (5,5) (0,0) (5,5) (0,0) (5,5) -0.00-0.09-0.059-0.073-0.0067 0.086 0.03 0.0335 0.094-0.036-0.043-0.0504-0.055-0.089 0.04 0.0560 0.0658 0.0636-0.0059-0.0077-0.0099-0.08-0.0076 0.0066 0.050 0.063 0.043-0.004-0.055-0.033-0.0345-0.098 0.08 0.04 0.047 0.0443-0.0036-0.0046-0.0060-0.007-0.0044 0.0048 0.00 0.007 0.009-0.063-0.097-0.03-0.037-0.0087 0.03 0.040 0.048 0.0387-0.000-0.006-0.003-0.0035-0.0006 0.0067 0.00 0.000 0.0083-0.040-0.076-0.00-0.049-0.05 0.0 0.089 0.0330 0.0308-0.005-0.0033-0.0044-0.0057-0.0050 0.0003 0.0039 0.0048 0.0043-0.09-0.05-0.09-0.07-0.07 0.0 0.067 0.097 0.070 In each cell the first row represents the average bias of R using the MLE and second row represents average bias of R using the MOM. 78

(n, m) (s,k) (5,5) Table 5.4.6: Average MSE of the simulated estimates of R. ( σ, σ ) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) (,3) R =0.964 R =0.949 R =0.93 R =0.87 R =0.750 R =0.57 R =0.49 R =0.34 R =0.50 0.000 0.007 0.0033 0.0069 0.048 0.06 0.09 0.090 0.053 0.007 0.005 0.058 0.048 0.0389 0.0478 0.048 0.0449 0.0400 (0,0) (5,5) (0,0) (5,5) 0.0004 0.0007 0.004 0.0033 0.0079 0.0 0.0 0.00 0.0079 0.004 0.0040 0.007 0.03 0.046 0.0334 0.034 0.030 0.067 0.000 0.0004 0.0008 0.000 0.0050 0.0079 0.0078 0.0064 0.0048 0.008 0.004 0.0068 0.08 0.008 0.077 0.080 0.05 0.0 0.000 0.0003 0.0006 0.004 0.0037 0.0060 0.006 0.0050 0.0038 0.004 0.004 0.0044 0.0086 0.07 0.040 0.044 0.09 0.085 0.000 0.000 0.0004 0.00 0.008 0.0046 0.0046 0.0037 0.008 0.00 0.009 0.0036 0.0074 0.05 0.06 0.07 0.090 0.056 (n, m) (,4) R =0.938 R =0.93 R =0.869 R =0.784 R =0.600 R =0.366 R =0.4 R =0.7 R =0.077 (5,5) (0,0) (5,5) (0,0) (5,5) 0.006 0.0045 0.008 0.053 0.065 0.086 0.08 0.04 0.0087 0.05 0.0 0.0304 0.0436 0.0580 0.0594 0.05 0.04 0.038 0.000 0.009 0.0037 0.0078 0.05 0.063 0.04 0.0067 0.0037 0.0060 0.0096 0.058 0.066 0.0409 0.0434 0.0360 0.07 0.094 0.0006 0.00 0.00 0.0048 0.0097 0.006 0.007 0.0039 0.000 0.0063 0.009 0.04 0.05 0.0338 0.0356 0.089 0.00 0.046 0.0004 0.0008 0.006 0.0035 0.0073 0.0083 0.0056 0.003 0.006 0.0037 0.0060 0.003 0.08 0.094 0.034 0.05 0.083 0.08 0.0003 0.0006 0.00 0.007 0.0056 0.006 0.004 0.00 0.00 0.008 0.0049 0.0087 0.060 0.067 0.080 0.06 0.049 0.0098 In each cell the first row represents the average MSE of R using the MLE and second row represents average MSE of R using the MOM. 79

Table 5.4.7: Average confidence length and coverage probability of the simulated 95% confidence intervals of R using MLE. (n, m) (5,5) (0,0) (5,5) (0,0) (5,5) (s,k) (,3) ( σ, σ ) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) A 0.09907 0.3539 0.9354 0.9058 0.4470 0.55454 0.556 0.5087 0.4578 B 0.943 0.943 0.943 0.943 0.9470 0.9483 0.943 0.9377 0.943 A 0.06476 0.08956 0.3033 0.009 0.380 0.40908 0.4050 0.37338 0.357 B 0.940 0.947 0.9430 0.9440 0.9457 0.9440 0.9493 0.9493 0.9483 A 0.0548 0.0778 0.0637 0.655 0.6696 0.340 0.33943 0.30609 0.647 B 0.9503 0.9503 0.9503 0.9500 0.9483 0.9480 0.9450 0.9447 0.9470 A 0.04448 0.068 0.09065 0.466 0.308 0.9669 0.9646 0.6709 0.3056 B 0.950 0.950 0.9500 0.9500 0.9463 0.9457 0.950 0.9500 0.9500 A 0.03934 0.0547 0.08038 0.597 0.0659 0.6653 0.6668 0.40 0.0699 B 0.9490 0.9497 0.9500 0.9493 0.9490 0.9507 0.950 0.9537 0.9547 (n, m) (,4) ( σ, σ ) (,3) (,.5) (,) (,.5) (,) (.5,) (,) (.5,) (3,) (5,5) (0,0) (5,5) (0,0) (5,5) A 0.6630 0.37 0.336 0.44463 0.653 0.63405 0.5366 0.4080 0.30794 B 0.943 0.943 0.943 0.9440 0.9463 0.9457 0.9400 0.9307 0.987 A 0.0985 0.4998 0.336 0.3467 0.457 0.47469 0.39039 0.94 0.0993 B 0.947 0.943 0.9437 0.9460 0.9433 0.950 0.950 0.9477 0.9443 A 0.0893 0.5 0.7486 0.600 0.37643 0.39440 0.3893 0.3308 0.644 B 0.9503 0.9503 0.9487 0.9500 0.9483 0.9467 0.9487 0.9473 0.9447 A 0.07576 0.0408 0.4953 0.396 0.3790 0.34567 0.7889 0.065 0.404 B 0.950 0.9507 0.9500 0.9483 0.9473 0.950 0.957 0.9473 0.9450 A 0.06706 0.094 0.379 0.9963 0.94 0.363 0.58 0.876 0.67 B 0.9497 0.9497 0.9503 0.9483 0.950 0.950 0.9563 0.9570 0.9547 A: Average confidence length B: Coverage probability 80