Chapter 6: Point Estimation


1 Chapter 6: Point Estimation
Professor Sharabati, Purdue University
March 10, 2014 (Spring 2014)

2 Chapter Overview
Point estimator and point estimate
- Definitions of point estimator and point estimate
Unbiased estimators
Minimum variance estimators
Finding the standard error of an estimator
- Deriving it directly
- Bootstrapping
Methods of point estimation
- The method of moments
- The method of maximum likelihood

3 Motivation for Point Estimation
Suppose we want to find the proportion p of FIV-infected cats in a specific area. It's impossible to check all feral cats in the area, but we can do the following:
1. Let X = 1 if a cat has FIV, and X = 0 if not.
2. Let p = the proportion of FIV-infected cats.
3. The distribution of X is then Bernoulli with p unknown.
How do we estimate the value of the parameter p of a Bernoulli distribution?
Random variable => Distribution => Unknown parameter of interest

4 Point Estimator and Point Estimate
To estimate the value of p, we catch 25 feral cats at random and check them. Suppose we find that cats number 1, 5, 10, 15, and 23 are infected with FIV.
1. The Bernoulli rv's X_1, X_2, ..., X_25 form a random sample.
2. Point estimator of p: p̂ = (X_1 + ... + X_25)/25. This is a statistic.
3. Point estimate of p: p̂ = 5/25 = 0.2. This is a value.
Random sample => Estimator => Estimate
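As a quick sanity check, the cat example can be reproduced in a few lines of Python; the 0/1 coding below simply mirrors the data on the slide:

```python
# Bernoulli sample for the FIV example: cats 1, 5, 10, 15, 23 are infected.
infected = {1, 5, 10, 15, 23}
x = [1 if i in infected else 0 for i in range(1, 26)]

# The estimator is the sample proportion; plugging in the data gives the estimate.
p_hat = sum(x) / len(x)
print(p_hat)  # 0.2
```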

5 Point Estimator and Point Estimate
Definition: A point estimate of a parameter θ is a single number that can be regarded as a sensible value for θ. A point estimate is obtained by selecting a suitable statistic and computing its value from the given sample data. The selected statistic is called the point estimator.
A point estimate is a value; a point estimator is a statistic. We usually write θ̂ for the point estimator of a parameter θ. Different statistics can be used to estimate the same parameter, i.e., a parameter may have multiple point estimators.

6 Example
Assume the breakdown voltage for pieces of epoxy resin is normally distributed, and we want to estimate the mean μ of the breakdown voltage. We randomly check 20 breakdown voltages, denoted X_1, X_2, ..., X_20. Suppose the observed voltage values are:
{24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94, 27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11, 29.13, 29.50, 30.88}
Which point estimators could be used to estimate μ?
Answer
1. Sample mean: μ̂ = X̄
2. Sample median: μ̂ = X̃
3. Average of the extremes: μ̂ = (min(X_i) + max(X_i))/2
4. Trimmed mean: μ̂ = X̄_tr(10), the 10% trimmed mean (discard the smallest and largest 10% of the sample data and then average the rest).
5. etc.
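All four candidate estimators can be computed from the observed voltages. This is a sketch using only the standard library; for these 20 values, 10% trimming drops the 2 smallest and 2 largest observations:

```python
volts = [24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94,
         27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11, 29.13, 29.50, 30.88]
n = len(volts)
srt = sorted(volts)

mean = sum(volts) / n                           # sample mean
median = (srt[n // 2 - 1] + srt[n // 2]) / 2    # sample median (n is even)
midrange = (min(volts) + max(volts)) / 2        # average of the extremes
trimmed = sum(srt[2:-2]) / (n - 4)              # 10% trimmed mean (drop 2 low, 2 high)
```

All four are sensible estimates of μ, but they need not agree exactly on a given sample.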


8 Example
We want to know the variance of the elastic modulus of AZ91D alloy. We do not know the population distribution of the elastic modulus, but we observed the elastic moduli of 8 AZ91D alloy specimens from a die-casting process:
{44.2, 43.9, 44.7, 44.2, 44.0, 43.8, 44.6, 43.1}
Assume these observations are the result of a random sample X_1, X_2, ..., X_8 from the population. We want to estimate the population variance σ². Which point estimators could be used to estimate σ²?
Answer
1. We may use the sample variance: σ̂² = S² = Σ(X_i − X̄)²/(n − 1)
2. We may also use: σ̂² = Σ(X_i − X̄)²/n
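The two competing variance estimators differ only in the divisor; a minimal computation on the alloy data:

```python
mods = [44.2, 43.9, 44.7, 44.2, 44.0, 43.8, 44.6, 43.1]
n = len(mods)
xbar = sum(mods) / n
ss = sum((x - xbar) ** 2 for x in mods)   # sum of squared deviations

s2 = ss / (n - 1)        # sample variance (divisor n - 1)
s2_alt = ss / n          # alternative estimator (divisor n)
```

The divisor-n version is always a bit smaller; the next sections explain why the divisor n − 1 makes S² unbiased.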


10 Choosing an Optimal Estimator
A point estimator θ̂ is a random variable, and its value varies from sample to sample:
error of estimation = θ̂ − θ, i.e., θ̂ = θ + error of estimation.
Which estimator is the best one?
Unbiased estimator: one that equals the true value of the parameter θ on average.
Minimum variance estimator: one whose estimation error is as small as possible.

11 Ideally an estimator should be accurate (low bias) and precise (low variability).

12 Unbiased Estimators and Minimum Variance Estimators
Unbiased estimator
- Definition
- Principle of unbiased estimation
- Some unbiased estimators
Minimum variance estimator
- Principle of the minimum variance unbiased estimator (MVUE)
- MVUE for the normal mean

13 Unbiased Estimator
Definition: A point estimator θ̂ is said to be an unbiased estimator of the parameter θ if E(θ̂) = θ. If θ̂ is not unbiased, the difference E(θ̂) − θ is called the bias of θ̂.
[Figure: sampling distributions of two estimators θ̂_1 and θ̂_2; θ̂_1 is unbiased, with E(θ̂_1) = θ, while E(θ̂_2) ≠ θ.]

14 Some Unbiased Estimators
The sample mean X̄ = (X_1 + X_2 + ... + X_n)/n is an unbiased estimator of the population mean μ.
For continuous, symmetric distributions, the sample median X̃ and the trimmed mean are also unbiased estimators of the population mean μ.
The sample variance S² = Σ(X_i − X̄)²/(n − 1) is an unbiased estimator of the population variance σ².

15 Exercises
1. If X is a binomial(n, p) rv with p unknown, is the estimator p̂ = X/n an unbiased estimator?
2. For a normal distribution with mean μ and variance σ², given a random sample X_1, X_2, ..., X_n of size n, is the sample mean X̄ = (X_1 + ... + X_n)/n an unbiased estimator of μ?
3. For any distribution, is the sample mean X̄ an unbiased estimator of the population mean μ?
4. Given a random sample X_1, X_2, ..., X_n from the continuous uniform distribution f(x) = 1/θ on [0, θ], is θ̂ = 2X̄ = 2(X_1 + X_2 + ... + X_n)/n an unbiased estimator of θ?

16 Answers to the Exercises
1. If X is a binomial(n, p) rv with p unknown, is p̂ = X/n an unbiased estimator? Yes. [Hint: use the definition of an unbiased estimator to check.]
2. For a normal distribution with mean μ and variance σ², is the sample mean X̄ = (X_1 + ... + X_n)/n an unbiased estimator of μ? Yes. [Hint: see the PROPOSITION on text p. 224.]
3. For any distribution, is the sample mean X̄ an unbiased estimator of the population mean μ? Yes. [Hint: see the PROPOSITION on text p. 223.]
4. Given a random sample X_1, X_2, ..., X_n from the continuous uniform distribution f(x) = 1/θ on [0, θ], is θ̂ = 2X̄ an unbiased estimator of θ? Yes. [Hint: check that bias = E(θ̂) − θ = 2 · (θ/2) − θ = 0.]
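Exercise 4 can also be checked by simulation: averaging the estimator 2X̄ over many Uniform(0, θ) samples should land near θ. This is a sketch with an arbitrary illustrative choice of θ = 5 and sample size 10:

```python
import random

random.seed(2)
theta = 5.0           # true parameter (chosen for illustration)
n, B = 10, 20000      # sample size and number of simulated samples

total = 0.0
for _ in range(B):
    xs = [random.uniform(0, theta) for _ in range(n)]
    total += 2 * sum(xs) / n      # the estimator 2 * X-bar on this sample

avg = total / B       # Monte Carlo approximation of E(2 * X-bar), close to theta
```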

17 Principle of Unbiased Estimation
When choosing among several different estimators of θ, select one that is unbiased.
Which one is preferred? Let X_1, X_2, X_3, X_4 be a random sample of size 4 from a normal distribution with unknown variance σ². To estimate σ², we may use
σ̂_1² = S² = Σ(X_i − X̄)²/(4 − 1), or
σ̂_2² = Σ(X_i − X̄)²/4 = (3/4)S².
E(σ̂_1²) = E(S²)
= E{(1/3)[ΣX_i² − (1/4)(ΣX_i)²]}
= (1/3){E(ΣX_i²) − (1/4)E[(ΣX_i)²]}
= (1/3){4σ² + 4μ² − (1/4)[Var(ΣX_i) + (E(ΣX_i))²]}
= (1/3){4σ² + 4μ² − (1/4)[4σ² + (4μ)²]}
= (1/3){4σ² + 4μ² − σ² − 4μ²}
= (1/3)(4σ² − σ²) = σ².
So S² is unbiased for σ², while E(σ̂_2²) = (3/4)E(S²) = (3/4)σ², so σ̂_2² is biased.
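The algebra above can be verified numerically: over many simulated samples of size 4 from a normal distribution, the divisor-3 estimator S² averages to σ² while the divisor-4 version averages to (3/4)σ². A sketch with illustrative values μ = 10, σ = 2 (so σ² = 4):

```python
import random

random.seed(1)
mu, sigma = 10.0, 2.0     # illustrative parameters, so sigma**2 = 4
B = 20000                 # number of simulated samples of size 4

sum_s2 = sum_alt = 0.0
for _ in range(B):
    xs = [random.gauss(mu, sigma) for _ in range(4)]
    xbar = sum(xs) / 4
    ss = sum((x - xbar) ** 2 for x in xs)
    sum_s2 += ss / 3      # unbiased estimator: divisor n - 1
    sum_alt += ss / 4     # biased estimator: divisor n

mean_s2 = sum_s2 / B      # close to sigma**2 = 4
mean_alt = sum_alt / B    # close to (3/4) * sigma**2 = 3
```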

18 Why a Minimum Variance Estimator?
To estimate the (unknown) p of a binomial distribution with 10 trials, we could:
1. Take a random sample of size 1 and let p̂_1 = X_1/10.
2. Take a random sample of size m and let p̂_2 = (X_1 + X_2 + X_3 + ... + X_m)/(10m).
Which one is unbiased?
E(p̂_1) = E(X_1/10) = E(X_1)/10 = 10p/10 = p
E(p̂_2) = E[(X_1 + ... + X_m)/(10m)] = (1/(10m)) E(X_1 + ... + X_m) = (1/(10m)) · m · 10p = p
So both are unbiased. Which one is preferred?

19 Principle of the Minimum Variance Unbiased Estimator (MVUE)
Among all estimators of θ that are unbiased, choose the one that has minimum variance. The resulting θ̂ is called the minimum variance unbiased estimator (MVUE) of θ. When comparing several unbiased estimators, choose the one with the smallest variance.
Which estimator of p is preferred?
Var(p̂_1) = Var(X_1/10) = (1/10²) · 10p(1 − p) = p(1 − p)/10
Var(p̂_2) = Var[(X_1 + ... + X_m)/(10m)] = (1/(10m)²) Var(X_1 + ... + X_m) = p(1 − p)/(10m)
Since Var(p̂_2) is smaller (for m > 1), p̂_2 is preferred.
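The two variances can be written as small helper functions, which makes the factor-of-m gap explicit. The function names are illustrative, not from the slides:

```python
def var_p1(p):
    # Var(X1 / 10) = 10 * p * (1 - p) / 10**2
    return p * (1 - p) / 10

def var_p2(p, m):
    # Var((X1 + ... + Xm) / (10 m)) = m * 10 * p * (1 - p) / (10 m)**2
    return p * (1 - p) / (10 * m)

# For any p, the size-m estimator is m times less variable.
ratio = var_p1(0.3) / var_p2(0.3, 5)
```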

20 MVUE for the Mean of a Normal Distribution
Theorem: Let X_1, X_2, ..., X_n be a random sample from a normal distribution with parameters μ and σ. Then the estimator μ̂ = X̄ is the MVUE for μ.

21 Standard Error of an Estimator
Definition of standard error
How to compute the standard error:
- Derive the standard deviation directly
- Use computer-intensive methods such as the bootstrap

22 Standard Error of an Estimator
To describe the precision of an estimator, we may use its variance or standard deviation as a measure.
Definition of Standard Error: The standard error of an estimator θ̂ is its standard deviation σ_θ̂ = sqrt(Var(θ̂)). If the standard error itself involves unknown parameters whose values can be estimated, substituting these estimates into σ_θ̂ yields the estimated standard error of the estimator. The estimated standard error can be denoted either by σ̂_θ̂ or by s_θ̂.
To find the standard error of an estimator, we may:
- Derive the standard deviation directly.
- Use computer-intensive methods such as bootstrapping.

23 Example
Assume that the breakdown voltage for pieces of epoxy resin is normally distributed. To estimate the mean μ of the breakdown voltage, we randomly check 20 breakdown voltages, denoted X_1, X_2, ..., X_20. Suppose the observed voltage values are:
{24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94, 27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11, 29.13, 29.50, 30.88}
1. Calculate a point estimate of the mean μ of the breakdown voltage and state which estimator you used.
2. Given σ = 1.5, what is the standard error of the estimator?
3. If σ is unknown, what is the estimated standard error of the estimator?

24 Answers
1. We can use the estimator X̄ to find an estimate of μ: μ̂ = x̄ = 555.86/20 = 27.793.
2. When σ = 1.5 is known, the estimator X̄ is normally distributed with standard deviation σ_X̄ = σ/√n. [Hint: see the PROPOSITION on text p. 224.] The standard error of the estimator μ̂ = X̄ is σ_μ̂ = σ_X̄ = σ/√n = 1.5/√20 = 0.335.
3. When σ is unknown, we use an estimate of σ in its place, e.g. the sample standard deviation s = sqrt(Σ(x_i − x̄)²/(n − 1)) = 1.462. The estimated standard error of the estimator μ̂ = X̄ is then σ̂_μ̂ = σ̂_X̄ = s/√n = 1.462/√20 = 0.327.
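The three answers can be reproduced directly from the data; a minimal sketch:

```python
volts = [24.46, 25.61, 26.25, 26.42, 26.66, 27.15, 27.31, 27.54, 27.74, 27.94,
         27.98, 28.04, 28.28, 28.49, 28.50, 28.87, 29.11, 29.13, 29.50, 30.88]
n = len(volts)

mu_hat = sum(volts) / n                 # point estimate of mu (sample mean)
se_known = 1.5 / n ** 0.5               # standard error when sigma = 1.5 is known
s = (sum((x - mu_hat) ** 2 for x in volts) / (n - 1)) ** 0.5   # sample std. dev.
se_est = s / n ** 0.5                   # estimated standard error when sigma is unknown
```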

25 Finding the Standard Error Using the Bootstrap
When the form of the estimator θ̂ is too complicated to derive an expression for its standard error, we can use the bootstrap.
Idea of the Bootstrap
Suppose the population pdf is f(x; θ) and the data x_1, x_2, ..., x_n give the estimate θ̂. We use the computer to draw bootstrap samples from the pdf f(x; θ̂), and for each sample we calculate a bootstrap estimate θ̂*:
1. First bootstrap sample: x_1*, x_2*, ..., x_n*; estimate θ̂_1*.
2. Second bootstrap sample: x_1*, x_2*, ..., x_n*; estimate θ̂_2*.
3. Repeat B (100 or 200) times, obtaining B estimates θ̂_1*, θ̂_2*, ..., θ̂_B*.
4. Compute the sample mean of the bootstrap estimates, θ̄* = (θ̂_1* + ... + θ̂_B*)/B.
The bootstrap estimate of θ̂'s standard error is s_θ̂ = sqrt(Σ(θ̂_i* − θ̄*)²/(B − 1)).
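The steps above can be sketched for the exponential case treated on the next slide, using the estimator λ̂ = 1/x̄ and an illustrative fitted value λ̂ = 0.02 (an assumed number, not one from the slides):

```python
import math
import random

random.seed(0)

def boot_se_exponential(lam_hat, n, B=200):
    """Parametric bootstrap SE of lambda-hat = 1 / x-bar for Exp(lambda) data."""
    ests = []
    for _ in range(B):
        xs = [random.expovariate(lam_hat) for _ in range(n)]  # sample from f(x; lam_hat)
        ests.append(len(xs) / sum(xs))                        # re-estimate on each sample
    mean = sum(ests) / B                                      # theta-bar-star
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / (B - 1))

se = boot_se_exponential(0.02, n=10, B=500)
```

For such a small n the bootstrap SE is a sizable fraction of λ̂ itself, which is exactly the kind of information the closed-form route would have been hard-pressed to deliver for a more complicated estimator.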

26 Bootstrap Example
Let X be the time to breakdown of an insulating fluid between electrodes at a particular voltage. X follows the exponential distribution with pdf f(x) = λe^(−λx). A random sample (size n = 10) of the breakdown times (min) includes {41.53, 18.73, 2.99, 30.34, 12.33, ..., 73.02, ..., 4.00, 26.87}.
Use λ̂ = 1/X̄ as the estimator; from the sample data we get the estimate λ̂ = 1/x̄. Now generate bootstrap samples from the pdf f(x; λ̂) = λ̂e^(−λ̂x).
1. First bootstrap sample {11.25, ..., 42.65}, giving the estimate λ̂_1*.
2. Second bootstrap sample {54.61, ..., 18.63, 5.68}, giving the estimate λ̂_2*.
3. Repeat B = 100 times.
4. Compute the sample mean of the bootstrap estimates, λ̄* = (λ̂_1* + ... + λ̂_B*)/B.
The bootstrap estimate of λ̂'s standard error is s_λ̂ = sqrt(Σ(λ̂_i* − λ̄*)²/(B − 1)).

27 Methods of Point Estimation
Method of moments
- Definition of moments
- The method of moments
Maximum likelihood estimation
- Maximum likelihood estimation (MLE) examples
- Estimating functions of parameters
- Large-sample behavior of the MLE

28 Definition of Moments
Definition (Population moment): Let X follow a specific population distribution with pmf p(x) or pdf f(x). The k-th moment of the population distribution is E(X^k).
Definition (Sample moment): Let X_1, X_2, ..., X_n be a random sample from a pmf p(x) or pdf f(x). The k-th sample moment is (1/n) Σ_{i=1}^n X_i^k.
Sample moments can be used to estimate population moments.

29 Examples of Moments
Consider a random sample X_1, X_2, ..., X_n.
For a normal distribution with mean μ and variance σ²:
- First population moment: E(X) = μ; first sample moment: (X_1 + ... + X_n)/n = X̄.
- Second population moment: E(X²) = σ² + μ²; second sample moment: (X_1² + X_2² + ... + X_n²)/n.
For a uniform distribution with parameters a and b, a < b, f(x) = 1/(b − a):
- First population moment: E(X) = (a + b)/2; first sample moment: (X_1 + ... + X_n)/n = X̄.
- Second population moment: E(X²) = (a² + ab + b²)/3; second sample moment: (X_1² + X_2² + ... + X_n²)/n.

30 The Method of Moments
Definition: Let X_1, X_2, ..., X_n be a random sample from a distribution with pmf or pdf f(x; θ_1, ..., θ_m), where θ_1, ..., θ_m are parameters whose values are unknown. The moment estimators θ̂_1, θ̂_2, ..., θ̂_m are obtained by equating the first m sample moments to the corresponding first m population moments and solving for θ_1, θ_2, ..., θ_m. This is called the method of moments.
An explanation when m = 2: We need to estimate two parameters θ_1, θ_2 of a distribution. The population moments E(X) and E(X²) are functions of θ_1 and θ_2. Given the observed values x_1, ..., x_n of a random sample X_1, ..., X_n, we estimate the population moments with the sample moments: E(X) = X̄ and E(X²) = ΣX_i²/n. This gives two equations in θ_1 and θ_2. Solving the equations, we get the estimates of the parameters θ_1 and θ_2.

31 Using Moments - Example
Moment Estimator: Given X_1, ..., X_5, a random sample from a uniform distribution, suppose the observed values of the random sample give (X_1 + ... + X_5)/5 = 2 and (X_1² + ... + X_5²)/5 = 13/3. Find the moment estimates of the parameters a and b in the uniform distribution f(x) = 1/(b − a), a < b.
Hint
E(X) = (a + b)/2 = (X_1 + ... + X_5)/5 = 2
E(X²) = (a² + ab + b²)/3 = (X_1² + ... + X_5²)/5 = 13/3
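Solving the two moment equations amounts to matching the mean and the variance, since Var(X) = E(X²) − [E(X)]² = (b − a)²/12 for the uniform distribution. A sketch of the solution:

```python
import math

m1, m2 = 2.0, 13 / 3          # first and second sample moments from the example

# E(X) = (a + b)/2 = m1  and  Var(X) = m2 - m1**2 = (b - a)**2 / 12
width = math.sqrt(12 * (m2 - m1 ** 2))   # recovers b - a
a_hat = m1 - width / 2
b_hat = m1 + width / 2
```

Here m2 − m1² = 1/3, so the width is 2 and the moment estimates are a-hat = 1, b-hat = 3.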

32 Using Moments - Exercises
1. Let X_1, ..., X_10 be a random sample of size 10 from a normal distribution. The observed values of the random sample are {3.92, 3.76, 4.01, 3.67, 3.89, 3.62, 4.09, 4.15, 3.58, 3.75}. Find the moment estimates of the mean μ and standard deviation σ of the normal distribution.
2. Given a random sample X_1, X_2, ..., X_n from an exponential distribution f(x) = λe^(−λx), use the method of moments to estimate the parameter λ.
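For the first exercise, matching the first two moments gives μ̂ = x̄ and σ̂² = (second sample moment) − x̄²; a sketch over the given data:

```python
xs = [3.92, 3.76, 4.01, 3.67, 3.89, 3.62, 4.09, 4.15, 3.58, 3.75]
n = len(xs)

mu_hat = sum(xs) / n                      # first sample moment = moment estimate of mu
m2 = sum(x ** 2 for x in xs) / n          # second sample moment
sigma2_hat = m2 - mu_hat ** 2             # moment estimate of sigma**2
sigma_hat = sigma2_hat ** 0.5             # moment estimate of sigma
```

For the second exercise, E(X) = 1/λ for the exponential, so matching the first moment gives λ̂ = 1/x̄.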

33 Maximum Likelihood Estimation - Example
A sample of ten new bike helmets manufactured by a company is obtained. Let X_i = 1 if the i-th helmet is flawed and X_i = 0 if it is flawless; the X_i's are independent. Let p = P(a helmet is flawed) = P(X_i = 1), so all X_i's follow the same Bernoulli distribution with parameter p. The observed values x_i are {1, 0, 1, 0, 0, 0, 0, 0, 0, 1}. Now estimate p using MLE.
Maximum Likelihood Estimation (MLE)
1. What is the probability of the current observed values?
P(X_1 = 1, X_2 = 0, X_3 = 1, X_4 = ... = X_9 = 0, X_10 = 1) = P(X_1 = 1)P(X_2 = 0) ... P(X_10 = 1) = p³(1 − p)⁷ =: f(p)
2. For what value of p is the observed sample most likely to have occurred? Find the p that maximizes f(p), which is equivalent to maximizing ln[f(p)]. Taking the derivative of ln[f(p)] and equating it to zero yields
d/dp {ln[f(p)]} = d/dp {3 ln(p) + 7 ln(1 − p)} = 3/p − 7/(1 − p) = 0, so p̂ = 3/10 = 0.30.
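The closed form p̂ = 3/10 agrees with a brute-force maximization of the log-likelihood over a grid, which is a useful cross-check when no closed form is available:

```python
import math

xs = [1, 0, 1, 0, 0, 0, 0, 0, 0, 1]
p_hat = sum(xs) / len(xs)                 # closed-form MLE: the sample proportion

def loglik(p):
    # Bernoulli log-likelihood: sum of x*ln(p) + (1-x)*ln(1-p)
    return sum(x * math.log(p) + (1 - x) * math.log(1 - p) for x in xs)

grid = [i / 1000 for i in range(1, 1000)]
p_grid = max(grid, key=loglik)            # numerical maximizer on (0, 1)
```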

34 Maximum Likelihood Estimation
Definition: Let X_1, X_2, ..., X_n have joint pmf or pdf f(x_1, x_2, ..., x_n; θ_1, ..., θ_m), where the parameters θ_1, θ_2, ..., θ_m have unknown values. When x_1, x_2, ..., x_n are the observed sample values of X_1, ..., X_n, f(x_1, x_2, ..., x_n; θ_1, ..., θ_m) is called the likelihood function. The maximum likelihood estimates (mle's) θ̂_1, θ̂_2, ..., θ̂_m are the values of the θ_i's that maximize the likelihood function.

35 Example
Let X_1, X_2, ..., X_n be a random sample from a normal distribution with mean μ and variance σ², with observed values x_1, x_2, ..., x_n. Find the maximum likelihood estimates of μ and σ².
Answer
Likelihood function:
f(x_1, ..., x_n; μ, σ²) = (2πσ²)^(−n/2) e^(−Σ(x_i − μ)²/(2σ²))
Log-likelihood function:
ln f(x_1, x_2, ..., x_n; μ, σ²) = −(n/2) ln(2πσ²) − (1/(2σ²)) Σ(x_i − μ)²
To find the maximizing values of μ and σ², we take partial derivatives of the log-likelihood, equate them to zero, and solve the equations. The mle's are μ̂ = X̄ and σ̂² = Σ(X_i − X̄)²/n.

36 Exercise
Let X_1, X_2, ..., X_n be a random sample from an exponential distribution with parameter λ, with observed values x_1, x_2, ..., x_n. Find the maximum likelihood estimate of λ.
Hint
Likelihood function:
f(x_1, x_2, ..., x_n; λ) = λe^(−λx_1) · λe^(−λx_2) ··· λe^(−λx_n) = λ^n e^(−λ(x_1 + ... + x_n))
Log-likelihood function:
ln f(x_1, x_2, ..., x_n; λ) = n ln(λ) − λ(x_1 + ... + x_n)
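Setting the derivative of the log-likelihood, n/λ − Σx_i, equal to zero gives λ̂ = n/Σx_i = 1/x̄. A sketch with hypothetical observed times (the data below are made up for illustration):

```python
xs = [2.1, 0.4, 1.7, 0.9, 3.2]        # hypothetical observed times
lam_hat = len(xs) / sum(xs)           # MLE for the exponential rate: 1 / x-bar

# Cross-check: the derivative n/lam - sum(xs) vanishes at lam_hat.
deriv = len(xs) / lam_hat - sum(xs)
```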

37 Estimating Functions of Parameters
The invariance principle: Let θ̂_1, θ̂_2, ..., θ̂_m be the mle's of the parameters θ_1, θ_2, ..., θ_m. Then the mle of any function h(θ_1, θ_2, ..., θ_m) of these parameters is the function h(θ̂_1, θ̂_2, ..., θ̂_m).
Example: In the normal distribution case, the mle's of μ and σ² are μ̂ = X̄ and σ̂² = Σ(X_i − X̄)²/n. How do we obtain the mle of σ? [Hint: use the function h(μ, σ²) = sqrt(σ²) = σ.] By the invariance principle, we can substitute the mle's into the function:
σ̂ = sqrt(σ̂²) = sqrt(Σ(X_i − X̄)²/n)

38 Large-Sample Behavior of the MLE
Proposition: Under very general conditions on the joint distribution of the sample, when the sample size n is large, the maximum likelihood estimator of any parameter θ is approximately unbiased, E(θ̂) ≈ θ, and has variance that is either as small as or nearly as small as can be achieved by any estimator.
Short version: when the sample size is large enough, the mle θ̂ is approximately the MVUE of θ.

39 Summary
Point estimator and point estimate
Unbiased estimators
Minimum variance estimators
- Minimum variance unbiased estimator (MVUE)
Finding the standard error of an estimator
- Deriving directly
- Bootstrapping
Methods of point estimation
- The method of moments
- The method of maximum likelihood (MLE)
- Estimating functions of parameters
- Large-sample behavior of the mle


More information

Lecture 23. STAT 225 Introduction to Probability Models April 4, Whitney Huang Purdue University. Normal approximation to Binomial

Lecture 23. STAT 225 Introduction to Probability Models April 4, Whitney Huang Purdue University. Normal approximation to Binomial Lecture 23 STAT 225 Introduction to Probability Models April 4, 2014 approximation Whitney Huang Purdue University 23.1 Agenda 1 approximation 2 approximation 23.2 Characteristics of the random variable:

More information

Exercise. Show the corrected sample variance is an unbiased estimator of population variance. S 2 = n i=1 (X i X ) 2 n 1. Exercise Estimation

Exercise. Show the corrected sample variance is an unbiased estimator of population variance. S 2 = n i=1 (X i X ) 2 n 1. Exercise Estimation Exercise Show the corrected sample variance is an unbiased estimator of population variance. S 2 = n i=1 (X i X ) 2 n 1 Exercise S 2 = = = = n i=1 (X i x) 2 n i=1 = (X i µ + µ X ) 2 = n 1 n 1 n i=1 ((X

More information

6. Genetics examples: Hardy-Weinberg Equilibrium

6. Genetics examples: Hardy-Weinberg Equilibrium PBCB 206 (Fall 2006) Instructor: Fei Zou email: fzou@bios.unc.edu office: 3107D McGavran-Greenberg Hall Lecture 4 Topics for Lecture 4 1. Parametric models and estimating parameters from data 2. Method

More information

MVE051/MSG Lecture 7

MVE051/MSG Lecture 7 MVE051/MSG810 2017 Lecture 7 Petter Mostad Chalmers November 20, 2017 The purpose of collecting and analyzing data Purpose: To build and select models for parts of the real world (which can be used for

More information

4.1 Introduction Estimating a population mean The problem with estimating a population mean with a sample mean: an example...

4.1 Introduction Estimating a population mean The problem with estimating a population mean with a sample mean: an example... Chapter 4 Point estimation Contents 4.1 Introduction................................... 2 4.2 Estimating a population mean......................... 2 4.2.1 The problem with estimating a population mean

More information

1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by. Cov(X, Y ) = E(X E(X))(Y E(Y ))

1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by. Cov(X, Y ) = E(X E(X))(Y E(Y )) Correlation & Estimation - Class 7 January 28, 2014 Debdeep Pati Association between two variables 1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by Cov(X, Y ) = E(X E(X))(Y

More information

6 Central Limit Theorem. (Chs 6.4, 6.5)

6 Central Limit Theorem. (Chs 6.4, 6.5) 6 Central Limit Theorem (Chs 6.4, 6.5) Motivating Example In the next few weeks, we will be focusing on making statistical inference about the true mean of a population by using sample datasets. Examples?

More information

Rowan University Department of Electrical and Computer Engineering

Rowan University Department of Electrical and Computer Engineering Rowan University Department of Electrical and Computer Engineering Estimation and Detection Theory Fall 203 Practice EXAM Solution This is a closed book exam. One letter-size sheet is allowed. There are

More information

Normal Distribution. Notes. Normal Distribution. Standard Normal. Sums of Normal Random Variables. Normal. approximation of Binomial.

Normal Distribution. Notes. Normal Distribution. Standard Normal. Sums of Normal Random Variables. Normal. approximation of Binomial. Lecture 21,22, 23 Text: A Course in Probability by Weiss 8.5 STAT 225 Introduction to Probability Models March 31, 2014 Standard Sums of Whitney Huang Purdue University 21,22, 23.1 Agenda 1 2 Standard

More information

Chapter 7: SAMPLING DISTRIBUTIONS & POINT ESTIMATION OF PARAMETERS

Chapter 7: SAMPLING DISTRIBUTIONS & POINT ESTIMATION OF PARAMETERS Chapter 7: SAMPLING DISTRIBUTIONS & POINT ESTIMATION OF PARAMETERS Part 1: Introduction Sampling Distributions & the Central Limit Theorem Point Estimation & Estimators Sections 7-1 to 7-2 Sample data

More information

Probability Theory and Simulation Methods. April 9th, Lecture 20: Special distributions

Probability Theory and Simulation Methods. April 9th, Lecture 20: Special distributions April 9th, 2018 Lecture 20: Special distributions Week 1 Chapter 1: Axioms of probability Week 2 Chapter 3: Conditional probability and independence Week 4 Chapters 4, 6: Random variables Week 9 Chapter

More information

8.1 Estimation of the Mean and Proportion

8.1 Estimation of the Mean and Proportion 8.1 Estimation of the Mean and Proportion Statistical inference enables us to make judgments about a population on the basis of sample information. The mean, standard deviation, and proportions of a population

More information

The Bernoulli distribution

The Bernoulli distribution This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike License. Your use of this material constitutes acceptance of that license and the conditions of use of materials on this

More information

Statistics and Their Distributions

Statistics and Their Distributions Statistics and Their Distributions Deriving Sampling Distributions Example A certain system consists of two identical components. The life time of each component is supposed to have an expentional distribution

More information

Chapter 3 Discrete Random Variables and Probability Distributions

Chapter 3 Discrete Random Variables and Probability Distributions Chapter 3 Discrete Random Variables and Probability Distributions Part 4: Special Discrete Random Variable Distributions Sections 3.7 & 3.8 Geometric, Negative Binomial, Hypergeometric NOTE: The discrete

More information

ECE 295: Lecture 03 Estimation and Confidence Interval

ECE 295: Lecture 03 Estimation and Confidence Interval ECE 295: Lecture 03 Estimation and Confidence Interval Spring 2018 Prof Stanley Chan School of Electrical and Computer Engineering Purdue University 1 / 23 Theme of this Lecture What is Estimation? You

More information

Chapter 5. Statistical inference for Parametric Models

Chapter 5. Statistical inference for Parametric Models Chapter 5. Statistical inference for Parametric Models Outline Overview Parameter estimation Method of moments How good are method of moments estimates? Interval estimation Statistical Inference for Parametric

More information

STA258H5. Al Nosedal and Alison Weir. Winter Al Nosedal and Alison Weir STA258H5 Winter / 41

STA258H5. Al Nosedal and Alison Weir. Winter Al Nosedal and Alison Weir STA258H5 Winter / 41 STA258H5 Al Nosedal and Alison Weir Winter 2017 Al Nosedal and Alison Weir STA258H5 Winter 2017 1 / 41 NORMAL APPROXIMATION TO THE BINOMIAL DISTRIBUTION. Al Nosedal and Alison Weir STA258H5 Winter 2017

More information

5.3 Statistics and Their Distributions

5.3 Statistics and Their Distributions Chapter 5 Joint Probability Distributions and Random Samples Instructor: Lingsong Zhang 1 Statistics and Their Distributions 5.3 Statistics and Their Distributions Statistics and Their Distributions Consider

More information

1/2 2. Mean & variance. Mean & standard deviation

1/2 2. Mean & variance. Mean & standard deviation Question # 1 of 10 ( Start time: 09:46:03 PM ) Total Marks: 1 The probability distribution of X is given below. x: 0 1 2 3 4 p(x): 0.73? 0.06 0.04 0.01 What is the value of missing probability? 0.54 0.16

More information

EE641 Digital Image Processing II: Purdue University VISE - October 29,

EE641 Digital Image Processing II: Purdue University VISE - October 29, EE64 Digital Image Processing II: Purdue University VISE - October 9, 004 The EM Algorithm. Suffient Statistics and Exponential Distributions Let p(y θ) be a family of density functions parameterized by

More information

Binomial Random Variables. Binomial Random Variables

Binomial Random Variables. Binomial Random Variables Bernoulli Trials Definition A Bernoulli trial is a random experiment in which there are only two possible outcomes - success and failure. 1 Tossing a coin and considering heads as success and tails as

More information

3 ˆθ B = X 1 + X 2 + X 3. 7 a) Find the Bias, Variance and MSE of each estimator. Which estimator is the best according

3 ˆθ B = X 1 + X 2 + X 3. 7 a) Find the Bias, Variance and MSE of each estimator. Which estimator is the best according STAT 345 Spring 2018 Homework 9 - Point Estimation Name: Please adhere to the homework rules as given in the Syllabus. 1. Mean Squared Error. Suppose that X 1, X 2 and X 3 are independent random variables

More information

Much of what appears here comes from ideas presented in the book:

Much of what appears here comes from ideas presented in the book: Chapter 11 Robust statistical methods Much of what appears here comes from ideas presented in the book: Huber, Peter J. (1981), Robust statistics, John Wiley & Sons (New York; Chichester). There are many

More information

Version A. Problem 1. Let X be the continuous random variable defined by the following pdf: 1 x/2 when 0 x 2, f(x) = 0 otherwise.

Version A. Problem 1. Let X be the continuous random variable defined by the following pdf: 1 x/2 when 0 x 2, f(x) = 0 otherwise. Math 224 Q Exam 3A Fall 217 Tues Dec 12 Version A Problem 1. Let X be the continuous random variable defined by the following pdf: { 1 x/2 when x 2, f(x) otherwise. (a) Compute the mean µ E[X]. E[X] x

More information

Central Limit Theorem, Joint Distributions Spring 2018

Central Limit Theorem, Joint Distributions Spring 2018 Central Limit Theorem, Joint Distributions 18.5 Spring 218.5.4.3.2.1-4 -3-2 -1 1 2 3 4 Exam next Wednesday Exam 1 on Wednesday March 7, regular room and time. Designed for 1 hour. You will have the full

More information

INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY. Lecture -5 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc.

INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY. Lecture -5 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc. INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY Lecture -5 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc. Summary of the previous lecture Moments of a distribubon Measures of

More information

Chapter 7: Estimation Sections

Chapter 7: Estimation Sections 1 / 31 : Estimation Sections 7.1 Statistical Inference Bayesian Methods: 7.2 Prior and Posterior Distributions 7.3 Conjugate Prior Distributions 7.4 Bayes Estimators Frequentist Methods: 7.5 Maximum Likelihood

More information

Interval estimation. September 29, Outline Basic ideas Sampling variation and CLT Interval estimation using X More general problems

Interval estimation. September 29, Outline Basic ideas Sampling variation and CLT Interval estimation using X More general problems Interval estimation September 29, 2017 STAT 151 Class 7 Slide 1 Outline of Topics 1 Basic ideas 2 Sampling variation and CLT 3 Interval estimation using X 4 More general problems STAT 151 Class 7 Slide

More information

Chapter 7: Estimation Sections

Chapter 7: Estimation Sections Chapter 7: Estimation Sections 7.1 Statistical Inference Bayesian Methods: 7.2 Prior and Posterior Distributions 7.3 Conjugate Prior Distributions Frequentist Methods: 7.5 Maximum Likelihood Estimators

More information

Chapter 8: Sampling distributions of estimators Sections

Chapter 8: Sampling distributions of estimators Sections Chapter 8 continued Chapter 8: Sampling distributions of estimators Sections 8.1 Sampling distribution of a statistic 8.2 The Chi-square distributions 8.3 Joint Distribution of the sample mean and sample

More information

6. Continous Distributions

6. Continous Distributions 6. Continous Distributions Chris Piech and Mehran Sahami May 17 So far, all random variables we have seen have been discrete. In all the cases we have seen in CS19 this meant that our RVs could only take

More information

Review of key points about estimators

Review of key points about estimators Review of key points about estimators Populations can be at least partially described by population parameters Population parameters include: mean, proportion, variance, etc. Because populations are often

More information

Likelihood Methods of Inference. Toss coin 6 times and get Heads twice.

Likelihood Methods of Inference. Toss coin 6 times and get Heads twice. Methods of Inference Toss coin 6 times and get Heads twice. p is probability of getting H. Probability of getting exactly 2 heads is 15p 2 (1 p) 4 This function of p, is likelihood function. Definition:

More information

Probability & Statistics

Probability & Statistics Probability & Statistics BITS Pilani K K Birla Goa Campus Dr. Jajati Keshari Sahoo Department of Mathematics Statistics Descriptive statistics Inferential statistics /38 Inferential Statistics 1. Involves:

More information

Chapter 3 Discrete Random Variables and Probability Distributions

Chapter 3 Discrete Random Variables and Probability Distributions Chapter 3 Discrete Random Variables and Probability Distributions Part 3: Special Discrete Random Variable Distributions Section 3.5 Discrete Uniform Section 3.6 Bernoulli and Binomial Others sections

More information

Review of key points about estimators

Review of key points about estimators Review of key points about estimators Populations can be at least partially described by population parameters Population parameters include: mean, proportion, variance, etc. Because populations are often

More information

(Practice Version) Midterm Exam 1

(Practice Version) Midterm Exam 1 EECS 126 Probability and Random Processes University of California, Berkeley: Fall 2014 Kannan Ramchandran September 19, 2014 (Practice Version) Midterm Exam 1 Last name First name SID Rules. DO NOT open

More information

Computer Statistics with R

Computer Statistics with R MAREK GAGOLEWSKI KONSTANCJA BOBECKA-WESO LOWSKA PRZEMYS LAW GRZEGORZEWSKI Computer Statistics with R 5. Point Estimation Faculty of Mathematics and Information Science Warsaw University of Technology []

More information

MAS187/AEF258. University of Newcastle upon Tyne

MAS187/AEF258. University of Newcastle upon Tyne MAS187/AEF258 University of Newcastle upon Tyne 2005-6 Contents 1 Collecting and Presenting Data 5 1.1 Introduction...................................... 5 1.1.1 Examples...................................

More information

ECE 340 Probabilistic Methods in Engineering M/W 3-4:15. Lecture 10: Continuous RV Families. Prof. Vince Calhoun

ECE 340 Probabilistic Methods in Engineering M/W 3-4:15. Lecture 10: Continuous RV Families. Prof. Vince Calhoun ECE 340 Probabilistic Methods in Engineering M/W 3-4:15 Lecture 10: Continuous RV Families Prof. Vince Calhoun 1 Reading This class: Section 4.4-4.5 Next class: Section 4.6-4.7 2 Homework 3.9, 3.49, 4.5,

More information

Statistics for Business and Economics

Statistics for Business and Economics Statistics for Business and Economics Chapter 7 Estimation: Single Population Copyright 010 Pearson Education, Inc. Publishing as Prentice Hall Ch. 7-1 Confidence Intervals Contents of this chapter: Confidence

More information

The Normal Distribution

The Normal Distribution The Normal Distribution The normal distribution plays a central role in probability theory and in statistics. It is often used as a model for the distribution of continuous random variables. Like all models,

More information

Statistics 431 Spring 2007 P. Shaman. Preliminaries

Statistics 431 Spring 2007 P. Shaman. Preliminaries Statistics 4 Spring 007 P. Shaman The Binomial Distribution Preliminaries A binomial experiment is defined by the following conditions: A sequence of n trials is conducted, with each trial having two possible

More information

Probability Theory. Probability and Statistics for Data Science CSE594 - Spring 2016

Probability Theory. Probability and Statistics for Data Science CSE594 - Spring 2016 Probability Theory Probability and Statistics for Data Science CSE594 - Spring 2016 What is Probability? 2 What is Probability? Examples outcome of flipping a coin (seminal example) amount of snowfall

More information

Continuous random variables

Continuous random variables Continuous random variables probability density function (f(x)) the probability distribution function of a continuous random variable (analogous to the probability mass function for a discrete random variable),

More information

Introduction to Probability and Inference HSSP Summer 2017, Instructor: Alexandra Ding July 19, 2017

Introduction to Probability and Inference HSSP Summer 2017, Instructor: Alexandra Ding July 19, 2017 Introduction to Probability and Inference HSSP Summer 2017, Instructor: Alexandra Ding July 19, 2017 Please fill out the attendance sheet! Suggestions Box: Feedback and suggestions are important to the

More information

Module 3: Sampling Distributions and the CLT Statistics (OA3102)

Module 3: Sampling Distributions and the CLT Statistics (OA3102) Module 3: Sampling Distributions and the CLT Statistics (OA3102) Professor Ron Fricker Naval Postgraduate School Monterey, California Reading assignment: WM&S chpt 7.1-7.3, 7.5 Revision: 1-12 1 Goals for

More information

ME3620. Theory of Engineering Experimentation. Spring Chapter III. Random Variables and Probability Distributions.

ME3620. Theory of Engineering Experimentation. Spring Chapter III. Random Variables and Probability Distributions. ME3620 Theory of Engineering Experimentation Chapter III. Random Variables and Probability Distributions Chapter III 1 3.2 Random Variables In an experiment, a measurement is usually denoted by a variable

More information

Probability Theory. Mohamed I. Riffi. Islamic University of Gaza

Probability Theory. Mohamed I. Riffi. Islamic University of Gaza Probability Theory Mohamed I. Riffi Islamic University of Gaza Table of contents 1. Chapter 2 Discrete Distributions The binomial distribution 1 Chapter 2 Discrete Distributions Bernoulli trials and the

More information

Engineering Statistics ECIV 2305

Engineering Statistics ECIV 2305 Engineering Statistics ECIV 2305 Section 5.3 Approximating Distributions with the Normal Distribution Introduction A very useful property of the normal distribution is that it provides good approximations

More information

Discrete Random Variables

Discrete Random Variables Discrete Random Variables ST 370 A random variable is a numerical value associated with the outcome of an experiment. Discrete random variable When we can enumerate the possible values of the variable

More information

A New Hybrid Estimation Method for the Generalized Pareto Distribution

A New Hybrid Estimation Method for the Generalized Pareto Distribution A New Hybrid Estimation Method for the Generalized Pareto Distribution Chunlin Wang Department of Mathematics and Statistics University of Calgary May 18, 2011 A New Hybrid Estimation Method for the GPD

More information

Chapter 7 Sampling Distributions and Point Estimation of Parameters

Chapter 7 Sampling Distributions and Point Estimation of Parameters Chapter 7 Sampling Distributions and Point Estimation of Parameters Part 1: Sampling Distributions, the Central Limit Theorem, Point Estimation & Estimators Sections 7-1 to 7-2 1 / 25 Statistical Inferences

More information

Bias Reduction Using the Bootstrap

Bias Reduction Using the Bootstrap Bias Reduction Using the Bootstrap Find f t (i.e., t) so that or E(f t (P, P n ) P) = 0 E(T(P n ) θ(p) + t P) = 0. Change the problem to the sample: whose solution is so the bias-reduced estimate is E(T(P

More information

Lecture III. 1. common parametric models 2. model fitting 2a. moment matching 2b. maximum likelihood 3. hypothesis testing 3a. p-values 3b.

Lecture III. 1. common parametric models 2. model fitting 2a. moment matching 2b. maximum likelihood 3. hypothesis testing 3a. p-values 3b. Lecture III 1. common parametric models 2. model fitting 2a. moment matching 2b. maximum likelihood 3. hypothesis testing 3a. p-values 3b. simulation Parameters Parameters are knobs that control the amount

More information

Chapter 7. Sampling Distributions and the Central Limit Theorem

Chapter 7. Sampling Distributions and the Central Limit Theorem Chapter 7. Sampling Distributions and the Central Limit Theorem 1 Introduction 2 Sampling Distributions related to the normal distribution 3 The central limit theorem 4 The normal approximation to binomial

More information

Elementary Statistics Lecture 5

Elementary Statistics Lecture 5 Elementary Statistics Lecture 5 Sampling Distributions Chong Ma Department of Statistics University of South Carolina Chong Ma (Statistics, USC) STAT 201 Elementary Statistics 1 / 24 Outline 1 Introduction

More information

LET us say we have a population drawn from some unknown probability distribution f(x) with some

LET us say we have a population drawn from some unknown probability distribution f(x) with some CmpE 343 Lecture Notes 9: Estimation Ethem Alpaydın December 30, 04 LET us say we have a population drawn from some unknown probability distribution fx with some parameter θ. When we do not know θ, we

More information

Statistics for Managers Using Microsoft Excel 7 th Edition

Statistics for Managers Using Microsoft Excel 7 th Edition Statistics for Managers Using Microsoft Excel 7 th Edition Chapter 7 Sampling Distributions Statistics for Managers Using Microsoft Excel 7e Copyright 2014 Pearson Education, Inc. Chap 7-1 Learning Objectives

More information

IEOR 165 Lecture 1 Probability Review

IEOR 165 Lecture 1 Probability Review IEOR 165 Lecture 1 Probability Review 1 Definitions in Probability and Their Consequences 1.1 Defining Probability A probability space (Ω, F, P) consists of three elements: A sample space Ω is the set

More information

Expectations. Definition Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X ) or

Expectations. Definition Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X ) or Definition Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X ) or µ X, is E(X ) = µ X = x D x p(x) Definition Let X be a discrete

More information

MidTerm 1) Find the following (round off to one decimal place):

MidTerm 1) Find the following (round off to one decimal place): MidTerm 1) 68 49 21 55 57 61 70 42 59 50 66 99 Find the following (round off to one decimal place): Mean = 58:083, round off to 58.1 Median = 58 Range = max min = 99 21 = 78 St. Deviation = s = 8:535,

More information