Chapter 7 - Lecture 1 General concepts and criteria

January 29th, 2010

Outline:
- Best estimator
- Mean square error
- Unbiased estimators
- Example
- Unbiased estimators not unique
- Special case: MVUE
- Bootstrap

General Question: What do you think the main purpose of Statistics is?

Parameter versus Estimator. A parameter is a value that describes the whole population. (If we go back to Stat 318, we will see that the distribution of a random variable may depend on some parameters. For example: on which parameter does the Poisson distribution depend?) An estimator is a value computed from a specific sample of the population, used to approximate the parameter when the parameter is unknown.

Definitions. A point estimate of a parameter θ is a single number that can be regarded as a sensible value for θ. A point estimate is obtained by selecting a suitable statistic and computing its value from the given data. The selected statistic is called the point estimator of θ. Both the point estimate and the point estimator of a parameter θ are denoted by ˆθ. The difference is that the estimator, being a function of the random sample, is written with capital letters (ˆθ = T(X1,...,Xn)), while the estimate, its computed value for a particular sample, is written with lowercase letters.

Example with Poisson Suppose that the number of typos in each of my lectures follow a Poisson distribution with mean λ unknown. I randomly choose 10 lectures and the number of typos I find in them are: 0, 0, 0, 1, 1, 1, 1, 2, 3, 6. Find the estimator and the estimate of λ.
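For the Poisson model, a natural estimator of λ (both the method-of-moments and the maximum likelihood estimator) is the sample mean, ˆλ = X̄. A minimal sketch of the computation for this data:

```python
from statistics import mean

# Typo counts from the 10 randomly chosen lectures
typos = [0, 0, 0, 1, 1, 1, 1, 2, 3, 6]

# Estimator: lambda-hat = X-bar (the sample mean as a function of the sample);
# estimate: its computed value for this particular sample
lam_hat = mean(typos)
print(lam_hat)  # 1.5
```

So the estimator is ˆλ = X̄ and the estimate for this sample is x̄ = 15/10 = 1.5.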

Example with Normal Suppose that the average time a student needs to finish a homework assignment is normally distributed with unknown average. I randomly ask 10 students and I get the following answers for the time it takes for them: 30, 25, 55, 60, 40, 45, 30, 60, 45, 50. Find several estimators and estimates of the population average.
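Since the normal distribution is symmetric, several reasonable estimators of the population average are available: the sample mean, the sample median, and a trimmed mean. A sketch computing all three for this data:

```python
from statistics import mean, median

times = [30, 25, 55, 60, 40, 45, 30, 60, 45, 50]

xbar = mean(times)        # sample mean: 440/10 = 44
med = median(times)       # sample median: average of the two middle values, 45
s = sorted(times)
trimmed = mean(s[1:-1])   # 10% trimmed mean: drop smallest and largest, 355/8 = 44.375

print(xbar, med, trimmed)
```

Three different statistics give three different estimates (44, 45, and 44.375) of the same parameter, which raises the question of which estimator to prefer.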

Best possible estimator. As we have seen in the last example (see also Example 7.2 in your book), there may be several choices of estimator for a parameter θ. Which one is the best? Ideally, the best estimator would be one that always equals the parameter, that is, ˆθ = θ. But every estimator is a statistic, and every statistic takes a different value from sample to sample, so this is impossible. Any solutions?

Definition of mean square error. The mean square error of an estimator ˆθ is defined as: MSE(ˆθ) = E[(ˆθ − θ)²]. It can be shown (how?) that this can be written as: MSE(ˆθ) = var(ˆθ) + [E(ˆθ) − θ]². Why is MSE important?

Unbiased estimators. The value E(ˆθ) − θ is called the bias of the point estimator ˆθ. A point estimator ˆθ is said to be an unbiased estimator of θ if E(ˆθ) = θ.

Example 7.5, page 332.

Proposition. If X1,...,Xn is a random sample from a distribution with mean µ, then X̄ is always an unbiased estimator of µ. Moreover, if the distribution is continuous and symmetric, then the median and the trimmed mean are also unbiased estimators of µ. This shows that unbiased estimators are not unique. If we have several unbiased estimators, which one is better?
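The proposition can be illustrated by simulation for a symmetric distribution. The sketch below (µ, σ, and the simulation sizes are illustrative assumptions) draws many normal samples and checks that the sample mean and the sample median both average out to µ:

```python
import random
from statistics import median

random.seed(2)
mu, sigma = 5.0, 2.0   # normal population: symmetric about mu
n, B = 15, 4000        # sample size, number of simulated samples

means, medians = [], []
for _ in range(B):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(sum(sample) / n)
    medians.append(median(sample))

# Both long-run averages should be close to mu = 5 (unbiasedness)
print(sum(means) / B, sum(medians) / B)
```

Both averages land near 5, consistent with both estimators being unbiased; they differ in variance, which motivates the next definition.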

Definition of MVUE. Among all estimators ˆθ that are unbiased, we choose the one with the minimum variance. This estimator is called the minimum variance unbiased estimator (MVUE) of θ. MVUEs are important in connection with the MSE: for an unbiased estimator the bias is zero, so MSE(ˆθ) = var(ˆθ). What happens to the MSE if we have an MVUE?

Theorem. If X1,...,Xn is a random sample from a normal distribution with parameters µ and σ² (Xi ∼ N(µ, σ²)), then the estimator ˆµ = X̄ is the MVUE for µ.

Example 7.7, page 335.

Example 7.6, page 335 - the confusing one.

Standard error of an estimator. Every time we report an estimator, we want to know how much variability there is in our estimate of the parameter of interest. This variability is measured by the standard error of the estimator, calculated as: σˆθ = √var(ˆθ). (1)

Estimated standard error of an estimator. Sometimes the standard error involves unknown parameters, in which case it cannot be calculated. If those parameters can themselves be estimated, we can calculate the estimated standard error: ˆσˆθ = √var(ˆθ) with the unknown parameters replaced by their estimates. (2)

Example. Suppose that the time a student needs to finish a homework assignment is normally distributed with unknown mean and standard deviation equal to 15. I randomly ask 10 students and get the following answers for the time it takes them: 30, 25, 55, 60, 40, 45, 30, 60, 45, 50. Find the standard error of the estimator for the mean.

Example. Suppose that the time a student needs to finish a homework assignment is normally distributed with unknown mean and unknown standard deviation. I randomly ask 10 students and get the following answers for the time it takes them: 30, 25, 55, 60, 40, 45, 30, 60, 45, 50. Find the estimated standard error of the estimator for the mean.
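The two examples differ only in whether σ is known. For the sample mean, σ_X̄ = σ/√n; when σ is unknown we plug in the sample standard deviation s, giving the estimated standard error s/√n. A sketch of both computations:

```python
from math import sqrt
from statistics import stdev

times = [30, 25, 55, 60, 40, 45, 30, 60, 45, 50]
n = len(times)

# Known sigma = 15 (first example): se = sigma / sqrt(n)
se_known = 15 / sqrt(n)   # about 4.743

# Unknown sigma (second example): plug in the sample standard deviation
s = stdev(times)          # s^2 = 1440/9 = 160, so s = sqrt(160), about 12.649
se_est = s / sqrt(n)      # sqrt(160/10) = 4.0

print(se_known, se_est)
```

With σ known the standard error is 15/√10 ≈ 4.743; with σ unknown the estimated standard error works out to exactly √(160/10) = 4.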

Bootstrap. Sometimes we cannot find an explicit formula for the standard error of an estimate. In this case one can use the bootstrap to find an estimated standard error. Suppose we have a random sample from N(µ, σ²) where µ and σ² are both unknown, and we estimate them by ˆµ = x̄ = 2.2 and ˆσ² = s² = 8.3. We then use a computer to generate B bootstrap samples from N(2.2, 8.3).

Bootstrap (continued). In each bootstrap sample we calculate a new estimate of the mean, ˆµi, and a new estimate of the variance, ˆσ²i, for i = 1,...,B. We average all the means and all the variances to obtain µ̄ = (1/B) Σ_{i=1}^B ˆµi and σ̄² = (1/B) Σ_{i=1}^B ˆσ²i. The bootstrap estimates of the standard errors of the two estimators are then: Sˆµ = √[ (1/(B−1)) Σ_{i=1}^B (ˆµi − µ̄)² ] and Sˆσ² = √[ (1/(B−1)) Σ_{i=1}^B (ˆσ²i − σ̄²)² ].
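This parametric bootstrap is easy to sketch in code. Below, B bootstrap samples of size n = 10 are drawn from the fitted N(2.2, 8.3) (n and B are illustrative choices; the lecture does not fix them), and the bootstrap standard error of ˆµ is computed with the formula above:

```python
import random
from math import sqrt

random.seed(3)
mu_hat, var_hat = 2.2, 8.3   # estimates from the original sample
n, B = 10, 2000              # original sample size, number of bootstrap samples

boot_means = []
for _ in range(B):
    # Parametric bootstrap: resample from the fitted N(2.2, 8.3)
    sample = [random.gauss(mu_hat, sqrt(var_hat)) for _ in range(n)]
    boot_means.append(sum(sample) / n)

mbar = sum(boot_means) / B
se_mu = sqrt(sum((m - mbar) ** 2 for m in boot_means) / (B - 1))
print(se_mu)
```

As a sanity check, for the sample mean the bootstrap answer should be close to the plug-in formula ˆσ/√n = √(8.3/10) ≈ 0.911; the bootstrap earns its keep for estimators where no such formula exists.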

Homework: Section 7.1, page 340, problems 1, 2, 3, 4, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 19.