Point Estimators. STATISTICS Lecture no. 10.

STATISTICS Lecture no. 10. Department of Econometrics, FEM UO Brno, office 69a, tel. 973 442029, email: jiri.neubauer@unob.cz. 8. 12. 2009

Introduction. Suppose that we manufacture lightbulbs and we want to state the average lifetime on the box. Say that we have the following five observed lifetimes (in hours): 983, 1063, 1241, 1040, 1103, whose average is 1086. If this is all the information we have, it seems reasonable to state 1086 as the average lifetime.

Introduction. Let the random variable $X$ be the lifetime of a lightbulb, and let $E(X) = \mu$. Here $\mu$ is an unknown parameter. We decide to repeat the lifetime measurement 5 times and will then get an outcome of the five random variables $X_1, \ldots, X_5$, which are i.i.d. (independent, identically distributed). We now estimate $\mu$ by the sample mean $$\bar{X} = \frac{1}{5}\sum_{i=1}^{5} X_i.$$
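A minimal sketch of this introductory calculation (in Python with NumPy; the tooling is our choice, not the lecture's):

```python
import numpy as np

lifetimes = np.array([983, 1063, 1241, 1040, 1103])  # observed lifetimes in hours
x_bar = lifetimes.mean()  # sample mean: (1/5) * sum of the X_i
print(x_bar)  # 1086.0, the value stated on the box
```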

Point Estimator. Definition. Let $X_1, \ldots, X_n$ be a random sample. The statistic (random variable) $T = T(X_1, X_2, \ldots, X_n) = T(\mathbf{X})$, which is a function of the random sample and is used to estimate an unknown parameter $\theta$, is called a point estimator of $\theta$. We write $T(\mathbf{X}) = \hat{\theta}$.

Point Estimator. Definition. The estimator $T(\mathbf{X})$ is said to be an unbiased estimator of the parameter $\theta$ if $E[T(\mathbf{X})] = \theta$. The difference $B(\theta, T) = E[T(\mathbf{X})] - \theta$ is called the bias of the estimator $T(\mathbf{X})$.

Example (Point Estimator). Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with mean $\mu$ and variance $\sigma^2$. The sample mean $\bar{X}$ is an unbiased estimator of $\mu$, because $$E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \mu.$$ The sample variance $S^2$ is an unbiased estimator of $\sigma^2$, because $$E(S^2) = E\left(\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \bar{X})^2\right) = \cdots = \sigma^2.$$

Example (Point Estimator). Let $X_1, X_2, \ldots, X_n$ be a random sample from a distribution with mean $\mu$ and variance $\sigma^2$. The (moment) variance $S_n^2$ is a biased estimator of $\sigma^2$, because $$E(S_n^2) = E\left(\frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2\right) = \cdots = \frac{n-1}{n}\sigma^2.$$ The bias of the estimator $S_n^2$ is $$B(\sigma^2, S_n^2) = E(S_n^2) - \sigma^2 = \frac{n-1}{n}\sigma^2 - \sigma^2 = -\frac{1}{n}\sigma^2.$$ The bias decreases as $n$ grows.

Some estimators are biased, but their bias decreases as $n$ increases. Definition. If $$\lim_{n\to\infty} E[T(\mathbf{X})] = \theta,$$ then the estimator $T(\mathbf{X})$ is said to be an asymptotically unbiased estimator of the parameter $\theta$. It is easy to see that, equivalently, $\lim_{n\to\infty} \left( E[T(\mathbf{X})] - \theta \right) = 0$.

Example (Point Estimator). The (moment) variance $S_n^2$ is an asymptotically unbiased estimator of $\sigma^2$, because $$\lim_{n\to\infty} E(S_n^2) = \lim_{n\to\infty} \frac{n-1}{n}\sigma^2 = \sigma^2.$$

Definition. The statistic $T(\mathbf{X})$ is a consistent estimator of the parameter $\theta$ if for every $\epsilon > 0$ $$\lim_{n\to\infty} P\left(|T(\mathbf{X}) - \theta| < \epsilon\right) = 1.$$ If $$\lim_{n\to\infty} B(\theta, T) = 0 \quad\text{and}\quad \lim_{n\to\infty} D[T(\mathbf{X})] = 0,$$ then $T(\mathbf{X})$ is a consistent estimator of $\theta$.

Example (Point Estimator). Prove that the sample mean is a consistent estimator of the expected value $\mu$. Since $E(\bar{X}) = \mu$ and $D(\bar{X}) = \sigma^2/n$, we obtain $$B(\mu, \bar{X}) = E(\bar{X}) - \mu = 0 \quad\text{and}\quad \lim_{n\to\infty} D(\bar{X}) = \lim_{n\to\infty} \frac{\sigma^2}{n} = 0.$$
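An illustrative sketch of what consistency means in practice: the empirical frequency of the event $|\bar{X} - \mu| < \epsilon$ approaches 1 as $n$ grows. The values of $\mu$, $\sigma$, $\epsilon$, and the sample sizes below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, eps, reps = 5.0, 2.0, 0.1, 2_000

for n in (10, 100, 1_000, 10_000):
    # sample means of `reps` independent samples of size n
    x_bar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    # fraction of samples with |X_bar - mu| < eps; tends to 1 as n grows
    print(n, np.mean(np.abs(x_bar - mu) < eps))
```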

If we have two unbiased estimators $T_1(\mathbf{X}) = \hat{\theta}$ and $T_2(\mathbf{X}) = \tilde{\theta}$, which should we choose? Intuitively, we should choose the one that tends to be closer to $\theta$, and since $E(T_1) = E(T_2) = \theta$, it makes sense to choose the estimator with the smaller variance. Definition. Suppose that $T_1(\mathbf{X}) = \hat{\theta}$ and $T_2(\mathbf{X}) = \tilde{\theta}$ are two unbiased estimators of $\theta$. If $D(T_1(\mathbf{X})) < D(T_2(\mathbf{X}))$, then $T_1(\mathbf{X}) = \hat{\theta}$ is said to be more efficient than $T_2(\mathbf{X}) = \tilde{\theta}$.

Example (Point Estimator). We can find two unbiased estimators of the parameter $\lambda$ of the Poisson distribution: $E(\bar{X}) = \lambda$ and $E(S^2) = \lambda$. It is possible to calculate that $D(\bar{X}) < D(S^2)$. The estimator $\bar{X}$ is therefore more efficient than the estimator $S^2$.
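A quick empirical check of this efficiency claim, as a sketch; the values of $\lambda$, $n$, and the replication count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, n, reps = 3.0, 30, 100_000

samples = rng.poisson(lam, size=(reps, n))
x_bar = samples.mean(axis=1)        # estimator 1: sample mean
s2 = samples.var(axis=1, ddof=1)    # estimator 2: sample variance

print(x_bar.mean(), s2.mean())  # both close to lambda = 3 (both unbiased)
print(x_bar.var(), s2.var())    # Var(X_bar) = lambda/n = 0.1 is clearly smaller
```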

How to Compare Estimators? Suppose we would like to compare unbiased and biased estimators of the parameter $\theta$. In this case it might not be suitable to choose the one with the smallest variance. Consider three estimators T, U and V whose sampling distributions are sketched in the slide's figure: the estimator T has the smallest variance but a large bias. Even the estimator with the smallest bias is not necessarily the best one: the estimator U has no bias, but its variance is too large. The estimator V seems to be the best.

Point Estimator. Definition. The mean square error of the estimator $T$ of a parameter $\theta$ is defined as $$MSE(T) = E(T - \theta)^2 = D(T) + B^2(\theta, T)$$ (MSE of estimator = variance of estimator + bias$^2$), where $T - \theta$ is the sampling error.

Point Estimator. The mean square error indicates the average sampling error of the estimates, computed over all possible random samples of size $n$. It combines the two required properties (a small bias and a small variance), which is why it is a universal criterion. If $T$ is an unbiased estimator, then $MSE(T) = D(T)$. Another way to measure the accuracy of an estimator is the standard error $SE = \sqrt{D(T)}$.

Example (Point Estimator). The sample mean is an unbiased estimator of the expected value $\mu$; its standard error is equal to the standard deviation of the sample mean, $$SE = \sqrt{D(\bar{X})} = \sigma(\bar{X}) = \frac{\sigma(X)}{\sqrt{n}}.$$ If $\sigma(X)$ is unknown, we have to estimate it by the sample standard deviation, which gives the estimate $$\widehat{SE} = \frac{\hat{\sigma}(X)}{\sqrt{n}} = \frac{S}{\sqrt{n}}.$$
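Tying this back to the lightbulb data from the introduction, a minimal sketch of the estimated standard error:

```python
import numpy as np

lifetimes = np.array([983, 1063, 1241, 1040, 1103])
n = len(lifetimes)
s = lifetimes.std(ddof=1)   # sample standard deviation S, approx 96.9
se_hat = s / np.sqrt(n)     # estimated standard error of X_bar, approx 43.3
print(s, se_hat)
```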

Example (Point Estimator). Find the mean square errors of $S^2$ and $S_n^2$ for a random sample from a normal distribution. Let us start with the statistic $S^2$, which is an unbiased estimator of $\sigma^2$: $$MSE(S^2) = D(S^2) = E(S^2 - \sigma^2)^2 = E(S^4) - 2\sigma^2 E(S^2) + \sigma^4 = E(S^4) - \sigma^4 = \frac{2\sigma^4}{n-1}.$$ The MSE of the estimator $S_n^2$ is $$MSE(S_n^2) = E(S_n^2 - \sigma^2)^2 = E(S_n^4) - 2\,\frac{n-1}{n}\sigma^4 + \sigma^4 = E(S_n^4) - \frac{n-2}{n}\sigma^4 = \frac{2n-1}{n^2}\sigma^4.$$ Then $MSE(S_n^2) < MSE(S^2)$, because $\frac{2n-1}{n^2} < \frac{2}{n-1}$.
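A simulation sketch comparing the two MSEs against the theoretical values $2\sigma^4/(n-1)$ and $(2n-1)\sigma^4/n^2$ derived above; $n$, $\sigma^2$, and the replication count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma2, reps = 10, 1.0, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
mse_s2 = ((samples.var(axis=1, ddof=1) - sigma2) ** 2).mean()
mse_s2n = ((samples.var(axis=1, ddof=0) - sigma2) ** 2).mean()

print(mse_s2, 2 * sigma2**2 / (n - 1))          # both approx 0.222
print(mse_s2n, (2 * n - 1) * sigma2**2 / n**2)  # both approx 0.19, the smaller MSE
```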

Method of Moments, Method of Maximum Likelihood. The definitions of unbiasedness and the other properties of estimators do not provide any guidance about how good estimators can be obtained. In this part, we discuss two methods for obtaining point estimators: the method of moments and the method of maximum likelihood. Maximum likelihood estimates are generally preferable to moment estimators because they have better efficiency properties. However, moment estimators are sometimes easier to compute. Both methods can produce unbiased point estimators.

Method of Moments. The general idea behind the method of moments is to equate population moments, which are defined in terms of expected values, to the corresponding sample moments. The population moments will be functions of the unknown parameters. These equations are then solved to yield estimators of the unknown parameters.

Method of Moments. Assume a distribution with $m \geq 1$ real parameters $\theta_1, \theta_2, \ldots, \theta_m$, and let $X_1, X_2, \ldots, X_n$ be a random sample from this distribution. Suppose that the moments $\mu_r = E(X_i^r)$ exist for $r = 1, 2, \ldots, m$. These moments depend on the parameters $\theta_1, \theta_2, \ldots, \theta_m$. The sample moments are defined by the formula $$M_r = \frac{1}{n}\sum_{i=1}^{n} X_i^r, \quad r = 1, 2, \ldots$$

Method of Moments. Let $X_1, \ldots, X_n$ be a random sample from either a probability function or a probability density function with $m$ unknown parameters $\theta_1, \ldots, \theta_m$. The moment estimators are found by equating the first $m$ population moments to the first $m$ sample moments, $\mu_r = M_r$ for $r = 1, \ldots, m$, and solving the resulting equations for the unknown parameters.

Example: estimation of the parameter $\lambda$ of the Poisson distribution. Suppose that $X_1, \ldots, X_n$ is a random sample from the Poisson distribution $Po(\lambda)$. We get the single equation $$\mu_1 = M_1: \quad E(X_i) = \frac{1}{n}\sum_{i=1}^{n} X_i,$$ i.e. $\lambda = \bar{X}$, so the estimator of the parameter $\lambda$ is $\hat{\lambda} = \bar{X}$.

Example: estimation of the parameters $\mu$ and $\sigma^2$ of the normal distribution. Suppose that $X_1, \ldots, X_n$ is a random sample from the normal distribution $N(\mu, \sigma^2)$. The two moment equations are $$\mu_1 = M_1: \quad E(X_i) = \frac{1}{n}\sum_{i=1}^{n} X_i,$$ $$\mu_2 = M_2: \quad E(X_i^2) = \frac{1}{n}\sum_{i=1}^{n} X_i^2, \quad\text{where}\quad E(X_i^2) = D(X_i) + E(X_i)^2 = \sigma^2 + \mu^2.$$ Solving them, we obtain the estimators $$\hat{\mu} = \bar{X}, \qquad \hat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar{X}^2 = \frac{1}{n}\sum_{i=1}^{n}(X_i - \bar{X})^2 = S_n^2 = \frac{n-1}{n}\,S^2.$$
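A sketch of this method-of-moments calculation on simulated data; the population parameters and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(10.0, 3.0, size=1_000)  # N(mu = 10, sigma^2 = 9) data

m1 = x.mean()            # first sample moment M_1
m2 = (x ** 2).mean()     # second sample moment M_2

mu_hat = m1              # from the equation mu = M_1
sigma2_hat = m2 - m1**2  # from the equation sigma^2 + mu^2 = M_2

print(mu_hat, sigma2_hat)               # close to 10 and 9
print(np.isclose(sigma2_hat, x.var()))  # identical to the moment variance S_n^2
```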

Method of Maximum Likelihood. Let $X_1, X_2, \ldots, X_n$ be a random sample from either a probability density function $f(x, \theta)$ or a probability function $p(x, \theta)$ with an unknown parameter $\theta = (\theta_1, \theta_2, \ldots, \theta_m)$. The random vector $\mathbf{X} = (X_1, X_2, \ldots, X_n)$ then has either the joint probability density function $$g(\mathbf{x}, \theta) = g(x_1, x_2, \ldots, x_n, \theta) = f(x_1, \theta) f(x_2, \theta) \cdots f(x_n, \theta)$$ or the joint probability function $$g(\mathbf{x}, \theta) = g(x_1, x_2, \ldots, x_n, \theta) = p(x_1, \theta) p(x_2, \theta) \cdots p(x_n, \theta).$$

Method of Maximum Likelihood. The density $g(\mathbf{x}, \theta)$ is a function of $\mathbf{x}$ for a given value of $\theta$. If the values $\mathbf{x}$ are given (observed data), then $g(\mathbf{x}, \theta)$ is a function of the variable $\theta$. We denote it $L(\theta, \mathbf{x})$ and call it the likelihood function. If there exists some $\hat{\theta}$ which fulfils $$L(\hat{\theta}, \mathbf{x}) \geq L(\theta, \mathbf{x}) \quad\text{for all } \theta,$$ then $\hat{\theta}$ is a maximum likelihood estimator of the parameter $\theta$. It is sometimes convenient to use the logarithm of the likelihood function, $\mathcal{L}(\theta, \mathbf{x}) = \ln L(\theta, \mathbf{x})$. Because the logarithm is an increasing function, the maximum likelihood estimator also satisfies $\mathcal{L}(\hat{\theta}, \mathbf{x}) \geq \mathcal{L}(\theta, \mathbf{x})$.

Method of Maximum Likelihood. The maximum likelihood estimator of the vector $\theta = (\theta_1, \theta_2, \ldots, \theta_m)$ is obtained by solving the system of equations $$\frac{\partial \mathcal{L}(\theta, \mathbf{x})}{\partial \theta_i} = 0, \quad i = 1, 2, \ldots, m.$$

Example. Let $X$ be a Bernoulli random variable. The probability function is $$p(x) = \begin{cases} \pi^x (1 - \pi)^{1-x} & x = 0, 1, \\ 0 & \text{otherwise.} \end{cases}$$ The likelihood function is $$L(\pi, \mathbf{x}) = \pi^{x_1}(1-\pi)^{1-x_1}\,\pi^{x_2}(1-\pi)^{1-x_2} \cdots \pi^{x_n}(1-\pi)^{1-x_n} = \pi^{\sum_{i=1}^{n} x_i} (1-\pi)^{n - \sum_{i=1}^{n} x_i}.$$ The logarithm of $L(\pi, \mathbf{x})$ is $$\mathcal{L}(\pi, \mathbf{x}) = \left(\sum_{i=1}^{n} x_i\right) \ln \pi + \left(n - \sum_{i=1}^{n} x_i\right) \ln(1 - \pi).$$

Example. We calculate the maximum of $\mathcal{L}(\pi, \mathbf{x})$: $$\frac{d\mathcal{L}(\pi, \mathbf{x})}{d\pi} = \frac{\sum_{i=1}^{n} x_i}{\pi} - \frac{n - \sum_{i=1}^{n} x_i}{1 - \pi} = 0,$$ and get the estimator $$\hat{\pi} = \frac{\sum_{i=1}^{n} x_i}{n} = \bar{x}.$$
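A sketch verifying this result numerically: we maximize the Bernoulli log-likelihood with a generic optimizer and check that the maximizer agrees with the analytic MLE $\hat{\pi} = \bar{x}$. SciPy's `minimize_scalar` and the simulated data are our illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
x = rng.binomial(1, 0.3, size=200)  # Bernoulli data with true pi = 0.3

def neg_log_lik(p):
    # -ln L(pi, x) = -[(sum x_i) ln pi + (n - sum x_i) ln(1 - pi)]
    return -(x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1 - 1e-6), method="bounded")
print(res.x, x.mean())  # numeric maximizer agrees with the analytic MLE x_bar
```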

Example. Find a maximum likelihood estimator of the parameter $\lambda$ of the Poisson distribution $Po(\lambda)$. $$L(\lambda, \mathbf{x}) = e^{-n\lambda}\,\frac{\lambda^{\sum_{i=1}^{n} x_i}}{x_1! x_2! \cdots x_n!},$$ $$\mathcal{L}(\lambda, \mathbf{x}) = \ln L(\lambda, \mathbf{x}) = -n\lambda + \sum_{i=1}^{n} x_i \ln \lambda - \ln(x_1! x_2! \cdots x_n!),$$ $$\frac{d\mathcal{L}(\lambda, \mathbf{x})}{d\lambda} = -n + \sum_{i=1}^{n} x_i\,\frac{1}{\lambda} = 0,$$ $$\hat{\lambda} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}.$$
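The same numeric check for the Poisson case, as a sketch: the factorial term is constant in $\lambda$, so it can be dropped from the objective without changing the maximizer. The data and optimizer choice are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)
x = rng.poisson(4.0, size=200)  # Poisson data with true lambda = 4

def neg_log_lik(lam):
    # -l(lambda) up to the constant ln(x_1! ... x_n!), which does not
    # depend on lambda and so does not affect the maximizer
    return -(-len(x) * lam + x.sum() * np.log(lam))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 50.0), method="bounded")
print(res.x, x.mean())  # both near 4: numeric and analytic MLE coincide
```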