AMS570.01 Practice Midterm Exam, Spring 2018

Name: ID: Signature:

Instruction: This is a closed-book exam. You are allowed a one-page 8x11 formula sheet (2-sided). No cellphone, calculator, or computer is allowed. Cheating shall result in a course grade of F. The exam will last the entire lecture. Please provide complete solutions for full credit and also turn in this cover page. Good luck!

**** Please note that this practice exam has more questions than what will appear in the real midterm. The real midterm will have only 3 problems, each with no more than 3 questions.

1. Let X₁, X₂, ..., Xₙ ~ i.i.d. N(μ, σ²) be a random sample from the normal population, where μ is assumed known. Please derive:
(a) The maximum likelihood estimator for σ².
(b) The method of moment estimator for σ².
(c) Are the above estimator(s) for σ² unbiased?
(d) Are these two estimators identical?
(e) Is the MLE an efficient estimator for σ²?
(f) Is the MOME an efficient estimator for σ²?
(g) Is the MLE a best estimator (UMVUE) for σ²?
(h) Is the MOME a best estimator (UMVUE) for σ²?
(i) Is the sample variance an efficient estimator for σ²? Is it a best estimator (UMVUE) for σ²?

Hint: Cramér-Rao Inequality. Let θ̂ = h(X₁, X₂, ..., Xₙ) be unbiased for θ, where Xᵢ, i = 1, 2, ..., n, is a random sample from a population with pdf f(x; θ) satisfying all regularity conditions. Then

Var(θ̂) ≥ 1 / { n E[ (∂ ln f(X; θ)/∂θ)² ] } = -1 / { n E[ ∂² ln f(X; θ)/∂θ² ] }

Solution:

(a) The likelihood function is

L = ∏ᵢ f(xᵢ; σ²) = ∏ᵢ (1/√(2πσ²)) exp[-(xᵢ - μ)²/(2σ²)] = (2π)^{-n/2} (σ²)^{-n/2} exp[-Σᵢ (xᵢ - μ)²/(2σ²)]

The log likelihood function is

l = ln L = constant - (n/2) ln(σ²) - Σᵢ (xᵢ - μ)²/(2σ²)

Solving

∂l/∂σ² = -n/(2σ²) + Σᵢ (xᵢ - μ)²/(2σ⁴) = 0

we obtain the MLE for σ²:

σ̂² = (1/n) Σᵢ (Xᵢ - μ)²

(b) We now derive the MOME estimator for σ². Since the first population moment is E[X] = μ, which does not involve σ², we only need to use the second population and sample moments. Setting them equal, we have

(1/n) Σᵢ Xᵢ² = E(X²) = σ² + μ²

One step further, we have the MOME estimator of σ²:

σ̃² = (1/n) Σᵢ Xᵢ² - μ²
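As an illustrative aside (not part of the original exam; the values of μ, σ², n, and the seed below are arbitrary choices), the following NumPy sketch evaluates both estimators on simulated samples. It previews parts (c) and (d) below: both averages land near σ², while the two estimators differ sample by sample when μ ≠ 0.

```python
# Illustrative simulation sketch (assumed parameters, not from the exam):
# compare the MLE (1/n) * sum((X_i - mu)^2) with the MOME
# (1/n) * sum(X_i^2) - mu^2 on simulated N(mu, sigma^2) samples.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, reps = 2.0, 4.0, 50, 20000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
mle = np.mean((x - mu) ** 2, axis=1)        # part (a): sigma^2-hat
mome = np.mean(x ** 2, axis=1) - mu ** 2    # part (b): sigma^2-tilde

print(mle.mean(), mome.mean())              # both near 4.0 (cf. part (c))
print(np.max(np.abs(mle - mome)))           # positive: not identical (cf. part (d))
```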

(c) Since

E[σ̂²] = (1/n) Σᵢ E(Xᵢ - μ)² = σ²

it is straightforward to verify that the MLE is an unbiased estimator for σ². Since

E[σ̃²] = (1/n) Σᵢ E[Xᵢ²] - μ² = (σ² + μ²) - μ² = σ²

we have shown that the MOME σ̃² = (1/n) Σᵢ Xᵢ² - μ² is also an unbiased estimator for σ².

(d) The MLE for σ² can be rewritten as:

σ̂² = (1/n) Σᵢ (Xᵢ - μ)² = (1/n) Σᵢ (Xᵢ² - 2μXᵢ + μ²) = (1/n) Σᵢ Xᵢ² - 2μX̄ + μ²

Comparing with the MOME, we find them NOT identical, although both are unbiased estimators for σ², unless μ = 0.

(e) Now we calculate the Cramér-Rao lower bound for the variance of an unbiased estimator of σ²:

ln f(x; σ²) = ln[(1/√(2πσ²)) exp(-(x - μ)²/(2σ²))] = constant - (1/2) ln σ² - (x - μ)²/(2σ²)

∂ ln f(x; σ²)/∂σ² = -1/(2σ²) + (x - μ)²/(2σ⁴)

∂² ln f(x; σ²)/∂(σ²)² = 1/(2σ⁴) - (x - μ)²/σ⁶

E[∂² ln f(X; σ²)/∂(σ²)²] = E[1/(2σ⁴) - (X - μ)²/σ⁶] = 1/(2σ⁴) - σ²/σ⁶ = -1/(2σ⁴)

Thus the Cramér-Rao lower bound is 2σ⁴/n.
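This computation can be cross-checked symbolically (an illustrative addition; SymPy is assumed available). Writing s for σ², the sketch below differentiates ln f twice in s and substitutes E[(X - μ)²] = s, reproducing -1/(2σ⁴) and hence the bound 2σ⁴/n.

```python
# SymPy sketch (illustrative, with s standing in for sigma^2): verify that
# E[d^2 ln f / d s^2] = -1/(2 s^2), so the Cramer-Rao bound is 2 s^2 / n.
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
s = sp.symbols('s', positive=True)

logf = -sp.Rational(1, 2) * sp.log(2 * sp.pi * s) - (x - mu) ** 2 / (2 * s)
d2 = sp.diff(logf, s, 2)                 # 1/(2 s^2) - (x - mu)^2 / s^3

# Expectation under N(mu, s): replace (x - mu)^2 by its mean value, s.
fisher_info = -d2.subs((x - mu) ** 2, s)
print(sp.simplify(fisher_info))          # 1/(2*s**2), per observation
```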

We now show that the MLE σ̂² = (1/n) Σᵢ (Xᵢ - μ)² attains this bound and is therefore an efficient estimator for σ². Let

W = Σᵢ (Xᵢ - μ)²/σ²

We know that W ~ χ²ₙ and therefore Var(W) = 2n. Thus

Var(σ̂²) = Var(σ²W/n) = (σ⁴/n²)(2n) = 2σ⁴/n

Therefore the MLE is an efficient estimator for σ², and subsequently we can immediately claim that it is also the best estimator (UMVUE) for σ², which answers part (g).

(f) Using the moment generating function, we can show that the variance of the MOME is:

Var[σ̃²] = (1/n²) Σᵢ Var[Xᵢ²] = (1/n) Var[X²] = (2σ⁴ + 4μ²σ²)/n

Comparing with the C-R lower bound, we find that the MOME is NOT an efficient estimator for σ² unless μ = 0. Since the MOME does not have lower variance than the MLE when μ is not zero, we claim for (h) that the MOME is not a best estimator unless μ = 0.
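A hedged numerical companion to parts (e), (f), and (h) (illustrative parameters, assuming NumPy): the simulated variance of the MLE sits at the bound 2σ⁴/n, while that of the MOME matches (2σ⁴ + 4μ²σ²)/n and exceeds the bound since μ ≠ 0 here.

```python
# Illustrative variance check (assumed parameters): the MLE attains the
# Cramer-Rao bound 2*sigma^4/n; the MOME has variance
# (2*sigma^4 + 4*mu^2*sigma^2)/n, larger whenever mu != 0.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma2, n, reps = 2.0, 4.0, 50, 100000

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
mle = np.mean((x - mu) ** 2, axis=1)
mome = np.mean(x ** 2, axis=1) - mu ** 2

print(mle.var(), 2 * sigma2 ** 2 / n)                            # ~0.64 vs 0.64
print(mome.var(), (2 * sigma2 ** 2 + 4 * mu ** 2 * sigma2) / n)  # ~1.92 vs 1.92
```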

*** Below we show the detailed derivation of Var[X²], and thus of Var[σ̃²]. ***

Var[X²] = E[X⁴] - (E[X²])² = E[X⁴] - (σ² + μ²)²

Now we derive E[X⁴] using the moment generating function of X ~ N(μ, σ²):

M(t) = exp(μt + σ²t²/2)

The first through fourth derivatives of M(t) with respect to t are as follows:

dM(t)/dt = (μ + σ²t) exp(μt + σ²t²/2)

d²M(t)/dt² = σ² exp(μt + σ²t²/2) + (μ + σ²t)² exp(μt + σ²t²/2)

d³M(t)/dt³ = 3σ²(μ + σ²t) exp(μt + σ²t²/2) + (μ + σ²t)³ exp(μt + σ²t²/2)

d⁴M(t)/dt⁴ = 3σ⁴ exp(μt + σ²t²/2) + 6σ²(μ + σ²t)² exp(μt + σ²t²/2) + (μ + σ²t)⁴ exp(μt + σ²t²/2)

Therefore we have:

E[X⁴] = d⁴M(t)/dt⁴ |ₜ₌₀ = 3σ⁴ + 6μ²σ² + μ⁴

Thus:

Var[X²] = E[X⁴] - (E[X²])² = (3σ⁴ + 6μ²σ² + μ⁴) - (σ² + μ²)² = 2σ⁴ + 4μ²σ²

and therefore:

Var[σ̃²] = (1/n²) Σᵢ Var[Xᵢ²] = (1/n) Var[X²] = (2σ⁴ + 4μ²σ²)/n
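The four derivatives above can be verified symbolically; the following SymPy fragment (an illustrative addition, with s standing for σ²) reproduces E[X⁴] and Var[X²].

```python
# SymPy sketch (illustrative): differentiate the normal mgf four times at
# t = 0 to recover E[X^4] = 3 s^2 + 6 mu^2 s + mu^4 and Var[X^2] (s = sigma^2).
import sympy as sp

t, mu = sp.symbols('t mu', real=True)
s = sp.symbols('s', positive=True)

M = sp.exp(mu * t + s * t ** 2 / 2)      # mgf of N(mu, sigma^2)
EX4 = sp.diff(M, t, 4).subs(t, 0)
EX2 = sp.diff(M, t, 2).subs(t, 0)        # sigma^2 + mu^2

print(sp.expand(EX4))                    # mu**4 + 6*mu**2*s + 3*s**2
print(sp.expand(EX4 - EX2 ** 2))         # 4*mu**2*s + 2*s**2 = Var[X^2]
```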

(g) Alternatively, we can show that the MLE is the best estimator directly (rather than using the "an efficient estimator is also a best estimator" argument) as follows. The population pdf is:

f(x; σ²) = (1/√(2πσ²)) exp[-(x - μ)²/(2σ²)] = (1/√(2πσ²)) exp[(-1/(2σ²)) · (x - μ)²]

so it is a regular exponential family, with w(σ²) = -1/(2σ²) and t(x) = (x - μ)². Thus

T(X) = Σᵢ (Xᵢ - μ)²

is a complete and sufficient statistic (CSS) for σ². It is easy to see that the MLE for σ² is a function of the complete and sufficient statistic:

σ̂² = (1/n) Σᵢ (Xᵢ - μ)² = T(X)/n

In addition, we know that the MLE is an unbiased estimator for σ², as shown in part (c):

E(σ̂²) = (1/n) Σᵢ E(Xᵢ - μ)² = (1/n) Σᵢ Var(Xᵢ) = σ²

Since the MLE σ̂² is an unbiased estimator and a function of the CSS, σ̂² is the best estimator (UMVUE) for σ² by the Lehmann-Scheffé Theorem.

(i) The sample variance is also an unbiased estimator for σ². Let

S² = Σᵢ (Xᵢ - X̄)²/(n - 1) and V = Σᵢ (Xᵢ - X̄)²/σ²

We know that V ~ χ²ₙ₋₁ and therefore Var(V) = 2(n - 1). Thus

Var(S²) = Var(σ²V/(n - 1)) = σ⁴ · 2(n - 1)/(n - 1)² = 2σ⁴/(n - 1) > 2σ⁴/n

Therefore, the sample variance is neither an efficient estimator nor a best estimator for σ² when μ is known.

2. Suppose that the random variables Y₁, Y₂, ..., Yₙ satisfy

Yᵢ = βxᵢ + εᵢ, i = 1, 2, ..., n

where x₁, x₂, ..., xₙ are fixed constants, and ε₁, ε₂, ..., εₙ are independent Gaussian random variables with mean 0 and variance σ² (unknown). This is called regression through the origin. Please derive:
(1) The (ordinary) least squares estimator (LSE) for β.
(2) The maximum likelihood estimator (MLE) for β. Is the MLE the same as the LSE?
(3) The distribution of the MLE.
(4) Can you find a way to derive the method of moment estimator (MOME) for β? If so, please derive it.

Solution:

(1) To minimize SS = Σᵢ (yᵢ - βxᵢ)², we solve

∂SS/∂β = -2 Σᵢ xᵢ(yᵢ - βxᵢ) = 0

which gives

β̂ = Σᵢ xᵢyᵢ / Σᵢ xᵢ²

and ŷ = β̂x is the least squares regression line through the origin.

(2) Yᵢ ~ N(βxᵢ, σ²), i = 1, ..., n. Since Y₁, Y₂, ..., Yₙ are independent of each other, the likelihood is:

L = ∏ᵢ (1/√(2πσ²)) exp[-(yᵢ - βxᵢ)²/(2σ²)] = (2πσ²)^{-n/2} exp[-Σᵢ (yᵢ - βxᵢ)²/(2σ²)]

The log likelihood is:

ln L = -(n/2) ln(2πσ²) - Σᵢ (yᵢ - βxᵢ)²/(2σ²)

Taking the derivative with respect to β and setting it to zero, we obtain the MLE

β̂ = Σᵢ xᵢyᵢ / Σᵢ xᵢ²

which is the same as the LSE.
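As an illustrative aside (not part of the exam; the design points xᵢ, β, σ, and seed below are arbitrary), the closed form Σᵢxᵢyᵢ/Σᵢxᵢ² can be compared against a generic least squares solver, and its sampling behavior previews the distribution derived in part (3).

```python
# Illustrative sketch (assumed design points and parameters): the closed-form
# estimator sum(x*y)/sum(x^2) matches numpy's least squares fit through the
# origin, and is centered at beta with variance sigma^2/sum(x^2) (part (3)).
import numpy as np

rng = np.random.default_rng(2)
beta, sigma, reps = 1.5, 2.0, 100000
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])       # fixed constants x_i

y = beta * x + rng.normal(0.0, sigma, size=(reps, x.size))
beta_hat = y @ x / np.sum(x ** 2)                   # sum(x_i y_i) / sum(x_i^2)

print(np.linalg.lstsq(x[:, None], y[0], rcond=None)[0][0], beta_hat[0])
print(beta_hat.mean(), beta)                        # ~1.5: centered at beta
print(beta_hat.var(), sigma ** 2 / np.sum(x ** 2))  # ~0.1758 each
```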

(3) Let

θᵢ = xᵢ / Σⱼ xⱼ², i = 1, ..., n

Then β̂ = Σᵢ θᵢYᵢ with θᵢYᵢ ~ N(θᵢβxᵢ, θᵢ²σ²), i = 1, ..., n; furthermore, these terms are independent of each other. We have the moment generating function for β̂:

M_β̂(t) = E[exp(tβ̂)] = E[exp(t Σᵢ θᵢYᵢ)] = ∏ᵢ E[exp(tθᵢYᵢ)]
       = ∏ᵢ exp[tθᵢβxᵢ + t²θᵢ²σ²/2] = exp[t Σᵢ θᵢβxᵢ + (t²σ²/2) Σᵢ θᵢ²]
       = exp[tβ + t²σ²/(2 Σᵢ xᵢ²)]

Therefore we find:

β̂ ~ N(β, σ²/Σᵢ xᵢ²)

(4) Yes; for example, by setting up

Wᵢ = Yᵢ - βxᵢ ~ N(0, σ²), i = 1, ..., n, with W̄ = Ȳ - βx̄ and E[W̄] = 0

and equating W̄ to its expectation 0, we obtain a MOME:

β̃ = Ȳ/x̄

3. Let X₁, X₂, ..., Xₙ be a random sample from the truncated (shifted) exponential distribution with pdf

f(x) = exp[-(x - θ)], if x ≥ θ; and f(x) = 0, if x < θ.

Please:
(a) Derive the method of moment estimator of θ.
(b) Derive the MLE of θ.
(c) Are the MOME and the MLE unbiased estimators for θ?
(d) Compare the MSE of the MOME and the MLE for θ. Which one is a better estimator for θ?

Solution:

(a) First we derive the population mean, using integration by parts:

E(X) = ∫_θ^∞ x e^{-(x-θ)} dx = [-x e^{-(x-θ)}]_θ^∞ + ∫_θ^∞ e^{-(x-θ)} dx = θ + 1

Setting the population mean equal to the sample mean, we have 1 + θ = X̄; thus the MOME for θ is

W = θ̃ = X̄ - 1

(b) The likelihood is

L = ∏ᵢ exp[-(xᵢ - θ)] = exp[-Σᵢ (xᵢ - θ)] = exp(nθ - Σᵢ xᵢ), for θ ≤ minᵢ xᵢ

The likelihood is increasing in θ, so it is maximized when θ achieves its largest admissible value; thus the MLE for θ is

θ̂ = minᵢ Xᵢ = X₍₁₎

(c) The MOME is obviously unbiased, as E(W) = E(X̄) - 1 = (1 + θ) - 1 = θ. The pdf of the MLE Y = X₍₁₎ is:

f(y) = n e^{-n(y-θ)}, y ≥ θ

Thus E(Y) = ∫_θ^∞ y f(y) dy = θ + 1/n, so the MLE is not unbiased.

(d) It is easy to show that Xᵢ - θ ~ Exponential(rate 1) and Y - θ ~ Exponential(rate n, mean 1/n). Therefore, by the formula MSE = Bias² + Variance, we can easily compute the MSE of W and Y as

MSE(W) = 0² + 1/n = 1/n and MSE(Y) = (1/n)² + (1/n)² = 2/n²

Thus for sample size n > 2, the MSE of the MLE is smaller (the two are equal at n = 2), so the MLE is the better estimator in terms of MSE.
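An illustrative Monte Carlo companion to part (d) (θ, n, and the seed are arbitrary choices; NumPy assumed): the simulated MSEs land near 1/n for the MOME and 2/n² for the biased MLE.

```python
# Illustrative MSE check (assumed parameters) for the shifted exponential:
# MOME W = X-bar - 1 has MSE ~ 1/n; MLE Y = X_(1) has MSE ~ 2/n^2.
import numpy as np

rng = np.random.default_rng(3)
theta, n, reps = 5.0, 10, 200000

x = theta + rng.exponential(1.0, size=(reps, n))  # X_i - theta ~ Exp(rate 1)
mome = x.mean(axis=1) - 1.0                       # W
mle = x.min(axis=1)                               # Y = X_(1)

print(np.mean((mome - theta) ** 2), 1 / n)        # ~0.1
print(np.mean((mle - theta) ** 2), 2 / n ** 2)    # ~0.02
```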

4. The counting process {N(t), t ≥ 0} is said to be a Poisson process having rate λ, λ > 0, if (i) N(0) = 0; (ii) the process has independent increments; and (iii) the number of events in any interval of length t is Poisson distributed with mean λt. That is, for all s, t ≥ 0,

P{N(t + s) - N(s) = n} = e^{-λt} (λt)ⁿ/n!, n = 0, 1, ...

Please answer the following questions.

(a) For this Poisson process, let us denote the time of the first event by T₁. Further, for n > 1, let Tₙ denote the elapsed time between the (n-1)st and the nth event. The sequence {Tₙ, n = 1, 2, ...} is called the sequence of interarrival times. Please show that Tₙ ~ i.i.d. Exponential(λ), n = 1, 2, ...

(b) The waiting time for the nth event is defined as Sₙ = Σᵢ₌₁ⁿ Tᵢ, n ≥ 1. Please derive the distribution of the waiting time Sₙ.

Solution:

(a) P(Tₙ > t) = P(the time interval between the (n-1)th and the nth event is longer than t) = P(no event occurs in the time interval of length t) = E[P(N(t + Sₙ₋₁) - N(Sₙ₋₁) = 0 | Sₙ₋₁)] = E[e^{-λt}] = e^{-λt}, t > 0,

where we note that P(N(t + Sₙ₋₁) - N(Sₙ₋₁) = 0 | Sₙ₋₁ = s) = e^{-λt} for every s. Thus the cumulative distribution function is

F_{Tₙ}(t) = 1 - e^{-λt}, t > 0, n = 1, 2, 3, ...

Besides, assumption (ii) implies the independence property. Thus, Tₙ ~ i.i.d. Exponential(λ), n = 1, 2, ...

(b) We derive the distribution by identifying the mgf of Sₙ. By (a), Tₙ ~ i.i.d. Exponential(λ), n = 1, 2, ..., and Sₙ = Σᵢ₌₁ⁿ Tᵢ, n ≥ 1, so

M_{Sₙ}(t) = ∏ᵢ₌₁ⁿ M_{Tᵢ}(t) = [M_{T₁}(t)]ⁿ

M_{T₁}(t) = ∫₀^∞ e^{ts} λe^{-λs} ds = λ ∫₀^∞ e^{-(λ-t)s} ds = λ/(λ - t), t < λ

Thus,

M_{Sₙ}(t) = (λ/(λ - t))ⁿ = (1 - t/λ)^{-n}

corresponding to Gamma(n, λ) with the following pdf:

f_{Sₙ}(x) = (λⁿ/Γ(n)) x^{n-1} e^{-λx}, x > 0
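As a final illustrative aside (λ, n, and the seed below are arbitrary; NumPy assumed), one can simulate the interarrival times of part (a) and confirm that the waiting time Sₙ behaves like Gamma(n, λ): its empirical mean, variance, and mgf match n/λ, n/λ², and (1 - t/λ)⁻ⁿ.

```python
# Illustrative check (assumed parameters): S_n = T_1 + ... + T_n with
# T_i ~ Exp(lambda) has mean n/lam, variance n/lam^2, and mgf (1 - t/lam)^(-n).
import numpy as np

rng = np.random.default_rng(4)
lam, n, reps = 2.0, 5, 500000

T = rng.exponential(1 / lam, size=(reps, n))  # numpy parametrizes by the mean 1/lambda
S = T.sum(axis=1)                             # waiting times S_n

print(S.mean(), n / lam)                      # ~2.5
print(S.var(), n / lam ** 2)                  # ~1.25
for t in (0.25, 0.5):                         # mgf comparison, valid for t < lam
    print(np.mean(np.exp(t * S)), (1 - t / lam) ** (-n))
```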

********* Topics to be covered in the midterm exam *********

Dear Students,

There will be 3 problems (each with no more than 3 sub-questions), taken from our lecture notes and the textbook material covered (except the Bayesian estimators), including especially the following topics:

1. Point estimators. This is the same material we have thoroughly reviewed. Please review all concepts covered in lectures, including (1) general methods of deriving estimators, including the MLE, the MOME, and (ordinary) least squares estimators, and (2) general properties of estimators such as bias, MSE, efficient estimators, best estimators, sufficient statistics, complete statistics, and the related theorems.

2. Order statistics. Their definitions, distributions, and applications, especially in deriving MLEs.

3. Poisson process. This simple stochastic process provides a good ground for exercising the concepts of probabilities, their related properties (independence, mutually exclusive events, etc.), statistical distributions, sums of variables, etc.

4. Probability. This is the foundation of mathematical statistics. At the beginning of the semester, we revised many related concepts.

5. Variables, joint distributions, and variable transformations. We have three ways to derive the distribution of transformed variables: the cdf approach, the pdf approach, and the mgf approach. One should be fluent in all three. Please pay special attention to the bivariate normal random variable (one should know both the joint pdf and the joint mgf and how to use them) and the 1-1 transformation of two variables, especially how to transform the joint domains.

Our midterm exam will be held Thursday, March 22, 2018, 8:30 AM ~ 9:50 AM, in our usual classroom, Humanities 1006.