Practice Exercises for Midterm Exam
ST 522 - Statistical Theory - II
© Sujit Ghosh, NCSU Statistics

The ACTUAL exam will consist of fewer problems.

1. Suppose X_i ~ iid F(·) for i = 1, ..., n, where F(·) is a strictly increasing continuous cumulative distribution function. Let X_(1) ≤ X_(2) ≤ ... ≤ X_(n) denote the order statistics.

(a) Show that U_j = F(X_j) ~ iid U(0, 1) for j = 1, ..., n.

(b) Show that B_j = F(X_(j)) ~ Beta(j, n − j + 1) for j = 1, ..., n.

(c) Show that E[X_(j)] = E[F⁻¹(B_j)], where B_j is as defined in part (b) above and F⁻¹(u) = inf{x ∈ R : F(x) ≥ u} for u ∈ [0, 1].

2. Suppose X_i ~ iid f(x | µ, σ) = (1/σ) exp{−(x − µ)/σ} I_(µ,∞)(x) for i = 1, ..., n, where µ ∈ R and σ > 0 are both unknown. Let θ = (µ, σ) denote the parameter of this shifted exponential family.

(a) Obtain a minimal sufficient statistic for θ. Is your minimal sufficient statistic complete for this family of distributions?

(b) Show that (X_1 − µ)/σ ~ Exp(1), and hence (or otherwise) show that (X_(1) − µ)/σ ~ Exp(1/n).

(c) Show that (X̄ − X_(1))/(X_(n) − X_(1)) is an ancillary statistic.

(d) Obtain the MME¹ of θ. Is the MME of θ unbiased?

(e) Obtain the MLE² of θ. Is the MLE of θ unbiased?

(f) Obtain the UMVUE³ of θ.

(g) Compute the MSE⁴ of the MLE and UMVUE of σ. Which estimator is better in terms of having smaller MSE?

(h) Obtain a class of conjugate prior distributions for θ.

(i) Obtain the Bayes estimators E[µ | X_1, ..., X_n] and E[σ | X_1, ..., X_n] of µ and σ, respectively, under the conjugate prior you derived in the previous part.

¹ MME = Method of Moments Estimator
² MLE = Maximum Likelihood Estimator
³ UMVUE = Uniformly Minimum Variance Unbiased Estimator
⁴ MSE = Mean Squared Error
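Problem 1(b) can be sanity-checked by simulation: under any continuous, strictly increasing F, the transformed order statistic F(X_(j)) should follow a Beta(j, n − j + 1) law, whose mean is j/(n + 1). A minimal Python sketch, not part of the original problem set; the Exp(1) choice for F, the seed, and the sample sizes are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000

# X_i ~ Exp(1), so F(x) = 1 - exp(-x) is continuous and strictly increasing
x = rng.exponential(size=(reps, n))
x.sort(axis=1)                      # each row now holds X_(1) <= ... <= X_(n)
b = 1.0 - np.exp(-x)                # column j holds draws of B_j = F(X_(j))

emp_means = b.mean(axis=0)                    # Monte Carlo estimate of E[B_j]
beta_means = np.arange(1, n + 1) / (n + 1.0)  # Beta(j, n-j+1) mean = j/(n+1)
print(np.round(emp_means, 3))
print(np.round(beta_means, 3))
```

The empirical column means agree with j/(n + 1) to Monte Carlo accuracy, which is consistent with (though of course not a proof of) the Beta claim.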

3. Provide examples of the following cases:

(a) A minimal sufficient statistic that is of the same dimension as that of the parameter.

(b) A minimal sufficient statistic that is of larger dimension than that of the parameter.

(c) A sufficient statistic that is of smaller dimension than that of the parameter.

4. Suppose X_i ~ iid f(x | µ, σ) = (1/(2σ)) φ((x − µ)/σ) + (1/(4σ)) exp{−|x − µ|/σ} for i = 1, 2, ..., n, where µ ∈ R and σ > 0 are both unknown and φ(·) denotes the density of a standard normal distribution. State whether the following statistics are ancillary (and provide justifications for your answers):

(a) T_1 = (X̄ − X_(1))/(X_(n) − X_(1))

(b) T_2 = (X̄ − X_(1))/(X_(n) + X_(1))

(c) T_3 = (X̄ − X_(1))/S_1, where S_1 = (1/n) Σ_{i=1}^n |X_i − X̄|

(d) T_4 = (2X̄ − X_(1) − X_(n))/S_1

5. Suppose X_i ~ iid U(1/θ, θ) for i = 1, ..., n, where θ > 1 is unknown.

(a) Show that (X_(1), X_(n)) is sufficient for θ.

(b) Is the statistic in part (a) above minimal sufficient? If yes, prove it; otherwise exhibit a minimal sufficient statistic.

(c) Is the minimal sufficient statistic that you found in part (b) above complete? Provide justifications.

(d) Obtain the MLE and UMVUE of θ.

6. Suppose X_i ~ iid U(0, θ) for i = 1, ..., n, where θ ≥ 1 is unknown.

(a) Obtain the MLE of θ. Is the MLE unbiased?

(b) Show that X_(n) is not complete for this family of uniform distributions.

(c) Obtain a minimal sufficient statistic for θ. Is your minimal sufficient statistic complete?

(d) Obtain the UMVUE of θ (a bit tricky!).
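For Problem 6(a), the bias of the MLE in the U(0, θ) model is easy to see numerically: the MLE is X_(n), and E[X_(n)] = nθ/(n + 1) < θ. A hedged Monte Carlo sketch; θ = 2 and n = 10 are arbitrary assumptions, not values from the problem:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 10, 200_000

# The MLE of theta in U(0, theta) is the sample maximum X_(n)
mle = rng.uniform(0.0, theta, size=(reps, n)).max(axis=1)

emp_bias = mle.mean() - theta
exact_bias = n * theta / (n + 1) - theta  # = -theta/(n+1), about -0.182 here
print(round(emp_bias, 4), round(exact_bias, 4))
```

The simulated bias matches the exact value −θ/(n + 1), confirming that X_(n) systematically underestimates θ.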

7. Suppose X_i ~ iid f(x | θ) for i = 1, ..., n, where θ ∈ Θ ⊆ R^d for some integer d ≥ 1. Let T = T(X_1, ..., X_n) be a minimal sufficient statistic.

(a) Show that the MLE of θ (if it exists uniquely) is a function of T only.

(b) Consider a prior distribution π(θ) for θ. Show that the posterior distribution of θ given (X_1, ..., X_n) is the same as that of θ given T. Conclude that any Bayes estimator is a function of T only.

(c) Suppose d = 1. Show that the UMVUE of θ (if it exists) is a function of T only.

(d) Give an example to show that the MME need not be a function of T only.

8. Suppose X_i ~ iid f(x | θ) for i = 1, ..., n, where θ ∈ Θ ⊆ R^d for some integer d ≥ 1. Let T_1 = T_1(X_1, ..., X_n) and T_2 = T_2(X_1, ..., X_n) be real-valued statistics used to estimate η = τ(θ) (a real-valued function of θ). Assume that for each θ ∈ Θ, |T_1 − η| is stochastically smaller⁵ than |T_2 − η|.

(a) Show that MSE_θ(T_1) = E_θ[(T_1 − η)²] ≤ E_θ[(T_2 − η)²] = MSE_θ(T_2) for all θ ∈ Θ.

(b) More generally, given any non-negative valued increasing continuous function G(·), show that E_θ[G(|T_1 − η|)] ≤ E_θ[G(|T_2 − η|)] for all θ ∈ Θ.

9. Suppose X_i ~ iid f(x | θ) for i = 1, ..., n, where θ ∈ Θ ⊆ R^d for some integer d ≥ 1. Let T = T(X_1, ..., X_n) be a UMVUE of η = τ(θ), a real-valued function of θ.

(a) Suppose U = U(X_1, ..., X_n) is another unbiased estimator of η. Show that Cor_θ[T, U] > 0.

(b) Suppose T_2 is another UMVUE of η. Show that Cor_θ[T, T_2] = 1.

(c) Show that T² is the UMVUE of E_θ[T²], provided E_θ[T⁴] < ∞. More generally, show that T^k is the UMVUE of E_θ[T^k], provided E_θ[T^{2k}] < ∞, for k = 2, 3, ....

(d) Suppose T = g(S), where S is a complete sufficient statistic for θ, and let T_2 be another unbiased estimator of η. Show that E[T_2 | S] = g(S).

⁵ A real-valued random variable U is said to be stochastically smaller than another real-valued random variable V if Pr[U > ɛ] ≤ Pr[V > ɛ] for all ɛ ∈ R.
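Problem 8 says that stochastic ordering of |T − η| transfers to an ordering of MSEs (and of any E_θ[G(|T − η|)]). A concrete instance: for N(µ, 1) data both the sample mean and the sample median are symmetric about µ, with the median more dispersed, so the mean should have the smaller MSE. A Monte Carlo sketch; µ = 0, n = 25, and the normal model are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
mu, n, reps = 0.0, 25, 100_000

x = rng.normal(mu, 1.0, size=(reps, n))
t1 = x.mean(axis=1)          # sample mean: Var = 1/n exactly
t2 = np.median(x, axis=1)    # sample median: Var ~ pi/(2n) asymptotically

mse_mean = np.mean((t1 - mu) ** 2)
mse_median = np.mean((t2 - mu) ** 2)
print(round(mse_mean, 4), round(mse_median, 4))
```

The simulated MSE of the mean is close to 1/n = 0.04 and visibly smaller than that of the median, as the stochastic-ordering argument predicts.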

10. Two statistics T_1 and T_2 are said to be equivalent if we can write T_2 = H(T_1) for some 1-1 transformation H(·) of the range of T_1 onto the range of T_2. Which of the following pairs of statistics are equivalent? (Prove or disprove.)

(a) ∏_{i=1}^n X_i and Σ_{i=1}^n log X_i

(b) Σ_{i=1}^n X_i and Σ_{i=1}^n log X_i

(c) (Σ_{i=1}^n X_i, Σ_{i=1}^n X_i²) and (X̄, S²), where X̄ is the sample mean and S² is the sample variance.

(d) (Σ_{i=1}^n X_i, Σ_{i=1}^n X_i³) and (X̄, Σ_{i=1}^n (X_i − X̄)³)

11. Suppose X_i ~ iid N(µ_1, σ_1²) and Y_j ~ iid N(µ_2, σ_2²) for i = 1, ..., n and j = 1, ..., m. Find minimal sufficient statistics and compute the MLE for the following cases:

(a) µ_1, µ_2 ∈ R and σ_1, σ_2 ∈ (0, ∞) are arbitrary.

(b) µ_1 = µ_2 ∈ R and σ_1, σ_2 ∈ (0, ∞) are arbitrary.

(c) σ_1 = σ_2 ∈ (0, ∞) and µ_1, µ_2 ∈ R are arbitrary.

12. Suppose X_i ~ iid f(x | θ) for i = 1, ..., n, where θ ∈ Θ ⊆ R. In the following cases show that there are no unbiased estimators of η = τ(θ).

(a) f(x | θ) = θ^x (1 − θ)^{1−x} for x = 0, 1 and θ ∈ (0, 1), and η = θ/(1 − θ), the odds.

(b) f(x | θ) = θ^x e^{−θ}/x! for x ∈ {0, 1, ...} and θ > 0, and η = √θ, the standard deviation of X.

13. Suppose X_i ~ iid f(x | θ) for i = 1, ..., n, where θ ∈ Θ ⊆ R^d for some integer d ≥ 1 and f(x | θ) is integrable as a function of θ. Assume that the support S = {x : f(x | θ) > 0} does not involve θ. Show that the following class of prior densities is conjugate:

π(θ) = ∏_{j=1}^N f(ξ_j | θ) / ∫_Θ ∏_{j=1}^N f(ξ_j | θ) dθ,

where ξ_j ∈ S and N ∈ {1, 2, ...}.

14. Suppose X_i ~ iid U(θ − 1, θ + 1) for i = 1, ..., n, where θ ∈ R.

(a) Obtain the MME of θ.

(b) Obtain the MLE of θ. Is it unique?
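In Problem 14(b), the likelihood for U(θ − 1, θ + 1) equals (1/2)^n for every θ with X_(n) − 1 ≤ θ ≤ X_(1) + 1 and is zero otherwise, so every point of that interval is an MLE. A small sketch that exhibits the flat region numerically; θ = 3 and n = 20 are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
theta, n = 3.0, 20
x = rng.uniform(theta - 1.0, theta + 1.0, size=n)

def log_lik(t):
    # log-likelihood of U(t-1, t+1): -n*log(2) if all X_i lie in [t-1, t+1]
    if x.min() >= t - 1.0 and x.max() <= t + 1.0:
        return -n * np.log(2.0)
    return -np.inf

lo, hi = x.max() - 1.0, x.min() + 1.0   # every theta in [lo, hi] maximizes
grid = np.linspace(lo, hi, 5)
print(round(lo, 3), round(hi, 3), [round(log_lik(t), 3) for t in grid])
```

The log-likelihood takes the same maximal value −n log 2 across the whole grid [lo, hi], while any θ outside the interval gets −∞, so the MLE is indeed non-unique.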

15. The Kullback-Leibler divergence (KLD) between two densities f and g is defined as

KLD(f, g) = ∫ f(x) log(f(x)/g(x)) dx.

(a) Show that KLD(f, g) ≥ 0, with equality if and only if f ≡ g (i.e., f(x) = g(x) for all x ∈ {x : f(x) > 0}).

(b) Suppose f(x) = (1/σ_0) φ((x − µ_0)/σ_0) and g(x) = (1/σ) φ((x − µ)/σ), where φ(·) denotes the density of the standard normal distribution. Compute KLD(f, g).

(c) In part (b) above, suppose σ = σ_0. Show that KLD(f, g) = 0 if and only if µ = µ_0.

(d) In part (b) above, suppose KLD(f, g) = 0. Can you conclude that µ = µ_0 and σ = σ_0?

(e) Suppose X_i ~ iid f_0(x) for i = 1, ..., n, where f_0(·) is an unknown density, and suppose we use a statistical model that assumes X_i ~ iid f(x | θ).

i. Show that if X_i ~ iid f_0(x), then it follows by the SLLN that

KLD_n(θ) = (1/n) Σ_{i=1}^n log(f_0(X_i)/f(X_i | θ)) → KLD(f_0, f(· | θ)) almost surely as n → ∞.

ii. Show that the MLE of θ under the assumed statistical model minimizes KLD_n(θ).

16. Review all the problems solved during Lab hours (ST 522L) and Home assignments.

Hint for Exercise 2(a): You may assume (or prove) that for the given family of distributions, 2n(X̄ − X_(1))/σ and 2n(X_(1) − µ)/σ are independently distributed as χ²_{2(n−1)} and χ²_2, respectively.
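For Problem 15(b), the answer has the closed form KLD(f, g) = log(σ/σ_0) + (σ_0² + (µ_0 − µ)²)/(2σ²) − 1/2, which can be checked against direct numerical integration of the defining integral. A sketch using only numpy; the parameter values below are arbitrary assumptions:

```python
import numpy as np

mu0, s0, mu, s = 0.0, 1.0, 0.5, 2.0

def npdf(x, m, sd):
    # density of N(m, sd^2)
    return np.exp(-0.5 * ((x - m) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

# closed form for KLD(N(mu0, s0^2), N(mu, s^2))
closed = np.log(s / s0) + (s0**2 + (mu0 - mu) ** 2) / (2.0 * s**2) - 0.5

# crude Riemann-sum evaluation of the defining integral over a wide grid
x = np.linspace(-30.0, 30.0, 60_001)
fx, gx = npdf(x, mu0, s0), npdf(x, mu, s)
numeric = np.sum(fx * np.log(fx / gx)) * (x[1] - x[0])

print(round(closed, 6), round(numeric, 6))
```

The two values agree to several decimal places, and both are strictly positive since (µ_0, σ_0) ≠ (µ, σ), as part (a) requires.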