
THE UNIVERSITY OF MANCHESTER

MATH20802: STATISTICAL METHODS

Two hours.

To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables.

Answer any FOUR of the SIX questions. University-approved calculators may be used.

1. A random variable X is said to have the Gumbel distribution, written X ~ Gumbel(µ, β), if its probability density function is given by

    f_X(x) = (1/β) exp(−(x − µ)/β) exp{−exp(−(x − µ)/β)}

for −∞ < x < ∞, −∞ < µ < ∞ and β > 0.

(i) Show that the cumulative distribution function of X is

    F_X(x) = exp{−exp(−(x − µ)/β)}

for −∞ < x < ∞. (7 marks)

(ii) Show that the moment generating function of X is

    M_X(t) = exp(µt) Γ(1 − βt)

for t < 1/β, where Γ(·) denotes the gamma function. (7 marks)

(iii) Show that E(X) = µ − βΓ′(1), where Γ′(·) denotes the first derivative of Γ(·).

(iv) If X_i ~ Gumbel(µ, β), i = 1, 2, ..., n, are independent random variables, show that max(X_1, X_2, ..., X_n) ~ Gumbel(µ + β log n, β). (7 marks)
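The distributional claim in part (iv) can be sanity-checked numerically. The sketch below (not part of the exam; the parameter values µ = 0, β = 1, n = 10 are illustrative) draws Gumbel variates by inverting the cdf and compares the Monte Carlo mean of the sample maximum with the mean implied by Gumbel(µ + β log n, β), namely µ + β log n + βγ, where γ is the Euler-Mascheroni constant.

```python
import math
import random

random.seed(0)

GAMMA = 0.5772156649  # Euler-Mascheroni constant; E[Gumbel(mu, beta)] = mu + beta*GAMMA

def rgumbel(mu, beta):
    """Draw one Gumbel(mu, beta) variate by inverting F(x) = exp(-exp(-(x-mu)/beta))."""
    u = random.random()
    return mu - beta * math.log(-math.log(u))

mu, beta, n, reps = 0.0, 1.0, 10, 200_000

# Monte Carlo mean of max(X_1, ..., X_n) over many replications
mc_mean = sum(max(rgumbel(mu, beta) for _ in range(n)) for _ in range(reps)) / reps

# Part (iv) says max ~ Gumbel(mu + beta*log n, beta), whose mean is below
theory = mu + beta * math.log(n) + beta * GAMMA
print(mc_mean, theory)
```

The two printed values should agree to roughly two decimal places, consistent with the location shift by β log n in part (iv).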

2. (a) Suppose θ̂ is an estimator of θ based on a random sample of size n. Define what is meant by the following:

(i) θ̂ is an unbiased estimator of θ;
(ii) θ̂ is an asymptotically unbiased estimator of θ;
(iii) the bias of θ̂, written bias(θ̂);
(iv) the mean squared error of θ̂, written MSE(θ̂);
(v) θ̂ is a consistent estimator of θ.

(b) Suppose X_1, X_2, ..., X_n is a random sample from the Exp(λ) distribution. Consider the following estimators of θ = 1/λ:

    θ̂_1 = (1/n) Σ_{i=1}^n X_i  and  θ̂_2 = (1/(n+1)) Σ_{i=1}^n X_i.

(i) Find the biases of θ̂_1 and θ̂_2.
(ii) Find the variances of θ̂_1 and θ̂_2.
(iii) Find the mean squared errors of θ̂_1 and θ̂_2.
(iv) Which of the two estimators (θ̂_1 or θ̂_2) is better, and why? (3 marks)
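The comparison in part (b) can be checked by simulation. The following sketch (illustrative values λ = 2, n = 10) estimates both mean squared errors by Monte Carlo and compares them with the closed forms MSE(θ̂_1) = θ²/n and MSE(θ̂_2) = θ²/(n+1) that part (iii) leads to.

```python
import random

random.seed(1)

lam, theta = 2.0, 0.5          # theta = 1/lambda is the quantity being estimated
n, reps = 10, 100_000

se1 = se2 = 0.0                # accumulated squared errors of each estimator
for _ in range(reps):
    s = sum(random.expovariate(lam) for _ in range(n))
    se1 += (s / n - theta) ** 2        # theta_hat_1 = (1/n) * sum
    se2 += (s / (n + 1) - theta) ** 2  # theta_hat_2 = (1/(n+1)) * sum

mse1, mse2 = se1 / reps, se2 / reps
# Closed forms: MSE(theta_hat_1) = theta^2/n, MSE(theta_hat_2) = theta^2/(n+1),
# so theta_hat_2 has the smaller mean squared error despite being biased.
print(mse1, mse2, theta**2 / n, theta**2 / (n + 1))
```

The simulated MSEs should match the closed forms closely, with mse2 < mse1, which is the point of part (iv).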

3. Consider two independent random samples: X_1, X_2, ..., X_n from N(µ_X, σ²) and Y_1, Y_2, ..., Y_m from N(µ_Y, σ²), where σ² is assumed known. The parameters µ_X and µ_Y are unknown.

(i) Write down the joint likelihood function of µ_X and µ_Y.

(ii) Find the maximum likelihood estimators (mles) of µ_X and µ_Y. (10 marks)

(iii) Find the mle of Pr(X < Y), where X ~ N(µ_X, σ²) and Y ~ N(µ_Y, σ²) are independent random variables.

(iv) Show that the mle of µ_X in part (ii) is an unbiased and consistent estimator of µ_X. (3 marks)

(v) Show also that the mle of µ_Y in part (ii) is an unbiased and consistent estimator of µ_Y.
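For part (iii), since X − Y ~ N(µ_X − µ_Y, 2σ²), we have Pr(X < Y) = Φ((µ_Y − µ_X)/(σ√2)), and by the invariance property of mles the estimate is obtained by plugging in the sample means. A minimal sketch on simulated data (the values µ_X = 0, µ_Y = 1, σ = 1, n = 50, m = 60 are illustrative):

```python
import math
import random

random.seed(2)

def phi(z):
    """Standard normal cdf, expressed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sigma, n, m = 1.0, 50, 60
xs = [random.gauss(0.0, sigma) for _ in range(n)]   # sample from N(mu_X, sigma^2)
ys = [random.gauss(1.0, sigma) for _ in range(m)]   # sample from N(mu_Y, sigma^2)

# Part (ii): the mles of mu_X and mu_Y are the sample means
mu_x_hat = sum(xs) / n
mu_y_hat = sum(ys) / m

# Part (iii): X - Y ~ N(mu_X - mu_Y, 2 sigma^2), so
# Pr(X < Y) = Phi((mu_Y - mu_X) / (sigma * sqrt(2))); invariance of the mle
# lets us substitute the sample means for the unknown parameters.
p_hat = phi((mu_y_hat - mu_x_hat) / (sigma * math.sqrt(2.0)))
print(p_hat)
```

With these illustrative parameters the true value is Φ(1/√2) ≈ 0.76, and the plug-in estimate should land nearby.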

4. Suppose X_1, X_2, ..., X_n is a random sample from N(µ, σ²), where both µ and σ² are unknown.

(i) Write down the joint likelihood function of µ and σ².

(ii) Show that the maximum likelihood estimator (mle) of µ is µ̂ = X̄, where X̄ = (1/n) Σ_{i=1}^n X_i is the sample mean.

(iii) Show that the mle of σ² is σ̂² = (1/n) Σ_{i=1}^n (X_i − X̄)².

(iv) Show that the mle, µ̂, is an unbiased and consistent estimator of µ.

(v) Show that the mle, σ̂², is a biased but consistent estimator of σ².
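The bias in part (v) comes from E[σ̂²] = ((n−1)/n) σ², which is easy to see empirically for small n. The sketch below (illustrative values µ = 5, σ² = 4, n = 8) averages the mle σ̂² over many simulated samples:

```python
import random

random.seed(3)

mu, sigma2, n, reps = 5.0, 4.0, 8, 100_000

acc = 0.0
for _ in range(reps):
    xs = [random.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    xbar = sum(xs) / n
    acc += sum((x - xbar) ** 2 for x in xs) / n   # the mle: divisor n, not n-1

mean_sigma2_hat = acc / reps
# Part (v): E[sigma2_hat] = (n-1)/n * sigma2, so the mle underestimates sigma2
# on average; the bias -sigma2/n vanishes as n grows, consistent with consistency.
print(mean_sigma2_hat, (n - 1) / n * sigma2)
```

With n = 8 and σ² = 4, the average of σ̂² should sit near (7/8) × 4 = 3.5, visibly below the true value 4.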

5. (a) Suppose we wish to test H_0: θ = θ_0 versus H_1: θ ≠ θ_0. Define what is meant by the following:

(i) the Type I error of a test;
(ii) the Type II error of a test;
(iii) the significance level of a test;
(iv) the power function of a test, denoted Π(θ).

(b) Suppose X_1, X_2, ..., X_n is a random sample from a Bernoulli distribution with parameter p. In each case, assume a significance level of α and that X̄ = (X_1 + X_2 + ··· + X_n)/n has an approximate normal distribution. State the rejection region for each of the following tests:

(i) H_0: p = p_0 versus H_1: p ≠ p_0;
(ii) H_0: p = p_0 versus H_1: p < p_0;
(iii) H_0: p = p_0 versus H_1: p > p_0.

(c) Under the same assumptions as part (b), find the power function, Π(p), for each of the tests in (b)(i)-(iii). In each case, you may express Π(p) in terms of Φ(·), the standard normal distribution function. (3 marks)
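For the upper-tailed test (b)(iii), the normal approximation gives the rejection region X̄ > p_0 + z_α √(p_0(1 − p_0)/n), and the power function follows by evaluating that rejection probability under a general p. A minimal sketch (the values p_0 = 0.5, n = 100, α = 0.05 are illustrative):

```python
import math
from statistics import NormalDist

N01 = NormalDist()  # standard normal distribution

def power_upper(p, p0, n, alpha):
    """Power Pi(p) of the test of H0: p = p0 vs H1: p > p0 that rejects when
    xbar > c = p0 + z_alpha * sqrt(p0(1-p0)/n), using the normal approximation
    to xbar both under p0 (to set c) and under p (to evaluate the power)."""
    z_alpha = N01.inv_cdf(1.0 - alpha)
    c = p0 + z_alpha * math.sqrt(p0 * (1.0 - p0) / n)
    return 1.0 - N01.cdf((c - p) / math.sqrt(p * (1.0 - p) / n))

p0, n, alpha = 0.5, 100, 0.05
# At p = p0 the power reduces to the significance level alpha ...
print(power_upper(p0, p0, n, alpha))
# ... and it increases as p moves further above p0.
print(power_upper(0.55, p0, n, alpha), power_upper(0.6, p0, n, alpha))
```

Two checks worth noting: Π(p_0) = α by construction, and Π(p) is increasing in p for this one-sided alternative; the two-sided and lower-tailed cases follow the same pattern with the rejection region adjusted accordingly.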

6. (a) State the Neyman-Pearson test for H_0: θ = θ_1 versus H_1: θ = θ_2 based on a random sample X_1, X_2, ..., X_n from a distribution with probability density function f(x; θ).

(b) Let X_1, X_2, ..., X_n be a random sample from a Uniform(0, θ) distribution.

(i) Find the most powerful test at significance level α for H_0: θ = θ_1 versus H_1: θ = θ_2, where θ_2 > θ_1 are constants. Show that the test rejects H_0 if and only if max(X_1, X_2, ..., X_n) > k for some k.

(ii) Determine the power function, Π(θ), of the test in part (i).

(iii) Find the value of k when α = 0.05, n = 5 and θ = θ_1 = 0.5.

(iv) Find β = Pr(Type II error) when n = 5, θ_1 = 0.5 and θ = θ_2 = 0.6.

END OF EXAMINATION PAPER
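As a numeric check of question 6 parts (iii) and (iv) (not part of the exam): under Uniform(0, θ) the maximum satisfies Pr(max ≤ k) = (k/θ)^n, so setting the size to α fixes k, and β follows by evaluating the same probability under θ_2.

```python
# Question 6(b): the most powerful test rejects H0 when max(X_1,...,X_n) > k,
# and under Uniform(0, theta), Pr(max <= k) = (k/theta)^n.
alpha, n = 0.05, 5
theta1, theta2 = 0.5, 0.6

# Part (iii): size alpha means Pr(max > k | theta1) = 1 - (k/theta1)^n = alpha,
# hence k = theta1 * (1 - alpha)^(1/n)
k = theta1 * (1.0 - alpha) ** (1.0 / n)

# Part (iv): beta = Pr(max <= k | theta2) = (k/theta2)^n = (theta1/theta2)^n * (1 - alpha)
beta = (k / theta2) ** n
print(round(k, 4), round(beta, 4))  # prints 0.4949 0.3818
```

So k ≈ 0.4949 and β ≈ 0.3818; the closed form β = (θ_1/θ_2)^n (1 − α) makes clear how quickly the Type II error shrinks as n grows.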