Chapter 8: Sampling distributions of estimators Sections
Chapter 8 continued

Chapter 8: Sampling distributions of estimators

Sections:
8.1 Sampling distribution of a statistic
8.2 The Chi-square distributions
8.3 Joint distribution of the sample mean and sample variance (skip: p. ...)
8.4 The t distributions (skip: derivation of the pdf, p. ...)
8.5 Confidence intervals
8.6 Bayesian Analysis of Samples from a Normal Distribution
8.7 Unbiased Estimators
8.8 Fisher Information

Sampling Distributions 1 / 30
Review from Sections 8.2-8.4

- Chi-square distribution: $\chi^2_m$ is the same as Gamma($\alpha = m/2$, $\beta = 1/2$).
- The $t_m$ distribution: if $Y \sim \chi^2_m$ and $Z \sim N(0,1)$ are independent, then $\frac{Z}{\sqrt{Y/m}} \sim t_m$.

Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma^2)$.

- If $\mu$ is known but $\sigma$ is not: $\frac{n\hat\sigma_0^2}{\sigma^2} \sim \chi^2_n$, where $\hat\sigma_0^2 = \frac{1}{n}\sum_{i=1}^n (X_i - \mu)^2$.
- If both $(\mu, \sigma)$ are unknown: $\frac{S_n}{\sigma^2} \sim \chi^2_{n-1}$, where $S_n = \sum_{i=1}^n (X_i - \bar X_n)^2$, and $\frac{\sqrt{n}(\bar X_n - \mu)}{\sigma'} \sim t_{n-1}$, where $\sigma' = \left[\frac{\sum_{i=1}^n (X_i - \bar X_n)^2}{n-1}\right]^{1/2}$.
8.5 Confidence intervals
Confidence Interval

A frequentist tool. Say we want to estimate $\theta$, or in general $g(\theta)$. We also want to know how good that estimate is.

Def: Confidence Interval (CI). Let $X_1, \dots, X_n$ be a random sample from $f(x \mid \theta)$, where $\theta$ is unknown (but not random). Let $g(\theta)$ be a real-valued function and let $A$ and $B$ be statistics such that
$$P(A < g(\theta) < B) \ge \gamma \quad \text{for all } \theta.$$
The random interval $(A, B)$ is called a 100$\gamma$% confidence interval for $g(\theta)$. If the inequality holds with equality for all $\theta$, the CI is exact.

After the random variables $X_1, \dots, X_n$ have been observed and the values of $A = a$ and $B = b$ have been computed, the interval $(a, b)$ is called the observed confidence interval.
8.5 Confidence intervals
Confidence Interval - Mean of a Normal Distribution

Last time we saw the following example. Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma^2)$ and let
$$\bar X_n = \frac{1}{n}\sum_{i=1}^n X_i \quad \text{and} \quad \sigma' = \left(\frac{\sum_{i=1}^n (X_i - \bar X_n)^2}{n-1}\right)^{1/2}.$$
Then we know that
$$U = \frac{\sqrt{n}(\bar X_n - \mu)}{\sigma'}$$
has the $t_{n-1}$ distribution. We can therefore calculate $\gamma = P(-c < U < c)$. Turning this around, we get
$$\gamma = P\left(\bar X_n - c\,\frac{\sigma'}{\sqrt{n}} < \mu < \bar X_n + c\,\frac{\sigma'}{\sqrt{n}}\right).$$
8.5 Confidence intervals
Confidence Interval - Mean of a Normal Distribution

Let $T_m(x)$ denote the cdf of the $t_m$ distribution. Given $\gamma$ we can find $c$ so that $P(-c < U < c) = \gamma$:
$$\gamma = P(-c < U < c) = 2T_{n-1}(c) - 1,$$
since the $t$ distribution is symmetric around 0. Solving for $c$ we get
$$c = T_{n-1}^{-1}\left(\frac{\gamma + 1}{2}\right),$$
where $T_{n-1}^{-1}$ is the quantile function of the $t_{n-1}$ distribution. So a 100$\gamma$% confidence interval for $\mu$ is
$$\left(\bar X_n - T_{n-1}^{-1}\left(\frac{\gamma+1}{2}\right)\frac{\sigma'}{\sqrt{n}},\ \ \bar X_n + T_{n-1}^{-1}\left(\frac{\gamma+1}{2}\right)\frac{\sigma'}{\sqrt{n}}\right).$$
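This construction is easy to sketch in code. A minimal illustration using scipy.stats; the function name `t_confidence_interval` is our own:

```python
import numpy as np
from scipy import stats

def t_confidence_interval(x, gamma=0.95):
    """100*gamma% two-sided t interval for the mean of a normal sample."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar = x.mean()
    sigma_prime = x.std(ddof=1)                 # sigma' = [sum (x_i - xbar)^2 / (n-1)]^{1/2}
    c = stats.t.ppf((gamma + 1) / 2, df=n - 1)  # c = T^{-1}_{n-1}((gamma+1)/2)
    half = c * sigma_prime / np.sqrt(n)
    return xbar - half, xbar + half

# For n = 20 and gamma = 0.95, c is the 0.975 quantile of t_19:
c = stats.t.ppf(0.975, df=19)   # about 2.093
```

Note that `stats.t.ppf` is exactly the quantile function $T_{n-1}^{-1}$ from the slide.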
8.5 Confidence intervals
Example: Hotdogs (exercise in the book)

Data on calorie content in 20 different beef hot dogs from Consumer Reports (June 1986 issue):
186, 181, 176, 149, 184, 190, 158, 139, 175, 148, 152, 111, 141, 153, 190, 157, 131, 149, 135, 132

Assume that these numbers are observed values from a random sample of twenty independent $N(\mu, \sigma^2)$ random variables, where $\mu$ and $\sigma^2$ are unknown. The observed sample mean and $\sigma'$ are $\bar X_n = 156.85$ and $\sigma' = 22.64$. Find a 95% confidence interval for $\mu$.
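The observed sample mean, $\sigma'$, and the 95% interval can all be computed directly from the listed data; a quick check in Python (scipy assumed available):

```python
import numpy as np
from scipy import stats

calories = np.array([186, 181, 176, 149, 184, 190, 158, 139, 175, 148,
                     152, 111, 141, 153, 190, 157, 131, 149, 135, 132], dtype=float)

n = len(calories)                    # 20
xbar = calories.mean()               # observed sample mean, 156.85
sigma_prime = calories.std(ddof=1)   # observed sigma', about 22.64

c = stats.t.ppf(0.975, df=n - 1)     # 0.975 quantile of t_19
half = c * sigma_prime / np.sqrt(n)
lo, hi = xbar - half, xbar + half    # observed 95% CI, about (146.25, 167.45)
```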
8.5 Confidence intervals
Interpretation of a confidence interval

Confidence intervals are a frequentist tool. We know that
$$P\left(\bar X_n - T_{n-1}^{-1}\left(\frac{\gamma+1}{2}\right)\frac{\sigma'}{\sqrt{n}} < \mu < \bar X_n + T_{n-1}^{-1}\left(\frac{\gamma+1}{2}\right)\frac{\sigma'}{\sqrt{n}}\right) = \gamma.$$
After observing the data, the random interval becomes a fixed observed interval. For example: (146.25, 167.45) is an observed 95% confidence interval for $\mu$. That does NOT mean that $P(146.25 < \mu < 167.45) = 0.95$. For this statement to make sense we need Bayesian thinking and Bayesian methods.
8.5 Confidence intervals
Interpretation of a confidence interval

Confidence intervals are a frequentist tool. One way of thinking of this: repeated samples.
- Take a random sample of size n from $N(\mu, \sigma^2)$ and calculate the 95% confidence interval.
- Take another random sample (of the same size n) and do the same calculations.
- Repeat. Many times.
Since there is a 95% chance that the random interval covers the value of $\mu$, we expect 95% of the intervals to cover the actual value of $\mu$.
Problem: we never take more than one sample!
8.5 Confidence intervals
Properties of a confidence interval - Simulation Study

I simulated n = 20 random variables from $N(8, 2^2)$ and calculated the 95% CI. I repeated that 100 times. 4 of the 100 intervals do not cover $\mu = 8$ (the red intervals in the plot).
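A simulation along these lines is easy to reproduce; the seed and loop structure here are our own choices, so the exact number of misses will differ from the slide's 4:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, n, gamma = 8.0, 2.0, 20, 0.95
c = stats.t.ppf((gamma + 1) / 2, df=n - 1)

covered = 0
for _ in range(100):
    x = rng.normal(mu, sigma, size=n)
    half = c * x.std(ddof=1) / np.sqrt(n)
    if x.mean() - half < mu < x.mean() + half:
        covered += 1

# covered is typically close to 95 out of 100
```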
8.5 Confidence intervals
Non-symmetric confidence intervals - Mean of the normal distribution

More generally we want to find $c_1, c_2$ with $P(c_1 < U < c_2) = \gamma$.

Symmetric confidence interval: equal probability on either side:
$$P(U \le c_1) = P(U \ge c_2) = \frac{1-\gamma}{2}.$$
Since the distribution of $U$ is symmetric around 0, the shortest possible confidence interval for $\mu$ is the symmetric one.

One-sided confidence interval: all the extra probability is on one side. That is, either $c_1 = -\infty$ or $c_2 = \infty$.
8.5 Confidence intervals
One-sided Confidence Interval

Def: Lower bound. Let $A$ be a statistic such that $P(A < g(\theta)) \ge \gamma$ for all $\theta$. The random interval $(A, \infty)$ is a one-sided 100$\gamma$% confidence interval for $g(\theta)$, and $A$ is a 100$\gamma$% lower confidence limit for $g(\theta)$.
8.5 Confidence intervals
One-sided Confidence Interval

Def: Upper bound. Let $B$ be a statistic such that $P(g(\theta) < B) \ge \gamma$ for all $\theta$. The random interval $(-\infty, B)$ is a one-sided 100$\gamma$% confidence interval for $g(\theta)$, and $B$ is a 100$\gamma$% upper confidence limit for $g(\theta)$.
8.5 Confidence intervals
One-sided Confidence Interval - Mean of a normal

Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma^2)$, with both $\mu$ and $\sigma^2$ unknown.
- Find the one-sided 100$\gamma$% confidence intervals for $\mu$.
- Find the observed 95% upper confidence limit for $\mu$ for the hotdog example.
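For the hotdog data, the observed one-sided limits can be computed much like the two-sided interval; the only change is that all of the tail probability sits on one side, so we use $T_{n-1}^{-1}(\gamma)$ instead of $T_{n-1}^{-1}((\gamma+1)/2)$. A sketch:

```python
import numpy as np
from scipy import stats

calories = np.array([186, 181, 176, 149, 184, 190, 158, 139, 175, 148,
                     152, 111, 141, 153, 190, 157, 131, 149, 135, 132], dtype=float)
n = len(calories)
xbar = calories.mean()
sigma_prime = calories.std(ddof=1)

# One-sided: all the tail probability on one side, so use T^{-1}_{n-1}(gamma)
c = stats.t.ppf(0.95, df=n - 1)
upper = xbar + c * sigma_prime / np.sqrt(n)   # observed 95% upper confidence limit
lower = xbar - c * sigma_prime / np.sqrt(n)   # observed 95% lower confidence limit
```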
8.5 Confidence intervals
Confidence intervals for other distributions

Def: Pivotal quantity. Let $X = (X_1, \dots, X_n)$ be a random sample from a distribution that depends on a parameter $\theta$. Let $V(X, \theta)$ be a random variable whose distribution is the same for all $\theta$. Then $V$ is called a pivotal quantity.

To use this we need to be able to invert the pivotal relationship: find a function $r(v, x)$ such that $r(V(X, \theta), X) = g(\theta)$. If the function $r$ is increasing in $v$ for every $x$, $V$ has a continuous distribution with cdf $F(v)$, and $\gamma_2 - \gamma_1 = \gamma$, then
$$A = r\left(F^{-1}(\gamma_1), X\right) \quad \text{and} \quad B = r\left(F^{-1}(\gamma_2), X\right)$$
are the endpoints of an exact 100$\gamma$% confidence interval (Theorem 8.5.3).
8.5 Confidence intervals
Confidence intervals using pivotal quantities

Example: the rate parameter $\theta$ of the exponential distribution. $X_1, \dots, X_n$ i.i.d. Expo($\theta$).
- Find the 100$\gamma$% upper confidence limit for $\theta$.
- Find a symmetric 100$\gamma$% confidence interval for $\theta$.

Example: the variance of the normal distribution. $X_1, \dots, X_n$ i.i.d. $N(\mu, \sigma^2)$, both unknown.
- Find a symmetric 100$\gamma$% confidence interval for $\sigma^2$.
- Find the observed symmetric 100$\gamma$% confidence interval for $\sigma^2$ for the hotdog example.
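Both examples can be sketched numerically. For the exponential rate we use the standard pivot $2\theta\sum_i X_i \sim \chi^2_{2n}$, and for the normal variance the pivot $S_n/\sigma^2 \sim \chi^2_{n-1}$ from the review slide; the simulated data and seed are our own:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gamma = 0.95

# --- Exponential rate theta: pivot 2 * theta * sum(X) ~ chi-square with 2n df ---
theta = 0.5
x = rng.exponential(scale=1 / theta, size=30)   # Expo(theta) has mean 1/theta
n, s = len(x), x.sum()
theta_lo = stats.chi2.ppf((1 - gamma) / 2, df=2 * n) / (2 * s)
theta_hi = stats.chi2.ppf((1 + gamma) / 2, df=2 * n) / (2 * s)  # symmetric CI for theta

# --- Normal variance: pivot S_n / sigma^2 ~ chi-square with n-1 df ---
y = rng.normal(10.0, 3.0, size=25)
m = len(y)
S = ((y - y.mean()) ** 2).sum()                 # S_n = sum of squared deviations
var_lo = S / stats.chi2.ppf((1 + gamma) / 2, df=m - 1)
var_hi = S / stats.chi2.ppf((1 - gamma) / 2, df=m - 1)          # symmetric CI for sigma^2
```

In both cases inverting the pivot swaps the quantiles into opposite endpoints, which is why the upper chi-square quantile produces the lower variance endpoint.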
8.5 Confidence intervals
Problems with interpretation of a confidence interval

The book gives an interesting example. Say $X_1, X_2$ are i.i.d. Uniform($\theta - 0.5$, $\theta + 0.5$). Let $Y_1 = \min(X_1, X_2)$ and $Y_2 = \max(X_1, X_2)$. Then $(Y_1, Y_2)$ is a 50% confidence interval for $\theta$: the interval covers $\theta$ exactly when the two observations fall on opposite sides of $\theta$, which happens with probability $2 \cdot \frac{1}{2} \cdot \frac{1}{2} = \frac{1}{2}$.

However: if we observe $Y_1$ and $Y_2$ that are more than 0.5 apart, that is $y_2 - y_1 > 0.5$, then we know for certain that $(y_1, y_2)$ contains $\theta$! Yet we only assign 50% confidence to that interval, which ignores information we have.
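A small simulation (our own, with an arbitrary $\theta$ and seed) shows both facts at once: overall coverage is 50%, yet every interval wider than 0.5 contains $\theta$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 3.0
reps = 10_000

covers, wide, wide_and_covers = 0, 0, 0
for _ in range(reps):
    x = rng.uniform(theta - 0.5, theta + 0.5, size=2)
    y1, y2 = x.min(), x.max()
    c = y1 < theta < y2
    covers += c
    if y2 - y1 > 0.5:          # interval longer than 0.5 ...
        wide += 1
        wide_and_covers += c   # ... must straddle theta, so it always covers

coverage = covers / reps       # close to 0.50 overall
```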
Chapter 8 continued

Chapter 8: Sampling distributions of estimators

Sections:
8.1 Sampling distribution of a statistic
8.2 The Chi-square distributions
8.3 Joint distribution of the sample mean and sample variance (skip: p. ...)
8.4 The t distributions (skip: derivation of the pdf, p. ...)
8.5 Confidence intervals
8.6 Bayesian Analysis of Samples from a Normal Distribution
8.7 Unbiased Estimators
8.8 Fisher Information
8.7 Unbiased Estimators
Unbiased Estimators

Suppose that we use an estimator $\delta(X)$ to estimate the parameter $g(\theta)$. Properties of an estimator (so far): consistency and sufficiency. Another property of an estimator: unbiasedness.

Def: Unbiased Estimator / Bias. An estimator $\delta(X)$ is an unbiased estimator of $g(\theta)$ if $E(\delta(X)) = g(\theta)$ for all $\theta$. Otherwise it is called a biased estimator. The bias is defined as $E(\delta(X)) - g(\theta)$.
8.7 Unbiased Estimators
Examples

$X_1, \dots, X_n$ i.i.d. $N(\mu, \sigma^2)$: $\bar X_n$ is an unbiased estimator of $\mu$ since $E(\bar X_n) = \mu$ for all $\mu$.

Unbiased estimators of the mean and variance of any distribution: let $X_1, \dots, X_n$ be a random sample from $f(x \mid \theta)$. The mean and variance of the distribution (if they exist) are functions of $\theta$.
- $\bar X_n$ is an unbiased estimator of the mean $E(X_1)$.
- Theorem 8.7.1: if the variance is finite, then $\hat\sigma_1^2$ is an unbiased estimator of Var($X$), where
$$\hat\sigma_1^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X_n)^2.$$
Note: this means that the MLE of $\sigma^2$ in $N(\mu, \sigma^2)$ is a biased estimator.
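Theorem 8.7.1 is easy to see empirically: averaging the $n-1$ estimator over many samples recovers the true variance, while the divide-by-$n$ MLE comes up short by the factor $(n-1)/n$. A simulation sketch (sample size, variance, and seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 4.0            # true variance
n, reps = 5, 100_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
unbiased = samples.var(axis=1, ddof=1)   # divide by n-1 (hat sigma_1^2)
mle = samples.var(axis=1, ddof=0)        # divide by n (the MLE under normality)

# Averaging over many samples: unbiased.mean() is close to 4.0,
# while mle.mean() is close to (n-1)/n * 4.0 = 3.2
```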
8.7 Unbiased Estimators
Mean Squared Error (MSE)

Is unbiasedness good enough? An unbiased estimator is useless if it has high variance, so we look for unbiased estimators with the lowest variance.

Mean squared error: $\mathrm{MSE}(\delta(X)) = E\left((\delta(X) - g(\theta))^2\right)$. We want estimators with small MSE.

Corollary: let $\delta(X)$ be an estimator with finite variance. Then
$$\mathrm{MSE}(\delta(X)) = \mathrm{Var}(\delta(X)) + \mathrm{bias}(\delta(X))^2.$$
The MSE of an unbiased estimator is therefore equal to its variance, so searching for unbiased estimators with small variance is equivalent to searching for unbiased estimators with small MSE.
8.7 Unbiased Estimators
Example

Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma^2)$ (both $\mu$ and $\sigma^2$ unknown). Consider two estimators of $\sigma^2$:
- $\delta_1 = S_n / n$ (the MLE of $\sigma^2$)
- $\delta_2 = \hat\sigma_1^2 = S_n / (n-1)$ (unbiased)
Find the MSE of each estimator. Which estimator has smaller MSE? Which estimator do you prefer?
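The punchline of this exercise can be previewed by simulation. Standard calculations give $\mathrm{MSE}(\delta_1) = \frac{2n-1}{n^2}\sigma^4$ and $\mathrm{MSE}(\delta_2) = \frac{2}{n-1}\sigma^4$, so for $n = 5$ and $\sigma^2 = 1$ these are 0.36 and 0.50: the biased MLE wins. A sketch (our own seed and sample sizes):

```python
import numpy as np

rng = np.random.default_rng(4)
mu, sigma2 = 0.0, 1.0
n, reps = 5, 200_000

samples = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))
delta1 = samples.var(axis=1, ddof=0)     # MLE of sigma^2
delta2 = samples.var(axis=1, ddof=1)     # unbiased estimator

mse1 = ((delta1 - sigma2) ** 2).mean()   # theory: (2n-1)/n^2 = 0.36 for n=5
mse2 = ((delta2 - sigma2) ** 2).mean()   # theory: 2/(n-1)   = 0.50 for n=5
# The biased MLE has the smaller MSE here.
```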
8.7 Unbiased Estimators
Why unbiased?

Unbiasedness sounds good - who wants to be biased? However, variance and MSE are better measures of the quality of an estimator, and in many cases there exist biased estimators with smaller MSE.
8.8 Fisher Information

Let the pdf of $X$ be $f(x \mid \theta)$. The Fisher information $I(\theta)$ in the random variable $X$ is defined as
$$I(\theta) = E\left\{\left[\frac{d}{d\theta} \log f(X \mid \theta)\right]^2\right\}.$$
Under mild conditions, we have (Theorem 8.8.1)
$$I(\theta) = \mathrm{Var}\left[\frac{d}{d\theta}\log f(X \mid \theta)\right] = -E\left[\frac{d^2}{d\theta^2}\log f(X \mid \theta)\right].$$
For a random sample $X_1, \dots, X_n$, the Fisher information $I_n(\theta)$ satisfies $I_n(\theta) = nI(\theta)$.
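As a concrete check of Theorem 8.8.1, take Poisson($\theta$): the score is $\frac{d}{d\theta}\log f(x \mid \theta) = x/\theta - 1$ and $I(\theta) = 1/\theta$. All three expressions for $I(\theta)$ can be estimated from simulated draws (this worked example, including the seed, is ours):

```python
import numpy as np

rng = np.random.default_rng(5)
theta = 2.0
x = rng.poisson(theta, size=200_000)

score = x / theta - 1.0          # d/dtheta log f(x|theta) for Poisson
info_sq = (score ** 2).mean()    # E[score^2]
info_var = score.var()           # Var(score); E[score] = 0
# second-derivative form: -E[d^2/dtheta^2 log f] = E[X]/theta^2
info_curv = x.mean() / theta ** 2

# all three estimates are close to I(theta) = 1/theta = 0.5
```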
8.8 Fisher Information
Cramér-Rao Inequality

Let $X_1, \dots, X_n$ be a random sample from a distribution with pdf $f(x \mid \theta)$. For any statistic $T$, let $m(\theta) = E(T)$. Then under mild conditions, we have
$$\mathrm{Var}(T) \ge \frac{[m'(\theta)]^2}{nI(\theta)}.$$
(Corollary 8.8.1) If $T$ is an unbiased estimator of $\theta$, then
$$\mathrm{Var}(T) \ge \frac{1}{nI(\theta)}.$$
An estimator is called an efficient estimator of its expectation if it achieves the lower bound in the Cramér-Rao inequality.

Example: $X_1, \dots, X_n$ is a random sample from Poisson($\theta$). Show that the MLE is an efficient estimator of $\theta$.
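For the Poisson example, the MLE is $\bar X_n$ with $\mathrm{Var}(\bar X_n) = \theta/n$, which equals the Cramér-Rao bound $1/(nI(\theta))$ since $I(\theta) = 1/\theta$. A numerical illustration (parameters and seed are our own):

```python
import numpy as np

rng = np.random.default_rng(6)
theta, n, reps = 2.0, 10, 100_000

# MLE of theta for each of many samples of size n
xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)

emp_var = xbar.var()         # empirical variance of the MLE
cr_bound = theta / n         # 1 / (n I(theta)) with I(theta) = 1/theta

# emp_var is close to cr_bound = 0.2: the MLE attains the bound, so it is efficient
```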
8.8 Fisher Information
Asymptotic Distribution of the MLE

Theorem: let $\hat\theta_n$ be the MLE of $\theta$. Then under mild conditions, we have
$$[nI(\theta)]^{1/2}(\hat\theta_n - \theta) \xrightarrow{d} N(0, 1).$$
The MLE is asymptotically efficient.
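The convergence in the theorem can be visualized by simulation. Continuing the Poisson example (our own choice): with $I(\theta) = 1/\theta$, the standardized MLE $[nI(\theta)]^{1/2}(\bar X_n - \theta)$ should look like a standard normal for large $n$:

```python
import numpy as np

rng = np.random.default_rng(7)
theta, n, reps = 2.0, 100, 50_000

xbar = rng.poisson(theta, size=(reps, n)).mean(axis=1)   # MLE of theta
z = np.sqrt(n * (1 / theta)) * (xbar - theta)            # [n I(theta)]^{1/2} (MLE - theta)

# z behaves like N(0, 1): mean near 0, variance near 1,
# and about 95% of the draws fall in (-1.96, 1.96)
frac = np.mean(np.abs(z) < 1.96)
```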
Chapter 8: Sampling distributions of estimators

Sections:
8.1 Sampling distribution of a statistic
8.2 The Chi-square distributions
8.3 Joint distribution of the sample mean and sample variance (skip: p. ...)
8.4 The t distributions (skip: derivation of the pdf, p. ...)
8.5 Confidence intervals
8.6 Bayesian Analysis of Samples from a Normal Distribution
8.7 Unbiased Estimators
8.8 Fisher Information
8.6 Bayesian Analysis of Samples from a Normal Distribution
Bayesian alternative to confidence intervals

Bayesian inference is based on the posterior distribution, but reporting a whole distribution may not be what you (or your client) want.
- Point estimates: Bayesian estimators minimize the expected loss.
- Interval estimates: simply use quantiles of the posterior distribution. For example, we can find constants $c_1$ and $c_2$ so that
$$P(c_1 < \theta < c_2 \mid X = x) \ge \gamma.$$
The interval $(c_1, c_2)$ is called a 100$\gamma$% credible interval for $\theta$.
Note: the interpretation is very different from the interpretation of confidence intervals.
8.6 Bayesian Analysis of Samples from a Normal Distribution
Example: the normal distribution

Let $X_1, \dots, X_n$ be a random sample from $N(\mu, \sigma^2)$. In Chapter 7.3 we saw: if $\sigma^2$ is known, the normal distribution is a conjugate prior for $\mu$.

Theorem 7.3.3: if the prior is $\mu \sim N(\mu_0, \nu_0^2)$, the posterior of $\mu$ is also normal, with mean and variance
$$\mu_1 = \frac{\sigma^2 \mu_0 + n\nu_0^2 \bar x_n}{\sigma^2 + n\nu_0^2} \quad \text{and} \quad \nu_1^2 = \frac{\sigma^2 \nu_0^2}{\sigma^2 + n\nu_0^2}.$$
We can obtain credible intervals for $\mu$ from this $N(\mu_1, \nu_1^2)$ posterior distribution.
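Theorem 7.3.3 translates directly into code. Here is a sketch using the hotdog sample mean, with an entirely hypothetical prior and a hypothetical known $\sigma$ (the values of $\mu_0$, $\nu_0^2$, and $\sigma^2$ below are our own choices, not from the slides):

```python
import numpy as np
from scipy import stats

# Hypothetical setup: sigma^2 treated as known, prior mu ~ N(mu0, nu0^2)
sigma2 = 22.0 ** 2         # assumed known variance (hypothetical)
mu0, nu0sq = 160.0, 100.0  # prior mean and variance (our own choice)
n, xbar = 20, 156.85       # sample size and observed sample mean (hotdog data)

mu1 = (sigma2 * mu0 + n * nu0sq * xbar) / (sigma2 + n * nu0sq)  # posterior mean
nu1sq = sigma2 * nu0sq / (sigma2 + n * nu0sq)                   # posterior variance

c = stats.norm.ppf(0.975)
cred = (mu1 - c * np.sqrt(nu1sq), mu1 + c * np.sqrt(nu1sq))     # 95% credible interval
```

Note how the posterior mean is a weighted average of the prior mean and the sample mean, so it lands between them, and how the credible interval has the direct probability interpretation that the confidence interval lacks.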
8.6 Bayesian Analysis of Samples from a Normal Distribution
Example: the normal distribution

What if both $\mu$ and $\sigma^2$ are unknown?
- Use the joint distribution of $\mu$ and $\sigma^2$ as the prior.
- Conjugate priors are available: the Normal-Inverse Gamma distribution.
- To give credible intervals for $\mu$ and $\sigma^2$ individually we need the marginal posterior distributions.
8.6 Bayesian Analysis of Samples from a Normal Distribution

END OF CHAPTER 8
More informationSTAT/MATH 395 PROBABILITY II
STAT/MATH 395 PROBABILITY II Distribution of Random Samples & Limit Theorems Néhémy Lim University of Washington Winter 2017 Outline Distribution of i.i.d. Samples Convergence of random variables The Laws
More informationCS340 Machine learning Bayesian statistics 3
CS340 Machine learning Bayesian statistics 3 1 Outline Conjugate analysis of µ and σ 2 Bayesian model selection Summarizing the posterior 2 Unknown mean and precision The likelihood function is p(d µ,λ)
More informationMTH6154 Financial Mathematics I Stochastic Interest Rates
MTH6154 Financial Mathematics I Stochastic Interest Rates Contents 4 Stochastic Interest Rates 45 4.1 Fixed Interest Rate Model............................ 45 4.2 Varying Interest Rate Model...........................
More informationGenerating Random Numbers
Generating Random Numbers Aim: produce random variables for given distribution Inverse Method Let F be the distribution function of an univariate distribution and let F 1 (y) = inf{x F (x) y} (generalized
More informationEVA Tutorial #1 BLOCK MAXIMA APPROACH IN HYDROLOGIC/CLIMATE APPLICATIONS. Rick Katz
1 EVA Tutorial #1 BLOCK MAXIMA APPROACH IN HYDROLOGIC/CLIMATE APPLICATIONS Rick Katz Institute for Mathematics Applied to Geosciences National Center for Atmospheric Research Boulder, CO USA email: rwk@ucar.edu
More informationStatistical Intervals (One sample) (Chs )
7 Statistical Intervals (One sample) (Chs 8.1-8.3) Confidence Intervals The CLT tells us that as the sample size n increases, the sample mean X is close to normally distributed with expected value µ and
More information10/1/2012. PSY 511: Advanced Statistics for Psychological and Behavioral Research 1
PSY 511: Advanced Statistics for Psychological and Behavioral Research 1 Pivotal subject: distributions of statistics. Foundation linchpin important crucial You need sampling distributions to make inferences:
More information1. Statistical problems - a) Distribution is known. b) Distribution is unknown.
Probability February 5, 2013 Debdeep Pati Estimation 1. Statistical problems - a) Distribution is known. b) Distribution is unknown. 2. When Distribution is known, then we can have either i) Parameters
More informationChapter 7. Sampling Distributions and the Central Limit Theorem
Chapter 7. Sampling Distributions and the Central Limit Theorem 1 Introduction 2 Sampling Distributions related to the normal distribution 3 The central limit theorem 4 The normal approximation to binomial
More informationCPSC 540: Machine Learning
CPSC 540: Machine Learning Monte Carlo Methods Mark Schmidt University of British Columbia Winter 2019 Last Time: Markov Chains We can use Markov chains for density estimation, d p(x) = p(x 1 ) p(x }{{}
More informationSampling and sampling distribution
Sampling and sampling distribution September 12, 2017 STAT 101 Class 5 Slide 1 Outline of Topics 1 Sampling 2 Sampling distribution of a mean 3 Sampling distribution of a proportion STAT 101 Class 5 Slide
More information**BEGINNING OF EXAMINATION** A random sample of five observations from a population is:
**BEGINNING OF EXAMINATION** 1. You are given: (i) A random sample of five observations from a population is: 0.2 0.7 0.9 1.1 1.3 (ii) You use the Kolmogorov-Smirnov test for testing the null hypothesis,
More information4.1 Introduction Estimating a population mean The problem with estimating a population mean with a sample mean: an example...
Chapter 4 Point estimation Contents 4.1 Introduction................................... 2 4.2 Estimating a population mean......................... 2 4.2.1 The problem with estimating a population mean
More informationCPSC 540: Machine Learning
CPSC 540: Machine Learning Monte Carlo Methods Mark Schmidt University of British Columbia Winter 2018 Last Time: Markov Chains We can use Markov chains for density estimation, p(x) = p(x 1 ) }{{} d p(x
More informationSimulation Wrap-up, Statistics COS 323
Simulation Wrap-up, Statistics COS 323 Today Simulation Re-cap Statistics Variance and confidence intervals for simulations Simulation wrap-up FYI: No class or office hours Thursday Simulation wrap-up
More informationmay be of interest. That is, the average difference between the estimator and the truth. Estimators with Bias(ˆθ) = 0 are called unbiased.
1 Evaluating estimators Suppose you observe data X 1,..., X n that are iid observations with distribution F θ indexed by some parameter θ. When trying to estimate θ, one may be interested in determining
More informationFinancial Risk Forecasting Chapter 9 Extreme Value Theory
Financial Risk Forecasting Chapter 9 Extreme Value Theory Jon Danielsson 2017 London School of Economics To accompany Financial Risk Forecasting www.financialriskforecasting.com Published by Wiley 2011
More informationIEOR E4703: Monte-Carlo Simulation
IEOR E4703: Monte-Carlo Simulation Simulation Efficiency and an Introduction to Variance Reduction Methods Martin Haugh Department of Industrial Engineering and Operations Research Columbia University
More informationBayesian Linear Model: Gory Details
Bayesian Linear Model: Gory Details Pubh7440 Notes By Sudipto Banerjee Let y y i ] n i be an n vector of independent observations on a dependent variable (or response) from n experimental units. Associated
More informationDealing with forecast uncertainty in inventory models
Dealing with forecast uncertainty in inventory models 19th IIF workshop on Supply Chain Forecasting for Operations Lancaster University Dennis Prak Supervisor: Prof. R.H. Teunter June 29, 2016 Dennis Prak
More informationEstimation of a parametric function associated with the lognormal distribution 1
Communications in Statistics Theory and Methods Estimation of a parametric function associated with the lognormal distribution Jiangtao Gou a,b and Ajit C. Tamhane c, a Department of Mathematics and Statistics,
More informationAn Improved Skewness Measure
An Improved Skewness Measure Richard A. Groeneveld Professor Emeritus, Department of Statistics Iowa State University ragroeneveld@valley.net Glen Meeden School of Statistics University of Minnesota Minneapolis,
More informationSection 2.4. Properties of point estimators 135
Section 2.4. Properties of point estimators 135 The fact that S 2 is an estimator of σ 2 for any population distribution is one of the most compelling reasons to use the n 1 in the denominator of the definition
More informationLecture 9 - Sampling Distributions and the CLT. Mean. Margin of error. Sta102/BME102. February 6, Sample mean ( X ): x i
Lecture 9 - Sampling Distributions and the CLT Sta102/BME102 Colin Rundel February 6, 2015 http:// pewresearch.org/ pubs/ 2191/ young-adults-workers-labor-market-pay-careers-advancement-recession Sta102/BME102
More informationSTAT 425: Introduction to Bayesian Analysis
STAT 45: Introduction to Bayesian Analysis Marina Vannucci Rice University, USA Fall 018 Marina Vannucci (Rice University, USA) Bayesian Analysis (Part 1) Fall 018 1 / 37 Lectures 9-11: Multi-parameter
More informationObjective Bayesian Analysis for Heteroscedastic Regression
Analysis for Heteroscedastic Regression & Esther Salazar Universidade Federal do Rio de Janeiro Colóquio Inter-institucional: Modelos Estocásticos e Aplicações 2009 Collaborators: Marco Ferreira and Thais
More informationMachine Learning for Quantitative Finance
Machine Learning for Quantitative Finance Fast derivative pricing Sofie Reyners Joint work with Jan De Spiegeleer, Dilip Madan and Wim Schoutens Derivative pricing is time-consuming... Vanilla option pricing
More informationPractice Exam 1. Loss Amount Number of Losses
Practice Exam 1 1. You are given the following data on loss sizes: An ogive is used as a model for loss sizes. Determine the fitted median. Loss Amount Number of Losses 0 1000 5 1000 5000 4 5000 10000
More information