Machine Learning, Lesson 7. Ferdowsi University of Mashhad, Faculty of Engineering. Reza Monsefi.


1 Machine Learning, Lesson 7: Sampling Distributions and Point Estimation of Parameters. Ferdowsi University of Mashhad, Faculty of Engineering. Reza Monsefi.

2 Outline
- Introduction
- General Concepts of Point Estimation
  - Unbiased Estimators
  - Proof that S is a Biased Estimator of σ
  - Variance of a Point Estimator
  - Standard Error: Reporting a Point Estimate
  - Bootstrap Estimate of the Standard Error
  - Mean Square Error of an Estimator
- Methods of Point Estimation
  - Method of Moments
  - Method of Maximum Likelihood
- Bayesian Estimation of Parameters
- Sampling Distributions
  - Sampling Distributions of Means

3 Introduction
The field of statistical inference consists of methods used to make decisions or to draw conclusions about a population. These methods utilize the information contained in a random sample from the population in drawing conclusions (statistical methods used for inference and decision making).
Statistical inference may be divided into two major areas:
1) Parameter Estimation
   - Point Estimation: use sample data to compute a single number that is in some sense a reasonable value (or guess) of the true unknown parameter (e.g., the mean).
   - Interval Estimation: construct an interval that contains the unknown population parameter with high confidence.
2) Hypothesis Testing: decide whether to accept or reject a statement about some parameter.

4 Terminology
Suppose that we want to obtain a point estimate of a population parameter. Before the data are collected, the observations $(X_1, X_2, \dots, X_n)$ are considered to be random variables; therefore, any function of the observations, that is, any statistic, is also a random variable. For example, the sample mean $\bar X$ and the sample variance $S^2$ are statistics and so are random variables; once the sample is drawn they take on numerical values, to which we may later attach confidence intervals (CIs).

5 Terminology
Sampling distribution: the probability distribution of a statistic. Since a statistic is a random variable, it has a probability distribution (pd).
Note: the notion of a sampling distribution is very important.
General symbol: when discussing inference problems, it is convenient to have a general symbol to represent the parameter of interest; we use the Greek symbol θ (theta).
Objective of point estimation: to select a single number, based on sample data, that is the most plausible value for θ.
Numerical value: a numerical value of a sample statistic will be used as the point estimate.
Random processes (Monte Carlo, PSO, ACO, GA, SA) likewise give rise to distributions.

6 Terminology
Let X be a random variable with probability distribution (pd) f(x), characterized by the unknown parameter θ, say $f(x; \theta)$, and let $X_1, X_2, \dots, X_n$ be a random sample of size n from X.
The statistic $\hat\Theta = h(X_1, X_2, \dots, X_n)$ is called a point estimator of θ.
Hint: $\hat\Theta$ is a random variable because it is a function of random variables.
Point estimate: after the sample has been selected, $\hat\Theta$ takes on a particular numerical value $\hat\theta$, called the point estimate of θ.
Definition (Point Estimator): A point estimate of some population parameter θ is a single numerical value $\hat\theta$ of a statistic $\hat\Theta$. The statistic $\hat\Theta$ is called the point estimator.
Example: Suppose that the random variable X is normally distributed with an unknown mean µ. The sample mean is a point estimator of the unknown population mean µ; that is, $\hat\Theta = \bar X$. After the sample has been selected, the numerical value $\bar x$ is the point estimate of µ. Thus, if $x_1 = 25$, $x_2 = 30$, $x_3 = 29$, and $x_4 = 31$, the point estimate of µ is $\bar x = (25 + 30 + 29 + 31)/4 = 28.75$.

7 Similarly, if the population variance σ² is also unknown, a point estimator for σ² is the sample variance S², and the numerical value s² = 6.9 calculated from the sample data is called the point estimate of σ².
Estimation problems occur frequently in engineering. We often need to estimate:
- the mean µ of a single population,
- the variance σ² (or standard deviation σ) of a single population,
- the proportion p of items in a population that belong to a class of interest,
- the difference in the means of two populations, µ₁ − µ₂,
- the difference in two population proportions, p₁ − p₂.
Reasonable point estimates of these parameters are:
- for µ, the estimate is $\hat\mu = \bar x$, the sample mean;
- for σ², the estimate is $\hat\sigma^2 = s^2$, the sample variance;
- for p, the estimate is $\hat p = x/n$, the sample proportion, where x is the number of items in a random sample of size n that belong to the class of interest;
- for µ₁ − µ₂, the estimate is $\hat\mu_1 - \hat\mu_2 = \bar x_1 - \bar x_2$, the difference between the sample means of two independent random samples;
- for p₁ − p₂, the estimate is $\hat p_1 - \hat p_2$, the difference between two sample proportions computed from two independent random samples.

8 Note: There may be several different choices for the point estimator of a parameter. For example, to estimate the mean of a population we might consider the sample mean, the sample median, or perhaps the average of the smallest and largest observations in the sample as point estimators. In order to decide which point estimator of a particular parameter is the best one to use, we need to examine their statistical properties and develop some criteria for comparing estimators.

9 Sampling Distributions
Statistical inference: concerned with making decisions about a population based on the information contained in a random sample from that population.
Random variable: the sample mean is a statistic; that is, it is a random variable that depends on the results obtained in each particular sample. Since a statistic is a random variable, it has a probability distribution. The random variables are usually assumed to be independent and identically distributed (iid); such a collection of random variables is known as a random sample.
Definition (Sampling Distribution): The probability distribution of a statistic is called a sampling distribution. For example, the probability distribution of $\bar X$ is called the sampling distribution of the mean. The sampling distribution of a statistic depends on the distribution of the population, the size of the sample, and the method of sample selection.

10 Sampling Distributions of Means
[Figure: a population with mean µ and variance σ², alongside the sampling distribution of the sample mean $\bar X$, which has mean µ and variance σ²/n.]

11 The Central Limit Theorem
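The theorem's statement on this slide did not survive transcription; the standard form, consistent with the notation used above, is: if $X_1, X_2, \dots, X_n$ is a random sample of size n taken from a population with mean µ and finite variance σ², and $\bar X$ is the sample mean, then the limiting distribution of
$$ Z = \frac{\bar X - \mu}{\sigma/\sqrt{n}} $$
as $n \to \infty$ is the standard normal distribution $N(0, 1)$.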

12 Distributions of average scores from throwing dice
[Figure: histograms of the average of n dice rolls, produced with a random-number generator; the distribution of the average approaches a normal shape as n grows.]
If n < 30, the central-limit approximation still works well, provided the distribution of the population is not severely non-normal.
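A minimal Python sketch (not from the original slides) that reproduces this dice experiment; the replication count of 10,000 is an arbitrary choice:

    import numpy as np

    rng = np.random.default_rng(0)  # fixed seed for reproducibility

    for n in (1, 2, 5, 30):
        rolls = rng.integers(1, 7, size=(10_000, n))  # 10,000 replications of n dice
        averages = rolls.mean(axis=1)                 # average score per replication
        print(f"n={n:2d}  mean={averages.mean():.3f}  std={averages.std():.3f}")

The printed means stay near 3.5 while the standard deviations shrink roughly like $1.71/\sqrt{n}$ (1.71 being the standard deviation of a single fair die), and histograms of the averages look increasingly normal, which is exactly what the figure shows.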

13 Example: (the slide's worked example was not preserved; an illustrative stand-in follows)
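As a stand-in for the lost example (the numbers here are illustrative, not from the original slide): suppose a population has mean µ = 100 and standard deviation σ = 10, and a random sample of n = 25 is drawn. Then $\bar X$ is approximately normal with mean 100 and standard deviation $10/\sqrt{25} = 2$, so
$$ P(\bar X > 102) \approx P\!\left(Z > \frac{102 - 100}{2}\right) = P(Z > 1) \approx 0.1587. $$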

14 Definition: (Approximate Sampling Distribution of a Difference in Sample Means)
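The body of this definition was lost in transcription; the standard statement, consistent with the notation above, is: if we have two independent populations with means $\mu_1$ and $\mu_2$ and variances $\sigma_1^2$ and $\sigma_2^2$, and if $\bar X_1$ and $\bar X_2$ are the sample means of two independent random samples of sizes $n_1$ and $n_2$ from these populations, then the sampling distribution of
$$ Z = \frac{\bar X_1 - \bar X_2 - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}} $$
is approximately standard normal if the conditions of the central limit theorem apply; if the two populations are normal, it is exactly standard normal.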

15 General Concepts of Point Estimation: Unbiased Estimators
An estimator should be close in some sense to the true value of the unknown parameter. $\hat\Theta$ is an unbiased estimator of θ if the expected value of $\hat\Theta$ is equal to θ, i.e., $E(\hat\Theta) = \theta$. This is equivalent to saying that the mean of the probability distribution of $\hat\Theta$ (or the mean of the sampling distribution of $\hat\Theta$) is equal to θ.
Definition (Bias of an Estimator): The point estimator $\hat\Theta$ is an unbiased estimator for the parameter θ if $E(\hat\Theta) = \theta$. If the estimator is not unbiased, then the difference $E(\hat\Theta) - \theta$ is called the bias of the estimator $\hat\Theta$. When an estimator is unbiased, the bias is zero; that is, $E(\hat\Theta) - \theta = 0$.

16 Example (Sample Mean and Variance Are Unbiased): Suppose that X is a random variable with mean µ and variance σ². Let $X_1, X_2, \dots, X_n$ be a random sample of size n from the population represented by X. Show that the sample mean $\bar X$ and sample variance $S^2$ are unbiased estimators of µ and σ², respectively.
Sample mean: $E(\bar X) = \mu$ (shown in previous lessons), so the sample mean $\bar X$ is an unbiased estimator of the population mean µ.
Sample variance: we have
$$ E(S^2) = E\!\left[\frac{\sum_{i=1}^{n}(X_i - \bar X)^2}{n-1}\right] = \frac{1}{n-1}\, E\!\left[\sum_{i=1}^{n} X_i^2 - n\bar X^2\right] = \frac{1}{n-1}\left[\sum_{i=1}^{n} E(X_i^2) - n\, E(\bar X^2)\right]. $$

17 The last equality follows from the equation for the mean of a linear function. However, since $E(X_i^2) = \mu^2 + \sigma^2$ and $E(\bar X^2) = \mu^2 + \sigma^2/n$, we have
$$ E(S^2) = \frac{1}{n-1}\left[\sum_{i=1}^{n}(\mu^2 + \sigma^2) - n\Big(\mu^2 + \frac{\sigma^2}{n}\Big)\right] = \frac{1}{n-1}\left(n\mu^2 + n\sigma^2 - n\mu^2 - \sigma^2\right) = \sigma^2. $$
Therefore, the sample variance S² is an unbiased estimator of the population variance σ².
Although S² is unbiased for σ², S is a biased estimator of σ: since $\mathrm{Var}(S) = E(S^2) - [E(S)]^2 > 0$ and $E(S^2) = \sigma^2$, it follows that $E(S) < \sigma$. For large samples, the bias is very small. However, there are good reasons for using S as an estimator of σ in samples from normal distributions, as we will see soon when we discuss confidence intervals and hypothesis testing.

18 Sometimes there are several unbiased estimators of the same population parameter. For example, suppose we take a random sample of size n = 10 from a normal population and obtain the data x₁ = 12.8, x₂ = 9.4, x₃ = 8.7, x₄ = 11.6, x₅ = 13.1, x₆ = 9.8, x₇ = 14.1, x₈ = 8.5, x₉ = 12.1, x₁₀ = 10.3. Now the sample mean is $\bar x = 110.4/10 = 11.04$, the sample median is $\tilde x = (10.3 + 11.6)/2 = 10.95$, and a 10% trimmed mean (obtained by discarding the smallest and largest 10% of the sample before averaging) is $\bar x_{tr(10)} = 87.8/8 = 10.98$. We can show that all of these are unbiased estimators of µ. Since there is not a unique unbiased estimator, we cannot rely on the property of unbiasedness alone to select our estimator. We need a method to select among unbiased estimators, which will be suggested shortly.
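A small Python sketch (not part of the original slides) that reproduces the three estimates:

    import numpy as np

    x = np.array([12.8, 9.4, 8.7, 11.6, 13.1, 9.8, 14.1, 8.5, 12.1, 10.3])

    mean = x.mean()                    # sample mean
    median = np.median(x)              # sample median
    trimmed = np.sort(x)[1:-1].mean()  # 10% trimmed mean: drop one low, one high
    print(mean, median, trimmed)       # -> 11.04  10.95  10.975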

19 Variance of a Point Estimator
Suppose that $\hat\Theta_1$ and $\hat\Theta_2$ are unbiased estimators of θ. This indicates that the distribution of each estimator is centered at the true value of θ. However, the variances of these distributions may be different. Since $\hat\Theta_1$ has a smaller variance than $\hat\Theta_2$, the estimator $\hat\Theta_1$ is more likely to produce an estimate close to the true value θ.
Note: A logical principle of estimation, when selecting among several estimators, is to choose the estimator that has minimum variance.

20 Definition: If we consider all unbiased estimators of θ, the one with the smallest variance is called the Minimum Variance Unbiased Estimator (MVUE). In a sense, the MVUE is the estimator most likely among all unbiased estimators to produce an estimate $\hat\theta$ that is close to the true value of θ. It has been possible to develop methodology to identify the MVUE in many practical situations.
Theorem: If $X_1, X_2, \dots, X_n$ is a random sample of size n from a normal distribution with mean µ and variance σ², the sample mean $\bar X$ is the MVUE for µ.
In situations in which we do not know whether an MVUE exists, we could still use a minimum-variance principle to choose among competing estimators. Suppose, for example, we wish to estimate the mean of a population (not necessarily a normal population). We have a random sample of n observations $X_1, X_2, \dots, X_n$ and we wish to compare two possible estimators for µ: the sample mean $\bar X$ and a single observation from the sample, say $X_i$. Both $\bar X$ and $X_i$ are unbiased estimators of µ; for the sample mean we have $V(\bar X) = \sigma^2/n$, and the variance of any single observation is $V(X_i) = \sigma^2$. Since $V(\bar X) < V(X_i)$ for sample sizes $n \ge 2$, we conclude that the sample mean is a better estimator of µ than a single observation $X_i$.

21 Methods of Point Estimation
The definitions of unbiasedness and the other properties of estimators do not provide any guidance about how good estimators can be obtained. In this lesson, two methods for obtaining point estimators are discussed: the method of moments and the method of maximum likelihood (ML). Maximum likelihood estimates (MLE) are generally preferable to moment estimators (ME) because they have better efficiency properties. However, moment estimators are sometimes easier to compute. Both methods can produce unbiased point estimators.

22 Method of Moments
Idea behind the method of moments: equate population moments, which are defined in terms of expected values, to the corresponding sample moments. The population moments will be functions of the unknown parameters. These equations are then solved to yield estimators of the unknown parameters.
Definition (Moments): Let $X_1, X_2, \dots, X_n$ be a random sample from the probability distribution f(x), where f(x) can be a discrete probability mass function (PMF) or a continuous probability density function (PDF). The k-th population moment (or distribution moment) is $E(X^k)$, k = 1, 2, .... The corresponding k-th sample moment is $\frac{1}{n}\sum_{i=1}^{n} X_i^k$, k = 1, 2, ....
To illustrate, the first population moment is $E(X) = \mu$, and the first sample moment is $\frac{1}{n}\sum_{i=1}^{n} X_i = \bar X$. Thus, by equating the population and sample moments, we find that $\hat\mu = \bar X$; that is, the sample mean is the moment estimator of the population mean. In the general case, the population moments will be functions of the unknown parameters of the distribution, say θ₁, θ₂, ..., θ_m.

23 Definition (Moment Estimators): Let $X_1, X_2, \dots, X_n$ be a random sample from either a probability mass function or a probability density function with m unknown parameters θ₁, θ₂, ..., θ_m. The moment estimators are found by equating the first m population moments to the first m sample moments and solving the resulting equations for the unknown parameters.
Example (Exponential Distribution Moment Estimators): see the reconstruction below.
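The worked example did not survive transcription; a reconstruction of the standard derivation: the exponential distribution has a single parameter λ, so only one moment equation is needed. Since $E(X) = 1/\lambda$, equating the first population and sample moments gives
$$ \frac{1}{\lambda} = \bar X \quad\Longrightarrow\quad \hat\lambda = \frac{1}{\bar X}. $$
For instance (an illustrative number, not from the slide), a sample with $\bar x = 20$ would yield $\hat\lambda = 0.05$.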

24 Example (Normal Distribution Moment Estimators): biased estimate?
Example (Gamma Distribution Moment Estimators)
(Both derivations appear in the reconstruction below.)
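Both derivations were lost in transcription; standard reconstructions, using the definitions above (the gamma case assumes the shape-rate parametrization):
Normal: $E(X) = \mu$ and $E(X^2) = \mu^2 + \sigma^2$, so equating the first two population moments to the first two sample moments gives
$$ \hat\mu = \bar X, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar X^2 = \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar X)^2, $$
which is the divide-by-n form of the sample variance, hence a biased estimator of σ².
Gamma: with shape r and rate λ, $E(X) = r/\lambda$ and $E(X^2) = r(r+1)/\lambda^2$. Solving the two moment equations gives
$$ \hat r = \frac{\bar X^2}{\frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar X^2}, \qquad \hat\lambda = \frac{\bar X}{\frac{1}{n}\sum_{i=1}^{n} X_i^2 - \bar X^2}. $$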

25 The moments in order:
- Moment 0: always equal to 1, since $E(X^0) = 1$.
- Moment 1: the mean.
- Moment 2: the variance (the second central moment).
- Moment 3: skewness. In probability theory and statistics, skewness is a measure of the extent to which a probability distribution of a real-valued random variable "leans" to one side of the mean. The skewness value can be positive or negative, or even undefined.
- Moment 4: kurtosis. In probability theory and statistics, kurtosis (from the Greek word κυρτός, kyrtos or kurtos, meaning curved, arching) is any measure of the "peakedness" of the probability distribution of a real-valued random variable. Like skewness, kurtosis is a descriptor of the shape of a probability distribution, and, just as for skewness, there are different ways of quantifying it for a theoretical distribution and corresponding ways of estimating it from a sample. There are various interpretations of kurtosis, and of how particular measures should be interpreted; these are primarily peakedness (width of the peak), tail weight, and lack of shoulders (a distribution concentrated in the peak and tails rather than in between).

26 Gaussian (Normal) Distribution: mean µ, variance σ², skewness 0, and kurtosis 3 (i.e., excess kurtosis 0).

27 Method of Maximum Likelihood
One of the best methods of obtaining a point estimator of a parameter is the method of maximum likelihood estimation (MLE). This technique was developed in the 1920s by the famous British statistician Sir R. A. Fisher. As the name implies, the estimator will be the value of the parameter that maximizes the likelihood function.
Definition (Maximum Likelihood Estimator): Suppose that X is a random variable with probability distribution f(x; θ), where θ is a single unknown parameter. Let $x_1, x_2, \dots, x_n$ be the observed values in a random sample of size n. Then the likelihood function of the sample is
$$ L(\theta) = f(x_1; \theta) \cdot f(x_2; \theta) \cdots f(x_n; \theta). $$
Note: the likelihood function is now a function of only the unknown parameter θ. The maximum likelihood estimator (MLE) of θ is the value of θ that maximizes the likelihood function L(θ).

28 In the case of a discrete random variable, the interpretation of the likelihood function is clear. The likelihood function of the sample, L(θ), is just the probability
$$ L(\theta) = P(X_1 = x_1, X_2 = x_2, \dots, X_n = x_n; \theta); $$
that is, L(θ) is just the probability of obtaining the sample values $x_1, x_2, \dots, x_n$. Therefore, in the discrete case, the maximum likelihood estimator is an estimator that maximizes the probability of occurrence of the sample values.
Example (Bernoulli Distribution MLE): see the reconstruction below.
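The derivation on the slide was lost in transcription; the standard reconstruction: let X be Bernoulli with $P(X = x) = p^x (1-p)^{1-x}$ for $x \in \{0, 1\}$. Then
$$ L(p) = \prod_{i=1}^{n} p^{x_i}(1-p)^{1-x_i} = p^{\sum_i x_i}\,(1-p)^{\,n - \sum_i x_i}, $$
$$ \ln L(p) = \Big(\sum_{i=1}^{n} x_i\Big)\ln p + \Big(n - \sum_{i=1}^{n} x_i\Big)\ln(1-p), $$
and setting $\frac{d \ln L(p)}{dp} = \frac{\sum_i x_i}{p} - \frac{n - \sum_i x_i}{1-p} = 0$ gives the maximum likelihood estimator
$$ \hat p = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar x, $$
the sample proportion of successes.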


30 Example (Normal Distribution MLE): see the reconstruction below.
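The derivation was lost in transcription; the standard reconstruction, for the mean of a normal distribution with known variance σ²:
$$ L(\mu) = \prod_{i=1}^{n} \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x_i-\mu)^2/(2\sigma^2)} = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\Big[-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2\Big], $$
$$ \frac{d\ln L(\mu)}{d\mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \mu) = 0 \quad\Longrightarrow\quad \hat\mu = \bar X. $$
The MLE of the normal mean is therefore the sample mean.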

31 Example (Exponential Distribution MLE): see the reconstruction below.
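The derivation was lost in transcription; the standard reconstruction: for $f(x; \lambda) = \lambda e^{-\lambda x}$, $x > 0$,
$$ L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^n e^{-\lambda \sum_i x_i}, \qquad \ln L(\lambda) = n \ln\lambda - \lambda \sum_{i=1}^{n} x_i, $$
$$ \frac{d\ln L(\lambda)}{d\lambda} = \frac{n}{\lambda} - \sum_{i=1}^{n} x_i = 0 \quad\Longrightarrow\quad \hat\lambda = \frac{n}{\sum_{i=1}^{n} x_i} = \frac{1}{\bar X}, $$
which coincides with the moment estimator obtained earlier.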

32 Example (Normal Distribution MLE for µ and σ²): why divide by n? See the reconstruction below.
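The derivation was lost in transcription; the standard reconstruction for both parameters:
$$ \ln L(\mu, \sigma^2) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2. $$
Setting the partial derivatives with respect to µ and σ² equal to zero and solving yields
$$ \hat\mu = \bar X, \qquad \hat\sigma^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar X)^2. $$
Note the divisor n rather than n − 1: the MLE of σ² is biased, although the bias disappears as n grows.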

33 Maximum Likelihood Estimators
Estimator: any statistic used to estimate the value of an unknown parameter θ is called an estimator of θ.
Estimate: the observed value of the estimator is called the estimate. For instance, as we shall see, the usual estimator of the mean of a normal population, based on a sample $X_1, \dots, X_n$ from that population, is the sample mean $\bar X = \frac{1}{n}\sum_i X_i$. If a sample of size 3 yields the data $X_1 = 2$, $X_2 = 3$, $X_3 = 4$, then the estimate of the population mean, resulting from the estimator $\bar X$, is the value 3.

34 Suppose that the random variables $X_1, \dots, X_n$, whose joint distribution is assumed given except for an unknown parameter θ, are to be observed. The problem of interest is to use the observed values to estimate θ. For example, the $X_i$ might be independent exponential random variables, each having the same unknown mean θ. In this case, the joint density function of the random variables would be given by
$$ f(x_1, x_2, \dots, x_n) = \prod_{i=1}^{n} \frac{1}{\theta}\, e^{-x_i/\theta} = \frac{1}{\theta^n} \exp\!\Big(-\frac{1}{\theta}\sum_{i=1}^{n} x_i\Big), \qquad x_i > 0. $$

35 Given: a random variable X has a probability distribution that is a function of a parameter θ.
Notation: we write the probability distribution as f(x | θ), which emphasizes that the exact form of the distribution of X is conditional on the value assigned to θ.
Classical approach to estimation: take a random sample of size n from this distribution and then substitute the sample values x_i into an estimator of θ.
Additional information: suppose that we have some additional information about θ, and that we can summarize that information in the form of a probability distribution for θ, say f(θ). This probability distribution is often called the prior distribution for θ. Its mean is µ₀ and its variance is σ₀²; for example, µ₀ = 0 and σ₀² = 1.
[Figure: the prior distribution f(θ).]

36 This is a very novel concept insofar as the rest of this lesson is concerned, because we are now viewing the parameter θ as a random variable. The probabilities associated with the prior distribution are often called subjective probabilities (degrees of belief), in that they usually reflect the analyst's degree of belief regarding the true value of θ.
The Bayesian approach to estimation uses the prior distribution for θ, f(θ), and the joint probability distribution of the sample, say $f(x_1, x_2, \dots, x_n \mid \theta)$, to find a posterior distribution for θ, say $f(\theta \mid x_1, x_2, \dots, x_n)$. This posterior distribution contains information both from the sample and from the prior distribution for θ. In a sense, it expresses our degree of belief regarding the true value of θ after observing the sample data. Conceptually, it is easy to find the posterior distribution.

37 Bayesian Estimation of Parameters
Name of the game: statistical inference based on the information in the sample data.
Two views of probability:
1) Relative frequencies: the objective-probabilities approach; the usual approach, i.e., estimation based on MLE.
2) Degree of belief: the subjective-probabilities approach; the Bayesian approach, which combines sample information with other information that may be available prior to collecting the sample.
Purpose of this section: briefly illustrate how the Bayesian approach may be used in parameter estimation.

38 The joint probability distribution of the sample $X_1, X_2, \dots, X_n$ and the parameter θ (remember that θ is a random variable) is
$$ f(x_1, x_2, \dots, x_n, \theta) = f(x_1, x_2, \dots, x_n \mid \theta)\, f(\theta), $$
and the marginal distribution of $X_1, X_2, \dots, X_n$ is
$$ f(x_1, x_2, \dots, x_n) = \begin{cases} \sum_{\theta} f(x_1, x_2, \dots, x_n, \theta), & \theta \text{ discrete}, \\[4pt] \int_{-\infty}^{\infty} f(x_1, x_2, \dots, x_n, \theta)\, d\theta, & \theta \text{ continuous}. \end{cases} $$
Therefore, the desired posterior distribution is
$$ f(\theta \mid x_1, x_2, \dots, x_n) = \frac{f(x_1, x_2, \dots, x_n, \theta)}{f(x_1, x_2, \dots, x_n)} = \frac{f(x_1, x_2, \dots, x_n \mid \theta)\, f(\theta)}{f(x_1, x_2, \dots, x_n)}; $$
that is, posterior = (likelihood × prior) / evidence, where the numerator is the joint probability distribution of the sample and θ.

39 We define the Bayes estimator of θ as the mean of the posterior distribution $f(\theta \mid x_1, x_2, \dots, x_n)$.
Sometimes the mean of the posterior distribution of θ can be determined easily. As a function of θ, $f(\theta \mid x_1, x_2, \dots, x_n)$ is a probability density function in which $x_1, x_2, \dots, x_n$ are just constants. Because θ enters $f(\theta \mid x_1, x_2, \dots, x_n)$ only through $f(x_1, x_2, \dots, x_n, \theta)$, if $f(x_1, x_2, \dots, x_n, \theta)$, viewed as a function of θ, is recognized as a well-known probability function, then the posterior mean of θ can be deduced from that well-known distribution without integration, or even without calculating $f(x_1, x_2, \dots, x_n)$.

40 Example (Bayes Estimator for the Mean of a Normal Distribution): Let $X_1, X_2, \dots, X_n$ be a random sample from the normal distribution with mean µ and variance σ², where µ is unknown and σ² is known (a single unknown parameter). Assume that the prior distribution for µ is normal with mean µ₀ and variance σ₀²; that is,
$$ f(\mu) = \frac{1}{\sqrt{2\pi}\,\sigma_0} \exp\!\Big[-\frac{(\mu - \mu_0)^2}{2\sigma_0^2}\Big]. $$
The joint probability distribution of the (iid) sample, given µ, is
$$ f(x_1, x_2, \dots, x_n \mid \mu) = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\Big[-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2\Big] = \frac{1}{(2\pi\sigma^2)^{n/2}} \exp\!\Big[-\frac{1}{2\sigma^2}\Big(n\mu^2 - 2\mu\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} x_i^2\Big)\Big]. $$

41 Thus, the joint probability distribution of the sample and µ is
$$ f(x_1, \dots, x_n, \mu) = f(x_1, \dots, x_n \mid \mu)\, f(\mu) \propto \exp\!\Big[-\frac{1}{2}\Big(\frac{(\mu - \mu_0)^2}{\sigma_0^2} + \frac{\sum_i (x_i - \mu)^2}{\sigma^2}\Big)\Big]. $$
Upon completing the square in the exponent,
$$ f(x_1, \dots, x_n, \mu) = \exp\!\Big[-\frac{1}{2}\Big(\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}\Big)\Big(\mu - \frac{\mu_0/\sigma_0^2 + n\bar x/\sigma^2}{1/\sigma_0^2 + n/\sigma^2}\Big)^{\!2}\Big]\; h_1(x_1, \dots, x_n, \sigma^2, \mu_0, \sigma_0^2), $$
where $h_1(x_1, \dots, x_n, \sigma^2, \mu_0, \sigma_0^2)$ is a function of the observed values, σ², µ₀, and σ₀² that does not involve µ. Now, because $f(x_1, \dots, x_n)$ does not depend on µ, the posterior $f(\mu \mid x_1, \dots, x_n)$ is proportional to the expression above.

42 This is recognized as a normal probability density function with posterior mean
$$ \mu_{\text{post}} = \frac{\sigma^2 \mu_0 + n\sigma_0^2\, \bar x}{\sigma^2 + n\sigma_0^2} $$
and posterior variance
$$ \sigma_{\text{post}}^2 = \frac{\sigma_0^2\,(\sigma^2/n)}{\sigma_0^2 + \sigma^2/n} = \Big(\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}\Big)^{-1}. $$
Consequently, the Bayes estimate of µ is a weighted average of µ₀ and $\bar x$. For purposes of comparison, note that the maximum likelihood estimate of µ is $\bar x$.
To illustrate, suppose that we have a sample of size n = 10 from a normal distribution with unknown mean µ and variance σ² = 4. Assume that the prior distribution for µ is normal with mean µ₀ = 0 and variance σ₀² = 1. If the sample mean is 0.75, the Bayes estimate of µ is
$$ \frac{(4)(0) + (10)(1)(0.75)}{4 + (10)(1)} = \frac{7.5}{14} \approx 0.536, $$
whereas the maximum likelihood estimate is $\bar x = 0.75$.
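A minimal Python sketch (not from the slides) of the batch form of this update, using the example's numbers:

    # Posterior mean/variance of a normal mean with known variance (conjugate normal prior)
    def posterior(mu0, var0, xbar, var, n):
        post_var = 1.0 / (1.0 / var0 + n / var)          # 1/post_var = 1/var0 + n/var
        post_mean = post_var * (mu0 / var0 + n * xbar / var)
        return post_mean, post_var

    print(posterior(mu0=0.0, var0=1.0, xbar=0.75, var=4.0, n=10))
    # -> (0.5357..., 0.2857...): the Bayes estimate 0.536 from the example above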

43 Bayesian Estimation of Parameters (Another approach)

44 In precision form (precision = 1/variance = 1/σ²):
$$ \frac{1}{\sigma_n^2} = \frac{1}{\sigma_0^2} + n \cdot \frac{1}{\sigma^2} \qquad (\text{posterior precision} = \text{prior precision} + n \times \text{data precision}), $$
so for n = 0 the posterior precision equals the prior precision, and as n → ∞ the data term $n/\sigma^2$ dominates. Likewise,
$$ \mu_n = \sigma_n^2\Big(\frac{\mu_0}{\sigma_0^2} + \frac{\sum_i x_i}{\sigma^2}\Big) = \frac{\sigma_n^2}{\sigma_0^2}\,\mu_0 + \frac{\sigma_n^2}{\sigma^2}\, n\, \mu_{ML}, $$
so for n = 0, $\mu_n = \mu_0$, and as n → ∞, $\mu_n \to \mu_{ML}$.
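A short Python sketch (an illustration, not from the slides) of this precision-based update applied one observation at a time; the observation values are hypothetical:

    # Sequential conjugate update for a normal mean with known variance:
    # each observation adds 1/var to the posterior precision.
    def sequential_posterior(mu0, var0, xs, var):
        mean, prec = mu0, 1.0 / var0
        for x in xs:
            prec_new = prec + 1.0 / var                 # precision_n = precision_{n-1} + 1/sigma^2
            mean = (prec * mean + x / var) / prec_new   # precision-weighted average
            prec = prec_new
        return mean, 1.0 / prec                         # posterior mean and variance

    xs = [0.9, 0.3, 1.2]  # hypothetical observations
    print(sequential_posterior(mu0=0.0, var0=1.0, xs=xs, var=4.0))
    # -> (0.3428..., 0.5714...)

With xs empty this returns the prior, mirroring the n = 0 case, and after all n observations it matches the batch formulas above.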


46 Remarks: There is a relationship between the Bayes estimator for a parameter and the maximum likelihood estimator of the same parameter.
- For large sample sizes, the two are nearly equivalent. In general, the difference between the two estimators is small compared to $1/\sqrt{n}$.
- In practical problems, a moderate sample size will produce approximately the same estimate by either the Bayes or the maximum likelihood method if the sample results are consistent with the assumed prior information.
- If the sample results are inconsistent with the prior assumptions, the Bayes estimate may differ considerably from the maximum likelihood estimate. In these circumstances, if the sample results are accepted as being correct, the prior information must be incorrect, and the maximum likelihood estimate would then be the better estimate to use.
- If the sample results are very different from the prior information, the Bayes estimator will always tend to produce an estimate that is between the maximum likelihood estimate and the prior assumptions. The more inconsistency there is between the prior information and the sample, the greater the difference between the two estimates.

47 Suppose that we want to obtain a point estimate of a population parameter. We know that before the data are collected, the observations are considered to be random variables, say $X_1, X_2, \dots, X_n$. Therefore, any function of the observations, i.e., any statistic, is also a random variable. For example, the sample mean $\bar X$ and the sample variance S² are statistics and they are also random variables.
Since a statistic is a random variable, it has a probability distribution. We call the probability distribution of a statistic a sampling distribution. The notion of a sampling distribution is very important and will be discussed and illustrated later in the lesson.
When discussing inference problems, it is convenient to have a general symbol to represent the parameter of interest. We will use the Greek symbol θ (theta) to represent the parameter. The objective of point estimation is to select a single number, based on sample data, that is the most plausible value for θ. A numerical value of a sample statistic will be used as the point estimate.

48 Parameter Estimation
Let $X_1, \dots, X_n$ be a random sample from a distribution $F_\theta$ that is specified up to a vector of unknown parameters θ. The sample could be from a Poisson distribution whose mean value is unknown, or it could be from a normal distribution having an unknown mean and variance. It is usual in probability theory to suppose that all of the parameters of a distribution are known; the opposite is true in statistics, where a central problem is to use the observed data to make inferences about the unknown parameters. Here we present the maximum likelihood (ML) method for determining estimators of unknown parameters.

49 The estimates so obtained are called point estimates, because they specify a single quantity as an estimate of θ. Later on, we consider the problem of obtaining interval estimates; in this case, rather than specifying a certain value as our estimate of θ, we specify an interval in which we estimate that θ lies. Additionally, we consider the question of how much confidence we can attach to such an interval estimate. We illustrate by showing how to obtain an interval estimate of the unknown mean of a normal distribution whose variance is specified. We then consider a variety of interval estimation problems: we present an interval estimate of the mean of a normal distribution whose variance is unknown, and we obtain an interval estimate of the variance of a normal distribution.

50 We determine an interval estimate for the difference of two normal means, both when their variances are assumed to be known and when they are assumed to be unknown (although in the latter case we suppose that the unknown variances are equal). We present interval estimates of the mean of a Bernoulli random variable and of the mean of an exponential random variable. We then return to the general problem of obtaining point estimates of unknown parameters and show how to evaluate an estimator by considering its mean square error. The bias of an estimator is discussed, and its relationship to the mean square error is explored. Finally, we consider the problem of determining an estimate of an unknown parameter when there is some prior information available. This is the Bayesian approach, which supposes that, prior to observing the data, information about θ is available to the decision maker, and that this information can be expressed in terms of a probability distribution on θ. In such a situation, we show how to compute the Bayes estimator, which is the estimator whose expected squared distance from θ is minimal.
