Random Sampling & Confidence Intervals


1 Random Sampling & Confidence Intervals  Lecture / Dr. P.'s Clinic Consultant Module in Probability & Statistics in Engineering

2 This Week in P&S (and Next, and Next...)  This week (Oct 5): Random Sampling. The essence of statistics, sampling distributions, the Central Limit Theorem, and an introduction to confidence intervals. Next week (Oct 12): Joint probability distributions; guest lecture; slides are already on the web. Come prepared (having read the text/slides) so that you can follow more easily. The week after (Oct 19): Continue with confidence intervals. And the week after that (Oct 26): Midterm Exam.

3 Statistics  So what do we do with all that stuff we have learned so far, the means and the variances and the distributions and the...? The problem with the world is, well, it is too big! Too many people to vote, too many chips to evaluate, too many students to tabulate their weights/heights/grades, too many of too many things to keep track of! It is hard work! The real business of statistics is to save time and money! Nobody likes doing unnecessary work(!), such as checking every single chip to see if it is defective, and one thing statistics can do is tell us precisely how lazy we can be!

4 An example  Let's assume that the true weight distribution on this campus is in fact normal with a mean of 160 lbs and a std. dev. of 10. We do not know this fact, and wish to find out. Since there are 10,000 students, we don't want to ask all of them, so we take a random sample of size, say, 10. [Table on the original slide: the mean and std. dev. of six such samples of size 10; the numbers did not survive transcription.] The mean and variance change from one sample to the next! Not knowing the true parameters, how do we know which one is true? It appears as if the mean and variance (std. dev.) themselves are random variables!

5 Statistic  A statistic is any quantity whose value can be calculated from sample data. Prior to obtaining data, there is uncertainty as to what value of any particular statistic will result. A statistic is a random variable denoted by an uppercase letter; a lowercase letter is used to represent the calculated or observed value of the statistic. In our previous example, both the sample mean X̄ and the sample variance S² are in fact random variables, since they are the results of random experiments, and since their values can be calculated from sample data, they are statistics. Since statistics are random variables, they too have their own probability distributions. This can be readily seen from the table: both the sample mean and std. dev. vary from one sample to the next, and hence they have a distribution! The probability distribution of a statistic is called its sampling distribution, which describes how the statistic varies in value across all samples that might be selected.

6 Random Sampling  Since there are too many of everything, and we hate to do a lot of work, we take a small sample and do our analysis on that small sample. An obvious question is how big a sample we need to get meaningful results. Also, how do we choose the sample subjects to ensure that the sample is a good representative of the population? To get statistically dependable results, we need to choose the sample at random. How do we ensure randomness?

7 Random Samples  The rv's X1, ..., Xn are said to form a random sample of size n if: 1. The Xi's are independent rv's: selection of any one (student, chip, light bulb, etc.) has no influence on the others. 2. Every Xi has the same probability distribution: each unit in the sample has an equal chance of being chosen. These two conditions can be combined by saying that the Xi's are independent and identically distributed (i.i.d.). These conditions can be satisfied exactly only if the population size is infinite or the sampling is done with replacement. Otherwise, if the sample size n is at most 5% of the population size N, we can usually assume that the Xi's are i.i.d.

8 Sampling Distribution  We typically obtain the sampling distribution through a simulation experiment, for which we need to determine: the statistic of interest (the mean and variance of the weight, the defect rate, etc.), the population distribution (is it normal, uniform, Poisson?), the sample size n (how many subjects/objects are in each sample), and the number of replications k (how many samples are obtained). To do this by random sampling (or by computer and simulation software): obtain k different random samples from your population; calculate the value(s) of the sought statistic(s) for each of the k samples; construct a histogram of the k resulting numbers (i.e., the statistics obtained from the k samples). The histogram gives an approximate sampling distribution of the statistic. The approximation approaches the true distribution as k → ∞ (typical values are 500, 1000).

9 µ=160, σ=10  [Results shown on the original slide: the mean and std. dev. of the sampling distribution of X̄ for each of four sample sizes; the numbers did not survive transcription.]
%Choose random numbers from normal dist.
for k=1:500
P1(:,k)=normrnd(160, 10, 5,1);
P2(:,k)=normrnd(160, 10, 10,1);
P3(:,k)=normrnd(160, 10, 20,1);
P4(:,k)=normrnd(160, 10, 30,1);
end
%Calculate the mean sampling distributions
xbar1=mean(P1); xbar2=mean(P2); xbar3=mean(P3); xbar4=mean(P4);
%Plot the histograms (sampling distributions)
%(the original also set common axis limits for comparison; the values were not recovered)
subplot(2,2,1); hist(xbar1)
subplot(2,2,2); hist(xbar2)
subplot(2,2,3); hist(xbar3)
subplot(2,2,4); hist(xbar4)
%Calculate mean and variance of each samp. dist.
mu_1=mean(xbar1); s1=var(xbar1);
mu_2=mean(xbar2); s2=var(xbar2);
mu_3=mean(xbar3); s3=var(xbar3);
mu_4=mean(xbar4); s4=var(xbar4);
What do you notice about these numbers? If you run the above code, your results will have different numbers, but a similar trend. Why?

10 Distribution of the Sample Mean  We observed that: 1. The means of the 500 samples (of size 5, 10, 20 and 30) are all around 160, the mean of the distribution from which they were drawn; moreover, as n increases, the sample mean gets closer and closer to the actual mean. 2. The variance of the sample mean becomes smaller and smaller as n increases. In fact, s²_X̄ ≈ σ²/n. These observations can be formalized as follows: Let X1, ..., Xn be a random sample from a distribution with mean value µ and standard deviation σ. Then
E[X̄] = µ_X̄ = µ   and   σ²_X̄ = σ²/n   (so σ_X̄ = σ/√n)
Why is this important? We now know that if we take many samples and take the mean of their means, the overall mean approaches the true (and unknown) mean. Moreover, since we are lazy, we often take just one sample (of size n). We can take the mean (and variance) of this sample as an estimate of the true mean (and of the true variance, within the factor of n relating σ² and σ²_X̄), knowing that our estimate would get better if we had more samples.
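A quick numerical check of these two facts, in the spirit of the slide 9 code (a minimal sketch; the parameter values are illustrative, not from the slides):
% Verify E[Xbar] = mu and Var(Xbar) = sigma^2/n by simulation
mu = 160; sigma = 10; n = 30; k = 1000;   % illustrative population parameters, sample size, replications
samples = normrnd(mu, sigma, n, k);       % each column is one sample of size n
xbar = mean(samples);                     % k sample means
[mean(xbar)  var(xbar)]                   % should be close to [mu  sigma^2/n] = [160  3.33]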

11 Distribution of Sum of RVs  If we make a new r.v., say To = X1 + ... + Xn (the sample total), we can make two observations: 1. The distribution of the sums seems to be Gaussian as well. 2. The sample mean of the sum appears to be nµ and, unlike the variance of the sample mean, the sample variance of the sum appears to increase with increasing n. We can formalize these as follows. Recall that
E[X̄] = µ_X̄ = µ   and   σ²_X̄ = σ²/n
Let X1, ..., Xn be a random sample from a normal distribution with mean value µ and standard deviation σ. Then for any n, X̄ is normally distributed, as is To, with
E[To] = nµ,   V(To) = nσ²,   and   σ²_To = nσ².
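The same kind of check for the sample total To (a minimal sketch with illustrative values, not from the slides):
% Verify E[To] = n*mu and V(To) = n*sigma^2 for the sample total
mu = 160; sigma = 10; n = 30; k = 1000;
To = sum(normrnd(mu, sigma, n, k));       % k sample totals, one per column
[mean(To)  var(To)]                       % should be close to [n*mu  n*sigma^2] = [4800  3000]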

12 A Strange Observation  We just saw that if a series of rv's drawn from a normal dist. are added, their sum is also a r.v. with a normal distribution. In fact, this holds even if the individual r.v.'s are not normal, or even if they all come from different distributions!

13 Matlab Code Demonstrating CLT
% Central Limit Theorem
% Sum of random variables, even if they all come from different
% distributions, is normal!
% Written by Dr. Robi Polikar for CC - P&S Fall 2006
clear
close all
p1=lognrnd(1, 0.9, 10000,1);
p2=poissrnd(3, 10000,1);
p3=binornd(100, 0.01, 10000,1);
p4=unifrnd(0, 10, 10000, 1);
P=[p1 p2 p3 p4];
psum=sum(P');   %sum of all random variables
subplot(2,2,1)
hist(p1,20)
grid
title('p1: Lognormal, \mu=1, \sigma=0.9')
subplot(2,2,2)
hist(p2,20)
grid
title('p2: Poisson, \lambda=3')
subplot(2,2,3)
hist(p3,20)
title('p3: Binomial, m=100, p=0.01')
grid
subplot(2,2,4)
hist(p4,20)
grid
title('p4: Uniform, a=0, b=10')
figure
hist(psum,40)
grid
title('Histogram of T=P1+P2+P3+P4')

14 The Central Limit Theorem  Let X1, ..., Xn be a random sample from a distribution with mean value µ and variance σ². Then if n is sufficiently large, X̄ has approximately a normal distribution with
µ_X̄ = µ   and   σ²_X̄ = σ²/n,
and To = X1 + ... + Xn also has approximately a normal distribution with
µ_To = nµ,   σ²_To = nσ².
The larger the value of n (in particular n > 30), the better the approximation.

15 More on CLT  If we have two independent populations with means µ1 and µ2 and variances σ1² and σ2², and if X̄1 and X̄2 are the sample means of two independent random samples of sizes n1 and n2 from these two populations, respectively, then the sampling distribution of
Z = [ (X̄1 - X̄2) - (µ1 - µ2) ] / √( σ1²/n1 + σ2²/n2 )
is approximately standard normal as long as the other conditions of the CLT are satisfied. If the original two distributions are normal, then Z is exactly standard normal. The expression can be generalized to a larger number of populations. Also, the random variable Y = X̄1 ± X̄2 is normal, with
Y ~ N( µ1 ± µ2, σ1²/n1 + σ2²/n2 )
See Example 7.3 in your text.
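For concreteness, a minimal sketch of the two-sample statistic above; every number here is made up for illustration and is not from the slides:
% Standardized difference of two sample means (illustrative numbers)
xbar1 = 162.3; xbar2 = 158.1;   % observed sample means
mu1 = 160;  mu2 = 160;          % population means (e.g., under a "no difference" assumption)
sigma1 = 10; sigma2 = 12;       % population standard deviations
n1 = 40; n2 = 50;               % sample sizes
Z = ((xbar1 - xbar2) - (mu1 - mu2)) / sqrt(sigma1^2/n1 + sigma2^2/n2)
% By the CLT, Z is approximately standard normal when n1 and n2 are large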

16 Central Limit Theorem  This should explain why the normal distribution is so common in nature, science and everywhere else: every natural phenomenon is the result of a combination of a large number of effects. For example, a student's weight, or even his/her academic success, is the result of genetics, nutrition, illness/health, study habits, extracurricular activities, and of course the number of beers consumed over the weekend's beer party!!! Put everything together and voila!... you've got yourself normal!

17 Random Sampling for Binomial Distribution  From the previous discussion, if the underlying population is N(µ, σ²) and we draw a random sample of size n from that population, then the mean of the sample, X̄, is also normal, with distribution N(µ, σ²/n). What if we are dealing with a proportion of successes, with only two possible outcomes? For example, we are interested in chip defects. We check a random sample of chips, and the outcome of this experiment is flawed or not flawed: not something we can compute a mean of (unlike student weights), but rather a proportion. If we define X = number of successes in a sample (say, # of flawed chips), then we are interested in the random variable P̂ = X/n, where n is the sample size. We know that as n → ∞, p̂ (the value of the r.v. P̂) approaches the true probability of a chip being flawed. But since we take a finite sample of size n, what can we say about our estimate of p? To answer this question, we conduct the following experiment: We draw a random sample of size n from a known binomial distribution with a known probability of success p. We compute our estimate of the true p simply by calculating x/n (x: # of successes out of n). We call this estimate p̂. We repeat this k times; that is, we take a total of k samples, each of size n, and compute p̂ k times. We look at the mean and variance of these k estimates of p̂.

18 Random Sampling of Proportions  Then we get very similar results to those for random samples of normal r.v.'s:
E[P̂] = p,   σ²_P̂ = p(1-p)/n = σ²/n,   or   σ_P̂ = √( p(1-p)/n ) = σ/√n
For large n, P̂ is approximately normal. The sample mean of the p̂'s is indeed p. The variance of the p̂'s is the true variance divided by n! This means that, if we have one sample of size n, then we can estimate the true probability of success and the true variance by the mean and the variance of the sample (within a factor of n):
µ ≈ p̂,   σ² ≈ n σ²_P̂   or   σ ≈ √n σ_P̂
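A simulation sketch of the experiment described on the previous slide (the values of p, n and k here are illustrative, not from the slides):
% Sampling distribution of phat = X/n for a binomial population
p = 0.1; n = 100; k = 1000;               % true success prob., sample size, replications
x = binornd(n, p, k, 1);                  % number of successes in each of k samples
phat = x / n;                             % k estimates of p
[mean(phat)  var(phat)]                   % should be close to [p  p*(1-p)/n] = [0.1  0.0009]
hist(phat, 20)                            % roughly normal for large n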

19 How Good is Our Estimate?  We have seen that we can estimate the true mean and variance of a population parameter from the statistics of its random samples. If we take many such random samples, then we can have reasonable information on how accurate such estimates are. But in real life, we only take one random sample (of size n) and estimate the parameters from that one sample. How good is this estimate? Given the earlier example of student weights, if the mean of the sample that we happened to take was 162.4, then we would report this value as the true average weight. This gives us no information on how accurate our estimate is. Instead of reporting a single estimate like 162.4, an interval of potential values would be far more helpful, say if we could say the average weight on this campus falls within [154 ~ 166] lbs. Further, it would be really helpful if we could estimate the validity of this interval as well: the true average weight lies within [154 ~ 166] lbs with 95% probability.

20 Inductive Reasoning & Confidence Intervals  This brings us to the true domain of statistics: inductive reasoning, or making inferences. That is, given a single random sample of size n, what can we conclude about the population? And what is our confidence in such a conclusion?

21 Confidence Interval & Confidence Levels  An alternative to reporting a single value for the parameter being estimated is to calculate and report an entire interval of plausible values: a confidence interval (CI). A confidence level is a measure of the degree of reliability of the interval. Examples: The true defect rate has a confidence interval of [0.15% 0.18%] with a 95% confidence level. The true average weight lies in [ ] lbs with a confidence level of 90%. Candidate X will win by getting 54% ± 2% of the votes (with a conf. level of 99%). How do we estimate such confidence intervals and levels? Two types of problems: proportions of populations (inferences on the ratio (%) of a population) and means of populations (inferences on the mean value of a population).

22 Election Time  In a recent election somewhere, the incumbent senator commissions a poll to find out how likely it is for him to get elected. The pollster draws a random sample of 1000 voters and asks them about their opinion of the candidate. He finds out that 555 potential voters favor the candidate. So we have one random sample of size n = 1000, with estimated probability of success p ≈ 0.55. The pollster further says that he has 95% confidence in his estimate. What does that mean?

23 Definition of 95% Confidence  Consider an archer who can hit a 10 cm radius bull's eye 95% of the time, i.e., she misses 1 out of 20 times. Sitting behind the target, a detective cannot see the bull's eye. The archer shoots one single arrow. Knowing the archer's skill, the detective draws a 10 cm radius circle around the arrow. He now has 95% confidence that his circle includes the center of the bull's eye. The true meaning of the 95% confidence is that if he were to draw 10 cm radius circles around many arrows, his circles would include the center of the bull's eye 95% of the time! Technically speaking, this is different from saying that there is a 95% probability that his one circle includes the center.

24 Computing Confidence Intervals  This is a two-step procedure. But first remember that for proportions (probability of success), this is a binomial distribution. Furthermore, the estimate of p from a random sample of size n has a normal distribution with mean µ = p and variance p(1-p)/n = σ²/n. If we want 95% confidence, we have to make sure that the estimate of the probability of success occupies 95% of the area under its probability distribution, which we already know is (at least nearly) normal with mean p̂ and variance σ²_P̂ = p(1-p)/n. Therefore, we are looking to find the true p such that our confidence of it lying within a given confidence interval is 0.95. [Figure on the original slide: a standard normal curve with the central 95% of the area marked between -z and +z.]

25 Computing Confidence Intervals  Remember the following facts: 1. The underlying population creating the successes has a binomial distribution with probability of success p, mean np and variance np(1-p). 2. A binomial distribution where the number of trials is sufficiently large resembles a Gaussian with mean np and variance np(1-p). 3. We do not have access to this distribution, however. What we have is a random sample of size n from this distribution, where we look at the average probability of success, which is p̂, our estimate of p. It is this p̂ estimate that also has a normal distribution, with mean p and variance σ²/n = p(1-p)/n. In order to compute the confidence interval at a 95% confidence level:
P( -1.96 ≤ Z ≤ 1.96 ) ≈ 0.95
P( -1.96 ≤ (P̂ - p)/σ_P̂ ≤ 1.96 ) ≈ 0.95        (a little bit of algebra)
P( P̂ - 1.96 σ_P̂ ≤ p ≤ P̂ + 1.96 σ_P̂ ) ≈ 0.95   (a little bit more algebra)
Here p̂ is our estimate of the probability of success, and p is the true probability of success, unknown to us (estimated as the proportion of successes in n trials).

26 Computing Confidence Intervals  So what does this expression tell us?
P( p̂ - 1.96 σ_P̂ ≤ p ≤ p̂ + 1.96 σ_P̂ ) ≈ 0.95
It tells us that the true mean p (prob. of success) lies within +/- 1.96 σ_P̂ of our estimate p̂ with a probability of 0.95! But there is one tiny problem: we in fact do not know σ_P̂, the true std. dev. of the estimator. So we fudge a little bit and replace σ_P̂ with our estimate of this std. dev. (estimated from the numbers in our sample). Recall that the variance of the sample proportion is related to the original variance as follows; the standard deviation of the estimate p̂ is also called the standard error, SE(p̂):
σ²_P̂ = σ²/n = p(1-p)/n   or   σ_P̂ = √( p(1-p)/n ),   estimated by   SE(p̂) = √( p̂(1-p̂)/n )
And therefore we can compute our confidence interval as
P( p̂ - 1.96 SE(p̂) ≤ p ≤ p̂ + 1.96 SE(p̂) ) ≈ 0.95

27 Computing Confidence Intervals  Now, step 2. Obtain a random sample, determine the value of p̂, compute
σ_P̂ = √( p̂(1-p̂)/n )
and then the 95% conf. interval is ( p̂ - 1.96 σ_P̂, p̂ + 1.96 σ_P̂ ). The pollster does this through sampling: ask 1000 individuals; if x = 550 of them say they will vote for the candidate he represents, p̂ = 550/1000 = 0.55. Similarly, we can calculate the standard error as
SE(p̂) = σ_P̂ = √( 0.55*(1-0.55)/1000 ) = 0.0157
Then the confidence interval is
p̂ ± 1.96 σ_P̂ = 0.55 ± 1.96*0.0157 ≈ 0.55 ± 0.031
This is what polls refer to when they say margin of error. In the above example, we have 0.519 ≤ p ≤ 0.581, or in other words, p = 0.55 with a 3% margin of error. Most polls use a 95% confidence level.
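The same calculation in MATLAB (a minimal sketch of the poll example above):
% 95% confidence interval for the poll: n = 1000, 550 in favor
n = 1000; x = 550;
phat = x / n;                             % 0.55
se = sqrt(phat * (1 - phat) / n);         % 0.0157
ci = phat + [-1 1] * 1.96 * se            % [0.5192  0.5808], i.e. 0.55 +/- ~3%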

28 What About Other Confidence Levels?  95% confidence is the most commonly used confidence level, and is good enough for most applications, such as newspaper polls, etc. But what if 95% confidence is just not good enough? Maybe we want 99% confidence. How do we increase confidence? Either increase the interval, or increase the prob. of success.

29 Other Confidence Levels  The first method is really equivalent to increasing the confidence interval. That is, if we want a higher probability that the true value will fall within an interval of our estimate, then we need to increase that interval. If we are 95% confident that the true defect rate is 0.15 ± 2%, then we may be 99% confident that the interval is 0.15 ± 5%. If we are 95% confident that the candidate will receive 55% ± 3%, we can be more certain, say 99%, by saying that the candidate will receive 55% ± 6%. In general, we can be more certain if we increase the confidence interval, and less certain if we decrease it.

30 100(1-α)% Confidence  So how do we compute confidence levels other than 95%? Instead of trying it for a different example, say 99%, let's make it more general. We define the parameter α as the area left outside the confidence level. For example, for 95% confidence, we computed the critical values of z between which the area was 0.95. In this case, the area left outside the confidence zone has a probability of 0.05, which determines α. For 99% confidence α = 0.01, for 93%, α = 0.07, etc. Writing it in reverse, an α of 0.01 corresponds to a 99% confidence level, and an α of 0.07 to a 93% confidence level. So in general, any given value of α determines a 100(1-α)% confidence level. Since half of α lies in each tail, we are interested in finding the z values corresponding to ±α/2, which are called ±z_{α/2}. [Figure on the original slide: a standard normal curve with area 1-α between -z_{α/2} and +z_{α/2} and area α/2 in each tail.]

31 Other Confidence Levels  Ex: For 99% confidence, α = 0.01, and we are looking for the z-values between which the area of the normal curve is 0.99. This means a total probability of 0.01 lies in the tails, 0.005 on each side. From the tables or MATLAB, we find
-z_{α/2} = Φ⁻¹(0.005) = -2.576   (norminv(0.005))
+z_{α/2} = Φ⁻¹(0.995) = +2.576
So the critical value for the 99% conf. level is 2.576. Similarly, you can find the critical values for the other common confidence levels, e.g., 90%: 1.645, 95%: 1.96, 99%: 2.576 (the slide's full table of levels and z_{α/2} values did not survive transcription). In general, the 100(1-α)% confidence interval is
p̂ ± z_{α/2} σ_P̂ = p̂ ± z_{α/2} √( p̂(1-p̂)/n )
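The critical values can be obtained directly from norminv, as on the slide; the confidence levels listed here are just common examples:
% Critical values z_{alpha/2} for a few confidence levels
levels = [0.90 0.95 0.99];
alpha  = 1 - levels;
z = norminv(1 - alpha/2)                  % 1.6449  1.9600  2.5758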

32 Back to Our Candidate  For 99% confidence:
SE(p̂) = σ_P̂ = √( p̂(1-p̂)/n ) = √( 0.55*(1-0.55)/1000 ) = 0.0157
0.99 = P( p̂ - 2.58 σ_P̂ ≤ p ≤ p̂ + 2.58 σ_P̂ )
p = p̂ ± 2.58 √( p̂(1-p̂)/n ) = 0.55 ± 2.58*0.0157 = 0.55 ± 0.041
so the 99% interval is ( p̂ - 2.58 σ_P̂, p̂ + 2.58 σ_P̂ ), i.e. [0.51 < p < 0.59].

33 Other Confidence Levels  Recall: we mentioned two methods to increase the confidence, the first of which was to increase the confidence interval. The second is simply to increase the probability of success. If we knew the archer could hit 95% of her arrows within 1 cm (as opposed to 10 cm) of the bull's eye, our estimate would be sharper. How do we actually do this? We need to take (many) more samples, and see if 95% of them fall within the more precise circle. This directly follows from the fact that as we increase the sample size, the variance of the sample mean (the standard error) decreases:
SE(P̂) = σ_P̂ = √( p(1-p)/n )
The larger the n we pick, the smaller the standard error (variance of the estimate) becomes! [Figure on the original slide: the distribution of p̂ for a large n (narrow) versus a small n (wide).]

34 Picking Sample Size  Then, what is the sample size necessary to ensure a given confidence level and a small confidence interval? If the candidate were to insist on a small error in the estimate, say a 99% confidence level with a small confidence interval, say an error of ±0.01, then how many people should be polled? Since the interval has the form
p̂ ± error = p̂ ± z_{α/2} σ_P̂
we have
error = z_{α/2} σ_P̂ = z_{α/2} √( p*(1-p*)/n )     so     n = (z_{α/2})² p*(1-p*) / (error)²
Note that, at the time we do this calculation, the sample has not been taken; therefore, we do not even have an estimate for p. A conservative guess is to take p* = 0.5, because for this value p(1-p) is maximum.
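Putting numbers into this formula for the 99% level and a ±0.01 error (a minimal sketch; rounding up to the next whole person is an added assumption):
% Required sample size for 99% confidence and +/-0.01 margin of error
zc = norminv(1 - 0.01/2);                 % z_{alpha/2} = 2.576 for alpha = 0.01
err = 0.01; pstar = 0.5;                  % conservative guess: p*(1-p*) is largest at 0.5
n = ceil(zc^2 * pstar * (1 - pstar) / err^2)    % about 16,588 people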

35 What Happened?  But then again, a candidate can lose despite going into the election with that much confidence! How can that happen??? Politically, well, that is beyond the scope of this class, but what might have happened statistically, that is our business! All this probability stuff is only good before the election. After the election, the candidate is either 100% in, or 100% out!

36 And Then There Is Also...  Well, everything we have done so far assumes that the binomial distribution can be approximated by the normal distribution, a perfectly valid assumption as long as we have a large sample size, satisfying at least n·p > 10. For smaller sample sizes, it can be shown that the following (score-type) interval is a better estimate:
LowerBound < p < UpperBound, where
LowerBound = [ p̂ + z²_{α/2}/(2n) - z_{α/2} √( p̂(1-p̂)/n + z²_{α/2}/(4n²) ) ] / ( 1 + z²_{α/2}/n )
UpperBound = [ p̂ + z²_{α/2}/(2n) + z_{α/2} √( p̂(1-p̂)/n + z²_{α/2}/(4n²) ) ] / ( 1 + z²_{α/2}/n )
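A sketch of this small-sample interval in MATLAB, assuming the score-type form shown above (the data values are illustrative, not from the slides):
% Score-type confidence interval for small samples
x = 3; n = 20; alpha = 0.05;              % illustrative data: 3 successes out of 20
phat = x / n;
z = norminv(1 - alpha/2);
center = (phat + z^2/(2*n)) / (1 + z^2/n);
half   = z * sqrt(phat*(1-phat)/n + z^2/(4*n^2)) / (1 + z^2/n);
ci = [center - half, center + half]       % interval for the true p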

37 Exit Polls  Note that the sampling can be done right after the election as well; in fact, right after someone votes. In this case, the pollsters ask a small sample of actual voters whom they voted for. Using only this small sample, say 1000 out of 1,000,000 voters, they can often get a 95% confidence interval for each candidate. But remember, the closer p is to 0.5, the larger the sample size you need. This is precisely where the networks failed in Florida in the 2000 presidential elections. Watch the exit polls closely next on Nov 8, as the numbers start coming in. Also, while we have used polling as an application, all of these expressions apply to many other applications, in particular in engineering, including quality control.

38 Homework  Chapter 7: Questions, 5, 6, 10, 11, 1
