Chapter 5. Statistical Inference for Parametric Models
1 Chapter 5. Statistical Inference for Parametric Models
2 Outline: Overview; Parameter estimation; Method of moments; How good are method of moments estimates?; Interval estimation.
3 Statistical Inference for Parametric Models Parametric statistical inference refers to the process of estimating the parameters of a chosen distributional model from a sample; quantifying how accurate these estimates are; dealing formally with the uncertainty that exists in the data.
4 Example: Diseased trees. Recall: the variable of interest is the run length of diseased trees; we have assumed a Geometric(θ) distribution can be used to model this variable; the parameter θ is the probability of a tree being diseased, so θ ∈ Θ = [0, 1]. Previously we experimented graphically with different values of θ:
5 Example: Diseased trees. [Figure: p.m.f. of run length, plotted for two candidate values of θ.]
6 Estimation: Using methods to be introduced in Section 5.2, we find a best guess of the parameter value: ˆθ = 0.324 for the partial data and ˆθ = 0.343 for the full data. The estimate hasn't been changed very much by the addition of more data. The benefit of the extra data is the increased reliability of the best estimate for the full data set.
7 Estimation: Quantifying reliability through reflecting uncertainty: we construct a set of values of θ ∈ Θ which are the most plausible given the observed data; specifically, we estimate an interval of values for θ; if we have more data, we have more information about θ and therefore the interval is tighter; we must also decide how confident we want to be that θ lies in the interval.
8 Confidence intervals for θ: a 50% interval has an even chance of containing the true value of θ: (0.29, 0.35) for the partial data and (0.33, 0.36) for the full data. A 95% interval has a good chance of containing the true value of θ: (0.23, 0.43) for the partial data and (0.29, 0.41) for the full data. Q. What are the lengths of these intervals?
9 50% confidence interval: 0.06 for the partial data and 0.03 for the full data. 95% confidence interval: 0.2 for the partial data and 0.12 for the full data.
10 Prediction under the fitted model: We can now estimate the p.m.f. by replacing the unknown parameters by their estimates: p(x; ˆθ) = ˆθ^x (1 − ˆθ) for x = 0, 1, .... Q. This can now be used to estimate the probability of longer runs of diseased trees than we saw in the data. Assuming ˆθ = 0.343, what is the probability of a run of 6 or more diseased trees?
11 Prediction under the fitted model: We can now estimate the p.m.f. by replacing the unknown parameters by their estimates: p(x; ˆθ) = ˆθ^x (1 − ˆθ) for x = 0, 1, .... Q. This can now be used to estimate the probability of longer runs of diseased trees than we saw in the data. Assuming ˆθ = 0.343, what is the probability of a run of 6 or more diseased trees? For a Geometric(θ) random variable X and a positive integer x, from Math 104, P(X ≥ x; θ) = θ^x, so the estimate for the probability of a run of 6 or more diseased trees is P(X ≥ 6; ˆθ) = ˆθ^6 = (0.343)^6 ≈ 0.0016. This would not be possible without a model and an inference method for the model.
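The calculation above is easy to check numerically. A minimal sketch in Python (the estimate ˆθ = 0.343 and the tail formula P(X ≥ x) = θ^x are taken from the slides):

```python
# Tail probability under the fitted Geometric model from the slides:
# p(x; theta) = theta**x * (1 - theta) for x = 0, 1, ..., which gives
# P(X >= x) = theta**x.
theta_hat = 0.343  # full-data estimate from the slides

def tail_prob(theta, x):
    """Estimated probability of a run of x or more diseased trees."""
    return theta ** x

p6 = tail_prob(theta_hat, 6)
print(round(p6, 4))  # 0.0016
```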
12 Assessment of the fitted model: [Figure: counts of observed run lengths, used to compare the data with the fitted model.]
13 Assessment of the fitted model: Conclusions: the estimated parameter is a reasonable value for the data; the underlying Geometric model seems to be a good choice. So we learn not only about the probability of disease but also about the random mechanism by which disease spreads.
14 Statistical Inference for Parametric Models We have now completed the final stages of statistical analysis of trees data. Recall the whole procedure: 1. selection of parametric model Geometric(θ); 2. estimation of unknown model parameters; 3. assessment of validity of model choice; 4. use of model to predict aspects of variable of interest.
15 Statistical Inference for Parametric Models We have now completed the final stages of statistical analysis of the trees data. Recall the whole procedure: 1. selection of parametric model Geometric(θ); 2. estimation of unknown model parameters; 3. assessment of validity of model choice; 4. use of model to predict aspects of variable of interest. These stages are followed in all statistical inference: here the model fitted well; if the model doesn't fit then we need to cycle round with a new model choice; improvements to the model are prompted by observed weaknesses and strengths of earlier analyses.
16 Parameter estimation In Chapter 4, while looking at various parametric models, we learned that within the same parametric family, the description of probabilities can change dramatically with the choice of parameters. We now introduce a systematic approach to choosing parameters using data.
17 Method of moments In a smoking ban survey, if 75 out of 100 surveyed agree with the ban, a reasonable estimate for θ, the population proportion who agree with the law, would be 0.75, the sample proportion. The simple idea of using a sample quantity in place of a population quantity is the basis of the method of moments, as well as of the summary statistics used in exploratory data analysis. A more elaborate approach using likelihood will be discussed in MATH 235.
18 Sample mean Often the population mean itself is of primary interest, and the sample mean is one of the most popular summary statistics. Theorem Let X₁, …, Xₙ be identically distributed random variables with expectation µ = E[X]. Then E[X̄] = µ. This theorem suggests that taking an empirical average might be a good idea when only a finite sample is available.
19 First we look at the case where n = 2. Then we have E[X̄] = E[(X₁ + X₂)/2] = E[X₁ + X₂]/2 = (E[X₁] + E[X₂])/2 = (µ + µ)/2 = µ.
20 First we look at the case where n = 2. Then we have E[X̄] = E[(X₁ + X₂)/2] = E[X₁ + X₂]/2 = (E[X₁] + E[X₂])/2 = (µ + µ)/2 = µ. We can generalise to any n: E[X̄] = E[(X₁ + ⋯ + Xₙ)/n] = E[X₁ + ⋯ + Xₙ]/n = (E[X₁] + ⋯ + E[Xₙ])/n = (µ + ⋯ + µ)/n = nµ/n = µ. Note that X̄ is a random variable and the expectation takes into account all possible values of x̄ that can arise in any particular observations.
21 Exercise Poisson. Suppose X₁, X₂ are i.i.d. Poisson(θ) random variables and X̄ = (X₁ + X₂)/2. What is E[X̄]?
22 Exercise Poisson. Suppose X₁, X₂ are i.i.d. Poisson(θ) random variables and X̄ = (X₁ + X₂)/2. What is E[X̄]? E[X̄] = θ.
23 Sample proportion Exercise Bernoulli. Bernoulli random variables take values in {0, 1}. For example, 5 responses from a survey on the smoking ban might look like: 0, 1, 1, 0, 1 (0 for disagree and 1 for agree). If we take the average, 3/5 is the proportion of responses that agree with the law. If we take another sample of 5, the proportion may change. In general, when the random variables X₁, …, Xₙ only take the values 0 and 1, X̄ is called the sample proportion.
24 Sample proportion and Binomial distribution Recall that each Xᵢ is called a Bernoulli trial. So if they are i.i.d. (what does that mean?), then the sample proportion is indeed the sample mean of Bernoulli random variables. Moreover, we know that the sum of the random variables follows a Binomial distribution: Y = Σᵢ₌₁ⁿ Xᵢ ~ Binomial(n, θ), with µ = nθ and σ² = nθ(1 − θ). In this case, the sample proportion is simply Y/n, where Y ~ Binomial(n, θ).
25 In this case, the sample proportion is simply Y/n, where Y ~ Binomial(n, θ). In particular, E[Σᵢ₌₁ⁿ Xᵢ] = E[Y] = nθ, so E[X̄] = E[Y/n] = θ; and Var[Σᵢ₌₁ⁿ Xᵢ] = Var[Y] = nθ(1 − θ), so Var[X̄] = Var[Y/n] = Var[Y]/n² = θ(1 − θ)/n.
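These two formulas can be checked by simulation. The sketch below uses illustrative values n = 100 and θ = 0.75 (any values would do) and only the standard library:

```python
import random

random.seed(1)

n, theta = 100, 0.75
reps = 20000

# Simulate many sample proportions Y/n, where Y ~ Binomial(n, theta).
props = []
for _ in range(reps):
    y = sum(1 for _ in range(n) if random.random() < theta)
    props.append(y / n)

mean_hat = sum(props) / reps
var_hat = sum((p - mean_hat) ** 2 for p in props) / reps

# Theory: E[Y/n] = theta and Var[Y/n] = theta * (1 - theta) / n.
print(mean_hat, var_hat, theta * (1 - theta) / n)
```

The two printed variance values agree to a few decimal places, matching Var[X̄] = θ(1 − θ)/n.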
26 Exercise Binomial. Give an interpretation of Y and Y /n for the survey example. This will be used later to quantify sampling variation of the sample proportion.
27 Y represents the total number of people (responses) who agree with the law and Y/n represents the proportion of people (responses) who agree with the law. This will be used later to quantify sampling variation of the sample proportion.
28 The method of moments For a random variable X: E[X] is the first moment; E[X²] is the second moment; E[X^k] is the kth moment, k = 1, 2, …. The kth sample moment is then calculated as ˆµₖ = (1/n) Σᵢ₌₁ⁿ Xᵢ^k with random variables (an estimator), or ˆµₖ = (1/n) Σᵢ₌₁ⁿ xᵢ^k with observations (an estimate). If the unknown parameter θ is expressed by the population moments or some function of them, it can be estimated by replacing the population quantities by the corresponding sample quantities.
29 Exercise Poisson distribution. The first moment for the Poisson distribution is the parameter µ = E[X]. The first sample moment is X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ, which is the method of moments estimator of µ. Since θ = µ, this is also the estimator of the rate parameter θ.
30 Exercise Exponential distribution. The first moment for the Exponential distribution is µ = E[X] and the method of moments estimator of µ is X̄. Since θ = 1/E[X], this can be estimated by
31 ˆθ = 1/X̄.
32 Exercise Normal distribution. The first and second moments for the Normal distribution are µ₁ = E[X] = µ and µ₂ = E[X²] = µ² + σ². Thus, µ = µ₁ and σ² = µ₂ − µ₁². What are the method of moments estimators of these parameters?
33 ˆµ = X̄ and ˆσ² = (1/n) Σᵢ₌₁ⁿ Xᵢ² − X̄² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)².
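As a numerical illustration of these estimators (the data here are simulated from a hypothetical Normal population with µ = 5 and σ = 2, so the targets are µ = 5 and σ² = 4):

```python
import random

random.seed(2)

# Illustrative sample from a hypothetical Normal(mu=5, sigma=2) population.
data = [random.gauss(5, 2) for _ in range(5000)]
n = len(data)

m1 = sum(data) / n                 # first sample moment
m2 = sum(x * x for x in data) / n  # second sample moment

mu_hat = m1
sigma2_hat = m2 - m1 ** 2          # method of moments estimator of sigma^2

# The same estimator written as an average of squared deviations:
sigma2_alt = sum((x - mu_hat) ** 2 for x in data) / n

print(mu_hat, sigma2_hat)
```

The two expressions for ˆσ² agree (up to floating-point error), confirming the algebraic identity on the slide.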
34 Sampling distribution of sample mean: symmetric case [Figure: histograms of ˆθ over repeated experiments for n = 10, 20, 50, 100; the sampling distribution of ˆθ concentrates as the sample size increases.]
35 [Figure: histograms of ˆθ over repeated experiments for n = 100, 1000, 5000 and larger; the sampling distribution of ˆθ concentrates further as the sample size increases.]
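Histograms like these can be reproduced by simulation. The sketch below (assuming the symmetric case is a fair coin, θ = 0.5) records only the spread of ˆθ = X̄ for each sample size rather than drawing the plots:

```python
import random

random.seed(3)

theta = 0.5   # fair-coin case (an assumption for this sketch)
reps = 2000   # number of repeated experiments per sample size

def spread(n):
    """Standard deviation of the sample proportion over repeated samples of size n."""
    ests = []
    for _ in range(reps):
        heads = sum(1 for _ in range(n) if random.random() < theta)
        ests.append(heads / n)
    mean = sum(ests) / reps
    return (sum((e - mean) ** 2 for e in ests) / reps) ** 0.5

sds = {n: spread(n) for n in (10, 20, 50, 100)}
print(sds)  # the spread shrinks roughly like 1/sqrt(n)
```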
36 Effect of sample size Exercise Sample size. Summarise your findings about the estimates ˆθ and comment on the effect of sample size. The estimates, on average, agree with the true value of θ; however, there is considerable variability depending on the sample size. In general, the larger the sample size, the (larger, smaller) the variability in the estimates and thus the (more accurate, less accurate) the estimates become.
37 Effect of sample size The estimates, on average, agree with the true value of θ; however, there is considerable variability depending on the sample size. In general, the larger the sample size, the smaller the variability in the estimates and thus the more accurate the estimates become.
38 Sampling distribution of sample mean: asymmetric case The same phenomenon is expected if the result came from a survey instead of a coin tossing experiment, except that the centre of the distribution will change accordingly. The only limitation with the survey is that we would not be able to run the same survey 1000 times to see the effect!
39 Sampling distribution of sample mean: asymmetric case The same phenomenon is expected if the result came from a survey instead of a coin tossing experiment, except that the centre of the distribution will change accordingly. The only limitation with the survey is that we would not be able to run the same survey 1000 times to see the effect! Suppose that the population proportion agreeing with the law is θ = 0.8 and samples of size n were taken, where n = 10, 20, 50, 100.
40 [Figure: histograms of ˆθ over repeated experiments for n = 10, 20, 50, 100 with θ = 0.8; the sampling distribution of ˆθ concentrates and becomes more symmetric as the sample size increases.]
41 Exercise What is the underlying distributional model used in Figure 3? How is the shape of the distribution of ˆθ affected by the sample size? The shape of the distribution of the different estimates becomes more (symmetric, flat) as the sample size increases. Explain why a larger sample size would be preferable in practice.
42 Exercise What is the underlying distributional model used in Figure 3? ˆθ = X₁/10 where X₁ ~ Binomial(10, 0.8); X₂/20 where X₂ ~ Binomial(20, 0.8); X₃/50 where X₃ ~ Binomial(50, 0.8); X₄/100 where X₄ ~ Binomial(100, 0.8). How is the shape of the distribution of ˆθ affected by the sample size? The shape of the distribution of the different estimates becomes more symmetric as the sample size increases. Explain why a larger sample size would be preferable in practice. Smaller variability and a less skewed distribution for the sample mean.
43 [Figure: histograms of ˆθ over repeated experiments for n = 100, 1000, 5000 and larger; the sampling distribution of ˆθ concentrates further as the sample size increases.]
44 Variability of sample mean A mathematical explanation of the behaviour of the sample mean estimates comes from the following property. Theorem If X₁, …, Xₙ are i.i.d. random variables with expectation E[Xᵢ] = µ and variance Var[Xᵢ] = σ², then Var[X̄] = σ²/n.
45 Consider the case where n = 2: Var[X̄] = Var[(X₁ + X₂)/2] = Var[X₁ + X₂]/2² = (σ² + σ²)/4 = σ²/2, using independence for the variance of the sum.
46 Consider the case where n = 2: Var[X̄] = Var[(X₁ + X₂)/2] = Var[X₁ + X₂]/2² = (σ² + σ²)/4 = σ²/2. We can generalise to any n: Var[X̄] = Var[(X₁ + ⋯ + Xₙ)/n] = Var[X₁ + ⋯ + Xₙ]/n² = (σ² + ⋯ + σ²)/n² = nσ²/n² = σ²/n.
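A Monte Carlo check of the theorem, using Uniform(0, 1) variables (for which µ = 1/2 and σ² = 1/12) as an illustrative choice:

```python
import random

random.seed(4)

# Uniform(0, 1): mu = 1/2 and sigma^2 = 1/12.
n, reps = 25, 40000

means = []
for _ in range(reps):
    sample = [random.random() for _ in range(n)]
    means.append(sum(sample) / n)

grand_mean = sum(means) / reps
var_of_mean = sum((m - grand_mean) ** 2 for m in means) / reps

# Theory: Var[Xbar] = sigma^2 / n = (1/12) / 25.
print(var_of_mean, (1 / 12) / n)  # both close to 0.00333
```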
47 How variable the estimates are If more and more samples were available, the estimates would converge to the true parameter because: the variance decreases (converges to zero) as the sample size increases, Var[X̄] = σ²/n → 0 as n → ∞; and E[X̄] = θ for all sample sizes, so there is no systematic bias. Therefore the estimate will converge to the true parameter as the sample size increases.
48 How variable the estimates are If more and more samples were available, the estimates would converge to the true parameter because: the variance decreases (converges to zero) as the sample size increases, Var[X̄] = σ²/n → 0 as n → ∞; and E[X̄] = θ for all sample sizes, so there is no systematic bias. Therefore the estimate will converge to the true parameter as the sample size increases. Note that these properties of the sample mean do not depend on any particular distributional model and thus are not limited to the parametric models that we are considering here.
49 Standard Error We have seen that it is important to take into account sampling variability in the estimation. As a measure of precision, the standard error is defined as the square root of the variance of the estimator. Estimator: ˆθ. StdError(ˆθ) = √Var(ˆθ).
50 If X₁, …, Xₙ are an i.i.d. sample from X with E[X] = µ and Var[X] = σ², then the method of moments estimator of the mean µ is ˆµ = X̄ and the standard error is
51 If X₁, …, Xₙ are an i.i.d. sample from X with E[X] = µ and Var[X] = σ², then the method of moments estimator of the mean µ is ˆµ = X̄ and the standard error is StdError(ˆµ) = σ/√n. In practice, when σ is unknown, it will be replaced by its estimate.
52 Exercise Poisson distribution. If X ~ Poisson(θ), then we know µ = θ and σ² = θ. Thus, the method of moments estimator of θ is ˆθ = X̄ and its standard error is given by StdError(ˆθ) = √(ˆθ/n).
53 Exercise Binomial distribution If X ~ Binomial(n, θ), then we know µ = nθ and σ² = nθ(1 − θ). Since θ = µ/n, the method of moments estimator of θ is ˆθ = X/n, the sample proportion, and the standard error of the estimator is
54 StdError(ˆθ) = √(ˆθ(1 − ˆθ)/n).
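Applied to the smoking-ban survey from earlier in the chapter (75 of 100 respondents agree):

```python
import math

# Smoking-ban survey from the slides: 75 of 100 respondents agree.
y, n = 75, 100
theta_hat = y / n
std_error = math.sqrt(theta_hat * (1 - theta_hat) / n)
print(theta_hat, round(std_error, 4))  # 0.75 0.0433
```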
55 Hospitals example We consider the hospitals example. Hospital 2: 5 out of 10 operations classified as a success. What does this tell us about the probabilities of successful operations now?
56 Hospitals example We consider the hospitals example. Hospital 2: 5 out of 10 operations classified as a success. What does this tell us about the probabilities of successful operations now? Two possible answers depending on assumptions: Independence: this assumption seems reasonable from the context of the problem. Identically distributed: it is not clear from the context whether the probability of success is the same at each hospital. We will look at what happens when we assume the successes at the two hospitals are identically distributed; NOT identically distributed.
57 We denote by X₁ the random variable "number of successful operations at the first hospital" and by X₂ the random variable "number of successful operations at the second hospital". We assume that X₁ and X₂ are independent, with Xᵢ following Binomial(10, θᵢ) for i = 1, 2. We observe the variable X = (X₁, X₂) with value x = (x₁, x₂) = (9, 5).
58 Non-identically distributed, i.e. θ = (θ₁, θ₂). The method of moments estimates are ˆθ₁ = 9/10 and ˆθ₂ = 5/10, and the variances are Var[ˆθ₁] = ˆθ₁(1 − ˆθ₁)/10 = 0.009 and Var[ˆθ₂] = ˆθ₂(1 − ˆθ₂)/10 = 0.025. So the standard errors are
59 Non-identically distributed, i.e. θ = (θ₁, θ₂). The method of moments estimates are ˆθ₁ = 9/10 and ˆθ₂ = 5/10, and the variances are Var[ˆθ₁] = ˆθ₁(1 − ˆθ₁)/10 = 0.009 and Var[ˆθ₂] = ˆθ₂(1 − ˆθ₂)/10 = 0.025. So the standard errors are StdError(ˆθ₁) = √0.009 = 0.095 and StdError(ˆθ₂) = √0.025 = 0.158. There is no correlation between the estimates. This is reasonable as the data from each hospital tell us only about the probability of a successful operation at that hospital.
60 Identically distributed, i.e. θ₁ = θ₂ = θ. Then we can aggregate the information: the total number of trials is 10 + 10 = 20 and the number of successes is 9 + 5 = 14. So the method of moments estimate is ˆθ = (9 + 5)/(10 + 10) = 0.7 and the variance is
61 Identically distributed, i.e. θ₁ = θ₂ = θ. Then we can aggregate the information: the total number of trials is 10 + 10 = 20 and the number of successes is 9 + 5 = 14. So the method of moments estimate is ˆθ = (9 + 5)/(10 + 10) = 0.7 and the variance is Var(ˆθ) = ˆθ(1 − ˆθ)/20 = 0.0105. So the standard error is
62 Identically distributed, i.e. θ₁ = θ₂ = θ. Then we can aggregate the information: the total number of trials is 10 + 10 = 20 and the number of successes is 9 + 5 = 14. So the method of moments estimate is ˆθ = (9 + 5)/(10 + 10) = 0.7, the variance is Var(ˆθ) = ˆθ(1 − ˆθ)/20 = 0.0105, and so the standard error is StdError(ˆθ) = √0.0105 = 0.102.
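Both analyses of the hospitals example fit in a few lines; the numbers below reproduce the estimates and standard errors from the slides:

```python
import math

# Hospitals example from the slides: 9/10 and 5/10 successes.
x1, x2, n = 9, 5, 10

# Separate probabilities (not identically distributed):
t1, t2 = x1 / n, x2 / n
se1 = math.sqrt(t1 * (1 - t1) / n)   # sqrt(0.009)
se2 = math.sqrt(t2 * (1 - t2) / n)   # sqrt(0.025)

# Common probability (identically distributed): pool the data.
t_pool = (x1 + x2) / (2 * n)
se_pool = math.sqrt(t_pool * (1 - t_pool) / (2 * n))

print(round(se1, 3), round(se2, 3), t_pool, round(se_pool, 3))
```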
63 Standard error of functions of sample mean We have seen that X̄ plays an important role in estimating the population mean, and we were able to quantify the variability of the sample mean. In other cases, such as the Exponential distribution, the estimator for θ is 1/X̄, but what is Var[1/X̄]? Certainly, Var[2/(X₁ + X₂)] ≠ Var[2/X₁] + Var[2/X₂]: variances of nonlinear functions do not combine so simply.
64 Standard error of functions of sample mean Taylor approximation: if g is differentiable, then g(X̄) ≈ g(µ) + g′(µ)(X̄ − µ). Write Var[1/X̄] = Var[g(X̄)], where g(x) = 1/x. So we can compute the variance using the approximation: Var[g(X̄)] ≈ Var[g(µ) + g′(µ)(X̄ − µ)] = Var[g′(µ)(X̄ − µ)] = g′(µ)² Var[X̄ − µ] = g′(µ)² Var[X̄]. Verify each step! Note that g′ should be evaluated at µ, which is a function of θ.
65 Assume X₁, …, Xₙ are an i.i.d. sample from X with E[X] = µ and Var[X] = σ². Let X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ. The standard error of g(X̄) is StdError[g(X̄)] = |g′(µ)| σ/√n.
66 For the Exponential(θ), we have µ = 1/θ, σ² = 1/θ² and StdError(X̄) = σ/√n = 1/(θ√n). Now θ = 1/µ and ˆθ = 1/X̄: if g(x) = 1/x, then g′(x) = −1/x² and |g′(µ)| = 1/µ² = θ², so the standard error is StdError(1/X̄) = θ² · 1/(θ√n) = θ/√n. Since θ is unknown, we use StdError(1/X̄) ≈ ˆθ/√n.
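The delta-method value θ/√n can be compared with the spread of ˆθ = 1/X̄ over simulated Exponential samples. The sketch below uses illustrative values θ = 2 and n = 50; since the Taylor step is only an approximation, the simulated spread is close to, but not exactly, θ/√n:

```python
import math
import random

random.seed(5)

theta, n, reps = 2.0, 50, 20000

# Spread of theta_hat = 1 / sample_mean over repeated Exponential samples.
ests = []
for _ in range(reps):
    sample = [random.expovariate(theta) for _ in range(n)]
    ests.append(1 / (sum(sample) / n))

mean = sum(ests) / reps
sd_sim = (sum((e - mean) ** 2 for e in ests) / reps) ** 0.5

sd_delta = theta / math.sqrt(n)  # Taylor (delta-method) approximation
print(sd_sim, sd_delta)          # similar, not identical
```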
67 Diseased Trees example Consider the diseased trees example, where n = 50 for the partial data and n = 109 for the full data. First let us use general notation for the observations x₁, …, xₙ so that we can do the mathematics once for both cases. Here we are assuming the data come from i.i.d. random variables with p.m.f. p(x; θ) = θ^x(1 − θ), so that Θ = [0, 1], µ = E[X] = θ/(1 − θ) and σ² = Var[X] = θ/(1 − θ)². The method of moments estimate of θ is ˆθ = x̄/(1 + x̄).
68 For the partial data, Σᵢ₌₁⁵⁰ xᵢ = 24, so x̄ = 24/50 = 0.48 and ˆθ = 0.48/(1 + 0.48) = 0.324. For the full data, Σᵢ₌₁¹⁰⁹ xᵢ = 57, so x̄ = 57/109 = 0.523 and ˆθ = 0.523/(1 + 0.523) = 0.343.
69 For the standard error of ˆθ: if g(x) = x/(1 + x), then g′(x) = 1/(1 + x)², so g′(µ) = 1/(1 + µ)² = (1 − θ)² and σ = √θ/(1 − θ), so the standard error of ˆθ is StdError(ˆθ) = g′(µ) σ/√n = √(ˆθ(1 − ˆθ)²/n).
70 For the standard error of ˆθ: if g(x) = x/(1 + x), then g′(x) = 1/(1 + x)², so g′(µ) = 1/(1 + µ)² = (1 − θ)² and σ = √θ/(1 − θ), so the standard error of ˆθ is StdError(ˆθ) = √(ˆθ(1 − ˆθ)²/n). For the partial data, StdError(ˆθ) = √(0.324 × (1 − 0.324)²/50) = 0.054. For the full data, StdError(ˆθ) = √(0.343 × (1 − 0.343)²/109) = 0.037.
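The whole calculation for the diseased trees data can be packaged as a small function; the totals 24 (partial, n = 50) and 57 (full, n = 109) are taken from the slides:

```python
import math

def geom_mom(total, n):
    """Method of moments estimate and standard error for Geometric(theta),
    given the sum of the observations and the sample size."""
    xbar = total / n
    theta_hat = xbar / (1 + xbar)
    se = math.sqrt(theta_hat * (1 - theta_hat) ** 2 / n)
    return theta_hat, se

print(geom_mom(24, 50))   # theta_hat ~ 0.324, SE ~ 0.054 (partial data)
print(geom_mom(57, 109))  # theta_hat ~ 0.343, SE ~ 0.037 (full data)
```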
71 Confidence region Instead of picking out one single value, we choose a set of parameter values which are consistent with the observed data. We term such a set a confidence region for θ. The estimator of the confidence region is a random region, which has probability 1 − α of containing the true value θ₀. If the parameter of interest is 1-dimensional, the region is called a confidence interval. Based on ˆθ, how do we choose a confidence region to ensure the required probability?
72 Probability of an interval We first look at the case where we know the underlying distribution is Normal. Write z^(p) for the pth quantile of the Normal(0, 1) distribution (here z^(α/2) denotes its magnitude, e.g. z^(0.025) = 1.96). If X ~ Normal(µ, σ²) then Pr(µ − z^(α/2) σ ≤ X ≤ µ + z^(α/2) σ) = 1 − α. In other words, Pr(µ ∈ [X − z^(α/2) σ, X + z^(α/2) σ]) = 1 − α.
73 Probabilities of Normal distribution P(µ − σ < X < µ + σ) ≈ 0.683, P(µ − 2σ < X < µ + 2σ) ≈ 0.954, P(µ − 3σ < X < µ + 3σ) ≈ 0.997. Figure: Illustration of coverage probability of the Normal(µ, σ²) distribution.
74 If X₁, …, Xₙ are i.i.d. random variables from the Normal(µ, σ²) distribution and X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ, then X̄ ~ Normal(µ, σ²/n). The proof of this result is given in MATH 230. Therefore, we can choose an interval that has the required probability for the estimator X̄: Pr(µ ∈ [X̄ − z^(α/2) σ/√n, X̄ + z^(α/2) σ/√n]) = 1 − α. Hence, [X̄ − z^(α/2) σ/√n, X̄ + z^(α/2) σ/√n] is a 100(1 − α)% confidence interval for µ.
75 Central Limit Theorem If X₁, …, Xₙ are i.i.d. random variables from an unknown distribution and X̄ = (1/n) Σᵢ₌₁ⁿ Xᵢ, then X̄ is approximately Normal(µ, σ²/n).
76 No matter what distribution the original data come from, the sample mean approximately follows a Normal distribution if you have a large enough sample. This is one of the most significant results in statistics. Again, the formal proof will be given in MATH 230. Here we use our intuition and the informal justification shown in the earlier figures. This result allows us to construct an (approximate) confidence interval as we did for the Normal data.
77 Confidence interval Generally, an approximate 100(1 − α)% confidence interval can be constructed from the standard error of an estimator. The standard error is the factor which determines the width of confidence intervals for θ. The 100(1 − α)% confidence interval for θ is (ˆθ − z^(α/2) StdError(ˆθ), ˆθ + z^(α/2) StdError(ˆθ)), where z^(p) is the pth quantile of the Normal(0, 1) distribution.
78 Exercise Hospitals. Find the 95% confidence interval for θ₁ under the two assumptions considered earlier. Use z^(0.025) = 1.96. Non-identically distributed: (ˆθ₁ − 1.96 StdError(ˆθ₁), ˆθ₁ + 1.96 StdError(ˆθ₁)) = (0.9 − 1.96 × 0.095, 0.9 + 1.96 × 0.095) = (0.7138, 1.0862). Identically distributed: (ˆθ − 1.96 StdError(ˆθ), ˆθ + 1.96 StdError(ˆθ)) = (0.7 − 1.96√0.0105, 0.7 + 1.96√0.0105) = (0.4992, 0.9008).
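The two intervals can be reproduced directly (z = 1.96 and the counts 9/10 and 14/20 are from the slides; small differences in the last digit come from rounding the standard errors):

```python
import math

z = 1.96  # z quantile for a 95% interval

def ci(theta_hat, se):
    """Approximate 95% confidence interval around theta_hat."""
    return (theta_hat - z * se, theta_hat + z * se)

# Hospital 1 alone: 9/10 successes.
se_sep = math.sqrt(0.9 * 0.1 / 10)
print(ci(0.9, se_sep))   # roughly (0.714, 1.086) -- note the upper end exceeds 1!

# Pooled (identically distributed): 14/20 successes.
se_pool = math.sqrt(0.7 * 0.3 / 20)
print(ci(0.7, se_pool))  # roughly (0.499, 0.901)
```

The upper limit above 1 for the separate analysis shows a limitation of the Normal approximation with only 10 observations.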
79 Interpretation of confidence interval Exercise Suppose that all our MATH 105 students take a sample of the same size from the UG population for a smoking ban survey, and each student, based on his/her own sample, constructs a 95% confidence interval for the UG proportion of students who agree with the law. Would all the confidence intervals be the same? Would the lengths of all the confidence intervals be the same? Would your confidence interval contain the true value of the population proportion? Would exactly 95 out of 100 intervals contain the true value of the population proportion?
80 Interpretation of confidence interval Would all the confidence intervals be the same? Probably not. Would the lengths of all the confidence intervals be the same? No, because the standard error is also estimated. Would your confidence interval contain the true value of the population proportion? Would exactly 95 out of 100 intervals contain the true value of the population proportion?
81 Interpretation of confidence interval Would all the confidence intervals be the same? Would the lengths of all the confidence intervals be the same? Would your confidence interval contain the true value of the population proportion? We do not know whether each confidence region contains the true value. However, we can expect that approximately 95% of those intervals would contain the true value. Would exactly 95 out of 100 intervals contain the true value of the population proportion? No; it is possible that all intervals could contain the true value by chance, or more than 5% of intervals may not contain the true value. Our intervals are only a sample of all possible confidence intervals!
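The repeated-sampling interpretation can be demonstrated directly: simulate many surveys, build a 95% interval from each, and count how often the true θ is covered. This sketch assumes θ = 0.8 and n = 100; because the data are discrete and the standard error is estimated, the observed coverage is close to, but typically not exactly, 95%:

```python
import math
import random

random.seed(6)

theta, n, reps, z = 0.8, 100, 2000, 1.96

covered = 0
for _ in range(reps):
    # One simulated survey of n respondents.
    y = sum(1 for _ in range(n) if random.random() < theta)
    t = y / n
    se = math.sqrt(t * (1 - t) / n)
    # Does this student's 95% interval contain the true theta?
    if t - z * se <= theta <= t + z * se:
        covered += 1

print(covered / reps)  # close to, though usually a little below, 0.95
```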
82 Sampling distribution [Figure: Illustration of a 95% confidence interval; the sampling distribution of ˆθ centred at θ, with the interval ˆθ ± 1.96 × StdError marked.]
More informationMLLunsford 1. Activity: Central Limit Theorem Theory and Computations
MLLunsford 1 Activity: Central Limit Theorem Theory and Computations Concepts: The Central Limit Theorem; computations using the Central Limit Theorem. Prerequisites: The student should be familiar with
More informationLecture Notes 6. Assume F belongs to a family of distributions, (e.g. F is Normal), indexed by some parameter θ.
Sufficient Statistics Lecture Notes 6 Sufficiency Data reduction in terms of a particular statistic can be thought of as a partition of the sample space X. Definition T is sufficient for θ if the conditional
More informationChapter 3 Discrete Random Variables and Probability Distributions
Chapter 3 Discrete Random Variables and Probability Distributions Part 3: Special Discrete Random Variable Distributions Section 3.5 Discrete Uniform Section 3.6 Bernoulli and Binomial Others sections
More informationSTAT/MATH 395 PROBABILITY II
STAT/MATH 395 PROBABILITY II Distribution of Random Samples & Limit Theorems Néhémy Lim University of Washington Winter 2017 Outline Distribution of i.i.d. Samples Convergence of random variables The Laws
More informationMidterm Exam III Review
Midterm Exam III Review Dr. Joseph Brennan Math 148, BU Dr. Joseph Brennan (Math 148, BU) Midterm Exam III Review 1 / 25 Permutations and Combinations ORDER In order to count the number of possible ways
More informationLecture 10: Point Estimation
Lecture 10: Point Estimation MSU-STT-351-Sum-17B (P. Vellaisamy: MSU-STT-351-Sum-17B) Probability & Statistics for Engineers 1 / 31 Basic Concepts of Point Estimation A point estimate of a parameter θ,
More informationPoint Estimation. Edwin Leuven
Point Estimation Edwin Leuven Introduction Last time we reviewed statistical inference We saw that while in probability we ask: given a data generating process, what are the properties of the outcomes?
More informationSTAT 241/251 - Chapter 7: Central Limit Theorem
STAT 241/251 - Chapter 7: Central Limit Theorem In this chapter we will introduce the most important theorem in statistics; the central limit theorem. What have we seen so far? First, we saw that for an
More informationSampling and sampling distribution
Sampling and sampling distribution September 12, 2017 STAT 101 Class 5 Slide 1 Outline of Topics 1 Sampling 2 Sampling distribution of a mean 3 Sampling distribution of a proportion STAT 101 Class 5 Slide
More informationIEOR 3106: Introduction to OR: Stochastic Models. Fall 2013, Professor Whitt. Class Lecture Notes: Tuesday, September 10.
IEOR 3106: Introduction to OR: Stochastic Models Fall 2013, Professor Whitt Class Lecture Notes: Tuesday, September 10. The Central Limit Theorem and Stock Prices 1. The Central Limit Theorem (CLT See
More informationSTAT Chapter 7: Central Limit Theorem
STAT 251 - Chapter 7: Central Limit Theorem In this chapter we will introduce the most important theorem in statistics; the central limit theorem. What have we seen so far? First, we saw that for an i.i.d
More informationA probability distribution shows the possible outcomes of an experiment and the probability of each of these outcomes.
Introduction In the previous chapter we discussed the basic concepts of probability and described how the rules of addition and multiplication were used to compute probabilities. In this chapter we expand
More informationStatistical Methods in Practice STAT/MATH 3379
Statistical Methods in Practice STAT/MATH 3379 Dr. A. B. W. Manage Associate Professor of Mathematics & Statistics Department of Mathematics & Statistics Sam Houston State University Overview 6.1 Discrete
More informationChapter 7 Sampling Distributions and Point Estimation of Parameters
Chapter 7 Sampling Distributions and Point Estimation of Parameters Part 1: Sampling Distributions, the Central Limit Theorem, Point Estimation & Estimators Sections 7-1 to 7-2 1 / 25 Statistical Inferences
More informationIntroduction to Statistical Data Analysis II
Introduction to Statistical Data Analysis II JULY 2011 Afsaneh Yazdani Preface Major branches of Statistics: - Descriptive Statistics - Inferential Statistics Preface What is Inferential Statistics? Preface
More informationSTAT 111 Recitation 3
STAT 111 Recitation 3 Linjun Zhang stat.wharton.upenn.edu/~linjunz/ September 23, 2017 Misc. The unpicked-up homeworks will be put in the STAT 111 box in the Stats Department lobby (It s on the 4th floor
More informationThe topics in this section are related and necessary topics for both course objectives.
2.5 Probability Distributions The topics in this section are related and necessary topics for both course objectives. A probability distribution indicates how the probabilities are distributed for outcomes
More informationVersion A. Problem 1. Let X be the continuous random variable defined by the following pdf: 1 x/2 when 0 x 2, f(x) = 0 otherwise.
Math 224 Q Exam 3A Fall 217 Tues Dec 12 Version A Problem 1. Let X be the continuous random variable defined by the following pdf: { 1 x/2 when x 2, f(x) otherwise. (a) Compute the mean µ E[X]. E[X] x
More informationSection 0: Introduction and Review of Basic Concepts
Section 0: Introduction and Review of Basic Concepts Carlos M. Carvalho The University of Texas McCombs School of Business mccombs.utexas.edu/faculty/carlos.carvalho/teaching 1 Getting Started Syllabus
More informationLecture 9. Probability Distributions. Outline. Outline
Outline Lecture 9 Probability Distributions 6-1 Introduction 6- Probability Distributions 6-3 Mean, Variance, and Expectation 6-4 The Binomial Distribution Outline 7- Properties of the Normal Distribution
More informationUQ, STAT2201, 2017, Lectures 3 and 4 Unit 3 Probability Distributions.
UQ, STAT2201, 2017, Lectures 3 and 4 Unit 3 Probability Distributions. Random Variables 2 A random variable X is a numerical (integer, real, complex, vector etc.) summary of the outcome of the random experiment.
More informationLecture 9. Probability Distributions
Lecture 9 Probability Distributions Outline 6-1 Introduction 6-2 Probability Distributions 6-3 Mean, Variance, and Expectation 6-4 The Binomial Distribution Outline 7-2 Properties of the Normal Distribution
More informationChapter 8: Sampling distributions of estimators Sections
Chapter 8: Sampling distributions of estimators Sections 8.1 Sampling distribution of a statistic 8.2 The Chi-square distributions 8.3 Joint Distribution of the sample mean and sample variance Skip: p.
More information4.3 Normal distribution
43 Normal distribution Prof Tesler Math 186 Winter 216 Prof Tesler 43 Normal distribution Math 186 / Winter 216 1 / 4 Normal distribution aka Bell curve and Gaussian distribution The normal distribution
More informationExamples: Random Variables. Discrete and Continuous Random Variables. Probability Distributions
Random Variables Examples: Random variable a variable (typically represented by x) that takes a numerical value by chance. Number of boys in a randomly selected family with three children. Possible values:
More informationEcon 6900: Statistical Problems. Instructor: Yogesh Uppal
Econ 6900: Statistical Problems Instructor: Yogesh Uppal Email: yuppal@ysu.edu Lecture Slides 4 Random Variables Probability Distributions Discrete Distributions Discrete Uniform Probability Distribution
More informationEngineering Statistics ECIV 2305
Engineering Statistics ECIV 2305 Section 5.3 Approximating Distributions with the Normal Distribution Introduction A very useful property of the normal distribution is that it provides good approximations
More information1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by. Cov(X, Y ) = E(X E(X))(Y E(Y ))
Correlation & Estimation - Class 7 January 28, 2014 Debdeep Pati Association between two variables 1. Covariance between two variables X and Y is denoted by Cov(X, Y) and defined by Cov(X, Y ) = E(X E(X))(Y
More informationMTH6154 Financial Mathematics I Stochastic Interest Rates
MTH6154 Financial Mathematics I Stochastic Interest Rates Contents 4 Stochastic Interest Rates 45 4.1 Fixed Interest Rate Model............................ 45 4.2 Varying Interest Rate Model...........................
More informationLecture 2. Probability Distributions Theophanis Tsandilas
Lecture 2 Probability Distributions Theophanis Tsandilas Comment on measures of dispersion Why do common measures of dispersion (variance and standard deviation) use sums of squares: nx (x i ˆµ) 2 i=1
More informationPoint Estimators. STATISTICS Lecture no. 10. Department of Econometrics FEM UO Brno office 69a, tel
STATISTICS Lecture no. 10 Department of Econometrics FEM UO Brno office 69a, tel. 973 442029 email:jiri.neubauer@unob.cz 8. 12. 2009 Introduction Suppose that we manufacture lightbulbs and we want to state
More informationRandom Variables Handout. Xavier Vilà
Random Variables Handout Xavier Vilà Course 2004-2005 1 Discrete Random Variables. 1.1 Introduction 1.1.1 Definition of Random Variable A random variable X is a function that maps each possible outcome
More information5. In fact, any function of a random variable is also a random variable
Random Variables - Class 11 October 14, 2012 Debdeep Pati 1 Random variables 1.1 Expectation of a function of a random variable 1. Expectation of a function of a random variable 2. We know E(X) = x xp(x)
More informationRandom Variables CHAPTER 6.3 BINOMIAL AND GEOMETRIC RANDOM VARIABLES
Random Variables CHAPTER 6.3 BINOMIAL AND GEOMETRIC RANDOM VARIABLES Essential Question How can I determine whether the conditions for using binomial random variables are met? Binomial Settings When the
More information8.1 Estimation of the Mean and Proportion
8.1 Estimation of the Mean and Proportion Statistical inference enables us to make judgments about a population on the basis of sample information. The mean, standard deviation, and proportions of a population
More informationPart V - Chance Variability
Part V - Chance Variability Dr. Joseph Brennan Math 148, BU Dr. Joseph Brennan (Math 148, BU) Part V - Chance Variability 1 / 78 Law of Averages In Chapter 13 we discussed the Kerrich coin-tossing experiment.
More informationLearning From Data: MLE. Maximum Likelihood Estimators
Learning From Data: MLE Maximum Likelihood Estimators 1 Parameter Estimation Assuming sample x1, x2,..., xn is from a parametric distribution f(x θ), estimate θ. E.g.: Given sample HHTTTTTHTHTTTHH of (possibly
More informationNormal distribution Approximating binomial distribution by normal 2.10 Central Limit Theorem
1.1.2 Normal distribution 1.1.3 Approimating binomial distribution by normal 2.1 Central Limit Theorem Prof. Tesler Math 283 Fall 216 Prof. Tesler 1.1.2-3, 2.1 Normal distribution Math 283 / Fall 216 1
More informationElementary Statistics Lecture 5
Elementary Statistics Lecture 5 Sampling Distributions Chong Ma Department of Statistics University of South Carolina Chong Ma (Statistics, USC) STAT 201 Elementary Statistics 1 / 24 Outline 1 Introduction
More informationStatistics for Business and Economics
Statistics for Business and Economics Chapter 5 Continuous Random Variables and Probability Distributions Ch. 5-1 Probability Distributions Probability Distributions Ch. 4 Discrete Continuous Ch. 5 Probability
More informationThe Binomial Probability Distribution
The Binomial Probability Distribution MATH 130, Elements of Statistics I J. Robert Buchanan Department of Mathematics Fall 2017 Objectives After this lesson we will be able to: determine whether a probability
More information4 Random Variables and Distributions
4 Random Variables and Distributions Random variables A random variable assigns each outcome in a sample space. e.g. called a realization of that variable to Note: We ll usually denote a random variable
More information6 If and then. (a) 0.6 (b) 0.9 (c) 2 (d) Which of these numbers can be a value of probability distribution of a discrete random variable
1. A number between 0 and 1 that is use to measure uncertainty is called: (a) Random variable (b) Trial (c) Simple event (d) Probability 2. Probability can be expressed as: (a) Rational (b) Fraction (c)
More informationSubject CS1 Actuarial Statistics 1 Core Principles. Syllabus. for the 2019 exams. 1 June 2018
` Subject CS1 Actuarial Statistics 1 Core Principles Syllabus for the 2019 exams 1 June 2018 Copyright in this Core Reading is the property of the Institute and Faculty of Actuaries who are the sole distributors.
More informationBernoulli and Binomial Distributions
Bernoulli and Binomial Distributions Bernoulli Distribution a flipped coin turns up either heads or tails an item on an assembly line is either defective or not defective a piece of fruit is either damaged
More informationCommonly Used Distributions
Chapter 4: Commonly Used Distributions 1 Introduction Statistical inference involves drawing a sample from a population and analyzing the sample data to learn about the population. We often have some knowledge
More informationIntroduction to Probability and Inference HSSP Summer 2017, Instructor: Alexandra Ding July 19, 2017
Introduction to Probability and Inference HSSP Summer 2017, Instructor: Alexandra Ding July 19, 2017 Please fill out the attendance sheet! Suggestions Box: Feedback and suggestions are important to the
More informationBusiness Statistics 41000: Probability 3
Business Statistics 41000: Probability 3 Drew D. Creal University of Chicago, Booth School of Business February 7 and 8, 2014 1 Class information Drew D. Creal Email: dcreal@chicagobooth.edu Office: 404
More informationMath489/889 Stochastic Processes and Advanced Mathematical Finance Homework 5
Math489/889 Stochastic Processes and Advanced Mathematical Finance Homework 5 Steve Dunbar Due Fri, October 9, 7. Calculate the m.g.f. of the random variable with uniform distribution on [, ] and then
More informationCase Study: Heavy-Tailed Distribution and Reinsurance Rate-making
Case Study: Heavy-Tailed Distribution and Reinsurance Rate-making May 30, 2016 The purpose of this case study is to give a brief introduction to a heavy-tailed distribution and its distinct behaviors in
More informationStatistics 6 th Edition
Statistics 6 th Edition Chapter 5 Discrete Probability Distributions Chap 5-1 Definitions Random Variables Random Variables Discrete Random Variable Continuous Random Variable Ch. 5 Ch. 6 Chap 5-2 Discrete
More informationChapter 7: Estimation Sections
Chapter 7: Estimation Sections 7.1 Statistical Inference Bayesian Methods: 7.2 Prior and Posterior Distributions 7.3 Conjugate Prior Distributions Frequentist Methods: 7.5 Maximum Likelihood Estimators
More informationModule 4: Probability
Module 4: Probability 1 / 22 Probability concepts in statistical inference Probability is a way of quantifying uncertainty associated with random events and is the basis for statistical inference. Inference
More informationUNIT 4 MATHEMATICAL METHODS
UNIT 4 MATHEMATICAL METHODS PROBABILITY Section 1: Introductory Probability Basic Probability Facts Probabilities of Simple Events Overview of Set Language Venn Diagrams Probabilities of Compound Events
More informationLecture 9 - Sampling Distributions and the CLT. Mean. Margin of error. Sta102/BME102. February 6, Sample mean ( X ): x i
Lecture 9 - Sampling Distributions and the CLT Sta102/BME102 Colin Rundel February 6, 2015 http:// pewresearch.org/ pubs/ 2191/ young-adults-workers-labor-market-pay-careers-advancement-recession Sta102/BME102
More information2011 Pearson Education, Inc
Statistics for Business and Economics Chapter 4 Random Variables & Probability Distributions Content 1. Two Types of Random Variables 2. Probability Distributions for Discrete Random Variables 3. The Binomial
More informationME3620. Theory of Engineering Experimentation. Spring Chapter III. Random Variables and Probability Distributions.
ME3620 Theory of Engineering Experimentation Chapter III. Random Variables and Probability Distributions Chapter III 1 3.2 Random Variables In an experiment, a measurement is usually denoted by a variable
More informationChapter 3 - Lecture 5 The Binomial Probability Distribution
Chapter 3 - Lecture 5 The Binomial Probability October 12th, 2009 Experiment Examples Moments and moment generating function of a Binomial Random Variable Outline Experiment Examples A binomial experiment
More informationA random variable (r. v.) is a variable whose value is a numerical outcome of a random phenomenon.
Chapter 14: random variables p394 A random variable (r. v.) is a variable whose value is a numerical outcome of a random phenomenon. Consider the experiment of tossing a coin. Define a random variable
More informationSome Discrete Distribution Families
Some Discrete Distribution Families ST 370 Many families of discrete distributions have been studied; we shall discuss the ones that are most commonly found in applications. In each family, we need a formula
More informationProbability Theory. Probability and Statistics for Data Science CSE594 - Spring 2016
Probability Theory Probability and Statistics for Data Science CSE594 - Spring 2016 What is Probability? 2 What is Probability? Examples outcome of flipping a coin (seminal example) amount of snowfall
More informationUnit 5: Sampling Distributions of Statistics
Unit 5: Sampling Distributions of Statistics Statistics 571: Statistical Methods Ramón V. León 6/12/2004 Unit 5 - Stat 571 - Ramon V. Leon 1 Definitions and Key Concepts A sample statistic used to estimate
More informationUnit 5: Sampling Distributions of Statistics
Unit 5: Sampling Distributions of Statistics Statistics 571: Statistical Methods Ramón V. León 6/12/2004 Unit 5 - Stat 571 - Ramon V. Leon 1 Definitions and Key Concepts A sample statistic used to estimate
More informationStatistics and Probability
Statistics and Probability Continuous RVs (Normal); Confidence Intervals Outline Continuous random variables Normal distribution CLT Point estimation Confidence intervals http://www.isrec.isb-sib.ch/~darlene/geneve/
More informationProblems from 9th edition of Probability and Statistical Inference by Hogg, Tanis and Zimmerman:
Math 224 Fall 207 Homework 5 Drew Armstrong Problems from 9th edition of Probability and Statistical Inference by Hogg, Tanis and Zimmerman: Section 3., Exercises 3, 0. Section 3.3, Exercises 2, 3, 0,.
More information