MM and ML for a sample of n = 30 from Gamma(3,2)
===============================================

Generate the sample with shape parameter α = 3 and rate parameter λ = 2 (note that rgamma's third positional argument is the rate, so the population mean is α/λ = 1.5):

> x = rgamma(30, 3, 2)
> x
 [1] ...
 [9] ...
[17] ...
[25] ...
> summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    ...     ...     ...     ...     ...     ...
> xave = mean(x)
> xave
[1] ...                      # the sample average, x-bar
> xsd = sqrt(var(x))
> xsd
[1] ...                      # the sample standard deviation, s
> stem(x)
  The decimal point is at the |
  ...
> hist(x)
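The commands above can be collected into one short, self-contained script. The original sample values are not reproduced in this transcription, so the sketch below draws a fresh Gamma(3,2) sample; the set.seed() call is an addition for reproducibility, not part of the original session:

```r
# Draw n = 30 from Gamma(shape = 3, rate = 2) and compute the basic summaries.
set.seed(1)                    # reproducibility only (not in the original)
x    <- rgamma(30, shape = 3, rate = 2)
xave <- mean(x)                # the sample average, x-bar
xsd  <- sqrt(var(x))           # the sample standard deviation, s
summary(x)
hist(x)                        # histogram of the sample
```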

Estimation of the Unknown Parameters α and λ:
===================================

Now pretend we don't know that this sample is from a Gamma(3,2) population. Treat it as a random sample of n = 30 data points from some population. We will use the Gamma(α,λ) distribution as the statistical model for this data set.

Method of Moments estimates (MMEs) for this sample:

> amm = xave^2/(29*xsd^2/30)
> amm
[1] ...                      # the MME of α
> lmm = amm/xave
> lmm
[1] ...                      # the MME of λ

Method of Maximum Likelihood estimates (MLEs) for this sample:

Evaluate the log of the average minus the average of the logs:

> c = log(xave) - mean(log(x))
> c
[1] ...

First, let's plot f(α) = Γ'(α)/Γ(α) - log(α) + c, the function of α whose root we need to find to determine the MLEs. Note that Γ'(α)/Γ(α) can be calculated as digamma(α) in R.

a) We know α > 0. The MME suggests α is around 3. Let's plot f(α) for 0 < α < 10:

> xx = seq(0.1, 10, 0.1)
> y = digamma(xx) - log(xx) + c
> plot(xx, y)
> abline(0, 0)
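As a cross-check on the root-finding (an addition, not part of the original handout), R's built-in uniroot() can bracket the root of f directly. A minimal sketch, again on a fresh simulated sample since the original data are not preserved:

```r
# Root of f(a) = digamma(a) - log(a) + c, bracketed with uniroot().
set.seed(1)
x  <- rgamma(30, shape = 3, rate = 2)    # stand-in for the handout's sample
cc <- log(mean(x)) - mean(log(x))        # the constant called c in the text
f  <- function(a) digamma(a) - log(a) + cc
root <- uniroot(f, interval = c(0.1, 50))$root   # the MLE of alpha
root
```

uniroot() and Newton's method should agree to several decimals; uniroot() is the more robust choice when a bracketing interval is known.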

b) It looks like f(α) is monotone increasing, with a root somewhere between 2 and 4. (You can check monotonicity by plotting f'(α). Note that Γ''(α)/Γ(α) - [Γ'(α)/Γ(α)]^2 can be calculated as trigamma(α) in R.) Plot f(α) on the interval [2, 4]:

> xx = seq(2, 4, 0.01)
> y = digamma(xx) - log(xx) + c
> plot(xx, y)
> abline(0, 0)

The root is close to 2.5, so the MLE of α is going to be a fair bit different from the MME. From the plot, it is clear we can use Newton's method to find the root as precisely as desired. The R function N0gamma below determines the root (iteration stops once the absolute change is < 10^-6) and prints out the details of the iterations. (Note that -f(α) is used inside N0gamma; no real reason for that.)

> N0gamma
function(a, x){
  it = 0
  e = 1e-6
  diff = 1
  dx = log(mean(x)) - mean(log(x))
  while(abs(diff) > e){
    it = it + 1
    f = log(a) - digamma(a) - dx
    df = 1/a - trigamma(a)
    diff = -f/df
    cat("It= ", it, " a= ", a, " f= ", f, " df= ", df, " diff= ", diff, "\n")
    a = a + diff
  }
  cat("MLE= ", a, "\n")
  return(a)
}

Let's try an initial guess of 2 for the root:

> N0gamma(2, x)
It= 1  a= 2    f= ...  df= ...  diff= ...
It= 2  a= ...  f= ...  df= ...  diff= ...
It= 3  a= ...  f= ...  df= ...  diff= ...

It= 4  a= ...  f= ...e-06  df= ...  diff= ...e-05
It= 5  a= ...  f= ...e-12  df= ...  diff= ...e-11
MLE= ...
[1] ...

Note how the iteration converges to the root: the first step is quite large (diff = 0.42) but the successive steps are smaller: 6x10^-3, 1x10^-5 and 9x10^-11 at the 3rd, 4th and 5th iterations, reflecting the quadratic convergence property of Newton's method. The value of the function at the successive iterations very quickly becomes very small. Also note that the derivative of the function changes very little once you get close to the root (as is also clear from the plot). For this function, Newton's method works very well.

What if we start further away? Let's try an initial guess of 4:

> N0gamma(4, x)
It= 1  a= 4    f= ...      df= ...  diff= ...
It= 2  a= ...  f= ...      df= ...  diff= ...
It= 3  a= ...  f= ...      df= ...  diff= ...
It= 4  a= ...  f= ...      df= ...  diff= ...
It= 5  a= ...  f= ...e-05  df= ...  diff= ...
It= 6  a= ...  f= ...e-08  df= ...  diff= ...e-07
MLE= ...
[1] ...

Basically the same story: even though the first iteration overshoots the root, the successive iterations quickly locate it. If you used a very large initial guess, the first step could lead to a negative updated value of α, which is not a permissible value. N0gamma would then bomb when it tries to evaluate the function at that updated value. (Try an initial value of 6, for example.) Depending on the nature of the function, Newton's method can require careful guidance until it gets into the vicinity of the root where the quadratic convergence property takes hold.

> aml = ...                  # the MLE of α
> lml = aml/xave
> lml
[1] ...                      # the MLE of λ

Evaluation of the Precision of these Estimates via the Parametric Bootstrap Method:
==========================================================

The R function bootgamma first calculates the MMEs α̂ and λ̂ for the sample x and then generates B samples of size n (the same size as the original sample) from the Gamma(α̂, λ̂) distribution. The MMEs are evaluated for each of these B bootstrap samples and are stored in a B x 2 matrix.

> bootgamma
function(x, B){
  n = length(x)
  am = (n/(n-1))*mean(x)^2/var(x)
  lm = am/mean(x)
  ab = matrix(0, nrow=B, ncol=2)

  for(i in 1:B){
    xb = rgamma(n, shape=am, rate=lm)
    ab[i,1] = (n/(n-1))*mean(xb)^2/var(xb)
    ab[i,2] = ab[i,1]/mean(xb)
  }
  return(ab)
}

Let's do B = 1000 bootstrap samples:

> ymm = bootgamma(x, 1000)
> summary(ymm)
       X1               X2
 Min.   :1.354   Min.   : ...
 1st Qu.: ...    1st Qu.: ...
 Median :3.429   Median : ...
 Mean   :3.616   Mean   : ...
 3rd Qu.: ...    3rd Qu.: ...
 Max.   :9.242   Max.   : ...
> plot(ymm[,1], ymm[,2])

Now do the same thing for the MLEs α̂ and λ̂. This version of bootgamma uses the R function Ngamma to evaluate the MLEs for the bootstrap samples; Ngamma is a version of the function N0gamma above with all the detailed output suppressed.

> bootgamma
function(x, B){
  n = length(x)
  am = Ngamma(mean(x)^2/var(x), x)
  lm = am/mean(x)
  ab = matrix(0, nrow=B, ncol=2)
  for(i in 1:B){
    xb = rgamma(n, shape=am, rate=lm)
    ab[i,1] = Ngamma(mean(xb)^2/var(xb), xb)
    ab[i,2] = ab[i,1]/mean(xb)
  }
  return(ab)
}
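Once the B x 2 matrix of bootstrap estimates is in hand, the bootstrap SEs used later are just the column standard deviations. A minimal self-contained sketch of the MME version (a fresh seeded sample stands in for the lost original data):

```r
set.seed(1)
x <- rgamma(30, shape = 3, rate = 2)          # stand-in sample
bootgamma <- function(x, B) {
  n  <- length(x)
  am <- (n/(n - 1)) * mean(x)^2 / var(x)      # MME of alpha from the data
  lm <- am / mean(x)                          # MME of lambda
  ab <- matrix(0, nrow = B, ncol = 2)
  for (i in 1:B) {
    xb <- rgamma(n, shape = am, rate = lm)    # parametric bootstrap sample
    ab[i, 1] <- (n/(n - 1)) * mean(xb)^2 / var(xb)
    ab[i, 2] <- ab[i, 1] / mean(xb)
  }
  ab
}
ymm <- bootgamma(x, 1000)
apply(ymm, 2, sd)     # bootstrap SEs of the MMEs of alpha and lambda
```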

Let's do B = 1000 bootstrap samples:

> yml = bootgamma(x, 1000)
> summary(yml)
       X1               X2
 Min.   :1.357   Min.   : ...
 1st Qu.: ...    1st Qu.: ...
 Median :2.657   Median : ...
 Mean   :2.797   Mean   : ...
 3rd Qu.: ...    3rd Qu.: ...
 Max.   :6.892   Max.   : ...
> plot(yml[,1], yml[,2])

Note that the x and y scales are different in this plot than in the corresponding plot for the MMEs. Both the summaries and the plots seem to indicate less scatter (in both dimensions!) for the bootstrap MLEs than for the bootstrap MMEs. So it looks like, at least for this example, the MLEs are less variable than the MMEs. Let's compare graphically:

Bootstrap estimates of α:

> boxplot(ymm[,1], yml[,1])

Bootstrap estimates of λ:

> boxplot(ymm[,2], yml[,2])

In both cases, the boxplots are centered at different values for the MME and the MLE, but it is also clear that the box part of the boxplot (the central portion of those datasets) is considerably less spread out for the MLE than for the MME. In case you prefer histograms to boxplots:

> par(mfrow=c(2,1))
> hist(ymm[,1], prob=T)
> hist(yml[,1], prob=T)

> hist(ymm[,2], prob=T)
> hist(yml[,2], prob=T)

To quantify the variability of the bootstrap values of α̂ and λ̂, let's calculate the variance-covariance matrix:

> cov(ymm)
     [,1]  [,2]
[1,]  ...   ...
[2,]  ...   ...

So, the bootstrap estimates of:
- the SEs of the MMEs α̂ and λ̂ are ... and 0.690, respectively,
- the correlation between α̂ and λ̂ is ...

Similarly for the bootstrap MLEs:

> cov(yml)
     [,1]  [,2]
[1,]  ...   ...
[2,]  ...   ...

So, the bootstrap estimates of:
- the SEs of the MLEs α̂ and λ̂ are ... and 0.495, respectively,
- the correlation between α̂ and λ̂ is ...

The bootstrap estimates of the SEs are much smaller for the MLEs than for the MMEs, indicating that, at least for this example, the MLEs are much less variable than the MMEs. We will soon see this is a general phenomenon: in a sense to be described more precisely in class, MLEs are always better than MMEs. In fact, we will show that (in that same sense) no estimator can do better than the MLE.
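The SE and correlation arithmetic above can be scripted directly: sqrt(diag(V)) gives the SEs and cov2cor(V) the correlation matrix. A sketch on a stand-in matrix of bootstrap estimates (the handout's ymm values are not fully preserved in this transcription):

```r
set.seed(2)
ab <- cbind(rgamma(1000, 3, 1), rgamma(1000, 2, 1))  # stand-in for a B x 2 bootstrap matrix
V  <- cov(ab)
sqrt(diag(V))      # the two bootstrap SEs
cov2cor(V)[1, 2]   # the bootstrap correlation between the two estimates
```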

You may be curious how much things change if you do more bootstrap samples:

> ymm = bootgamma(x, 10000)
> summary(ymm)
       X1               X2
 Min.   : ...    Min.   : ...
 1st Qu.: ...    1st Qu.: ...
 Median : ...    Median : ...
 Mean   : ...    Mean   : ...
 3rd Qu.: ...    3rd Qu.: ...
 Max.   : ...    Max.   : ...
> cov(ymm)
     [,1]  [,2]
[1,]  ...   ...
[2,]  ...   ...
> yml = bootgamma(x, 10000)
> summary(yml)
       X1               X2
 Min.   :1.262   Min.   : ...
 1st Qu.: ...    1st Qu.: ...
 Median :2.666   Median : ...
 Mean   :2.801   Mean   : ...
 3rd Qu.: ...    3rd Qu.: ...
 Max.   :8.889   Max.   : ...
> cov(yml)
     [,1]  [,2]
[1,]  ...   ...
[2,]  ...   ...

Plotting the bootstrap MMEs and the bootstrap MLEs (note the differences in the scales) yields:

The resulting bootstrap estimates of the SEs:
- of the MMEs α̂ and λ̂ are 1.078 and 0.669, respectively,
- of the MLEs α̂ and λ̂ are 0.770 and 0.493, respectively.

In summary, once the results are expressed as estimated SEs, they differ in only minor ways from the corresponding results obtained with B = 1000. If we assume that n = 30 is large enough to consider both the MMEs and the MLEs to be, for all practical purposes, unbiased (note that the results above suggest this may NOT be the case), then we would estimate the efficiency of the MME relative to the MLE to be approximately:
- (0.770/1.078)^2 = 0.51, or 51%, for the estimation of α,
- (0.493/0.669)^2 = 0.54, or 54%, for the estimation of λ.

That is, we would need almost twice the sample size to attain the same asymptotic variance with the MMEs as with the MLEs.

Confidence Intervals for the Unknown Parameters α and λ:
===========================================

We will see in class that θ̂ ± 1.96 SE(θ̂) provides an approximate 95% confidence interval for θ whenever the estimator θ̂ is asymptotically normally distributed, as is the case for MMEs and MLEs. Using the bootstrap estimates of the SEs (let's use the B = 10000 results), we obtain approximate 95% confidence intervals:

For α:
- ... ± 1.96 x 1.078 ≈ (1.05, 5.27) based on the MME α̂,
- ... ± 1.96 x 0.770 ≈ (1.03, 4.05) based on the MLE α̂.

For λ:
- ... ± 1.96 x 0.669 ≈ (0.50, 3.13) based on the MME λ̂,
- ... ± 1.96 x 0.493 ≈ (0.49, 2.42) based on the MLE λ̂.

Note that, in both cases, the lower endpoints of the two confidence intervals are basically the same, so the main difference is in the upper endpoint. Because the estimated SEs for the MLEs are smaller than those for the MMEs, the confidence intervals based on the MLEs are quite a bit shorter; that is, the MLEs lead to considerably tighter ranges of plausible values for the unknown parameters.
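The interval arithmetic is one line in R. A sketch using the B = 10000 bootstrap SEs for α quoted above, together with the point estimates α̂_MM = 3.1609 and α̂_ML = 2.5376 that appear in the large-sample section of this handout:

```r
# Approximate 95% CI: estimate +/- 1.96 * SE
ci <- function(est, se) est + c(-1.96, 1.96) * se
round(ci(3.1609, 1.078), 2)   # MME-based CI for alpha -> 1.05 5.27
round(ci(2.5376, 0.770), 2)   # MLE-based CI for alpha -> 1.03 4.05
```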
Comparison to Results Based on Large Sample Theory:
========================================

As noted in class, the (bivariate) CLT for the first two sample moments together with the delta method yields the asymptotic (bivariate) distribution of the MME, and the general large sample theory for MLEs presented in class yields the asymptotic distribution of the MLE. Specifically, for the case of sampling from a Gamma(α,λ) distribution, as n -> infinity:

    (α̂_MM, λ̂_MM)'  is approximately  N( (α, λ)' , (1/n) V_MM ),  where

        V_MM = [ 2α(α+1)     2(α+1)λ
                 2(α+1)λ     (2α+3)λ^2/α ],

and

    (α̂_ML, λ̂_ML)'  is approximately  N( (α, λ)' , 1/(n[αg(α)-1]) V_ML ),  where

        V_ML = [ α     λ
                 λ     λ^2 g(α) ]

and g(α) = Γ''(α)/Γ(α) - [Γ'(α)/Γ(α)]^2 is the trigamma function.

Plugging the values of the estimates into these expressions (we need to use the MMEs in the expressions for the MME and the MLEs in the expressions for the MLE, as that is all we would have in practice) yields the estimated (asymptotic) SEs:
- for the MMEs α̂ and λ̂: [2(3.1609)(4.1609)/30]^(1/2) ≈ 0.936 and [9.3219(1.8131)^2/(30(3.1609))]^(1/2) = 0.568, respectively,
- for the MLEs α̂ and λ̂: [2.5376/(30(0.2222))]^(1/2) ≈ 0.617 and [(1.4555)^2(0.4816)/(30(0.2222))]^(1/2) = 0.391, respectively.

Compared to the bootstrap results, in both cases it appears these asymptotic expressions underestimate the variability to some degree; that is, it looks like n = 30 may not be a large enough sample size for these asymptotic results to provide accurate approximations. Let's check this using simulation. As you have learned in the lab, it is very easy to use R to carry out simulation experiments, and this is a powerful tool that you can use in many circumstances, for example when you want to check the properties of some random phenomenon and the necessary probability calculations are too hard to do analytically.

Simulate distributions of the MME and MLE for a sample of n = 30 from Gamma(3,2):
============================================================

The function simgamma evaluates the MMEs for repeated samples of n = 30 from Gamma(3,2). You could easily make this function suitable for any Gamma distribution. We simulate 1000 samples:

> simgamma
function(n, S){
  ab = matrix(0, nrow=S, ncol=2)
  for(i in 1:S){
    xb = rgamma(n, shape=3, rate=2)
    ab[i,1] = (n/(n-1))*mean(xb)^2/var(xb)
    ab[i,2] = ab[i,1]/mean(xb)
  }
  return(ab)
}
> ymm = simgamma(30, 1000)
> summary(ymm)
       X1               X2
 Min.   :1.328   Min.   : ...
 1st Qu.: ...    1st Qu.: ...
 Median :3.293   Median : ...
 Mean   :3.385   Mean   : ...
 3rd Qu.: ...    3rd Qu.: ...

 Max.   :8.597   Max.   : ...
> cov(ymm)
     [,1]  [,2]
[1,]  ...   ...
[2,]  ...   ...

Now do the same thing for the MLEs:

> simgamma
function(n, S){
  ab = matrix(0, nrow=S, ncol=2)
  for(i in 1:S){
    xb = rgamma(n, shape=3, rate=2)
    ab[i,1] = Ngamma(mean(xb)^2/var(xb), xb)
    ab[i,2] = ab[i,1]/mean(xb)
  }
  return(ab)
}
> yml = simgamma(30, 1000)
> summary(yml)
       X1               X2
 Min.   :1.568   Min.   : ...
 1st Qu.: ...    1st Qu.:1.750
 Median :3.122   Median :2.095
 Mean   :3.283   Mean   : ...
 3rd Qu.: ...    3rd Qu.:2.583
 Max.   :8.783   Max.   :5.849
> cov(yml)
     [,1]  [,2]
[1,]  ...   ...
[2,]  ...   ...

First consider estimation of α. Plot the histograms of the simulated MMEs and MLEs of α and superimpose a plot of the normal density corresponding to the asymptotic approximations:

> hist(ymm[,1], prob=T)
> xp = seq(1.33, 8.59, 0.01)
> s = sqrt(2*3*4/30)
> s
[1] ...
> lines(xp, dnorm(xp, mean=3, sd=s))
> hist(yml[,1], prob=T)
> xp = seq(1.57, 8.78, 0.01)
> s = sqrt(3/(30*(3*trigamma(3)-1)))
> s
[1] ...
> lines(xp, dnorm(xp, mean=3, sd=s))
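The visual comparison can be backed by a number (an addition, not in the handout): compare the standard deviation of simulated MMEs of α with the asymptotic value sqrt(2α(α+1)/n) at the true parameters:

```r
set.seed(4)
amm <- replicate(2000, {
  xb <- rgamma(30, shape = 3, rate = 2)
  (30/29) * mean(xb)^2 / var(xb)        # MME of alpha for one sample
})
sd(amm)                  # simulated sd; tends to exceed the asymptotic value
sqrt(2 * 3 * 4 / 30)     # asymptotic sd of the MME of alpha at alpha = 3, n = 30
```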

The histograms of the simulated MMEs and MLEs are skewed to the right (positively) and hence more spread out than the (symmetric) asymptotic normal approximation. Although not overly dramatic, this skewness is apparent in the qqplots for the MME (top plot below) and the MLE (bottom plot).

> qqnorm(ymm[,1])
> qqnorm(yml[,1])
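The right-skew read off the histograms and qqplots can also be quantified with the moment coefficient of skewness (a helper function not in the handout), which is positive for right-skewed data:

```r
skew <- function(z) mean((z - mean(z))^3) / sd(z)^3   # moment coefficient of skewness
set.seed(5)
amm <- replicate(2000, {
  xb <- rgamma(30, shape = 3, rate = 2)
  (30/29) * mean(xb)^2 / var(xb)        # simulated MMEs of alpha
})
skew(amm)    # positive, confirming the right-skew seen in the plots
```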

Because of this skewness in the true distributions, the (symmetric) asymptotic normal approximations will underestimate the variability, as we suspected was the case (now we know!). We conclude that a larger value of n is required for the asymptotic normal approximations to the distributions of the MME and the MLE of α to be accurate. Now consider estimation of λ similarly.

> hist(ymm[,2], prob=T)
> xp = seq(0.89, 5.74, 0.01)
> s = sqrt((2*3+3)*2*2/(3*30))
> s
[1] ...
> lines(xp, dnorm(xp, mean=2, sd=s))
> hist(yml[,2], prob=T)
> xp = seq(1.03, 5.84, 0.01)
> s = sqrt(2*2*trigamma(3)/(30*(3*trigamma(3)-1)))
> s
[1] ...
> lines(xp, dnorm(xp, mean=2, sd=s))

The histograms of the simulated MMEs and MLEs are again skewed to the right (positively) and hence more spread out than the (symmetric) asymptotic normal approximation. This skewness can also be seen in the qqplots for the MME (top plot below) and the MLE (bottom plot).

> qqnorm(ymm[,2])
> qqnorm(yml[,2])

Transformations to Improve Asymptotic Normal Approximations:
===============================================

The asymptotic normal approximations to the distributions of the MMEs and the MLEs do not appear to be very accurate for this example. Might these approximations perform better on a different scale? For positive random variables that have (positively) skewed distributions, a natural transformation to consider is the log: it pulls in the very large positive values, so it might eliminate the positive skewness. Of course, it could also induce negative skewness! From the asymptotic results given above, the delta method easily yields that, for the case of sampling from a Gamma(α,λ) distribution, as n -> infinity:

    (log α̂_MM, log λ̂_MM)'  is approximately  N( (log α, log λ)' , (1/n) W_MM ),  where

        W_MM = [ 2(α+1)/α     2(α+1)/α
                 2(α+1)/α     (2α+3)/α ],

and

    (log α̂_ML, log λ̂_ML)'  is approximately  N( (log α, log λ)' , 1/(n[αg(α)-1]) W_ML ),  where

        W_ML = [ 1/α     1/α
                 1/α     g(α) ].

Let's compare the histograms of the logs of the simulated estimates of α to these asymptotic normal approximations:

> hist(log(ymm[,1]), prob=T)
> xp = seq(0.29, 2.15, 0.01)
> s = sqrt(2*4/(3*30))
> lines(xp, dnorm(xp, mean=log(3), sd=s))
> hist(log(yml[,1]), prob=T)
> xp = seq(0.46, 2.17, 0.01)
> s = 1/sqrt(30*3*(3*trigamma(3)-1))
> lines(xp, dnorm(xp, mean=log(3), sd=s))
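The delta-method step behind these log-scale results is just Var(log θ̂) ≈ Var(θ̂)/θ². A quick numerical check (an addition, not in the handout) for the MME of α at the true α = 3; at n = 30 the agreement is only rough, consistent with the point being made here:

```r
set.seed(3)
amm <- replicate(10000, {
  xb <- rgamma(30, shape = 3, rate = 2)
  (30/29) * mean(xb)^2 / var(xb)        # MME of alpha for one sample
})
sd(log(amm))                  # direct sd on the log scale
sqrt(2 * (3 + 1) / (3 * 30))  # delta-method value sqrt(2(alpha+1)/(n alpha))
```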

Note how much more symmetric these are than the earlier histograms. The asymptotic normal approximations to the distributions of log(α̂_MM) and log(α̂_ML) are still far from perfect, but they now look reasonably accurate. The improvement is also apparent in the qqplots, which look very much like straight lines:

> qqnorm(log(ymm[,1]))
> qqnorm(log(yml[,1]))

Similarly, for the simulated estimates of λ:

> hist(log(ymm[,2]), prob=T)
> xp = seq(-0.11, 1.74, 0.01)
> s = sqrt((2*3+3)/(3*30))
> lines(xp, dnorm(xp, mean=log(2), sd=s))
> hist(log(yml[,2]), prob=T)
> xp = seq(0.03, 1.76, 0.01)
> s = sqrt(trigamma(3)/(30*(3*trigamma(3)-1)))
> lines(xp, dnorm(xp, mean=log(2), sd=s))

These histograms are also much more symmetric than the earlier histograms of the estimates on the raw scale. The asymptotic normal approximations to the distributions of log(λ̂_MM) and log(λ̂_ML) now look reasonably accurate, and this is also reflected in the qqplots, which look very much like straight lines:

> qqnorm(log(ymm[,2]))
> qqnorm(log(yml[,2]))

You may want to check how much different things look for this example when:
- the values of α and λ being considered are different,
- the value of n being considered is different.

Of course, you can equally well carry out such an investigation for any other example! Your computer is a powerful tool for learning. You can use it to carry out:
- exact calculations of probabilities that are not otherwise feasible,
- the Monte Carlo method to approximate probabilities (and other integrals),
- simulation studies to help understand when asymptotic approximations are accurate.

THE END


More information

Chapter 5: Statistical Inference (in General)

Chapter 5: Statistical Inference (in General) Chapter 5: Statistical Inference (in General) Shiwen Shen University of South Carolina 2016 Fall Section 003 1 / 17 Motivation In chapter 3, we learn the discrete probability distributions, including Bernoulli,

More information

Lecture 2 Describing Data

Lecture 2 Describing Data Lecture 2 Describing Data Thais Paiva STA 111 - Summer 2013 Term II July 2, 2013 Lecture Plan 1 Types of data 2 Describing the data with plots 3 Summary statistics for central tendency and spread 4 Histograms

More information

Lecture 9 - Sampling Distributions and the CLT

Lecture 9 - Sampling Distributions and the CLT Lecture 9 - Sampling Distributions and the CLT Sta102/BME102 Colin Rundel September 23, 2015 1 Variability of Estimates Activity Sampling distributions - via simulation Sampling distributions - via CLT

More information

NCSS Statistical Software. Reference Intervals

NCSS Statistical Software. Reference Intervals Chapter 586 Introduction A reference interval contains the middle 95% of measurements of a substance from a healthy population. It is a type of prediction interval. This procedure calculates one-, and

More information

The Assumption(s) of Normality

The Assumption(s) of Normality The Assumption(s) of Normality Copyright 2000, 2011, 2016, J. Toby Mordkoff This is very complicated, so I ll provide two versions. At a minimum, you should know the short one. It would be great if you

More information

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER Two hours MATH20802 To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER STATISTICAL METHODS Answer any FOUR of the SIX questions.

More information

Midterm Exam. b. What are the continuously compounded returns for the two stocks?

Midterm Exam. b. What are the continuously compounded returns for the two stocks? University of Washington Fall 004 Department of Economics Eric Zivot Economics 483 Midterm Exam This is a closed book and closed note exam. However, you are allowed one page of notes (double-sided). Answer

More information

ROM SIMULATION Exact Moment Simulation using Random Orthogonal Matrices

ROM SIMULATION Exact Moment Simulation using Random Orthogonal Matrices ROM SIMULATION Exact Moment Simulation using Random Orthogonal Matrices Bachelier Finance Society Meeting Toronto 2010 Henley Business School at Reading Contact Author : d.ledermann@icmacentre.ac.uk Alexander

More information

CS 361: Probability & Statistics

CS 361: Probability & Statistics March 12, 2018 CS 361: Probability & Statistics Inference Binomial likelihood: Example Suppose we have a coin with an unknown probability of heads. We flip the coin 10 times and observe 2 heads. What can

More information

appstats5.notebook September 07, 2016 Chapter 5

appstats5.notebook September 07, 2016 Chapter 5 Chapter 5 Describing Distributions Numerically Chapter 5 Objective: Students will be able to use statistics appropriate to the shape of the data distribution to compare of two or more different data sets.

More information

Institute for the Advancement of University Learning & Department of Statistics

Institute for the Advancement of University Learning & Department of Statistics Institute for the Advancement of University Learning & Department of Statistics Descriptive Statistics for Research (Hilary Term, 00) Lecture 4: Estimation (I.) Overview of Estimation In most studies or

More information

University of New South Wales Semester 1, Economics 4201 and Homework #2 Due on Tuesday 3/29 (20% penalty per day late)

University of New South Wales Semester 1, Economics 4201 and Homework #2 Due on Tuesday 3/29 (20% penalty per day late) University of New South Wales Semester 1, 2011 School of Economics James Morley 1. Autoregressive Processes (15 points) Economics 4201 and 6203 Homework #2 Due on Tuesday 3/29 (20 penalty per day late)

More information

Finding Roots by "Closed" Methods

Finding Roots by Closed Methods Finding Roots by "Closed" Methods One general approach to finding roots is via so-called "closed" methods. Closed methods A closed method is one which starts with an interval, inside of which you know

More information

A Test of the Normality Assumption in the Ordered Probit Model *

A Test of the Normality Assumption in the Ordered Probit Model * A Test of the Normality Assumption in the Ordered Probit Model * Paul A. Johnson Working Paper No. 34 March 1996 * Assistant Professor, Vassar College. I thank Jahyeong Koo, Jim Ziliak and an anonymous

More information

2 Exploring Univariate Data

2 Exploring Univariate Data 2 Exploring Univariate Data A good picture is worth more than a thousand words! Having the data collected we examine them to get a feel for they main messages and any surprising features, before attempting

More information

Regression and Simulation

Regression and Simulation Regression and Simulation This is an introductory R session, so it may go slowly if you have never used R before. Do not be discouraged. A great way to learn a new language like this is to plunge right

More information

2 Control variates. λe λti λe e λt i where R(t) = t Y 1 Y N(t) is the time from the last event to t. L t = e λr(t) e e λt(t) Exercises

2 Control variates. λe λti λe e λt i where R(t) = t Y 1 Y N(t) is the time from the last event to t. L t = e λr(t) e e λt(t) Exercises 96 ChapterVI. Variance Reduction Methods stochastic volatility ISExSoren5.9 Example.5 (compound poisson processes) Let X(t) = Y + + Y N(t) where {N(t)},Y, Y,... are independent, {N(t)} is Poisson(λ) with

More information

A Markov Chain Monte Carlo Approach to Estimate the Risks of Extremely Large Insurance Claims

A Markov Chain Monte Carlo Approach to Estimate the Risks of Extremely Large Insurance Claims International Journal of Business and Economics, 007, Vol. 6, No. 3, 5-36 A Markov Chain Monte Carlo Approach to Estimate the Risks of Extremely Large Insurance Claims Wan-Kai Pang * Department of Applied

More information

Applications of Good s Generalized Diversity Index. A. J. Baczkowski Department of Statistics, University of Leeds Leeds LS2 9JT, UK

Applications of Good s Generalized Diversity Index. A. J. Baczkowski Department of Statistics, University of Leeds Leeds LS2 9JT, UK Applications of Good s Generalized Diversity Index A. J. Baczkowski Department of Statistics, University of Leeds Leeds LS2 9JT, UK Internal Report STAT 98/11 September 1998 Applications of Good s Generalized

More information

Much of what appears here comes from ideas presented in the book:

Much of what appears here comes from ideas presented in the book: Chapter 11 Robust statistical methods Much of what appears here comes from ideas presented in the book: Huber, Peter J. (1981), Robust statistics, John Wiley & Sons (New York; Chichester). There are many

More information

Market Risk Analysis Volume I

Market Risk Analysis Volume I Market Risk Analysis Volume I Quantitative Methods in Finance Carol Alexander John Wiley & Sons, Ltd List of Figures List of Tables List of Examples Foreword Preface to Volume I xiii xvi xvii xix xxiii

More information

Calculating VaR. There are several approaches for calculating the Value at Risk figure. The most popular are the

Calculating VaR. There are several approaches for calculating the Value at Risk figure. The most popular are the VaR Pro and Contra Pro: Easy to calculate and to understand. It is a common language of communication within the organizations as well as outside (e.g. regulators, auditors, shareholders). It is not really

More information

Simulation Wrap-up, Statistics COS 323

Simulation Wrap-up, Statistics COS 323 Simulation Wrap-up, Statistics COS 323 Today Simulation Re-cap Statistics Variance and confidence intervals for simulations Simulation wrap-up FYI: No class or office hours Thursday Simulation wrap-up

More information

4 Reinforcement Learning Basic Algorithms

4 Reinforcement Learning Basic Algorithms Learning in Complex Systems Spring 2011 Lecture Notes Nahum Shimkin 4 Reinforcement Learning Basic Algorithms 4.1 Introduction RL methods essentially deal with the solution of (optimal) control problems

More information

STAT Chapter 6: Sampling Distributions

STAT Chapter 6: Sampling Distributions STAT 515 -- Chapter 6: Sampling Distributions Definition: Parameter = a number that characterizes a population (example: population mean ) it s typically unknown. Statistic = a number that characterizes

More information

μ: ESTIMATES, CONFIDENCE INTERVALS, AND TESTS Business Statistics

μ: ESTIMATES, CONFIDENCE INTERVALS, AND TESTS Business Statistics μ: ESTIMATES, CONFIDENCE INTERVALS, AND TESTS Business Statistics CONTENTS Estimating parameters The sampling distribution Confidence intervals for μ Hypothesis tests for μ The t-distribution Comparison

More information

Key Objectives. Module 2: The Logic of Statistical Inference. Z-scores. SGSB Workshop: Using Statistical Data to Make Decisions

Key Objectives. Module 2: The Logic of Statistical Inference. Z-scores. SGSB Workshop: Using Statistical Data to Make Decisions SGSB Workshop: Using Statistical Data to Make Decisions Module 2: The Logic of Statistical Inference Dr. Tom Ilvento January 2006 Dr. Mugdim Pašić Key Objectives Understand the logic of statistical inference

More information

An Improved Skewness Measure

An Improved Skewness Measure An Improved Skewness Measure Richard A. Groeneveld Professor Emeritus, Department of Statistics Iowa State University ragroeneveld@valley.net Glen Meeden School of Statistics University of Minnesota Minneapolis,

More information

Lecture 9 - Sampling Distributions and the CLT. Mean. Margin of error. Sta102/BME102. February 6, Sample mean ( X ): x i

Lecture 9 - Sampling Distributions and the CLT. Mean. Margin of error. Sta102/BME102. February 6, Sample mean ( X ): x i Lecture 9 - Sampling Distributions and the CLT Sta102/BME102 Colin Rundel February 6, 2015 http:// pewresearch.org/ pubs/ 2191/ young-adults-workers-labor-market-pay-careers-advancement-recession Sta102/BME102

More information

Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty

Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty Extend the ideas of Kan and Zhou paper on Optimal Portfolio Construction under parameter uncertainty George Photiou Lincoln College University of Oxford A dissertation submitted in partial fulfilment for

More information

The following content is provided under a Creative Commons license. Your support

The following content is provided under a Creative Commons license. Your support MITOCW Recitation 6 The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make

More information

STOCHASTIC COST ESTIMATION AND RISK ANALYSIS IN MANAGING SOFTWARE PROJECTS

STOCHASTIC COST ESTIMATION AND RISK ANALYSIS IN MANAGING SOFTWARE PROJECTS STOCHASTIC COST ESTIMATION AND RISK ANALYSIS IN MANAGING SOFTWARE PROJECTS Dr A.M. Connor Software Engineering Research Lab Auckland University of Technology Auckland, New Zealand andrew.connor@aut.ac.nz

More information

Gamma Distribution Fitting

Gamma Distribution Fitting Chapter 552 Gamma Distribution Fitting Introduction This module fits the gamma probability distributions to a complete or censored set of individual or grouped data values. It outputs various statistics

More information

12 The Bootstrap and why it works

12 The Bootstrap and why it works 12 he Bootstrap and why it works For a review of many applications of bootstrap see Efron and ibshirani (1994). For the theory behind the bootstrap see the books by Hall (1992), van der Waart (2000), Lahiri

More information

Consistent estimators for multilevel generalised linear models using an iterated bootstrap

Consistent estimators for multilevel generalised linear models using an iterated bootstrap Multilevel Models Project Working Paper December, 98 Consistent estimators for multilevel generalised linear models using an iterated bootstrap by Harvey Goldstein hgoldstn@ioe.ac.uk Introduction Several

More information

On Some Test Statistics for Testing the Population Skewness and Kurtosis: An Empirical Study

On Some Test Statistics for Testing the Population Skewness and Kurtosis: An Empirical Study Florida International University FIU Digital Commons FIU Electronic Theses and Dissertations University Graduate School 8-26-2016 On Some Test Statistics for Testing the Population Skewness and Kurtosis:

More information

Probability. An intro for calculus students P= Figure 1: A normal integral

Probability. An intro for calculus students P= Figure 1: A normal integral Probability An intro for calculus students.8.6.4.2 P=.87 2 3 4 Figure : A normal integral Suppose we flip a coin 2 times; what is the probability that we get more than 2 heads? Suppose we roll a six-sided

More information

Data screening, transformations: MRC05

Data screening, transformations: MRC05 Dale Berger Data screening, transformations: MRC05 This is a demonstration of data screening and transformations for a regression analysis. Our interest is in predicting current salary from education level

More information

the display, exploration and transformation of the data are demonstrated and biases typically encountered are highlighted.

the display, exploration and transformation of the data are demonstrated and biases typically encountered are highlighted. 1 Insurance data Generalized linear modeling is a methodology for modeling relationships between variables. It generalizes the classical normal linear model, by relaxing some of its restrictive assumptions,

More information

Chapter 5. Continuous Random Variables and Probability Distributions. 5.1 Continuous Random Variables

Chapter 5. Continuous Random Variables and Probability Distributions. 5.1 Continuous Random Variables Chapter 5 Continuous Random Variables and Probability Distributions 5.1 Continuous Random Variables 1 2CHAPTER 5. CONTINUOUS RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS Probability Distributions Probability

More information

Normal Probability Distributions

Normal Probability Distributions Normal Probability Distributions Properties of Normal Distributions The most important probability distribution in statistics is the normal distribution. Normal curve A normal distribution is a continuous

More information

Chapter 3. Numerical Descriptive Measures. Copyright 2016 Pearson Education, Ltd. Chapter 3, Slide 1

Chapter 3. Numerical Descriptive Measures. Copyright 2016 Pearson Education, Ltd. Chapter 3, Slide 1 Chapter 3 Numerical Descriptive Measures Copyright 2016 Pearson Education, Ltd. Chapter 3, Slide 1 Objectives In this chapter, you learn to: Describe the properties of central tendency, variation, and

More information

Section 2.4. Properties of point estimators 135

Section 2.4. Properties of point estimators 135 Section 2.4. Properties of point estimators 135 The fact that S 2 is an estimator of σ 2 for any population distribution is one of the most compelling reasons to use the n 1 in the denominator of the definition

More information

Regression Review and Robust Regression. Slides prepared by Elizabeth Newton (MIT)

Regression Review and Robust Regression. Slides prepared by Elizabeth Newton (MIT) Regression Review and Robust Regression Slides prepared by Elizabeth Newton (MIT) S-Plus Oil City Data Frame Monthly Excess Returns of Oil City Petroleum, Inc. Stocks and the Market SUMMARY: The oilcity

More information

Statistics & Statistical Tests: Assumptions & Conclusions

Statistics & Statistical Tests: Assumptions & Conclusions Degrees of Freedom Statistics & Statistical Tests: Assumptions & Conclusions Kinds of degrees of freedom Kinds of Distributions Kinds of Statistics & assumptions required to perform each Normal Distributions

More information

The method of Maximum Likelihood.

The method of Maximum Likelihood. Maximum Likelihood The method of Maximum Likelihood. In developing the least squares estimator - no mention of probabilities. Minimize the distance between the predicted linear regression and the observed

More information

Sample Size Calculations for Odds Ratio in presence of misclassification (SSCOR Version 1.8, September 2017)

Sample Size Calculations for Odds Ratio in presence of misclassification (SSCOR Version 1.8, September 2017) Sample Size Calculations for Odds Ratio in presence of misclassification (SSCOR Version 1.8, September 2017) 1. Introduction The program SSCOR available for Windows only calculates sample size requirements

More information

European option pricing under parameter uncertainty

European option pricing under parameter uncertainty European option pricing under parameter uncertainty Martin Jönsson (joint work with Samuel Cohen) University of Oxford Workshop on BSDEs, SPDEs and their Applications July 4, 2017 Introduction 2/29 Introduction

More information

Introduction Dickey-Fuller Test Option Pricing Bootstrapping. Simulation Methods. Chapter 13 of Chris Brook s Book.

Introduction Dickey-Fuller Test Option Pricing Bootstrapping. Simulation Methods. Chapter 13 of Chris Brook s Book. Simulation Methods Chapter 13 of Chris Brook s Book Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 April 26, 2017 Christopher

More information

MFE/3F Questions Answer Key

MFE/3F Questions Answer Key MFE/3F Questions Download free full solutions from www.actuarialbrew.com, or purchase a hard copy from www.actexmadriver.com, or www.actuarialbookstore.com. Chapter 1 Put-Call Parity and Replication 1.01

More information

Chapter 5. Statistical inference for Parametric Models

Chapter 5. Statistical inference for Parametric Models Chapter 5. Statistical inference for Parametric Models Outline Overview Parameter estimation Method of moments How good are method of moments estimates? Interval estimation Statistical Inference for Parametric

More information