Expectations


Definition
Let X be a discrete rv with set of possible values D and pmf p(x). The expected value or mean value of X, denoted by E(X) or µ_X, is

E(X) = µ_X = Σ_{x∈D} x · p(x)

Example (Problem 30)
A group of individuals who have automobile insurance from a certain company is randomly selected. Let Y be the number of moving violations for which an individual was cited during the last 3 years. The pmf of Y is

y       0      1      2      3
p(y)    0.60   0.25   0.10   0.05

Then the expected number of moving violations for that group is

µ_Y = E(Y) = 0(0.60) + 1(0.25) + 2(0.10) + 3(0.05) = 0.60
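As a quick numerical check (not part of the original notes), a minimal Python sketch computes E(Y) directly from the pmf table above:

```python
# Minimal sketch: expected value of a discrete rv from its pmf (Problem 30).
pmf = {0: 0.60, 1: 0.25, 2: 0.10, 3: 0.05}

# E(Y) = sum over the support of y * p(y)
expected_value = sum(y * p for y, p in pmf.items())
print(expected_value)  # approximately 0.60
```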

Referring to the same pmf,

y       0      1      2      3
p(y)    0.60   0.25   0.10   0.05

assume the group contains 100 individuals in total. Then there are 60 individuals with no moving violations, 25 with 1 moving violation, 10 with 2 moving violations, and 5 with 3 moving violations. The population mean is calculated as

µ = (0·60 + 1·25 + 2·10 + 3·5) / 100 = 0.60

Equivalently,

µ = 0·(60/100) + 1·(25/100) + 2·(10/100) + 3·(5/100) = 0(0.60) + 1(0.25) + 2(0.10) + 3(0.05) = 0.60

The population size is irrelevant if we know the pmf!

Example
Let X be a Bernoulli rv with pmf

p(x) = 1 - p,   x = 0
       p,       x = 1
       0,       x ≠ 0 or 1

Then the expected value of X is

E(X) = 0 · p(0) + 1 · p(1) = p

We see that the expected value of a Bernoulli rv X is just the probability that X takes on the value 1.
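A small simulation sketch, with an arbitrary illustrative value p = 0.3 that is not from the text, shows the sample mean of Bernoulli draws settling near p, as the formula E(X) = p predicts:

```python
import random

# Minimal sketch: the average of many Bernoulli(p) draws approximates E(X) = p.
# p = 0.3 is an arbitrary value chosen only for illustration.
p = 0.3
n = 100_000
draws = [1 if random.random() < p else 0 for _ in range(n)]
print(sum(draws) / n)  # close to 0.3
```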

Example
Consider the card-drawing example again, and assume this time that we have infinitely many cards. Let X = the number of drawings until we get the desired card. If the probability of getting that card on any single draw is α, then the pmf of X is

p(x) = α(1 - α)^(x-1),   x = 1, 2, 3, ...
       0,                otherwise

The expected value of X is

E(X) = Σ_{x∈D} x · p(x) = Σ_{x=1}^∞ x · α(1 - α)^(x-1) = α Σ_{x=1}^∞ [ -d/dα (1 - α)^x ]
     = -α · d/dα [ Σ_{x=1}^∞ (1 - α)^x ] = -α · d/dα [ (1 - α)/α ] = -α · (-1/α²) = 1/α
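A short numerical sketch, using an illustrative α = 0.25 that is not from the text, confirms that the truncated series approaches 1/α:

```python
# Minimal sketch: partial sums of sum_{x>=1} x * alpha * (1 - alpha)**(x - 1)
# approach 1 / alpha. alpha = 0.25 is chosen only for illustration.
alpha = 0.25
partial_sum = sum(x * alpha * (1 - alpha) ** (x - 1) for x in range(1, 1000))
print(partial_sum)  # approximately 4.0
print(1 / alpha)    # 4.0
```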

Example 3.20
Let X be the number of interviews a student has prior to getting a job. The pmf of X is

p(x) = k/x²,   x = 1, 2, 3, ...
       0,      otherwise

where k is chosen so that Σ_{x=1}^∞ (k/x²) = 1. (It can be shown that Σ_{x=1}^∞ (1/x²) < ∞, which implies that such a k exists.) The expected value of X is

µ = E(X) = Σ_{x=1}^∞ x · (k/x²) = k Σ_{x=1}^∞ 1/x = ∞

The expected value is NOT finite! This is an example of a heavy tail: a distribution with a large amount of probability far from µ.
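A brief sketch (with hypothetical truncation points) illustrates the divergence: the truncated means k · Σ_{x≤N} 1/x keep growing as N grows, since the harmonic series diverges. It uses the known value Σ 1/x² = π²/6 to set k.

```python
import math

# Minimal sketch: k = 6 / pi**2 normalizes p(x) = k / x**2, since sum 1/x**2 = pi**2 / 6.
# The truncated expected values k * sum_{x <= N} 1/x grow without bound as N grows.
k = 6 / math.pi ** 2
for n in (10**2, 10**4, 10**6):
    truncated_mean = k * sum(1 / x for x in range(1, n + 1))
    print(n, truncated_mean)  # grows roughly like k * ln(n)
```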

Example (Problem 38)
Let X = the outcome when a fair die is rolled once. If, before the die is rolled, you are offered either 1/3.5 dollars or 1/X dollars, would you accept the guaranteed amount or would you gamble?

x       1     2     3     4     5     6
p(x)    1/6   1/6   1/6   1/6   1/6   1/6
1/x     1     1/2   1/3   1/4   1/5   1/6

The expected dollar amount from gambling is

E(1/X) = Σ_{x=1}^{6} (1/x) · p(x) = 1 · (1/6) + (1/2) · (1/6) + ... + (1/6) · (1/6) = 49/120 ≈ 0.408

which exceeds the guaranteed amount 1/3.5 ≈ 0.286, so in terms of expected value the gamble is the better choice. Note in particular that E(1/X) ≠ 1/E(X) = 1/3.5.
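A minimal sketch with exact fractions reproduces the comparison above:

```python
from fractions import Fraction

# Minimal sketch: exact expected value of 1/X for one roll of a fair six-sided die.
expected = sum(Fraction(1, x) * Fraction(1, 6) for x in range(1, 7))
print(expected)         # 49/120
print(float(expected))  # approximately 0.408
print(1 / 3.5)          # approximately 0.286 (the guaranteed amount)
```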

Proposition
If the rv X has set of possible values D and pmf p(x), then the expected value of any function h(X), denoted by E[h(X)] or µ_{h(X)}, is computed by

E[h(X)] = Σ_{x∈D} h(x) · p(x)

Example 3.23
A computer store has purchased three computers of a certain type at $500 apiece. It will sell them for $1000 apiece. The manufacturer has agreed to repurchase any computers still unsold after a specified period at $200 apiece. Let X denote the number of computers sold, and suppose that p(0) = 0.1, p(1) = 0.2, p(2) = 0.3, p(3) = 0.4. Let h(X) denote the profit associated with selling X units; then

h(X) = revenue - cost = 1000X + 200(3 - X) - 1500 = 800X - 900

The expected profit is

E[h(X)] = h(0)·p(0) + h(1)·p(1) + h(2)·p(2) + h(3)·p(3)
        = (-900)(0.1) + (-100)(0.2) + (700)(0.3) + (1500)(0.4)
        = 700
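A minimal sketch computes E[h(X)] term by term, using the pmf and profit function from Example 3.23:

```python
# Minimal sketch: expected profit E[h(X)] for Example 3.23.
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

def profit(x):
    """Profit h(x) = 1000x + 200(3 - x) - 1500 = 800x - 900."""
    return 1000 * x + 200 * (3 - x) - 1500

expected_profit = sum(profit(x) * p for x, p in pmf.items())
print(expected_profit)  # approximately 700
```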

Proposition
E(aX + b) = a·E(X) + b (or, using alternative notation, µ_{aX+b} = a·µ_X + b).

e.g. for the previous example, E(X) = 0(0.1) + 1(0.2) + 2(0.3) + 3(0.4) = 2, so

E[h(X)] = E(800X - 900) = 800·E(X) - 900 = 800(2) - 900 = 700

Corollary
1. For any constant a, E(aX) = a·E(X).
2. For any constant b, E(X + b) = E(X) + b.

Definition
Let X have pmf p(x) and expected value µ. Then the variance of X, denoted by V(X) or σ²_X, or just σ², is

V(X) = Σ_{x∈D} (x - µ)² · p(x) = E[(X - µ)²]

The standard deviation (SD) of X is

σ_X = √(σ²_X)

Example
For the previous example, the pmf is

x       0     1     2     3
p(x)    0.1   0.2   0.3   0.4

with µ = E(X) = 2, so the variance of X is

V(X) = σ² = Σ_{x=0}^{3} (x - 2)² · p(x)
     = (0 - 2)²(0.1) + (1 - 2)²(0.2) + (2 - 2)²(0.3) + (3 - 2)²(0.4)
     = 0.4 + 0.2 + 0 + 0.4 = 1
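A minimal sketch computes V(X) straight from the definition:

```python
# Minimal sketch: variance from the definition V(X) = sum (x - mu)^2 * p(x).
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
mu = sum(x * p for x, p in pmf.items())                    # approximately 2.0
variance = sum((x - mu) ** 2 * p for x, p in pmf.items())
print(mu, variance)  # approximately 2.0 and 1.0
```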

Recall that the sample variance s² satisfies

s² = S_xx / (n - 1) = [ Σ x_i² - (Σ x_i)²/n ] / (n - 1)

There is an analogous shortcut formula for V(X).

Proposition
V(X) = σ² = [ Σ_{x∈D} x² · p(x) ] - µ² = E(X²) - [E(X)]²

e.g. for the previous example, the pmf is

x       0     1     2     3
p(x)    0.1   0.2   0.3   0.4

Then

V(X) = E(X²) - [E(X)]² = [0²(0.1) + 1²(0.2) + 2²(0.3) + 3²(0.4)] - (2)² = 5 - 4 = 1
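The same answer via the shortcut formula, as a minimal sketch:

```python
# Minimal sketch: the shortcut formula V(X) = E(X^2) - [E(X)]^2 gives the same result.
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
e_x = sum(x * p for x, p in pmf.items())        # E(X),   approximately 2.0
e_x2 = sum(x ** 2 * p for x, p in pmf.items())  # E(X^2), approximately 5.0
print(e_x2 - e_x ** 2)                          # approximately 1.0
```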

Proposition If h(x ) is a function of a rv X, then V [h(x )] = σ 2 h(x ) = D {h(x) E[h(X )]} 2 p(x) = E[h(X ) 2 ] {E[h(X )]} 2 If h(x ) is linear, i.e. h(x ) = ax + b for some nonrandom constant a and b, then V (ax + b) = σ 2 ax +b = a2 σ 2 X and σ ax +b = a σ X In particular, σ ax = a σ X, σ X +b = σ X

Example 3.23 continued
For the computer-store example above (X = the number of computers sold, with p(0) = 0.1, p(1) = 0.2, p(2) = 0.3, p(3) = 0.4, and profit h(X) = 800X - 900), the variance of h(X) is

V[h(X)] = V(800X - 900) = 800² · V(X) = 640,000 · (1) = 640,000

and the SD is σ_{h(X)} = √V[h(X)] = 800.
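A minimal sketch checks V[h(X)] two ways: directly from the definition applied to h(X), and via the rule V(aX + b) = a² · V(X):

```python
# Minimal sketch: variance of the profit h(X) = 800X - 900, computed two ways.
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}

def profit(x):
    return 800 * x - 900

mean_x = sum(x * p for x, p in pmf.items())
mean_profit = sum(profit(x) * p for x, p in pmf.items())                      # 700
var_direct = sum((profit(x) - mean_profit) ** 2 * p for x, p in pmf.items())  # definition
var_x = sum((x - mean_x) ** 2 * p for x, p in pmf.items())                    # V(X) = 1
print(var_direct, 800 ** 2 * var_x)  # both approximately 640000
print(var_direct ** 0.5)             # SD approximately 800
```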