Multidimensional Monotonicity Discovery with mbart

Rob McCulloch, Arizona State

Collaborations with: Hugh Chipman (Acadia), Edward George (Wharton, University of Pennsylvania), Tom Shively (UT Austin)

October 23, 2018, Northern Arizona University

Plan

I) Review BART
II) Introduce Monotone BART: mbart
III) Monotonicity Discovery with mbart

Beginning with a Single Tree Model

[Figure: f(x) plotted against (x1, x2). Three different views of a bivariate tree.]

Bayesian CART: Just add a prior π(M, T)

Bayesian CART Model Search (Chipman, George, McCulloch 1998)

π(M, T) = π(M | T) π(T)

π(M | T): (µ_1, µ_2, ..., µ_b) ~ N_b(0, τ² I)

π(T): a stochastic process to generate the tree skeleton, plus uniform priors on splitting variables and splitting rules.

A closed form for π(T | y) facilitates MCMC stochastic search for promising trees.
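As a small aside, the tree-skeleton process can be sketched in a few lines of R. This is my illustration, assuming the standard depth-based split probability α(1 + d)^(−β) with the commonly used defaults α = 0.95, β = 2; it is not the authors' code.

# A minimal sketch of the tree-skeleton prior: a node at depth d splits
# with probability alpha * (1 + d)^(-beta); alpha = 0.95, beta = 2 are
# assumed default hyperparameter values.
sim_tree_depths <- function(d = 0, alpha = 0.95, beta = 2) {
  if (runif(1) < alpha * (1 + d)^(-beta)) {
    # node splits: recurse into left and right children
    c(sim_tree_depths(d + 1, alpha, beta),
      sim_tree_depths(d + 1, alpha, beta))
  } else {
    d  # terminal (bottom) node: record its depth
  }
}
set.seed(1)
table(sim_tree_depths())  # depths of the bottom nodes of one prior draw

Draws from this process are usually very shallow trees, which is exactly the "keep each tree small" behavior the prior is designed to encourage.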

Note: Although we are just talking about the classic decision tree setup, our approach is very different from the usual CART-type approach pioneered by Breiman (in statistics). In CART you just have an algorithm for fitting a tree to training data. We have a full generative model, and our prior plays a key role.

Moving on to BART

Bayesian Additive Regression Trees (Chipman, George, McCulloch 2010)

The BART ensemble model:

Y = g(x; T_1, M_1) + g(x; T_2, M_2) + ... + g(x; T_m, M_m) + σz,  z ~ N(0, 1)

Each (T_i, M_i) identifies a single tree.

E(Y | x, T_1, M_1, ..., T_m, M_m) is the sum of m bottom-node µ's, one from each tree.

The number of trees m can be much larger than the sample size n.

g(x; T_1, M_1), g(x; T_2, M_2), ..., g(x; T_m, M_m) is a highly redundant over-complete basis with many, many parameters.

Complete the Model with a Regularization Prior

π((T_1, M_1), (T_2, M_2), ..., (T_m, M_m), σ)

π applies the Bayesian CART prior to each (T_j, M_j) independently so that:

Each T is small.
Each µ is small.
σ will be compatible with the observed variation of y.

The observed variation of y is used to guide the choice of the hyperparameters for the µ and σ priors.

π is a boosting/regularization prior in that it keeps the contribution of each g(x; T_i, M_i) small, explaining only a small portion of the fit.
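As a concrete illustration (not from the slides), here is a minimal sketch of fitting this model with the wbart function from the CRAN package BART. The simulated data and all settings shown are my own choices for illustration.

# A hedged sketch: fit the BART model with its default regularization
# prior on toy data; m = 200 trees.
library(BART)
set.seed(99)
n <- 500
x <- matrix(runif(2 * n), n, 2)
y <- x[, 1] * x[, 2] + 0.1 * rnorm(n)  # toy monotone surface plus noise
fit <- wbart(x.train = x, y.train = y, ntree = 200)

The defaults encode the prior above: each tree is kept shallow, each bottom-node µ is shrunk toward zero, and the σ prior is centered using the observed variation of y.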

Connections to Other Modeling Ideas

Y = g(x; T_1, M_1) + ... + g(x; T_m, M_m) + σz, plus π((T_1, M_1), ..., (T_m, M_m), σ)

Bayesian Nonparametrics: lots of parameters (to make the model flexible); a strong prior to shrink towards simple structure (regularization). BART shrinks towards additive models with some interaction.

Dynamic Random Basis Elements: g(x; T_1, M_1), ..., g(x; T_m, M_m) are dimensionally adaptive.

Boosting: the fit becomes the cumulative effort of many weak learners.

A Sketch of the BART MCMC Algorithm

Y = g(x; T_1, M_1) + ... + g(x; T_m, M_m) + σz, plus π((T_1, M_1), ..., (T_m, M_m), σ)

The outer loop is a simple Gibbs sampler:

(T_i, M_i) | all other (T_j, M_j), and σ
σ | (T_1, M_1, ..., T_m, M_m)

To draw (T_i, M_i) above, subtract the contributions of the other trees from both sides to get a simple one-tree model. We integrate out M to draw T and then draw M | T.
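The "subtract the other trees" step is easy to see in code. This toy sketch is my illustration of the partial-residual trick, not the authors' implementation: the current fit of every tree is kept in a matrix, and tree j faces a one-tree model for its partial residual.

# tree_fits[i, j] holds g(x_i; T_j, M_j) for the current MCMC state.
set.seed(2)
m <- 5; n <- 10
tree_fits <- matrix(rnorm(n * m, sd = 0.1), n, m)
y <- rnorm(n)
j <- 3
# Subtract the other trees' contributions: tree j now sees a simple
# one-tree model for the partial residual r_j.
r_j <- y - rowSums(tree_fits[, -j, drop = FALSE])
# ... draw (T_j, M_j) given r_j and sigma, then refresh tree_fits[, j]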

For the draw of T we use a Metropolis-Hastings-within-Gibbs step. Our proposal moves around tree space by proposing local modifications, such as the birth-death step:

birth => propose a more complex tree
death => propose a simpler tree

... as the MCMC runs, each tree in the sum will grow and shrink, swapping fit amongst them.

Build up the fit by adding up tiny bits of fit...

Using the MCMC Output to Draw Inference

Each iteration d results in a draw from the posterior of f:

f̂_d(·) = g(·; T_1d, M_1d) + ... + g(·; T_md, M_md)

To estimate f(x) we simply average the f̂_d(·) draws at x.

Posterior uncertainty is captured by the variation of the f̂_d(x); e.g., a 95% HPD region is estimated by the middle 95% of values.

We can do the same with functionals of f.
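Assuming the wbart-style fit object sketched earlier, whose $yhat.train component is an ndpost-by-n matrix of f̂_d(x) draws, these posterior summaries are one-liners:

# Each row of fit$yhat.train is one draw of (f(x_1), ..., f(x_n)).
fhat_mean <- colMeans(fit$yhat.train)                   # posterior mean of f(x)
fhat_low  <- apply(fit$yhat.train, 2, quantile, 0.025)  # pointwise 95% band
fhat_high <- apply(fit$yhat.train, 2, quantile, 0.975)

The same pattern works for any functional of f: compute it on each draw, then summarize the resulting sample.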

Out-of-Sample Prediction

Predictive comparisons on 42 data sets. Data from Kim, Loh, Shih and Chaudhuri (2006) (thanks Wei-Yin Loh!). p = 3 to 65, n = 100 to 7,000.

For each data set: 20 random splits into 5/6 train and 1/6 test; 5-fold cross-validation on train to pick hyperparameters (except BART-default!).

This gives 20*42 = 840 out-of-sample predictions; for each prediction, divide the RMSE of each method by the smallest across methods.

Each boxplot represents 840 relative RMSEs for a method: Random Forests, Neural Net, Boosting, BART-cv, BART-default. A value of 1.2 means you are 20% worse than the best.

BART-cv is best. BART-default (use the default prior) does amazingly well!!
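The relative-RMSE summary behind the boxplots is easy to reproduce in outline. The matrix below is simulated stand-in data, not the study's results; method names are only labels.

# rmse[i, k]: RMSE of method k on out-of-sample prediction i
# (840 rows in the study; 20 made-up rows here).
set.seed(7)
methods <- c("rf", "nnet", "boost", "bart.cv", "bart.def")
rmse <- matrix(rexp(20 * 5, rate = 2) + 1, 20, 5,
               dimnames = list(NULL, methods))
rel <- rmse / apply(rmse, 1, min)  # divide each row by its best method
boxplot(as.data.frame(rel))        # one box per method; 1.2 = 20% worse than best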

Automatic Uncertainty Quantification

A simple simulated 1-dimensional example.

[Figure: 95% pointwise posterior intervals for BART (left) and mbart (right), showing the posterior mean and the true f against x.]

Note: mbart, in the right plot, is to be introduced next.

Part II. Monotone BART: mbart

Multidimensional Monotone BART (Chipman, George, McCulloch, Shively 2018)

Idea: Approximate multivariate monotone functions by the sum of many single-tree models, each of which is monotonic.

An Example of a Monotonic Tree

[Figure: f(x) plotted against (x1, x2). Three different views of a bivariate monotonic tree.]

What makes this single tree monotonic?

[Figure: the bivariate monotonic tree surface f(x) over (x1, x2), repeated for reference.]

A function g is said to be monotonic in x_i if for any δ > 0,

g(x_1, x_2, ..., x_i + δ, x_{i+1}, ..., x_k; T, M) ≥ g(x_1, x_2, ..., x_i, x_{i+1}, ..., x_k; T, M).

For simplicity, and without loss of generality, let's restrict attention to monotone nondecreasing functions.
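One way to make the definition concrete is to probe a fitted function g at random points and check that bumping coordinate i by δ never decreases the output. This checker is my illustration, not part of mbart; the function names and settings are assumptions.

# Spot-check monotonicity of g in coordinate i by finite differences.
is_mono_up <- function(g, i, k = 2, delta = 0.05, npts = 1000) {
  x <- matrix(runif(npts * k), npts, k)
  x_shift <- x
  x_shift[, i] <- x_shift[, i] + delta   # bump coordinate i by delta
  all(g(x_shift) >= g(x))
}
g <- function(x) x[, 1] * x[, 2]                # monotone up in both coordinates
is_mono_up(g, i = 1)                            # TRUE
is_mono_up(function(x) sin(6 * x[, 1]), i = 1)  # FALSE: sin is not monotone here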

To implement this monotonicity in tree language, we simply constrain the mean level of a node to be greater than those of its below neighbors and less than those of its above neighbors.

[Figure: the (x1, x2) rectangle partitioned into numbered bottom-node regions.]

Node 7 is disjoint from node 4.
Node 10 is a below neighbor of node 13.
Node 7 is an above neighbor of node 13.

The mean level of node 13 must be greater than those of nodes 10 and 12 and less than that of node 7.

The mbart Prior

Recall the BART parameter θ = ((T_1, M_1), (T_2, M_2), ..., (T_m, M_m), σ).

Let S = {θ : every tree is monotonic in a desired subset of the x_i's}.

To impose the monotonicity we simply truncate the BART prior π(θ) to the set S:

π*(θ) ∝ π(θ) I_S(θ)

where I_S(θ) is 1 if every tree in θ is monotonic.

A New BART MCMC: The Christmas Tree Algorithm

π((T_1, M_1), (T_2, M_2), ..., (T_m, M_m), σ | y)

Bayesian backfitting again: iteratively sample each (T_j, M_j) given (y, σ) and the other (T_j, M_j)'s.

Each (T^0, M^0) → (T^1, M^1) update is sampled as follows:

Denote the move as (T^0, M^0_Common, M^0_Old) → (T^1, M^1_Common, M^1_New).

Propose T* via birth, death, etc.
If M-H with π(T, M | y) accepts (T*, M^0_Common):
Set (T^1, M^1_Common) = (T*, M^0_Common).
Sample M^1_New from π(M_New | T^1, M^1_Common, y).

Only M^0_Old → M^1_New needs to be updated.

Works for both BART and mbart.

[Figure: an example move in which M^0_Common = (µ_1, µ_2), the µ's shared by the old and proposed trees.]

Old BART algorithm: integrate out all the µ's and then play around with the tree.

Christmas Tree: condition on all the µ's not affected by the proposed tree move.

Example: Product of two x's

Let's consider a very simple simulated monotone example:

Y = x_1 x_2 + ε,  x_i ~ Uniform(0, 1).

[Figure: plot of the true function f(x_1, x_2) = x_1 x_2.]

First we try a single (just one tree), unconstrained tree model.

[Figure: graph of the single unconstrained tree fit.]

The fit is not terrible, but there are some aspects of the fit which violate monotonicity.
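For a feel of what a single unconstrained tree does on this data, here is a sketch using rpart's CART. Note the hedge: the slide's single tree is Bayesian, so this is only an analogue; the data simulation is mine.

# Fit one unconstrained tree to simulated product data with CART (rpart),
# just to see the blocky single-tree surface.
library(rpart)
set.seed(14)
n <- 500
dat <- data.frame(x1 = runif(n), x2 = runif(n))
dat$y <- dat$x1 * dat$x2 + 0.05 * rnorm(n)
tree <- rpart(y ~ x1 + x2, data = dat)
pred <- predict(tree, dat)  # piecewise-constant fit; nothing enforces monotonicity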

Here is the graph of the fit with the monotone constraint:

[Figure: graph of the constrained single-tree fit.]

We see that our fit is monotonic, and more representative of the true f.

Here is the unconstrained BART fit:

[Figure: graph of the unconstrained BART fit.]

Much better (of course), but not monotone!

And, finally, the constrained BART fit:

[Figure: graph of the constrained BART (mbart) fit.]

Not bad! The same method works with any number of x's!

A 5-Dimensional Example

Y = x_1 x_2 + x_3 x_4 + x_5 + ε,  ε ~ N(0, σ²),  x_i ~ Uniform(0, 1).

For various values of σ, we simulated 5,000 observations.
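A sketch of the simulation setup, assuming the reconstructed f(x) = x_1 x_2 + x_3 x_4 + x_5 (the exact form was garbled in transcription):

# Simulate the 5-dimensional example for one value of sigma.
set.seed(5)
n <- 5000
sigma <- 0.5
x <- matrix(runif(5 * n), n, 5)
f <- x[, 1] * x[, 2] + x[, 3] * x[, 4] + x[, 5]  # monotone up in every x_i
y <- f + sigma * rnorm(n)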

[Figure: boxplots of RMSE improvement over unconstrained BART, pairing bart and mbart at each noise level σ = 0.2, 0.5, 0.7, 1.0.]

Part III. Discovering Monotonicity with mbart

Suppose we don't know if f(x) is monotone up, monotone down, or even monotone at all.

Of course, a simple strategy would be to simply compare the fits from BART and mbart.

Good news: we can do much better than this!

As we'll now see, mbart can be deployed to simultaneously estimate all the monotone components of f.

With this strategy, monotonicity can be discovered rather than imposed!

The Monotone Decomposition of a Function

To begin simply, suppose x is one-dimensional and f is of bounded variation. Any such f can be uniquely written (up to an additive constant) as the sum of a monotone up function and a monotone down function,

f(x) = f_up(x) + f_down(x),

where:

when f(x) is increasing, f_up(x) increases at the same rate and is flat otherwise;
when f(x) is decreasing, f_down(x) decreases at the same rate and is flat otherwise.
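On a grid, this decomposition is just a split of the increments of f into their positive and negative parts. A minimal sketch (my illustration, using sin as the example function):

# Monotone decomposition of f on a grid: accumulate the positive
# increments into f_up and the negative increments into f_down.
xs <- seq(0, 2 * pi, length.out = 400)
fx <- sin(xs)
d <- diff(fx)
f_up   <- c(0, cumsum(pmax(d, 0)))  # flat wherever f decreases
f_down <- c(0, cumsum(pmin(d, 0)))  # flat wherever f increases
max(abs(fx - fx[1] - (f_up + f_down)))  # ~0: f = const + f_up + f_down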

The Discovery Strategy with mbart

Key Idea: To discover the monotone decomposition of f, we simply treat f(x) as a two-dimensional function in R²,

f(x) = f(x, x) = f_up(x) + f_down(x).

Letting x_1 = x_2 = x be duplicate copies of x, we apply mbart to estimate f(x_1, x_2), constrained to be monotone up in the x_1 direction and constrained to be monotone down in the x_2 direction.

Let's look at some illuminating one-dimensional examples.
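In code the trick is only a data-duplication step. The mbart call below is hypothetical: I am assuming a function mbart() with a per-column monotonicity argument, since the actual interface of the authors' software is not shown in these slides.

# Duplicate x and ask for opposite constraints in the two copies.
set.seed(3)
n <- 200
x <- runif(n)
y <- x^3 + 0.1 * rnorm(n)
x.train <- cbind(x1 = x, x2 = x)  # two identical copies of x
# Hypothetical interface: +1 = monotone up, -1 = monotone down.
# fit <- mbart(x.train, y, mono = c(1, -1))
# fhat_up and fhat_down would then be the partial fits in x1 and x2.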

Example: Suppose Y = x³ + ε.

[Figure: left panel, BART and mbart fits; right panel, mbartd with the mbart f_up, mbart f_down, and overall mbartd fits.]

Note that f̂_down ≈ 0 (the red in the right plot), as we would expect when f is monotone up.

Note: mbart looks nicer than BART, which is not restricted!

As the sample size is increased from 200 to 1,000, f̂_down gets even flatter.

[Figure: BART and mbart fits (left); mbartd with f_up, f_down, and the overall fit (right).]

This suggests consistent estimation of the monotone components!!

Example: Suppose Y = x² + ε.

[Figure: BART and mbart fits (left); mbartd with f_up, f_down, and the overall fit (right).]

On the left, BART is good, but simple mbart is not. On the right, f̂_up and f̂_down are spot on. And mbartd = f̂_up + f̂_down seems even better than BART!

Example: Suppose Y = sin(x) + ε.

[Figure: BART and mbart fits (left); mbartd with f_up, f_down, and the overall fit (right).]

BART is good, but simple mbart reveals nothing. f̂_up and f̂_down have discovered the monotone decomposition. And mbartd = f̂_up + f̂_down is great too.

To extend this approach to multidimensional x, we simply duplicate each and every component of x!!!

Discovering Monotonicity: Simple House Price Data

Let's look at a very simple example where we relate y = house price to three characteristics of the house.

> head(x)
     nbhd size brick
[1,]
[2,]
[3,]
[4,]
[5,]
[6,]

> dim(x)
[1] 128   3

> summary(x)
      nbhd            size           brick
 Min.   :1.000   Min.   :1.450   Min.   :
 1st Qu.:        1st Qu.:        1st Qu.:
 Median :2.000   Median :2.000   Median :
 Mean   :1.961   Mean   :2.001   Mean   :
 3rd Qu.:        3rd Qu.:        3rd Qu.:
 Max.   :3.000   Max.   :2.590   Max.   :

> summary(y)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.

y: thousands of dollars. x: three neighborhoods, thousands of square feet, brick or not.

Call:
lm(formula = price ~ nbhd + size + brick, data = hdat)

Residuals:
    Min      1Q  Median      3Q     Max

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)
nbhd2                                             *
nbhd3                                    < 2e-16 ***
size                                        e-13 ***
brickyes                                    e-12 ***
---
Residual standard error: 12.5 on 123 degrees of freedom
Multiple R-squared:       , Adjusted R-squared:
F-statistic:       on 4 and 123 DF,  p-value: < 2.2e-16

If the linear model is correct, we are monotone up in all three variables.

Remark: For the linear model we have to dummy up nbhd, but for BART and mbart we can simply leave it as an ordered numerical categorical variable.
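A sketch of the corresponding R call on simulated stand-in data (the actual hdat values were lost in transcription, so everything below is illustrative); treating nbhd as a factor is what makes lm() create the two dummies:

# Stand-in for the house data: lm() needs nbhd as a factor (dummies),
# while BART/mbart can take it as an ordered numeric code.
set.seed(12)
n <- 128
hdat <- data.frame(
  nbhd  = factor(sample(1:3, n, replace = TRUE)),
  size  = runif(n, 1.45, 2.59),
  brick = factor(sample(c("No", "Yes"), n, replace = TRUE))
)
hdat$price <- 100 + 10 * as.numeric(hdat$nbhd) +
  40 * hdat$size + 15 * (hdat$brick == "Yes") + rnorm(n, sd = 12.5)
summary(lm(price ~ nbhd + size + brick, data = hdat))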

Just using x = size of the house, y = price appears to be marginally increasing in size (f̂_down ≈ 0).

[Figure: left panel, BART, mbart, mbartd, and linear fits of y on size; right panel, the mbartd fit of y on (x_up, x_down), showing the monotone up and monotone down components with the linear fit for reference.]

mbart and mbartd seem much better than BART.

Using x = (nbhd, size, brick), here are the relationships between the fitted values from the various models.

[Figure: pairwise scatterplots of y and the fitted values from bart, mbart, mbartd, f_up, and f_down.]

Note the high correlation between mbart, mbartd, and f̂_up.

x axis: mbartd = f̂_up + f̂_down. y axis: red: f̂_up, green: f̂_down.

[Figure: f_up and f_down plotted against the overall mbartd fit.]

mbartd ≈ f̂_up suggests f is multivariate monotonic!!!

Let's now look at the effect of size conditionally on the six possible values of (nbhd, brick).

[Figure: conditional effect of size on price under BART, mbart, and mbartd.]

mbart and mbartd look very similar!! The conditionally monotone effect of size is becoming clearer!

And finally, the effect of size conditionally on the six possible values of (nbhd, brick) via f̂_up and f̂_down.

[Figure: conditional effect of size on price via the mbart f_up and mbartd f_down components.]

f̂_up and mbartd look very similar!! Price is clearly conditionally monotone up in all three variables!

By simultaneously estimating f̂_up + f̂_down, we have discovered monotonicity without any imposed assumptions!!!

Concluding Remarks

mbartd = f̂_up + f̂_down provides a new assumption-free approach for the discovery of the monotone components of f in multidimensional settings.

Discovering such regions of monotonicity may be of scientific interest in real applications.

We have used informal variable selection to identify the monotone components here. More formal variable selection can be used in higher-dimensional settings.

As a doubly adaptive shape-constrained regularization approach:

mbartd will adapt to mbart when monotonicity is present,
mbartd will adapt to BART when monotonicity is absent,
mbartd will be at least as good as, and maybe better than, the best of mbart and BART in general.

Concluding Remarks

The fully Bayesian nature of BART greatly facilitates extensions such as mbart, mbartd, and many others.

Despite its many compelling successes in practice, theoretical frequentist support for BART is only now beginning to appear. For example, Rockova and van der Pas (2017), "Posterior Concentration for Bayesian Regression Trees and Their Ensembles," recently obtained the first theoretical results for Bayesian CART and BART, showing near-minimax posterior concentration when p > n for classes of Hölder continuous functions.

The monotone BART paper is available on arXiv. Software for mbart is available at

Thank You!
