Tuning bandit algorithms in stochastic environments


Slide 1: Tuning bandit algorithms in stochastic environments
Jean-Yves Audibert, CERTIS - École des Ponts; Rémi Munos, INRIA Futurs Lille; Csaba Szepesvári, University of Alberta
The 18th International Conference on Algorithmic Learning Theory, October 3, 2007, Sendai International Center, Sendai, Japan

Slide 2: Contents
- Bandit problems
- UCB and motivation
- Tuning UCB by using variance estimates
- Concentration of the regret
- Finite horizon, finite regret (PAC-UCB)
- Conclusions

Slide 3: Exploration vs. Exploitation
Two treatments with unknown success probabilities. Goal: find the best treatment while losing the smallest number of patients. Explore or exploit?

Slide 4: Playing Bandits
Payoff is 0 or 1.
Arm 1: $X_{1,1}=0,\ X_{1,2}=1,\ X_{1,3}=0,\ X_{1,4}=0,\ X_{1,5},\ X_{1,6},\ X_{1,7},\ \dots$
Arm 2: $X_{2,1}=1,\ X_{2,2}=1,\ X_{2,3}=0,\ X_{2,4}=1,\ X_{2,5}=1,\ X_{2,6}=1,\ X_{2,7},\ \dots$
(Payoffs shown without a value have not been observed yet.)

Slide 5: Exploration vs. Exploitation: Some Applications
Simple processes:
- Clinical trials
- Job shop scheduling (random jobs)
- Which ad to put on a web page
More complex processes (with memory):
- Optimizing production
- Controlling an inventory
- Optimal investment
- Poker

Slide 6: Bandit Problems: Optimism in the Face of Uncertainty
Introduced by Lai and Robbins (1985). I.i.d. payoffs $X_{1,1}, X_{1,2}, \dots, X_{1,t}, \dots$ and $X_{2,1}, X_{2,2}, \dots, X_{2,t}, \dots$
Principle: the inflated value of an option is the maximum expected reward that looks quite possible given the observations so far. Select the option with the best inflated value.
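To make the principle concrete, here is a minimal Python sketch of the optimistic loop (the function names are ours, not from the talk): keep the observed payoffs of each option and always play the option whose inflated value is currently largest.

```python
def optimistic_play(pull, inflated_value, K, n):
    """Generic optimism-in-the-face-of-uncertainty loop over K options.

    pull(k) draws one payoff of option k; inflated_value(xs, t) maps the
    payoffs xs observed for an option to its inflated value at time t.
    """
    payoffs = [[] for _ in range(K)]       # observations per option
    for t in range(1, n + 1):
        if t <= K:
            k = t - 1                      # initialization: try every option once
        else:
            # select the option with the best inflated value
            k = max(range(K), key=lambda i: inflated_value(payoffs[i], t))
        payoffs[k].append(pull(k))
    return [len(xs) for xs in payoffs]     # T_k(n) for each option k
```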

Slide 7: Some Definitions
Payoff is 0 or 1. Now $t = 11$: $T_1(t-1) = 4$, $T_2(t-1) = 6$, $I_1 = 1$, $I_2 = 2$, ...
Arm 1: $X_{1,1}=0,\ X_{1,2}=1,\ X_{1,3}=0,\ X_{1,4}=0,\ \dots$
Arm 2: $X_{2,1}=1,\ X_{2,2}=1,\ X_{2,3}=0,\ X_{2,4}=1,\ X_{2,5}=1,\ X_{2,6}=1,\ \dots$
Regret: $\hat R_n \stackrel{\mathrm{def}}{=} \sum_{t=1}^n X_{k^*,t} - \sum_{t=1}^n X_{I_t,\,T_{I_t}(t)}$, where $k^*$ is the best arm.
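As a sketch of the bookkeeping behind these definitions (the helper below is hypothetical, not from the paper), the following computes $T_k(n)$ and the regret from the chosen arms $I_t$ and the payoff tables; with the slide-4 data (unobserved payoffs filled in arbitrarily) it reproduces $T_1(10) = 4$ and $T_2(10) = 6$.

```python
def pseudo_history_regret(X, choices):
    """Given payoff tables X[k][s] (s-th payoff of arm k, 0-indexed) and the
    sequence of chosen arms I_t, return (regret, T) where T[k] = T_k(n)."""
    K, n = len(X), len(choices)
    T = [0] * K
    collected = 0.0
    for I_t in choices:
        collected += X[I_t][T[I_t]]        # payoff X_{I_t, T_{I_t}(t)}
        T[I_t] += 1
    best = max(sum(X[k][:n]) for k in range(K))   # sum_t X_{k*, t}
    return best - collected, T

# Slide-4 example after t = 10 rounds ("?" payoffs replaced by made-up values):
X = [[0, 1, 0, 0, 1, 0, 1, 0, 1, 0],
     [1, 1, 0, 1, 1, 1, 1, 1, 1, 1]]
choices = [0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(pseudo_history_regret(X, choices))   # -> (3.0, [4, 6])
```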

Slide 8: Parametric Bandits [Lai & Robbins]
$X_{i,t} \sim p_{i,\theta_i}(\cdot)$, $\theta_i$ unknown, $t = 1, 2, \dots$
Uncertainty set: reasonable values of $\theta$ given the experience so far, $U_{i,t} = \{\theta : p_{i,\theta}(X_{i,1:T_i(t)}) \text{ is "large"}\}$
Inflated values: $Z_{i,t} = \max\{E_\theta[X] : \theta \in U_{i,t}\}$
Rule: $I_t = \arg\max_i Z_{i,t}$

Slide 9: Bounds
Upper bound: Lai and Robbins' rule pulls each suboptimal arm $k$ only $(1+o(1))\,\log n / D(p_k \| p_{k^*})$ times, where $D$ denotes the Kullback-Leibler divergence.
Lower bound: if an algorithm is uniformly good, then it must pull each suboptimal arm at least this often, so the rate is optimal.

Slide 10: UCB1 Algorithm (Auer et al., 2002)
Algorithm UCB1(b):
1. Try all options once.
2. Use option k with the highest index: the empirical mean plus an exploration bonus scaled by the payoff range b (in the standard form $\hat\mu_{k,t} + b\sqrt{\frac{2\log t}{T_k(t-1)}}$).
Regret bound: let $R_n$ be the expected loss due to not selecting the best option up to time $n$. Then $R_n = O\big(\sum_{k:\Delta_k>0} (b^2/\Delta_k)\log n\big)$.
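Plugging a UCB1-style index into `optimistic_play` from the sketch above gives a runnable version. The exact constant in the talk's index is not reproduced here, so the bonus below, the standard $\sqrt{2\log t / s}$ term scaled by the payoff range $b$, is an assumption consistent with the $O(b^2)$ regret discussed on the next slides.

```python
import math
import random

def ucb1_index(xs, t, b=1.0):
    # empirical mean + range-scaled exploration bonus (assumed form)
    s = len(xs)
    return sum(xs) / s + b * math.sqrt(2 * math.log(t) / s)

# e.g. two Bernoulli arms with means 0.4 and 0.6:
pull = lambda k: float(random.random() < (0.4, 0.6)[k])
print(optimistic_play(pull, ucb1_index, K=2, n=2000))  # arm 2 dominates
```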

Slide 11: Problem #1
When $b^2 \gg \sigma^2$, the regret should scale with $\sigma^2$, not with $b^2$!

Slide 12: UCB1-NORMAL
Algorithm UCB1-NORMAL:
1. Try all options once.
2. Use option k with the highest index: $\hat\mu_{k,t} + \sqrt{\frac{16\,\hat\sigma^2_{k,t}\log t}{T_k(t-1)}}$
Regret bound: scales with the variances $\sigma_k^2$ rather than with $b^2$ (see the next slide).

Slide 13: Problem #1 (continued)
The regret of UCB1(b) scales with $O(b^2)$. The regret of UCB1-NORMAL scales with $O(\sigma^2)$, but UCB1-NORMAL assumes normally distributed payoffs.
UCB-Tuned(b): $\hat\mu_{k,t} + \sqrt{\min\big(\tfrac{b^2}{4},\,\hat\sigma^2_{k,t}\big)\,\frac{\log t}{T_k(t-1)}}$
Good experimental results, but no theoretical guarantees.
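The UCB-Tuned index translates directly to code; $b^2/4$ is the largest possible variance of a payoff with range $b$, which is why it caps the empirical variance. A sketch:

```python
import math

def ucb_tuned_index(xs, t, b=1.0):
    s = len(xs)
    mean = sum(xs) / s
    var = sum((x - mean) ** 2 for x in xs) / s   # empirical variance
    # variance-based bonus, capped at the worst case b^2/4
    return mean + math.sqrt(min(b * b / 4, var) * math.log(t) / s)
```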

Slide 14: UCB-V
Algorithm UCB-V(b):
1. Try all options once.
2. Use option k with the highest index: $\hat\mu_{k,t} + \sqrt{\frac{2.4\,\hat\sigma^2_{k,t}\log t}{T_k(t-1)}} + \frac{3b\log t}{T_k(t-1)}$
Regret bound: $E[R_n] = O\big(\sum_{k:\Delta_k>0} \big(\tfrac{\sigma_k^2}{\Delta_k} + 2b\big)\log n\big)$.
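A sketch of the UCB-V(b) index exactly as written above; note that the range $b$ enters only the lower-order correction term, which is what removes the $O(b^2)$ scaling of UCB1(b):

```python
import math

def ucb_v_index(xs, t, b=1.0):
    s = len(xs)
    mean = sum(xs) / s
    var = sum((x - mean) ** 2 for x in xs) / s   # empirical variance
    return (mean
            + math.sqrt(2.4 * var * math.log(t) / s)   # variance term
            + 3 * b * math.log(t) / s)                 # range correction
```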

Slide 15: Proof
The missing bound (hunch.net): with probability at least $1-\delta$,
$|\hat\mu_t - \mu| \le \hat\sigma_t\sqrt{\frac{2\log(3\delta^{-1})}{t}} + \frac{3b\log(3\delta^{-1})}{t}$
Then: bounding the sampling times of suboptimal arms (new bound).
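A small helper evaluating the width of this deviation bound (the function and its name are ours): for a small empirical standard deviation the first term shrinks accordingly, which is where the improvement over purely range-based (Hoeffding-style) bounds comes from.

```python
import math

def empirical_bernstein_width(sigma_hat, b, t, delta):
    """With probability >= 1 - delta, |mu_hat_t - mu| stays below this."""
    L = math.log(3.0 / delta)
    return sigma_hat * math.sqrt(2 * L / t) + 3 * b * L / t

# e.g. b = 1, t = 1000, delta = 0.01: a low-variance arm gets a much
# tighter interval than the range alone would suggest.
print(empirical_bernstein_width(sigma_hat=0.1, b=1.0, t=1000, delta=0.01))
```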

Slide 16: Can we decrease exploration?
Algorithm UCB-V(b, ζ, c):
1. Try all options once.
2. Use option k with the highest index: $\hat\mu_{k,t} + \sqrt{\frac{2\zeta\,\hat\sigma^2_{k,t}\log t}{T_k(t-1)}} + c\,\frac{3b\log t}{T_k(t-1)}$
Theorem: when $\zeta < 1$, the regret is polynomial for some bandit problems; when $c\zeta < 1/6$, the regret is polynomial for some bandit problems.
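The generalized index differs from UCB-V(b) only in the two constants; as a sketch (ζ = 1.2 recovers the factor 2.4 of slide 14):

```python
import math

def ucb_v_general_index(xs, t, b=1.0, zeta=1.2, c=1.0):
    # zeta scales the variance term, c the range correction; the theorem
    # above says shrinking either too far loses logarithmic regret
    s = len(xs)
    mean = sum(xs) / s
    var = sum((x - mean) ** 2 for x in xs) / s
    return (mean
            + math.sqrt(2 * zeta * var * math.log(t) / s)
            + c * 3 * b * math.log(t) / s)
```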

Slide 17: Concentration Bounds
Averages concentrate: $\big|\frac{S_n}{n} - \mu\big| = O\big(\sqrt{\frac{\log(\delta^{-1})}{n}}\big)$
Does the regret of UCB* concentrate in the same way? Is $R_n/E[R_n] \approx 1$ with high probability? What is the risk?

Slide 18: Logarithmic Regret Implies High Risk
Theorem: consider the pseudo-regret $\bar R_n = \sum_{k=1}^K T_k(n)\,\Delta_k$. Then for any $\zeta > 1$ and $z > \gamma\log n$, $P(\bar R_n > z) \le C z^{-\zeta}$: the tail is only polynomial, whereas a Gaussian tail would decay like $P(\bar R_n > z) \le C\exp(-z^2)$.
Illustration: two arms with gap $\Delta = \mu_2 - \mu_1 > 0$. The law of $\bar R_n$ has modes at $O(\log n)$ and at $O(\Delta n)$! The heavy upper mode only appears when the support of the second-best arm's distribution overlaps with that of the optimal arm.
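The two modes can be checked empirically. Below is a minimal Monte Carlo sketch (reusing `ucb_v_index` from the slide-14 snippet; the arm means and run counts are arbitrary choices): most runs end with pseudo-regret near $O(\log n)$, while the rare runs in which the optimal arm is written off early end near $\Delta n$.

```python
import random

def pseudo_regret_one_run(mu, n, b=1.0):
    """One UCB-V run on Bernoulli arms with means mu; returns sum_k T_k(n) Delta_k."""
    K = len(mu)
    xs = [[] for _ in range(K)]
    for t in range(1, n + 1):
        k = t - 1 if t <= K else max(range(K), key=lambda i: ucb_v_index(xs[i], t, b))
        xs[k].append(float(random.random() < mu[k]))
    return sum(len(xs[k]) * (max(mu) - mu[k]) for k in range(K))

# Histogramming these values should show the bimodal law described above.
runs = sorted(pseudo_regret_one_run([0.5, 0.6], n=500) for _ in range(100))
print(runs[50], runs[-1])   # typical run vs. worst run
```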

Slide 19: Finite Horizon: PAC-UCB
Algorithm PAC-UCB(N):
1. Try all options once.
2. Use option k with the highest index: $\hat\mu_{k,t} + \sqrt{\frac{2\,\hat\sigma^2_{k,t} L_t}{T_k(t-1)}} + \frac{3b L_t}{T_k(t-1)}$, where $L_t = \log\big(NK\,(T_k(t-1)+1)\big)$
Theorem: at time N, with probability $1 - 1/N$, the number of suboptimal plays is bounded by $O(\log(KN))$. Good when N is known beforehand.
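A sketch of the PAC-UCB index with the horizon-dependent log factor: the only change from UCB-V is that the exploration terms use $L_t = \log(NK(T_k(t-1)+1))$ instead of $\log t$, so forced exploration dies out once an arm has been sampled enough for the whole horizon.

```python
import math

def pac_ucb_index(xs, N, K, b=1.0):
    s = len(xs)                       # T_k(t-1)
    mean = sum(xs) / s
    var = sum((x - mean) ** 2 for x in xs) / s
    L = math.log(N * K * (s + 1))     # L_t from the slide
    return mean + math.sqrt(2 * var * L / s) + 3 * b * L / s
```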

Slide 20: Conclusions
- Taking the variance into account lessens the dependence on the a priori range bound b.
- Low expected regret implies high risk.
- PAC-UCB: finite regret, known horizon, exponential concentration of the regret.
- Optimal balance? Other algorithms? For greater generality, see the paper!

Slide 21: Thank you! Questions?

Slide 22: References
Optimism in the face of uncertainty: Lai, T. L. and Robbins, H. (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4-22.
UCB1 and more: Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002). Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256.
Audibert, J.-Y., Munos, R., and Szepesvári, Cs. (2007). Tuning bandit algorithms in stochastic environments. ALT 2007.
