Risk-averse Reinforcement Learning for Algorithmic Trading
1 Risk-averse Reinforcement Learning for Algorithmic Trading
Yun Shen¹, Ruihong Huang²,³, Chang Yan², Klaus Obermayer¹
¹ Technische Universität Berlin  ² Humboldt-Universität zu Berlin  ³ LOBSTER team
IEEE CIFEr, London, March 28, 2014
2 Introduction
Transaction cost:
$$\mathrm{TC} = \underbrace{X(P_d - P_0)}_{\text{invest. related}} + \underbrace{\sum_j x_j P_j - \sum_j x_j P_0 + \Big(X - \sum_j x_j\Big)(P_n - P_0)}_{\text{trading related}} + \text{visible}$$
visible: commission fees, taxes, etc.
Task: liquidate a large inventory over a short time horizon.
Data: high-frequency (millisecond) limit orders from NASDAQ.
Method: reinforcement learning (RL) + risk control.
Kissell & Glantz, Opt. Trading Strategies, 2003; Nevmyvaka et al., ICML, 2006.
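As a reading aid, here is a minimal Python sketch of this decomposition; the function name, argument layout, and sign conventions simply transcribe the formula above and are not from the slides.

```python
def transaction_cost(X, p_d, p0, p_n, fills, visible=0.0):
    """Decomposition of TC for liquidating X shares, as on the slide.

    X       -- total shares to liquidate
    p_d     -- decision price P_d
    p0      -- benchmark (arrival) price P_0
    p_n     -- price at the end of the horizon P_n
    fills   -- list of (x_j, P_j): executed quantity and execution price
    visible -- visible costs: commission fees, taxes, etc.
    """
    executed = sum(x for x, _ in fills)
    invest_related = X * (p_d - p0)
    trading_related = (sum(x * p for x, p in fills)
                       - executed * p0
                       + (X - executed) * (p_n - p0))
    return invest_related + trading_related + visible
```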
3 Risk
Uncertain future prices, volumes, etc. Example: the 2010 flash crash.
Standard RL is risk-neutral: it does not take risk into consideration and performs badly when outlier events happen.
Variance as a risk measure: computationally infeasible under non-Gaussian noise, and Gaussian noise is not always the case.
We need new measures of risk and new algorithms.
Nevmyvaka et al., ICML, 2006; Almgren & Chriss, J. of Risk, 2001.
"On May 6, 2010, the prices of many U.S.-based equity products experienced an extraordinarily rapid decline and recovery. That afternoon, major equity indices in both the futures and securities markets, each already down over 4% from their prior-day close, suddenly plummeted a further 5-6% in a matter of minutes before rebounding almost as quickly." — CFTC & SEC Report
4 Markov decision processes
Objective: control the trading-related term $\sum_j x_j P_j - \sum_j x_j P_0 + (X - \sum_j x_j)(P_n - P_0)$ by solving
$$\max_\pi J(\pi, s) := \mathbb{E}[S_T \mid S_1 = s, \pi] = \mathbb{E}\Big[\sum_{t=1}^{T} R(S_t, A_t) \,\Big|\, S_1 = s, \pi\Big]$$
reward function $R : S \times A \times \Omega \to \mathbb{R}$; policy $\pi = [\pi_1, \pi_2, \ldots, \pi_T]$, $\pi_t : S \to A$.
Key assumption (Markov): $P(S_{t+1} \mid \mathcal{F}_t) = P(S_{t+1} \mid S_t, A_t)$, so the objective nests as
$$J(\pi, s) = \mathbb{E}^{\pi_1}_{S_1 = s}\Big[ R(S_1, A_1) + \mathbb{E}^{\pi_2}_{S_2}\big[ R(S_2, A_2) + \cdots + \mathbb{E}^{\pi_T}_{S_T}[R(S_T, A_T)] \cdots \big] \Big]$$
Adding risk: $\max_\pi \mathbb{E}^\pi[S_T] - \lambda \mathbb{V}^\pi[S_T]$, where $\lambda$ controls the risk sensitivity; an example of $\mathbb{V}$ is the standard deviation. This problem is difficult to solve except in the case of Gaussian noise.
See, e.g., Puterman, 1994; Almgren & Chriss, 2000; Mannor & Tsitsiklis, 2011.
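When the transition model is known, the nested form is exactly what finite-horizon dynamic programming exploits. Below is a minimal risk-neutral sketch in Python (tabular, with a stationary model assumed for brevity; the array layout is illustrative, not from the slides):

```python
import numpy as np

def backward_induction(P, R, T):
    """Finite-horizon value iteration for the risk-neutral objective.

    P -- transition tensor, P[s, a, s2] = P(S_{t+1}=s2 | S_t=s, A_t=a)
    R -- expected reward matrix, R[s, a]
    T -- horizon length
    Returns the optimal policy pi[t, s] and values V[t, s] (index 0 unused).
    """
    n_s, n_a = R.shape
    V = np.zeros((T + 2, n_s))           # V[T+1] = 0: terminal values
    pi = np.zeros((T + 1, n_s), dtype=int)
    for t in range(T, 0, -1):
        Q = R + P @ V[t + 1]             # Q[s, a] = R(s,a) + E[V_{t+1}(S')]
        pi[t] = Q.argmax(axis=1)
        V[t] = Q.max(axis=1)
    return pi, V
```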
5 Evaluation function/risk measure
Replace the nested expectations
$$\mathbb{E}^{\pi_1}_{S_1 = s}\Big[ R(S_1, A_1) + \mathbb{E}^{\pi_2}_{S_2}\big[ R(S_2, A_2) + \cdots + \mathbb{E}^{\pi_T}_{S_T}[R(S_T, A_T)] \cdots \big] \Big]$$
by nested risk measures
$$U^{\pi_1}_{S_1 = s}\Big[ R(S_1, A_1) + U^{\pi_2}_{S_2}\big[ R(S_2, A_2) + \cdots + U^{\pi_T}_{S_T}[R(S_T, A_T)] \cdots \big] \Big]$$
where $U(\cdot \mid s, a)$ is a risk measure for all $(s, a)$: monotonicity, translation invariance, concavity/coherency.
Utility-based shortfall: $U_{s,a}(X) = \sup\{ m \in \mathbb{R} : \mathbb{E}_{s,a}[u(X - m)] \geq 0 \}$,
where $u$ is a concave, continuous and strictly increasing function satisfying $u(0) = 0$; concave $u$ ⇒ risk averse.
Shen et al., SIAM J. on Cont. & Opt., 2013; Artzner et al., Math. Finance, 1999; Föllmer & Schied, Finance & Stoch., 2002; Föllmer & Schied, 2004.
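Since $u$ is strictly increasing, $m \mapsto \mathbb{E}_{s,a}[u(X - m)]$ is strictly decreasing, so the shortfall can be computed by bisection. A small sketch estimating it from i.i.d. samples of $X$ (the sampling interface is an assumption, not part of the slides):

```python
import numpy as np

def shortfall(samples, u, tol=1e-8):
    """Estimate U(X) = sup{m : E[u(X - m)] >= 0} from samples of X.

    u must accept NumPy arrays; because u is strictly increasing,
    m -> mean(u(X - m)) is strictly decreasing and the root is unique.
    """
    x = np.asarray(samples, dtype=float)
    lo, hi = x.min(), x.max()            # mean(u(x - lo)) >= u(0) = 0 >= mean(u(x - hi))
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        if u(x - m).mean() >= 0:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)
```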
6 Utility-based shortfall
$$U_{s,a}(X) = \sup\{ m \in \mathbb{R} : \mathbb{E}_{s,a}[u(X - m)] \geq 0 \}$$
We consider
$$u(x) = \begin{cases} \frac{1}{\lambda}\big[(x+1)^{\lambda} - 1\big] & x \geq 0 \\ x & x < 0 \end{cases}$$
$\lambda \in (0, 1)$ controls the degree of risk-averseness; $\lambda = 1$ recovers $U_{s,a}(\cdot) = \mathbb{E}_{s,a}(\cdot)$.
Example: the binary lottery $\{(r_1, p), (r_2, 1-p)\}$ with $r_1 > r_2$ yields the subjective probability $w(p) = \frac{U(p) - r_2}{r_1 - r_2}$ (cf. prospect theory).
[Plots: $u(x)$ vs. $x$; $w(p)$ vs. $p$.]
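A sketch of this particular $u$ and of the lottery value it induces; `lottery_value` solves the shortfall equation by bisection and is my reconstruction, not code from the talk.

```python
import numpy as np

def u(x, lam=0.5):
    """Piecewise utility from the slide: concave for gains, linear for losses.
    lam in (0, 1) sets the risk aversion; lam = 1 recovers the plain mean."""
    x = np.asarray(x, dtype=float)
    pos = np.maximum(x, 0.0)             # avoid (negative)**lam in the unused branch
    return np.where(x >= 0, ((pos + 1.0) ** lam - 1.0) / lam, x)

def lottery_value(r1, r2, p, lam=0.5, tol=1e-10):
    """U of the binary lottery {(r1, p), (r2, 1-p)}, r1 > r2, by bisection:
    U = sup{m : p*u(r1 - m) + (1 - p)*u(r2 - m) >= 0}."""
    lo, hi = r2, r1                      # E[u(X - m)] >= 0 at m = r2, <= 0 at m = r1
    while hi - lo > tol:
        m = 0.5 * (lo + hi)
        if p * u(r1 - m, lam) + (1 - p) * u(r2 - m, lam) >= 0:
            lo = m
        else:
            hi = m
    return 0.5 * (lo + hi)

# Subjective probability as on the slide: w(p) = (U(p) - r2) / (r1 - r2).
```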
7 Risk-averse reinforcement learning
At the $t$-th time point, repeat action $a$ at state $s$ $N$ times to obtain $\{R_i, s'_i\}_{i=1,\ldots,N}$, and apply the iterative update
$$Q_t^{(i+1)}(s, a) = Q_t^{(i)}(s, a) + \frac{1}{i}\, u\Big( R_i + \max_{a'} Q_{t+1}(s'_i, a') - Q_t^{(i)}(s, a) \Big) \qquad (*)$$
Then $\pi_t^{(N)}(s) = \arg\max_{a \in A} Q_t^{(N)}(s, a) \to \pi_t^*$, the optimal policy, as $N \to \infty$.
Shen et al., Neu. Comp., 2014; Dunkel & Weber, Math. Oper. Res., 2010.

  initialize $Q_{T+1}(s, a) = 0$ for all $s \in S$, $a \in A$;
  for $t = T$ to $1$ do
    initialize $Q_t(s, a) = 0$ for all $s \in S$, $a \in A$;
    for each state $s \in S$ and $a \in A$ do
      for $n = 1$ to $N$ do
        execute action $a$ at $s$ to obtain sampled reward $R$ and successor state $s'$;
        update $Q_t(s, a)$ according to (*);
      end for
    end for
  end for

Needs no knowledge of the transition model $P(S_{t+1} \mid S_t, A_t)$, hence data driven; an online algorithm that adapts to new data easily; parallel computing is possible; risk is controlled by $u$, specifically by $\lambda$.
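A tabular Python sketch of this pseudocode with the update (*); the simulator interface `env.sample` is a hypothetical stand-in for "execute action a at s", and `u` is any utility like the one above.

```python
import numpy as np

def risk_averse_q(env, n_states, n_actions, T, N, u):
    """Risk-averse Q-learning following the slide's pseudocode.

    env.sample(t, s, a) -- hypothetical simulator returning (reward R,
                           successor state s2)
    u                   -- concave utility applied to the TD error in (*)
    """
    Q = np.zeros((T + 2, n_states, n_actions))   # Q[T+1] = 0: terminal values
    for t in range(T, 0, -1):
        for s in range(n_states):
            for a in range(n_actions):
                for i in range(1, N + 1):
                    R, s2 = env.sample(t, s, a)
                    td = R + Q[t + 1, s2].max() - Q[t, s, a]
                    Q[t, s, a] += u(td) / i      # update (*), learning rate 1/i
    policy = Q[1:T + 1].argmax(axis=2)           # policy[t-1, s] = argmax_a Q_t(s, a)
    return Q, policy
```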
8 Problem Setting
Task: sell V shares of AMZN on NASDAQ within H minutes.
Data: provided by LOBSTER with two price levels, i.e., only the two best asks and bids. The test period contains the flash crash on May 6, 2010.
Performance evaluation:
$$\text{cost} = 10^4 \times \frac{\text{mid-quote at time } 0 - \text{average execution price}}{\text{mid-quote at time } 0}$$
Risk: standard deviation of costs; 95%-quantile cost ($\text{cost}_{95\%}$).
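A one-function sketch of this cost in basis points; the formula is my reconstruction of the garbled slide, positive when the average sale price falls below the arrival mid-quote.

```python
def cost_bps(mid_quote_at_0, avg_execution_price):
    """Trading cost in basis points relative to the mid-quote at time 0."""
    return 1e4 * (mid_quote_at_0 - avg_execution_price) / mid_quote_at_0
```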
9 MDP formulation
Time: total time horizon H = 10 min; limit orders are submitted to the market at t = nH/T, n = 0, 1, ..., T − 1, giving T decision points.
States: with V the target volume and I the number of inventory units, the inventory state is $i = \lceil vI/V \rceil$; market variables: spread, volume misbalance, signed volume, etc.
Actions: a = submit a sell limit order at price ask − a (unit: US cent) with all remaining shares.
Rewards: the cash inflow resulting from a (partial) execution of the limit order placed at ask − a.
cf. Nevmyvaka et al., ICML, 2006.
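To make the discretization concrete, a small sketch under my reading of the slide; the ceiling in the inventory state and the "best ask minus a" pricing are assumptions.

```python
import math

def inventory_state(v, V, I):
    """Map remaining shares v to one of I inventory units: i = ceil(v * I / V)."""
    return math.ceil(v * I / V)

def limit_order_price(best_ask_cents, a):
    """Action a: place a sell limit order a cents below the best ask,
    for all remaining shares (one plausible reading of 'at price ask - a')."""
    return best_ask_cents - a
```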
10 Results I: Tuning of λ
[Four panels vs. λ, each with a risk-neutral baseline and curves for V=2k, T=5 and V=2k, T=10: average trading cost, standard deviation, 95% quantile cost, and cost at the flash crash.]
11 Results II: Flash Crash
12 Results II: Flash Crash
[Bar chart: trading costs at the flash crash spot, comparing RN/RA (risk-neutral/risk-averse), each with and without the spread variable, for the configurations V1k T5 I5, V1k T10 I10, V2k T5 I5, V2k T10 I10.]
13 Results III: Overall Performance
[Three panels with the same legend and configurations as above: average trading cost, standard deviation, and 95% quantile cost for RN/RA, with and without spread, on V1k T5 I5, V1k T10 I10, V2k T5 I5, V2k T10 I10.]
14 Conclusion and outlook
Conclusion: our novel risk-averse RL significantly reduces the trading cost at the flash crash spot, and markedly lowers risk over the whole test period at the price of a slight increase in average trading cost.
Outlook: market impact; expand the state space and test with various market variables; expand the action space with order volume; other u-functions, even risk-seeking ones?!
Thank you for your patience!