Ellipsoid Method

- ellipsoid method
- convergence proof
- inequality constraints
- feasibility problems

Prof. S. Boyd, EE364b, Stanford University
Ellipsoid method

- developed by Shor, Nemirovsky, Yudin in 1970s
- used in 1979 by Khachian to show polynomial solvability of LPs
- each step requires cutting-plane or subgradient evaluation
- modest storage ($O(n^2)$)
- modest computation per step ($O(n^2)$), via analytical formula
- efficient in theory; slow but steady in practice
Motivation

In cutting-plane methods:

- serious computation is needed to find the next query point (typically $O(n^2 m)$, with a not-small constant)
- the localization polyhedron grows in complexity as the algorithm progresses (we can, however, prune constraints to keep $m$ proportional to $n$, e.g., $m = 4n$)

The ellipsoid method addresses both issues while retaining theoretical efficiency.
Ellipsoid algorithm for minimizing a convex function

Idea: localize $x^\star$ in an ellipsoid instead of a polyhedron.

1. at iteration $k$ we know $x^\star \in \mathcal{E}^{(k)}$
2. set $x^{(k+1)} := \mathrm{center}(\mathcal{E}^{(k)})$; evaluate $g^{(k+1)} \in \partial f(x^{(k+1)})$ ($g^{(k+1)} = \nabla f(x^{(k+1)})$ if $f$ is differentiable)
3. hence we know $x^\star \in \mathcal{E}^{(k)} \cap \{z \mid g^{(k+1)T}(z - x^{(k+1)}) \le 0\}$ (a half-ellipsoid)
4. set $\mathcal{E}^{(k+1)} :=$ minimum volume ellipsoid covering $\mathcal{E}^{(k)} \cap \{z \mid g^{(k+1)T}(z - x^{(k+1)}) \le 0\}$
[Figure: one update, showing $\mathcal{E}^{(k)}$, the cut direction $g^{(k+1)}$ at $x^{(k+1)}$, and the new ellipsoid $\mathcal{E}^{(k+1)}$]

Compared to cutting-plane methods:

- the localization set doesn't grow more complicated
- it is easy to compute the query point
- but we add unnecessary points in step 4
Properties of ellipsoid method

- reduces to bisection for $n = 1$
- simple formula for $\mathcal{E}^{(k+1)}$ given $\mathcal{E}^{(k)}$, $g^{(k+1)}$
- $\mathcal{E}^{(k+1)}$ can be larger than $\mathcal{E}^{(k)}$ in diameter (maximum semi-axis length), but is always smaller in volume:
$$\mathrm{vol}(\mathcal{E}^{(k+1)}) < e^{-\frac{1}{2n}}\,\mathrm{vol}(\mathcal{E}^{(k)})$$
(the volume reduction factor degrades rapidly with $n$, compared to CG or MVE cutting-plane methods)
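This volume factor can be verified from the update formulas given below. Writing the update as $P^+ = \delta\,(P - \sigma P\tilde g\tilde g^T P)$ with $\delta = n^2/(n^2-1)$, $\sigma = 2/(n+1)$, and $\tilde g^T P \tilde g = 1$, a sketch of the computation (standard, but not spelled out on the slides):

$$\frac{\mathrm{vol}(\mathcal{E}^{(k+1)})}{\mathrm{vol}(\mathcal{E}^{(k)})}
= \sqrt{\frac{\det P^+}{\det P}}
= \delta^{n/2}\sqrt{1-\sigma}
= \left(\frac{n^2}{n^2-1}\right)^{n/2}\left(\frac{n-1}{n+1}\right)^{1/2}
< e^{-\frac{1}{2n}},$$

since $\det(P - \sigma P\tilde g\tilde g^T P) = \det P\,\det(I - \sigma\tilde g\tilde g^T P) = (1-\sigma)\det P$ (the matrix $\tilde g\tilde g^T P$ has rank one, with nonzero eigenvalue $\tilde g^T P\tilde g = 1$).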
Example

[Figure: ellipsoid method iterations, showing $x^{(0)}$, $x^{(1)}$, $x^{(2)}$]

[Figure: continued, showing $x^{(3)}$, $x^{(4)}$, $x^{(5)}$]
Updating the ellipsoid

$$\mathcal{E}(x, P) = \{ z \mid (z - x)^T P^{-1} (z - x) \le 1 \}$$

[Figure: ellipsoid $\mathcal{E}$ with center $x$ and cut direction $g$, and updated ellipsoid $\mathcal{E}^+$ with center $x^+$]
(For $n > 1$) the minimum volume ellipsoid containing the half-ellipsoid

$$\mathcal{E} \cap \{ z \mid g^T (z - x) \le 0 \}$$

is given by

$$x^+ = x - \frac{1}{n+1} P \tilde g, \qquad
P^+ = \frac{n^2}{n^2 - 1}\left( P - \frac{2}{n+1}\, P \tilde g \tilde g^T P \right)$$

where $\tilde g = \left(1/\sqrt{g^T P g}\right) g$.
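A minimal NumPy sketch of this update (the function name is mine; assumes $n > 1$ and $g \ne 0$):

```python
import numpy as np

def ellipsoid_update(x, P, g):
    """One minimum-volume ellipsoid update for the neutral cut
    {z | g^T (z - x) <= 0}, for n > 1, following the formulas above."""
    n = x.size
    Pg = P @ (g / np.sqrt(g @ P @ g))   # P g~, with g~ = g / sqrt(g^T P g)
    x_new = x - Pg / (n + 1)
    P_new = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return x_new, P_new
```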
Simple stopping criterion

$$f(x^\star) \ge f(x^{(k)}) + g^{(k)T}(x^\star - x^{(k)})
\ge f(x^{(k)}) + \inf_{z \in \mathcal{E}^{(k)}} g^{(k)T}(z - x^{(k)})
= f(x^{(k)}) - \sqrt{g^{(k)T} P^{(k)} g^{(k)}}$$

The second inequality holds since $x^\star \in \mathcal{E}^{(k)}$; the last equality uses the fact that a linear function $g^T z$ attains its minimum over $\mathcal{E}(x, P)$ at $x - Pg/\sqrt{g^T P g}$.

Simple stopping criterion:
$$\sqrt{g^{(k)T} P^{(k)} g^{(k)}} \le \epsilon \;\Longrightarrow\; f(x^{(k)}) - f(x^\star) \le \epsilon$$
Basic ellipsoid algorithm

Ellipsoid described as $\mathcal{E}(x, P) = \{z \mid (z - x)^T P^{-1}(z - x) \le 1\}$.

given ellipsoid $\mathcal{E}(x, P)$ containing $x^\star$, accuracy $\epsilon > 0$
repeat
1. evaluate $g \in \partial f(x)$
2. if $\sqrt{g^T P g} \le \epsilon$, return($x$)
3. update ellipsoid:
   3a. $\tilde g := \left(1/\sqrt{g^T P g}\right) g$
   3b. $x := x - \frac{1}{n+1} P \tilde g$
   3c. $P := \frac{n^2}{n^2 - 1}\left( P - \frac{2}{n+1}\, P \tilde g \tilde g^T P \right)$
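A compact, self-contained sketch of this loop (the oracle interface `subgrad(x)` returning $(f(x), g)$ is an assumption of mine, not from the slides; steps 3a-3c match `ellipsoid_update` above):

```python
import numpy as np

def ellipsoid_method(subgrad, x0, P0, eps=1e-6, max_iter=5000):
    """Basic ellipsoid method sketch. `subgrad(x)` returns (f(x), g) with
    g a subgradient of f at x; E(x0, P0) must contain a minimizer."""
    x = np.array(x0, dtype=float)
    P = np.array(P0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        _, g = subgrad(x)                      # step 1
        lam = np.sqrt(g @ P @ g)
        if lam <= eps:                         # step 2: f(x) - f* <= eps
            return x
        Pg = P @ (g / lam)                     # P g~, g~ = g / sqrt(g^T P g)
        x = x - Pg / (n + 1)                   # step 3b
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))  # step 3c
    return x
```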
Interpretation

- change coordinates so the uncertainty is isotropic (the same in all directions), i.e., $\mathcal{E}$ is the unit ball
- take a subgradient step with fixed length $1/(n+1)$
- Shor calls the ellipsoid method the "gradient method with space dilation in the direction of the gradient" (which, strangely enough, didn't catch on)
Example

PWL function $f(x) = \max_{i=1,\dots,m}(a_i^T x + b_i)$, with $n = 20$.

[Figure: $f(x^{(k)}) - p^\star$ and the bound $\sqrt{g^{(k)T} P^{(k)} g^{(k)}}$ versus iteration $k$]
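A hypothetical instance of this experiment, reusing the `ellipsoid_method` sketch above (the random data, the value of $m$, and the initial ball radius are all illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 20, 100                       # n from the slide; m chosen for illustration
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def subgrad(x):
    i = np.argmax(A @ x + b)         # any active piece yields a subgradient
    return A[i] @ x + b[i], A[i]

# initial ellipsoid: ball of radius 5 centered at the origin
x = ellipsoid_method(subgrad, x0=np.zeros(n), P0=25.0 * np.eye(n))
```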
[Figure: $f^{(k)}_{\mathrm{best}} - f^\star$ versus iteration $k$]
Improvements

- keep track of the best upper and lower bounds,
$$u_k = \min_{i=1,\dots,k} f(x^{(i)}), \qquad
l_k = \max_{i=1,\dots,k}\left( f(x^{(i)}) - \sqrt{g^{(i)T} P^{(i)} g^{(i)}} \right),$$
and stop when $u_k - l_k \le \epsilon$ (see the sketch after this list)
- can propagate the Cholesky factor of $P$ (avoids the problem of $P \not\succ 0$ due to numerical roundoff)
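A sketch of the bound bookkeeping, as a variant of the earlier loop (names are mine; this version returns the best iterate rather than the last):

```python
import numpy as np

def ellipsoid_method_bounds(subgrad, x0, P0, eps=1e-6, max_iter=5000):
    """Ellipsoid method with best upper/lower bound tracking (sketch)."""
    x = np.array(x0, dtype=float)
    P = np.array(P0, dtype=float)
    n, u, l = x.size, np.inf, -np.inf
    best = x.copy()
    for _ in range(max_iter):
        f, g = subgrad(x)
        lam = np.sqrt(g @ P @ g)
        if f < u:
            u, best = f, x.copy()   # u_k = min_i f(x^(i))
        l = max(l, f - lam)         # l_k = max_i (f(x^(i)) - sqrt(g^T P g))
        if u - l <= eps:            # certified eps-suboptimality
            return best
        Pg = P @ (g / lam)
        x = x - Pg / (n + 1)
        P = (n**2 / (n**2 - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(Pg, Pg))
    return best
```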
[Figure: upper bound $U_k$ and lower bound $L_k$ versus iteration $k$, bracketing $f^\star$]
Proof of convergence

Assumptions:

- $f$ is Lipschitz: $|f(y) - f(x)| \le G\|y - x\|$
- $\mathcal{E}^{(0)}$ is a ball with radius $R$

Suppose $f(x^{(i)}) > f^\star + \epsilon$ for $i = 0,\dots,k$. Then
$$f(x) \le f^\star + \epsilon \;\Longrightarrow\; x \in \mathcal{E}^{(k)},$$
since at iteration $i$ we only discard points with $f(x) \ge f(x^{(i)})$.
From the Lipschitz condition,
$$\|x - x^\star\| \le \epsilon/G \;\Longrightarrow\; f(x) \le f^\star + \epsilon \;\Longrightarrow\; x \in \mathcal{E}^{(k)},$$
so $B = \{x \mid \|x - x^\star\| \le \epsilon/G\} \subseteq \mathcal{E}^{(k)}$.

Hence $\mathrm{vol}(B) \le \mathrm{vol}(\mathcal{E}^{(k)})$, so
$$\alpha_n (\epsilon/G)^n \le e^{-k/(2n)}\,\mathrm{vol}(\mathcal{E}^{(0)}) = e^{-k/(2n)} \alpha_n R^n$$
($\alpha_n$ is the volume of the unit ball in $\mathbf{R}^n$); therefore $k \le 2n^2 \log(RG/\epsilon)$.
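Spelling out the last step (taking logarithms of the volume inequality and solving for $k$):

$$\alpha_n (\epsilon/G)^n \le e^{-k/(2n)} \alpha_n R^n
\;\Longrightarrow\;
n \log(\epsilon/G) \le -\frac{k}{2n} + n\log R
\;\Longrightarrow\;
k \le 2n^2 \log\frac{RG}{\epsilon}.$$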
[Figure: $\mathcal{E}^{(0)}$, $\mathcal{E}^{(k)}$, iterate $x^{(k)}$, and the ball $B = \{x \mid \|x - x^\star\| \le \epsilon/G\}$ of points with $f(x) \le f^\star + \epsilon$]

Conclusion: for $k > 2n^2 \log(RG/\epsilon)$,
$$\min_{i=0,\dots,k} f(x^{(i)}) \le f^\star + \epsilon$$
Interpretation of complexity

- since $x^\star \in \mathcal{E}^{(0)} = \{x \mid \|x - x^{(0)}\| \le R\}$, our prior knowledge of $f^\star$ is
$$f^\star \in [f(x^{(0)}) - GR,\; f(x^{(0)})]$$
- our prior uncertainty in $f^\star$ is $GR$
- after $k$ iterations our knowledge of $f^\star$ is
$$f^\star \in \left[\min_{i=0,\dots,k} f(x^{(i)}) - \epsilon,\; \min_{i=0,\dots,k} f(x^{(i)})\right]$$
- posterior uncertainty in $f^\star$ is $\epsilon$
- iterations required:
$$2n^2 \log \frac{RG}{\epsilon} = 2n^2 \log \frac{\text{prior uncertainty}}{\text{posterior uncertainty}}$$
- efficiency: $0.72/n^2$ bits per gradient evaluation
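The efficiency figure is the iteration bound re-expressed in bits: reducing the uncertainty by a factor of two costs $2n^2 \log 2 \approx 1.39 n^2$ iterations, so each gradient evaluation yields

$$\frac{1}{2n^2 \log 2} \approx \frac{0.72}{n^2} \ \text{bits}.$$

For example, with $n = 10$, localizing $f^\star$ to within one part in $2^{10}$ of the prior uncertainty takes about $2 \cdot 10^2 \cdot 10 \log 2 \approx 1386$ iterations.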
Deep cut ellipsoid method

The minimum volume ellipsoid containing the ellipsoid intersected with the halfspace,
$$\mathcal{E} \cap \{ z \mid g^T(z - x) + h \le 0 \}$$
with $h \ge 0$, is given by
$$x^+ = x - \frac{1 + \alpha n}{n + 1} P \tilde g, \qquad
P^+ = \frac{n^2(1 - \alpha^2)}{n^2 - 1}\left( P - \frac{2(1 + \alpha n)}{(n + 1)(1 + \alpha)}\, P \tilde g \tilde g^T P \right)$$
where
$$\tilde g = \frac{g}{\sqrt{g^T P g}}, \qquad \alpha = \frac{h}{\sqrt{g^T P g}}$$
(if $\alpha > 1$, the intersection is empty).
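A sketch of this update in NumPy (the function name is mine; assumes $n > 1$; $h = 0$ recovers the neutral-cut update):

```python
import numpy as np

def deep_cut_update(x, P, g, h):
    """Minimum-volume ellipsoid containing E(x, P) intersected with
    {z | g^T (z - x) + h <= 0}, h >= 0 (deep cut)."""
    n = x.size
    lam = np.sqrt(g @ P @ g)
    alpha = h / lam
    if alpha > 1.0:
        raise ValueError("cut excludes the entire ellipsoid: infeasible")
    Pg = P @ (g / lam)
    x_new = x - ((1 + alpha * n) / (n + 1)) * Pg
    coef = (n**2 * (1 - alpha**2)) / (n**2 - 1.0)
    P_new = coef * (P - (2 * (1 + alpha * n)) /
                    ((n + 1) * (1 + alpha)) * np.outer(Pg, Pg))
    return x_new, P_new
```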
Ellipsoid method with deep objective cuts

[Figure: $f^{(k)}_{\mathrm{best}} - f^\star$ versus iteration $k$, comparing deep cuts with shallow cuts]
Inequality constrained problems

minimize $f_0(x)$
subject to $f_i(x) \le 0$, $i = 1,\dots,m$

- if $x^{(k)}$ is feasible, update the ellipsoid with the objective cut
$$g_0^T(z - x^{(k)}) + f_0(x^{(k)}) - f^{(k)}_{\mathrm{best}} \le 0, \qquad g_0 \in \partial f_0(x^{(k)})$$
($f^{(k)}_{\mathrm{best}}$ is the best objective value of the feasible iterates so far)
- if $x^{(k)}$ is infeasible, say $f_j(x^{(k)}) > 0$, update the ellipsoid with the feasibility cut
$$g_j^T(z - x^{(k)}) + f_j(x^{(k)}) \le 0, \qquad g_j \in \partial f_j(x^{(k)})$$
Stopping criterion

If $x^{(k)}$ is feasible, we have a lower bound on $p^\star$ as before:
$$p^\star \ge f_0(x^{(k)}) - \sqrt{g_0^{(k)T} P^{(k)} g_0^{(k)}}$$

If $x^{(k)}$ is infeasible, we have for all $x \in \mathcal{E}^{(k)}$
$$f_j(x) \ge f_j(x^{(k)}) + g_j^{(k)T}(x - x^{(k)})
\ge f_j(x^{(k)}) + \inf_{z \in \mathcal{E}^{(k)}} g_j^{(k)T}(z - x^{(k)})
= f_j(x^{(k)}) - \sqrt{g_j^{(k)T} P^{(k)} g_j^{(k)}}$$
Hence, the problem is infeasible if for some $j$,
$$f_j(x^{(k)}) - \sqrt{g_j^{(k)T} P^{(k)} g_j^{(k)}} > 0$$

Stopping criteria:

- if $x^{(k)}$ is feasible and $\sqrt{g_0^{(k)T} P^{(k)} g_0^{(k)}} \le \epsilon$ ($x^{(k)}$ is $\epsilon$-suboptimal)
- if $f_j(x^{(k)}) - \sqrt{g_j^{(k)T} P^{(k)} g_j^{(k)}} > 0$ for some $j$ (the problem is infeasible)
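One iteration's cut selection and termination test might look like the following sketch (the oracle interface and names are assumptions of mine; the returned $(g, h)$ pair feeds the `deep_cut_update` sketch above, and the caller maintains `f_best` as the best feasible objective seen so far):

```python
import numpy as np

def constrained_cut(x, P, f0, g0, f_best, fvals, gvals, eps):
    """Choose the next cut for the inequality-constrained ellipsoid method,
    or detect termination. fvals[j], gvals[j] are f_j(x) and a subgradient
    g_j at x. Returns (g, h, status)."""
    for fj, gj in zip(fvals, gvals):
        if fj > 0.0:                               # x infeasible
            if fj - np.sqrt(gj @ P @ gj) > 0.0:
                return None, None, "infeasible"    # no feasible point in E
            return gj, fj, "cut"                   # feasibility (deep) cut
    if np.sqrt(g0 @ P @ g0) <= eps:                # x feasible, eps-suboptimal
        return None, None, "optimal"
    return g0, max(f0 - f_best, 0.0), "cut"        # objective cut
```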
Epigraph ellipsoid method

Use the deep cut ellipsoid method to solve the problem with variables $(x, t)$:

minimize $t$
subject to $f_0(x) \le t$, $f_i(x) \le 0$, $i = 1,\dots,m$

- when $(x^{(k)}, t^{(k)})$ is infeasible for the epigraph problem, use a standard deep feasibility cut:
  - if $f_0(x^{(k)}) > t^{(k)}$, use the cut $t \ge g_0^T(x - x^{(k)}) + f_0(x^{(k)})$
  - if $f_j(x^{(k)}) > 0$, use the cut $g_j^T(x - x^{(k)}) + f_j(x^{(k)}) \le 0$
- when $(x^{(k)}, t^{(k)})$ is feasible for the epigraph problem, use the cut $t \le f_0(x^{(k)})$
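A sketch of selecting these cuts in the lifted variables $(x, t)$, expressed in the deep-cut form $\{w \mid g^T(w - w_0) + h \le 0\}$ with $w_0 = (x, t)$ (interface and names are mine):

```python
import numpy as np

def epigraph_cut(x, t, f0, g0, fvals, gvals):
    """Select a deep cut in the (x, t) variables for the epigraph problem
    (sketch). Returns (g, h) for a cut {(z, s) | g^T((z, s)-(x, t)) + h <= 0}."""
    for fj, gj in zip(fvals, gvals):
        if fj > 0.0:                       # violated constraint: feasibility cut
            return np.append(gj, 0.0), fj  # g_j^T (z - x) + f_j(x) <= 0
    if f0 > t:                             # epigraph constraint violated
        return np.append(g0, -1.0), f0 - t # f0(x) + g0^T (z - x) - s <= 0
    g = np.zeros(x.size + 1)               # feasible: objective cut s <= f0(x)
    g[-1] = 1.0
    return g, t - f0
```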
Epigraph ellipsoid example

[Figure: $f^{(k)}_{\mathrm{best}} - f^\star$ versus iteration $k$, comparing the epigraph method with non-epigraph deep cuts]
More information