Technical Report Doc ID: TR-1-2009, 14-April-2009 (Last revised: 02-June-2009)


The homogeneous self-dual model and algorithm for linear optimization

Author: Erling D. Andersen

In this white paper we present the homogeneous self-dual interior-point method, which forms the basis for several commercial optimization software packages such as MOSEK.

1 Introduction

The linear optimization problem

    min c^T x
    s.t. Ax = b,
         x >= 0,                                  (1)

may have an optimal solution, be primal infeasible, or be dual infeasible for a particular set of data c in R^n, b in R^m, A in R^(m x n). In fact, the problem can be both primal and dual infeasible for some data, where (1) is said to be dual infeasible if the dual problem

    max b^T y
    s.t. A^T y + s = c,
         s >= 0,                                  (2)

corresponding to (1) is infeasible. The vector s contains the so-called dual slacks.

2 The homogeneous self-dual model

Most methods for solving (1) assume that the problem has an optimal solution; this is in particular true for interior-point methods. To overcome this limitation it has been suggested to solve the homogeneous self-dual model

    min 0
    s.t.  Ax - b tau      = 0,
         -A^T y + c tau  >= 0,
          b^T y - c^T x  >= 0,
          x >= 0, tau >= 0,                       (3)

instead of (1). Clearly, (3) is a homogeneous LP, and it is self-dual, which essentially follows from the fact that the constraints form a skew-symmetric system. The interpretation of (3) is that tau is a homogenizing variable and the constraints represent primal feasibility, dual feasibility, and reversed weak duality. The homogeneous model (3) was first studied by Goldman and Tucker [2] in 1956, and they proved that (3) always has a nontrivial solution (x*, y*, tau*) satisfying

    x*_j s*_j = 0, x*_j + s*_j > 0 for all j,   tau* kappa* = 0, tau* + kappa* > 0,      (4)
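The self-duality of (3) rests on the constraint system being skew-symmetric in the variables (y, x, tau). The following minimal numpy sketch (the data (A, b, c) is arbitrary and purely illustrative) assembles this coefficient matrix and checks M^T = -M:

```python
import numpy as np

# Hypothetical small data set (m = 2, n = 3); any (A, b, c) works.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
b = rng.standard_normal(2)
c = rng.standard_normal(3)
m, n = A.shape

# Coefficient matrix of the constraints of (3), acting on (y, x, tau).
M = np.block([
    [np.zeros((m, m)), A,                -b.reshape(m, 1)],
    [-A.T,             np.zeros((n, n)),  c.reshape(n, 1)],
    [b.reshape(1, m), -c.reshape(1, n),   np.zeros((1, 1))],
])

# Self-duality of (3) follows from skew-symmetry: M^T == -M.
print("skew-symmetric:", np.allclose(M.T, -M))
```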

where s* := c tau* - A^T y* >= 0 and kappa* := b^T y* - c^T x* >= 0. A solution to (3) satisfying condition (4) is said to be a strictly complementary solution. Moreover, Goldman and Tucker showed that if (x*, tau*, y*, s*, kappa*) is any strictly complementary solution, then exactly one of the two following situations occurs:

- tau* > 0 if and only if (1) has an optimal solution. In this case (x*, y*, s*)/tau* is an optimal primal-dual solution to (1).
- kappa* > 0 if and only if (1) is primal or dual infeasible. If in this case b^T y* > 0 (c^T x* < 0), then (1) is primal (dual) infeasible.

The conclusion is that a strictly complementary solution to (3) provides all the information required: in the case tau* > 0, an optimal primal-dual solution to (1) is trivially given by (x, y, s) = (x*, y*, s*)/tau*; otherwise, the problem is primal or dual infeasible. Therefore, the main algorithmic idea is to compute a strictly complementary solution to (3) instead of solving (1) directly.

3 The homogeneous algorithm

Ye, Todd, and Mizuno [6] suggested solving (3) by solving the problem

    min n^0 z
    s.t.  Ax - b tau - b' z            = 0,
         -A^T y + c tau + c' z        >= 0,
          b^T y - c^T x + d' z        >= 0,
          b'^T y - c'^T x - d' tau     = -n^0,
          x >= 0, tau >= 0,                       (5)

where

    b' := A x^0 - b tau^0,  c' := -c tau^0 + A^T y^0 + s^0,  d' := c^T x^0 - b^T y^0 + kappa^0,
    n^0 := (x^0)^T s^0 + tau^0 kappa^0,

and (x^0, tau^0, y^0, s^0, kappa^0) = (e, 1, 0, e, 1) (e is the n-vector of all ones). It can be proved that the problem (5) always has an optimal solution. Moreover, the optimal value is identical to zero, and it is easy to verify that if (x*, tau*, y*, z*) is an optimal strictly complementary solution to (5), then (x*, tau*, y*) is a strictly complementary solution to (3). Hence, (5) can be solved using any method that generates an optimal strictly complementary solution, because the problem always has one. Note that by construction (x, tau, y, z) = (x^0, tau^0, y^0, 1) is an interior feasible solution to (5). This implies that the problem (1) can be solved by most feasible interior-point algorithms.
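It is easy to check numerically that (x, tau, y, z) = (x^0, tau^0, y^0, 1) is interior feasible in (5). The sketch below uses arbitrary illustrative data; the signs of the perturbation terms b', c', d' follow the definitions stated with (5):

```python
import numpy as np

# Hypothetical data; any (A, b, c) illustrates the point.
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
b = rng.standard_normal(2)
c = rng.standard_normal(3)
m, n = A.shape

# Standard starting point (x0, tau0, y0, s0, kappa0) = (e, 1, 0, e, 1).
x0, tau0, y0, s0, kappa0 = np.ones(n), 1.0, np.zeros(m), np.ones(n), 1.0

# Perturbation data of (5).
b_bar = A @ x0 - b * tau0
c_bar = -c * tau0 + A.T @ y0 + s0
d_bar = c @ x0 - b @ y0 + kappa0
n0 = x0 @ s0 + tau0 * kappa0          # equals n + 1 for this starting point

# Evaluate the constraints of (5) at (x, tau, y, z) = (x0, tau0, y0, 1).
x, tau, y, z = x0, tau0, y0, 1.0
eq1 = A @ x - b * tau - b_bar * z                 # should be 0
slack2 = -A.T @ y + c * tau + c_bar * z           # should equal s0 > 0
slack3 = b @ y - c @ x + d_bar * z                # should equal kappa0 > 0
eq4 = b_bar @ y - c_bar @ x - d_bar * tau         # should equal -n0

print("interior feasible in (5):",
      np.allclose(eq1, 0) and np.all(slack2 > 0) and slack3 > 0
      and np.isclose(eq4, -n0))
```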
Xu, Hung, and Ye [4] suggest an alternative solution method, which is also an interior-point algorithm but is specially adapted to the problem (3). The so-called homogeneous algorithm can be stated as follows:

1. Choose (x^0, tau^0, y^0, s^0, kappa^0) such that (x^0, tau^0, s^0, kappa^0) > 0. Choose eps_f, eps_g > 0 and gamma in (0, 1), and let eta := 1 - gamma.
2. k := 0.
3. Compute:

       r_p^k := b tau^k - A x^k,
       r_d^k := c tau^k - A^T y^k - s^k,
       r_g^k := kappa^k + c^T x^k - b^T y^k,
       mu^k := ((x^k)^T s^k + tau^k kappa^k) / (n + 1).

4. If ||(r_p^k; r_d^k; r_g^k)|| <= eps_f and mu^k <= eps_g, then terminate.

5. Solve the linear equations

       A d_x - b d_tau = eta r_p^k,
       A^T d_y + d_s - c d_tau = eta r_d^k,
       -c^T d_x + b^T d_y - d_kappa = eta r_g^k,
       S^k d_x + X^k d_s = -X^k s^k + gamma mu^k e,
       kappa^k d_tau + tau^k d_kappa = -tau^k kappa^k + gamma mu^k

   for (d_x, d_tau, d_y, d_s, d_kappa), where X^k := diag(x^k) and S^k := diag(s^k).
6. For some theta in (0, 1), let alpha^k be the optimal objective value of

       max theta alpha
       s.t. (x^k; tau^k; s^k; kappa^k) + alpha (d_x; d_tau; d_s; d_kappa) >= 0,
            alpha <= 1/theta.

7. (x^(k+1); tau^(k+1); y^(k+1); s^(k+1); kappa^(k+1)) := (x^k; tau^k; y^k; s^k; kappa^k) + alpha^k (d_x; d_tau; d_y; d_s; d_kappa).
8. k := k + 1.
9. Go to step 3.

The following facts can be proved about the algorithm:

    r_p^(k+1) = (1 - (1 - gamma) alpha^k) r_p^k,
    r_d^(k+1) = (1 - (1 - gamma) alpha^k) r_d^k,
    r_g^(k+1) = (1 - (1 - gamma) alpha^k) r_g^k,                                          (6)
    (x^(k+1))^T s^(k+1) + tau^(k+1) kappa^(k+1) = (1 - (1 - gamma) alpha^k) ((x^k)^T s^k + tau^k kappa^k),

which shows that the primal residuals (r_p), the dual residuals (r_d), the gap residual (r_g), and the complementarity gap (x^T s + tau kappa) are all reduced strictly whenever alpha^k > 0, and at the same rate. This shows that the sequence (x^k, tau^k, y^k, s^k, kappa^k) generated by the algorithm converges towards an optimal solution to (3), and the termination criterion in step 4 is ultimately satisfied. In principle, the initial point and the stepsize alpha^k should be chosen such that

    min_j (x_j^k s_j^k, tau^k kappa^k) >= beta mu^k,  for k = 0, 1, ...

is satisfied for some beta in (0, 1), because this guarantees that (x^k, tau^k, y^k, s^k, kappa^k) converges towards a strictly complementary solution. Finally, it is possible to prove that the algorithm has complexity O(n^3.5 L), given an appropriate choice of the starting point and the algorithmic parameters.

4 Termination

Note that (6) implies that r_p^k, r_d^k, r_g^k, and (x^k)^T s^k + tau^k kappa^k all converge towards zero at exactly the same rate. This implies that feasibility and optimality are reached at the same time; therefore, if the algorithm is stopped prematurely, the solution will be neither feasible nor optimal. Moreover, relaxing eps_g without relaxing eps_f is not likely to have much effect. This can be seen by making the reasonable assumptions that

    ||(r_p^0; r_d^0; r_g^0)|| ~ mu^0  and  eps_g ~ eps_f.                                 (7)
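Steps 1-9 above can be sketched end-to-end in a few dozen lines of numpy. This is a dense, unoptimized illustration, not MOSEK's implementation: the Newton system of step 5 is assembled and solved as one dense matrix (a sparse factorization of a reduced system would be used in practice), and the step size of step 6 is computed by a plain ratio test. The demo LP at the bottom is hypothetical.

```python
import numpy as np

def homogeneous_algorithm(A, b, c, gamma=0.5, theta=0.9,
                          eps_f=1e-8, eps_g=1e-8, max_iter=500):
    """Illustrative sketch of the homogeneous algorithm for
    min c'x s.t. Ax = b, x >= 0 (not MOSEK's implementation)."""
    m, n = A.shape
    eta = 1.0 - gamma
    # Standard starting point (e, 1, 0, e, 1).
    x, tau, y, s, kappa = np.ones(n), 1.0, np.zeros(m), np.ones(n), 1.0

    for _ in range(max_iter):
        # Step 3: residuals and complementarity measure.
        r_p = b * tau - A @ x
        r_d = c * tau - A.T @ y - s
        r_g = kappa + c @ x - b @ y
        mu = (x @ s + tau * kappa) / (n + 1)
        # Step 4: termination.
        if np.linalg.norm(np.concatenate([r_p, r_d, [r_g]])) <= eps_f and mu <= eps_g:
            break

        # Step 5: assemble and solve the Newton system densely.
        N = 2 * n + m + 2                 # unknowns (d_x, d_tau, d_y, d_s, d_kappa)
        ix, it = slice(0, n), n
        iy = slice(n + 1, n + 1 + m)
        is_ = slice(n + 1 + m, n + 1 + m + n)
        ik = N - 1
        K, rhs = np.zeros((N, N)), np.zeros(N)
        K[0:m, ix], K[0:m, it], rhs[0:m] = A, -b, eta * r_p      # A d_x - b d_tau
        r1 = slice(m, m + n)                                     # A'd_y + d_s - c d_tau
        K[r1, iy], K[r1, is_], K[r1, it], rhs[r1] = A.T, np.eye(n), -c, eta * r_d
        q = m + n                                                # -c'd_x + b'd_y - d_kappa
        K[q, ix], K[q, iy], K[q, ik], rhs[q] = -c, b, -1.0, eta * r_g
        r2 = slice(q + 1, q + 1 + n)                             # S d_x + X d_s
        K[r2, ix], K[r2, is_], rhs[r2] = np.diag(s), np.diag(x), -x * s + gamma * mu
        K[N - 1, it], K[N - 1, ik] = kappa, tau                  # kappa d_tau + tau d_kappa
        rhs[N - 1] = -tau * kappa + gamma * mu
        d = np.linalg.solve(K, rhs)
        d_x, d_tau, d_y, d_s, d_kappa = d[ix], d[it], d[iy], d[is_], d[ik]

        # Step 6: damped ratio test keeping (x, tau, s, kappa) strictly positive.
        pos = np.concatenate([x, [tau], s, [kappa]])
        step = np.concatenate([d_x, [d_tau], d_s, [d_kappa]])
        with np.errstate(divide="ignore", invalid="ignore"):
            ratios = np.where(step < 0, -pos / step, np.inf)
        alpha = theta * min(ratios.min(), 1.0 / theta)

        # Step 7: update the iterate.
        x, tau, y = x + alpha * d_x, tau + alpha * d_tau, y + alpha * d_y
        s, kappa = s + alpha * d_s, kappa + alpha * d_kappa

    # Interpret the (approximately) strictly complementary solution.
    if tau > kappa:
        return "optimal", x / tau, y / tau, s / tau
    return "primal or dual infeasible", None, None, None

# Hypothetical demo LP: min x1 + 2 x2 + 3 x3  s.t.  x1 + x2 + x3 = 1, x >= 0.
status, xo, yo, so = homogeneous_algorithm(
    np.array([[1.0, 1.0, 1.0]]), np.array([1.0]), np.array([1.0, 2.0, 3.0]))
print(status, np.round(xo, 6))   # expect an optimal x close to (1, 0, 0)
```

Running the demo, the per-iteration factors of (6) can also be observed directly: all three residuals and the complementarity gap shrink by the same multiplier (1 - (1 - gamma) alpha^k) at every step.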

5 Warmstart

It is well known that the simplex algorithm can easily be warmstarted when a sequence of closely related optimization problems has to be solved. In many cases this cuts the computational time significantly, although there are no guarantees of that. It is also possible to warmstart an interior-point algorithm if an initial solution is known for which the quantities appearing in the termination criterion of step 4, ||(r_p^0; r_d^0; r_g^0)|| and mu^0, are small. Moreover, the initial solution should satisfy

    min_j (x_j^0 s_j^0, tau^0 kappa^0) >= beta mu^0

for a reasonably large beta, e.g. beta = 0.1. Such an initial solution is virtually never known, because usually either the primal or the dual solution is vastly infeasible. Therefore, in practice it is hard to warmstart an interior-point algorithm with any efficiency gain.

6 Further reading

Further details about the homogeneous algorithm can be found in [3, 5]. Issues related to implementing the homogeneous algorithm are discussed in [1, 4].

References

[1] E. D. Andersen and K. D. Andersen. The MOSEK interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In J. B. G. Frenk, C. Roos, T. Terlaky, and S. Zhang, editors, High Performance Optimization Techniques, Proceedings of the HPOPT-II conference, 1997. Forthcoming.

[2] A. J. Goldman and A. W. Tucker. Theory of linear programming. In H. W. Kuhn and A. W. Tucker, editors, Linear Inequalities and Related Systems, pages 53-97, Princeton, New Jersey, 1956. Princeton University Press.

[3] C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization: An Interior Point Approach. John Wiley and Sons, New York, 1997.

[4] X. Xu, P.-F. Hung, and Y. Ye. A simplified homogeneous and self-dual linear programming algorithm and its implementation. Annals of Operations Research, 62:151-171, 1996.

[5] Y. Ye. Interior Point Algorithms: Theory and Analysis. John Wiley and Sons, New York, 1997.

[6] Y. Ye, M. J. Todd, and S. Mizuno. An O(sqrt(n) L)-iteration homogeneous and self-dual linear programming algorithm. Math. Oper. Res., 19:53-67, 1994.

The fast path to optimum. MOSEK ApS provides optimization software which helps our clients make better decisions. Our customer base consists of financial institutions and companies, and engineering and software vendors, among others. The company was established in 1997 by Erling D. Andersen and Knud D. Andersen, and it specializes in creating advanced software for the solution of mathematical optimization problems. In particular, the company focuses on the solution of large-scale linear, quadratic, and conic optimization problems.

MOSEK ApS, Fruebjergvej 3, 2100 Copenhagen, Denmark
www.mosek.com
info@mosek.com