Chapter 7 One-Dimensional Search Methods

An Introduction to Optimization, Spring 2014. Wei-Ta Chu.

Golden Section Search
- Determine the minimizer of a function f over a closed interval, say [a_0, b_0]. The only assumption is that the objective function f is unimodal, which means that it has only one local minimizer in the interval.
- The method is based on evaluating the objective function at different points in the interval. We choose these points in such a way that an approximation to the minimizer may be achieved in as few evaluations as possible.
- Narrow the range progressively until the minimizer is boxed in with sufficient accuracy.

Golden Section Search
- We have to evaluate f at two intermediate points, a_1 < b_1. We choose the intermediate points in such a way that the reduction in the range is symmetric: a_1 − a_0 = b_0 − b_1 = ρ(b_0 − a_0), where ρ < 1/2.
- If f(a_1) < f(b_1), then the minimizer must lie in the range [a_0, b_1].
- If f(a_1) ≥ f(b_1), then the minimizer is located in the range [a_1, b_0].

Golden Section Search
- We would like to minimize the number of objective function evaluations.
- Suppose f(a_1) < f(b_1). Then we know the minimizer lies in [a_0, b_1]. Because a_1 is already in the new uncertainty interval and f(a_1) is already known, we can make a_1 coincide with b_2, the right-hand intermediate point of the next stage. Thus, only one new evaluation of f, at a_2, would be necessary.

Golden Section Search
- Without loss of generality, imagine that the original range [a_0, b_0] is of unit length. Then, for b_2 to coincide with a_1, we need ρ(1 − ρ) = 1 − 2ρ, that is, ρ² − 3ρ + 1 = 0. Because the solutions are ρ = (3 ± √5)/2 and we require ρ < 1/2, we take ρ = (3 − √5)/2 ≈ 0.382. Observe that 1 − ρ = (√5 − 1)/2 and ρ/(1 − ρ) = (1 − ρ)/1.
- Dividing a range in the ratio of ρ to 1 − ρ has the effect that the ratio of the shorter segment to the longer equals the ratio of the longer to the sum of the two. This rule is called the golden section.

Golden Section Search
- The uncertainty range is reduced by the ratio 1 − ρ ≈ 0.61803 at every stage. Hence, N steps of reduction using the golden section method reduce the range by the factor (1 − ρ)^N ≈ (0.61803)^N.
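The procedure above translates directly into a short routine. The following is a minimal Python sketch (the function name golden_section, its arguments, and the example call are illustrative, not from the slides); it reuses one interior point per iteration, so each iteration costs exactly one new evaluation of f.

```python
import math

def golden_section(f, a, b, tol):
    """Golden section search for a minimizer of a unimodal f on [a, b].

    Each iteration shrinks the uncertainty interval by the factor
    1 - rho (about 0.618) and reuses one interior point, so only one
    new evaluation of f is needed per iteration.
    """
    rho = (3 - math.sqrt(5)) / 2          # ~ 0.382, the golden section ratio
    a1 = a + rho * (b - a)                # left interior point
    b1 = a + (1 - rho) * (b - a)          # right interior point
    fa1, fb1 = f(a1), f(b1)
    while b - a > tol:
        if fa1 < fb1:
            # Minimizer lies in [a, b1]; old a1 becomes the new right point.
            b, b1, fb1 = b1, a1, fa1
            a1 = a + rho * (b - a)
            fa1 = f(a1)
        else:
            # Minimizer lies in [a1, b]; old b1 becomes the new left point.
            a, a1, fa1 = a1, b1, fb1
            b1 = a + (1 - rho) * (b - a)
            fb1 = f(b1)
    return (a + b) / 2

# Illustrative use: locate the minimizer of a unimodal function on [0, 2]
# to within 0.3, as in the example that follows.
# golden_section(lambda x: (x - 0.8) ** 2, 0.0, 2.0, 0.3)
```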

Example
- Use the golden section search to find the value of x that minimizes a given objective function f in the range [0, 2]. Locate this value of x to within a range of 0.3.
- After N stages the range [0, 2] is reduced by (0.61803)^N. So we choose N so that (0.61803)^N ≤ 0.3/2 = 0.15; N = 4 will do.
- Iteration 1. We evaluate f at the two intermediate points a_1 = 0 + ρ(2 − 0) = 0.7639 and b_1 = 0 + (1 − ρ)(2 − 0) = 1.2361. We have f(a_1) < f(b_1), so the uncertainty interval is reduced to [0, b_1] = [0, 1.2361].

Example
- Iteration 2. We choose b_2 to coincide with a_1 = 0.7639, so f need only be evaluated at one new point, a_2 = 0 + ρ(1.2361 − 0) = 0.4721. Now f(a_2) > f(b_2), so the uncertainty interval is reduced to [a_2, b_1] = [0.4721, 1.2361].

Example
- Iteration 3. We set a_3 = b_2 = 0.7639 and compute b_3 = 0.4721 + (1 − ρ)(1.2361 − 0.4721) = 0.9443. We have f(a_3) < f(b_3). Hence, the uncertainty interval is further reduced to [0.4721, b_3] = [0.4721, 0.9443].
- Iteration 4. We set b_4 = a_3 = 0.7639 and a_4 = 0.4721 + ρ(0.9443 − 0.4721) = 0.6525. We have f(a_4) > f(b_4). Thus, the value of x that minimizes f is located in the interval [0.6525, 0.9443]. Note that its width is 0.9443 − 0.6525 = 0.2918 < 0.3, as required.

Fibonacci Search
- Suppose now that we are allowed to vary the value of ρ from stage to stage, using ρ_1 at the first stage, ρ_2 at the second, and so on.
- As in the golden section search, our goal is to select successive values of ρ_k, 0 ≤ ρ_k ≤ 1/2, such that only one new function evaluation is required at each stage. After some manipulations, we obtain
  ρ_{k+1} = 1 − ρ_k / (1 − ρ_k).

Fibonacci Search
- Suppose that we are given a sequence ρ_1, ρ_2, ... satisfying the condition above and we use this sequence in our search algorithm. Then, after N iterations, the uncertainty range is reduced by a factor of
  (1 − ρ_1)(1 − ρ_2) ··· (1 − ρ_N).
- What sequence minimizes the reduction factor above?
- This is a constrained optimization problem:
  minimize (1 − ρ_1)(1 − ρ_2) ··· (1 − ρ_N)
  subject to ρ_{k+1} = 1 − ρ_k / (1 − ρ_k), k = 1, ..., N − 1, and 0 ≤ ρ_k ≤ 1/2.

Fibonacci Search
- The Fibonacci sequence is defined as follows. Let F_{−1} = 0 and F_0 = 1. Then, for k ≥ 0,
  F_{k+1} = F_k + F_{k−1}.
- Some values of elements in the Fibonacci sequence:
  F_1 = 1, F_2 = 2, F_3 = 3, F_4 = 5, F_5 = 8, F_6 = 13, F_7 = 21, F_8 = 34.
- It turns out the solution to the optimization problem above is
  ρ_k = 1 − F_{N−k+1} / F_{N−k+2}, k = 1, ..., N.

Fibonacci Search
- The resulting algorithm is called the Fibonacci search method.
- In this method, the uncertainty range is reduced by the factor
  (1 − ρ_1)(1 − ρ_2) ··· (1 − ρ_N) = 1 / F_{N+1}.
- This reduction factor is less than that of the golden section method.
- There is an anomaly in the final iteration, because ρ_N = 1 − F_1/F_2 = 1/2.
- Recall that we need two intermediate points at each stage: one comes from a previous iteration and another is a new evaluation point. However, with ρ_N = 1/2, the two intermediate points coincide in the middle of the uncertainty interval, and thus we cannot further reduce the uncertainty range.

Fibonacci Search
- To get around this problem, we perform the new evaluation for the last iteration using ρ_N = 1/2 − ε, where ε is a small number.
- The new evaluation point is then just to the left or right of the midpoint of the uncertainty interval.
- As a result of this modification, the reduction in the uncertainty range at the last iteration may be either 1/2 or 1/2 + ε, depending on which of the two intermediate points has the smaller objective function value. Therefore, in the worst case, the reduction factor in the uncertainty range for the Fibonacci method is
  (1 + 2ε) / F_{N+1}.
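The following is a minimal Python sketch of the Fibonacci search under the conventions above (the name fibonacci_search, its argument list, and the default eps are illustrative assumptions); it uses ρ_k = 1 − F_{N−k+1}/F_{N−k+2} and perturbs the final ρ_N = 1/2 to 1/2 − ε so that the last two interior points do not coincide.

```python
def fibonacci_search(f, a, b, n, eps=0.05):
    """Fibonacci search for a minimizer of a unimodal f on [a, b].

    Performs n iterations with rho_k = 1 - F_(n-k+1) / F_(n-k+2) and
    perturbs the final rho_n = 1/2 to 1/2 - eps so that the last two
    interior points do not coincide.  Returns the final uncertainty
    interval (lo, hi).
    """
    # Fibonacci numbers with the convention F_(-1) = 0, F_0 = 1,
    # stored so that fib[k] = F_k for k = 0, ..., n + 1.
    fib = [1, 1]
    for _ in range(n):
        fib.append(fib[-1] + fib[-2])

    rho = 1 - fib[n] / fib[n + 1]              # rho_1
    a1 = a + rho * (b - a)
    b1 = a + (1 - rho) * (b - a)
    fa1, fb1 = f(a1), f(b1)

    for k in range(1, n):
        rho = 1 - fib[n - k] / fib[n - k + 1]  # rho_(k+1)
        if k == n - 1:
            rho -= eps                         # last stage: rho_n = 1/2 - eps
        if fa1 < fb1:
            # Minimizer in [a, b1]; old a1 is reused as the new right point.
            b, b1, fb1 = b1, a1, fa1
            a1 = a + rho * (b - a)
            fa1 = f(a1)
        else:
            # Minimizer in [a1, b]; old b1 is reused as the new left point.
            a, a1, fa1 = a1, b1, fb1
            b1 = a + (1 - rho) * (b - a)
            fb1 = f(b1)

    return (a, b1) if fa1 < fb1 else (a1, b)

# Illustrative use, matching the setting of the example below:
# fibonacci_search(lambda x: (x - 0.8) ** 2, 0.0, 2.0, n=4)
```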

Example
- Consider the same objective function f as in the golden section example. Use the Fibonacci search method to find the value of x that minimizes f over the range [0, 2]. Locate this value of x to within the range 0.3.
- After N steps the range is reduced by (1 + 2ε)/F_{N+1} in the worst case. We need to choose N such that (1 + 2ε)/F_{N+1} ≤ 0.3/2 = 0.15.
- Thus, we need F_{N+1} ≥ (1 + 2ε)/0.15.
- If we choose ε = 0.05 (any ε ≤ 0.1 works), then (1 + 2ε)/0.15 ≈ 7.3 ≤ F_5 = 8, so N = 4 will do.

Example
- Iteration 1. We start with ρ_1 = 1 − F_4/F_5 = 3/8. We then compute the two intermediate points a_1 = 0 + ρ_1(2 − 0) = 0.75 and b_1 = 0 + (1 − ρ_1)(2 − 0) = 1.25, and evaluate f at both.
- Comparing f(a_1) with f(b_1), the range is reduced to the subinterval of width (1 − ρ_1)(2) = 1.25 that contains the minimizer.

Example
- Iteration 2. We use ρ_2 = 1 − F_3/F_4 = 2/5. One intermediate point is carried over from iteration 1, so f is evaluated at only one new point. Comparing the two values, the range is reduced to a subinterval of width (1 − ρ_2)(1.25) = 0.75.

Example
- Iteration 3. We use ρ_3 = 1 − F_2/F_3 = 1/3 and compute the one new intermediate point. Comparing f at the two intermediate points, the range is reduced to a subinterval of width (1 − ρ_3)(0.75) = 0.5.

Example
- Iteration 4. We choose ρ_4 = 1/2 − ε = 0.45. We evaluate f at the one new point, placed just beside the midpoint of the current interval, and compare the two values. In the worst case the range is reduced to a subinterval of width (1/2 + ε)(0.5) = 0.275.
- Note that 0.275 ≤ 0.3, as required.

Newton's Method
- Consider the problem of minimizing a function f of a single variable x.
- Assume that at each measurement point x^(k) we can calculate f(x^(k)), f'(x^(k)), and f''(x^(k)).
- We can fit a quadratic function through x^(k) that matches its first and second derivatives with those of f:
  q(x) = f(x^(k)) + f'(x^(k))(x − x^(k)) + (1/2) f''(x^(k))(x − x^(k))².
- Note that q(x^(k)) = f(x^(k)), q'(x^(k)) = f'(x^(k)), and q''(x^(k)) = f''(x^(k)).
- Instead of minimizing f, we minimize its approximation q. The first-order necessary condition for a minimizer of q yields
  q'(x) = f'(x^(k)) + f''(x^(k))(x − x^(k)) = 0.
  Setting x = x^(k+1), we obtain
  x^(k+1) = x^(k) − f'(x^(k)) / f''(x^(k)).
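A minimal Python sketch of this update (the names, defaults, and assumed test function are illustrative; the stopping rule |x^(k+1) − x^(k)| < ε is the one used in the example that follows):

```python
import math

def newton_min(df, d2f, x0, eps=1e-5, max_iter=100):
    """Newton's method for minimizing f, given its first and second
    derivatives df and d2f.  Stops when successive iterates are closer
    than eps."""
    x = x0
    for _ in range(max_iter):
        x_new = x - df(x) / d2f(x)     # minimizer of the local quadratic fit
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Illustrative use on an assumed test function f(x) = 0.5*x**2 - sin(x):
# newton_min(lambda x: x - math.cos(x), lambda x: 1 + math.sin(x), x0=0.5)
```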

Example
- Using Newton's method, find the minimizer of a given function f. The initial value is x^(0). The required accuracy is ε, in the sense that we stop when |x^(k+1) − x^(k)| < ε.
- We compute f'(x^(0)) and f''(x^(0)). Hence,
  x^(1) = x^(0) − f'(x^(0)) / f''(x^(0)).
- Proceeding in a similar manner, we obtain x^(2), x^(3), ... until the stopping criterion is satisfied.
- Because f'' > 0 at the final iterate, we can conclude that it is a strict minimizer (Corollary 6.1).

Newton's Method
- Newton's method works well if f''(x) > 0 everywhere. However, if f''(x) < 0 for some x, Newton's method may fail to converge to the minimizer.
- Newton's method can also be viewed as a way to drive the first derivative of f to zero. If we set g(x) = f'(x), then we obtain
  x^(k+1) = x^(k) − g(x^(k)) / g'(x^(k)).

Example
- We apply Newton's method to improve a first approximation, x^(0), to the root of an equation g(x) = 0.
- We compute the derivative g'(x) and apply x^(k+1) = x^(k) − g(x^(k)) / g'(x^(k)).
- Performing two iterations yields x^(1) and then x^(2), successively better approximations to the root.

Newton's Method
- Newton's method for solving equations of the form g(x) = 0 is also referred to as Newton's method of tangents.
- If we draw a tangent to g at the given point x^(k), then the tangent line intersects the x-axis at the point x^(k+1), which we expect to be closer to the root of g(x) = 0.
- Note that the slope of g at x^(k) is
  g'(x^(k)) = g(x^(k)) / (x^(k) − x^(k+1)).
  Hence,
  x^(k+1) = x^(k) − g(x^(k)) / g'(x^(k)).

Newton's Method
- Newton's method of tangents may fail if the first approximation to the root is such that the ratio g(x^(0)) / g'(x^(0)) is not small enough.
- Thus, a good initial approximation to the root is very important.
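A minimal sketch of the method of tangents for g(x) = 0 (names are illustrative); the iteration cap is a simple guard against the divergence discussed above when the starting point is poor:

```python
def newton_root(g, dg, x0, eps=1e-6, max_iter=50):
    """Newton's method of tangents for solving g(x) = 0: follow the tangent
    at x^(k) down to its x-axis intercept x^(k+1) = x^(k) - g(x^(k))/g'(x^(k))."""
    x = x0
    for _ in range(max_iter):
        x_new = x - g(x) / dg(x)
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x  # may not have converged; try a better initial approximation
```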

Secant Method
- Newton's method for minimizing f uses second derivatives of f:
  x^(k+1) = x^(k) − f'(x^(k)) / f''(x^(k)).
- If the second derivative is not available, we may attempt to approximate it using first derivative information. We may approximate f''(x^(k)) with
  (f'(x^(k)) − f'(x^(k−1))) / (x^(k) − x^(k−1)).
- Using the foregoing approximation of the second derivative, we obtain the algorithm
  x^(k+1) = x^(k) − (x^(k) − x^(k−1)) / (f'(x^(k)) − f'(x^(k−1))) · f'(x^(k)),
  called the secant method.

Secant Method
- Note that the algorithm requires two initial points to start it, which we denote x^(−1) and x^(0). The secant algorithm can be represented in the following equivalent form:
  x^(k+1) = (f'(x^(k)) x^(k−1) − f'(x^(k−1)) x^(k)) / (f'(x^(k)) − f'(x^(k−1))).
- Like Newton's method, the secant method does not directly involve values of f. Instead, it tries to drive the derivative f' to zero.
- In fact, as we did for Newton's method, we can interpret the secant method as an algorithm for solving equations of the form g(x) = 0.
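A minimal Python sketch of the secant method for minimization (names, defaults, and the stopping rule are illustrative assumptions); it keeps only the two most recent points and derivative values:

```python
def secant_min(df, x_prev, x0, eps=1e-5, max_iter=100):
    """Secant method for minimizing f using only its first derivative df.

    f'' is approximated by the difference quotient of f' at the two most
    recent points, so two starting points x_prev (x^(-1)) and x0 (x^(0))
    are required."""
    g_prev, g = df(x_prev), df(x0)
    x = x0
    for _ in range(max_iter):
        # x^(k+1) = x^(k) - (x^(k) - x^(k-1)) / (f'(x^(k)) - f'(x^(k-1))) * f'(x^(k))
        x_new = x - (x - x_prev) / (g - g_prev) * g
        if abs(x_new - x) < eps:
            return x_new
        x_prev, g_prev = x, g
        x, g = x_new, df(x_new)
    return x
```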

Secant Method
- The secant algorithm for finding a root of the equation g(x) = 0 takes the form
  x^(k+1) = x^(k) − (x^(k) − x^(k−1)) / (g(x^(k)) − g(x^(k−1))) · g(x^(k)),
  or equivalently,
  x^(k+1) = (g(x^(k)) x^(k−1) − g(x^(k−1)) x^(k)) / (g(x^(k)) − g(x^(k−1))).
- Unlike Newton's method, which uses the tangent at the kth point, the secant method uses the secant line through the (k−1)st and kth points to determine the (k+1)st point.

Example
- We apply the secant method to find the root of an equation g(x) = 0.
- We perform two iterations, with starting points x^(−1) and x^(0). Applying the secant formula twice, we obtain x^(1) and then x^(2).

Example
- Suppose that the voltage across a resistor in a circuit decays according to a model V(t; R), where V is the voltage at time t and R is the resistance value.
- Given measurements v_1, ..., v_n of the voltage at times t_1, ..., t_n, respectively, we wish to find the best estimate of R. By the best estimate we mean the value of R that minimizes the total squared error between the measured voltages and the voltages predicted by the model.
- We derive an algorithm to find the best estimate of R using the secant method. The objective function is
  f(R) = Σ_{i=1}^{n} (v_i − V(t_i; R))².

Example
- Hence, we have
  f'(R) = −2 Σ_{i=1}^{n} (v_i − V(t_i; R)) · ∂V/∂R(t_i; R).
- The secant algorithm for the problem is
  R^(k+1) = R^(k) − (R^(k) − R^(k−1)) / (f'(R^(k)) − f'(R^(k−1))) · f'(R^(k)).
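As a concrete sketch of this example, assume an exponential decay model V(t; R) = exp(−R t); this specific form, the measurement values, and the names below are illustrative assumptions, since the slide's model and data are not reproduced here. Under that assumption, f'(R) has a closed form that can be fed to the secant routine above:

```python
import math

# Assumed decay model: V(t; R) = exp(-R * t).  The measurement times and
# voltages below are illustrative placeholders, not data from the slides.
t = [0.0, 1.0, 2.0, 3.0]
v = [1.00, 0.61, 0.37, 0.22]

def df(R):
    """f'(R) for f(R) = sum_i (v[i] - exp(-R * t[i]))**2 under the assumed model."""
    return sum(2.0 * (v_i - math.exp(-R * t_i)) * t_i * math.exp(-R * t_i)
               for t_i, v_i in zip(t, v))

# Best estimate of R via the secant iteration on f' (secant_min from the
# sketch above), started from two illustrative initial points:
# R_hat = secant_min(df, 0.5, 1.5)
```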

Remarks on Line Search Methods
- Iterative algorithms for solving multidimensional optimization problems typically involve a line search at every iteration.
- Let f: Rⁿ → R be a function that we wish to minimize. Iterative algorithms for finding a minimizer of f are of the form
  x^(k+1) = x^(k) + α_k d^(k),
  where x^(0) is a given initial point and α_k ≥ 0 is chosen to minimize
  φ_k(α) = f(x^(k) + α d^(k)).
  The vector d^(k) is called the search direction.

Remarks on Line Search Methods
- Note that the choice of α_k involves a one-dimensional minimization. This choice ensures that, under appropriate conditions, f(x^(k+1)) < f(x^(k)).
- We may, for example, use the secant method to find α_k. In this case, we need the derivative of φ_k(α) = f(x^(k) + α d^(k)), which is
  φ_k'(α) = d^(k)ᵀ ∇f(x^(k) + α d^(k)).
  This is obtained by the chain rule. Therefore, applying the secant method for the line search requires the gradient ∇f, the initial search point x^(k), and the search direction d^(k).
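A minimal sketch of such a secant-based line search (the function name, defaults, and the gradient-descent usage comment are illustrative assumptions); φ_k'(α) is computed from the gradient via the chain rule, exactly as above:

```python
import numpy as np

def line_search_secant(grad, x, d, alpha_prev=0.0, alpha0=1.0, eps=1e-4, max_iter=50):
    """Choose a step size alpha minimizing phi(alpha) = f(x + alpha*d) with
    the secant method, using only the gradient of f via the chain rule:
    phi'(alpha) = d^T grad(x + alpha*d)."""
    def dphi(alpha):
        return float(np.dot(d, grad(x + alpha * d)))

    a_prev, a = alpha_prev, alpha0
    g_prev, g = dphi(a_prev), dphi(a)
    for _ in range(max_iter):
        a_new = a - (a - a_prev) / (g - g_prev) * g   # secant update on phi'
        if abs(a_new - a) < eps:
            return a_new
        a_prev, g_prev = a, g
        a, g = a_new, dphi(a_new)
    return a

# One gradient-descent step using this line search (sketch):
# d = -grad(x); alpha = line_search_secant(grad, x, d); x = x + alpha * d
```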

Remarks on Line Search Methods
- Line search algorithms used in practice are much more involved than the one-dimensional search methods presented above.
- Determining the value of α_k that exactly minimizes φ_k may be computationally demanding; even worse, the minimizer of φ_k may not even exist.
- Practical experience suggests that it is better to spend computational time on iterating the optimization algorithm than on performing exact line searches.