CS 3331 Numerical Methods Lecture 2: Functions of One Variable. Cherung Lee

Outline
Introduction
Solving nonlinear equations: find x* such that f(x*) = 0.
- Binary search methods: bisection, regula falsi
- Newton-type methods: Newton's method, secant method
- Higher order methods: Muller's method
- Accelerating convergence: Aitken's Δ² method

Introduction

Motivating problem
How to estimate a compound interest rate?
Example: Suppose a bank loans you $200,000 at a compound interest rate, compounded yearly. After 10 years, you need to repay $400,000 (principal + interest). How much is the annual percentage rate (APR)?
Equation of the compound interest: 200,000(1+r)^10 = 400,000.
How to solve f(r) = (1+r)^10 - 2 = 0?
r = 2^(1/10) - 1 ≈ 7.1773%
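As a quick numerical sanity check of the closed-form root above (a minimal Python sketch, not part of the original slides):

```python
# The APR equation f(r) = (1+r)^10 - 2 = 0 has the closed-form root r = 2^(1/10) - 1.
r = 2 ** (1 / 10) - 1
print(f"APR = {r:.4%}")        # APR = 7.1773%
# The repayment indeed doubles the principal after 10 years:
assert abs(200_000 * (1 + r) ** 10 - 400_000) < 1e-6
```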

Amortized loan
A loan repaid in a series of payments covering principal and interest.
Formula (r: interest rate, a: payment per period, n: number of periods):
Suppose x_k is the remaining debt after the k-th period. Then
x_k = (1+r) x_{k-1} - a
    = (1+r)^2 x_{k-2} - (1+r)a - a
    = ...
    = x_0 (1+r)^k - a ((1+r)^k - 1)/r.
x_0 is the principal, and x_n = 0 gives x_0 (1+r)^n - a ((1+r)^n - 1)/r = 0.
How to solve f(r) = 20(1+r)^10 - 4 ((1+r)^10 - 1)/r = 0?
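The telescoped closed form can be checked against the recurrence directly. A small sketch, where the rate r = 0.05 is an arbitrary illustrative value while x_0 = 20 and a = 4 echo the example's units of $10,000:

```python
r, a, x0, n = 0.05, 4.0, 20.0, 10   # r is illustrative; x0, a as in the slide
x = x0
for k in range(1, n + 1):
    x = (1 + r) * x - a             # one period of interest, then one payment
    closed = x0 * (1 + r) ** k - a * ((1 + r) ** k - 1) / r
    assert abs(x - closed) < 1e-9   # recurrence matches the closed form
print(f"debt after {n} periods: {x:.4f}")   # negative: a = 4 overpays at r = 0.05
```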

Useful tools from calculus (LVF pp.10)
Intermediate value theorem: If f(x) is a continuous function on the interval [a, b], and f(a) < 0 < f(b) or f(b) < 0 < f(a), then there is a number c ∈ [a, b] such that f(c) = 0.
Taylor's theorem: If f(x) and its derivatives f^(k), k = 1, ..., n, are continuous on [a, b], and f^(n+1) exists on (a, b), then for any c ∈ (a, b) and x ∈ [a, b],
f(x) = Σ_{k=0}^{n} (1/k!) f^(k)(c) (x - c)^k + (1/(n+1)!) f^(n+1)(ξ) (x - c)^{n+1},
where ξ is between c and x.

Solving Nonlinear Equations

Bisection method (LVF pp.52-55)
Binary search on the given interval [a, b]. Suppose f(a) and f(b) have opposite signs. Let m = (a + b)/2. Three things can happen for f(m):
- f(m) = 0: m is the solution.
- f(m) has the same sign as f(a): solution in [m, b].
- f(m) has the same sign as f(b): solution in [a, m].
Linear convergence with rate 1/2.
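The three-way test above translates directly into code. A minimal sketch (the helper name `bisect` and the stopping rule are my choices; the test equation is the APR example from the introduction):

```python
def bisect(f, a, b, tol=1e-12, maxit=200):
    """Bisection: halve the sign-change interval [a, b] until shorter than tol."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(maxit):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m                  # m is (close enough to) the solution
        if fm * fa > 0:               # same sign as f(a): solution in [m, b]
            a, fa = m, fm
        else:                         # same sign as f(b): solution in [a, m]
            b, fb = m, fm
    return (a + b) / 2

r = bisect(lambda r: (1 + r) ** 10 - 2, 0.0, 1.0)
print(f"{r:.4%}")                     # about 7.1773%
```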

Pros and cons
Pros:
- Easy to implement.
- Guaranteed to converge, with a guaranteed convergence rate.
- No derivative required.
- Cost per iteration (one function evaluation) is very cheap.
Cons:
- Slow convergence.
- Does not work for double roots, e.g. solving (x - 1)^2 = 0, since there is no sign change.

Regula falsi (false position) (LVF pp.57-59)
Straight-line approximation + intermediate value theorem.
Given two points (a, f(a)) and (b, f(b)), a ≠ b, the line through them is
L(x) = f(b) + ((f(a) - f(b))/(a - b)) (x - b),
and its root, L(s) = 0, is
s = b - ((a - b)/(f(a) - f(b))) f(b).
Use the intermediate value theorem to determine whether x* ∈ [a, s] or x* ∈ [s, b].
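A sketch of the corresponding iteration. One assumption is mine: the stopping test is on |f(s)| rather than on the interval length, since the bracketing interval need not shrink to zero:

```python
def regula_falsi(f, a, b, tol=1e-12, maxit=200):
    """False position: intersect the line through (a, f(a)), (b, f(b)) with the x-axis."""
    fa, fb = f(a), f(b)
    s = a
    for _ in range(maxit):
        s = b - (a - b) / (fa - fb) * fb   # root of the straight-line approximation
        fs = f(s)
        if abs(fs) < tol:
            break
        if fa * fs < 0:                    # x* in [a, s]
            b, fb = s, fs
        else:                              # x* in [s, b]
            a, fa = s, fs
    return s

r = regula_falsi(lambda r: (1 + r) ** 10 - 2, 0.0, 0.2)
print(f"{r:.4%}")                          # about 7.1773%
```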

Convergence of regula falsi
Consider a special case: (b, f(b)) stays fixed. Note that the interval [s, b] may not go to zero (compare to the bisection method), so change the error measurement:
(s - x*)/(a - x*) = ((b - s) - (b - x*)) / ((b - a) - (b - x*)).
From the formula for s, b - s = m(b - a), where
m = f(b)/(f(b) - f(a)) < 1.
Therefore
(s - x*)/(a - x*) = (m(b - a) - (b - x*)) / ((b - a) - (b - x*)) < 1.
Linear convergence.

Newton's method (LVF pp.66-71)
Approximate f(x) by the tangent line f(x_k) + (x - x_k) f'(x_k). Minimizing the squared error of this model, that is, setting d/dx [f(x_k) + (x - x_k) f'(x_k)]^2 = 0, gives the minimizer
x_{k+1} = x_k - f(x_k)/f'(x_k).
Convergence conditions:
- f(x), f'(x), f''(x) are continuous near x*, and f'(x*) ≠ 0.
- x_0 is sufficiently close to x*: (max|f''| / (2 min|f'|)) |x_0 - x*| < 1.
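A minimal sketch of the iteration on the APR example (the step-size stopping rule and the starting point 0.1 are my choices):

```python
def newton(f, fp, x0, tol=1e-12, maxit=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

r = newton(lambda r: (1 + r) ** 10 - 2,     # f
           lambda r: 10 * (1 + r) ** 9,     # f'
           0.1)                             # x0 reasonably close to x*
print(f"{r:.4%}")                           # about 7.1773%
```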

Convergence of Newton's method (LVF pp.70-71)
Taylor expansion: for some η between x* and x_k,
f(x*) = f(x_k) + (x* - x_k) f'(x_k) + (x* - x_k)^2 f''(η)/2 = 0,
so
x* = x_k - f(x_k)/f'(x_k) - (x* - x_k)^2 f''(η)/(2 f'(x_k)).
Substituting Newton's step x_k - f(x_k)/f'(x_k) = x_{k+1},
x* - x_{k+1} = -(x* - x_k)^2 f''(η)/(2 f'(x_k)).
Quadratic convergence with λ = |f''(x*)/(2 f'(x*))|.

Oscillations in Newton's method (LVF pp.71)
Solve f(x) = x^3 - 3x^2 + x + 3 = 0 with x_0 = 1.
[Figure: two plots of f and its tangent lines on [-1, 3]; the Newton iterates oscillate between x = 1 and x = 2.]
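The period-2 cycle is easy to reproduce numerically (a small sketch): the tangent at x = 1 lands on x = 2, and the tangent at x = 2 lands back on x = 1.

```python
f  = lambda x: x**3 - 3*x**2 + x + 3
fp = lambda x: 3*x**2 - 6*x + 1
x = 1.0
for k in range(6):
    print(f"x_{k} = {x}")
    x = x - f(x) / fp(x)
# prints 1.0, 2.0, 1.0, 2.0, ...: Newton oscillates and never converges
```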

Newton's method for repeated roots (LVF pp.72)
If x* is a repeated root, Newton's method converges only linearly.
Newton's method can be regarded as a fixed-point iteration:
g(x) = x - f(x)/f'(x), x_{n+1} = g(x_n) = x_n - f(x_n)/f'(x_n).
Convergence of fixed-point iteration (LVF pp.22-23): Taylor expansion of g(x) about x*, for x_n near x*,
x_{n+1} = g(x_n) = g(x*) + g'(x*)(x_n - x*) + (g''(ξ)/2)(x_n - x*)^2.
Since g(x*) = x*, the convergence is quadratic if g'(x*) = 0.

Case 1: x* is a simple root (f'(x*) ≠ 0).
g'(x) = 1 - (f'(x)f'(x) - f(x)f''(x))/(f'(x))^2 = 1 - 1 + f(x)f''(x)/(f'(x))^2 = f(x)f''(x)/(f'(x))^2.
Since f(x*) = 0, g'(x*) = 0.
Case 2: x* is a repeated root (f'(x*) = 0).
Assume f(x) = (x - x*)^2 h(x), where h(x*) ≠ 0. Then
f'(x) = 2(x - x*)h(x) + (x - x*)^2 h'(x),
g(x) = x - f(x)/f'(x) = x - (x - x*)h(x)/(2h(x) + (x - x*)h'(x)).
Let a(x) = 2h(x) + (x - x*)h'(x) (we will use this to simplify the proof).

g'(x) = 1 - ((h(x) + (x - x*)h'(x)) a(x) - (x - x*)h(x)a'(x)) / (a(x))^2.
Since a(x*) = 2h(x*) + (x* - x*)h'(x*) = 2h(x*) ≠ 0,
g'(x*) = 1 - ((h(x*) + 0) a(x*) - 0) / a(x*)^2 = 1 - h(x*)/a(x*) = 1 - h(x*)/(2h(x*)) = 1 - 1/2 ≠ 0.
When x* is a repeated root, convergence is linear.
How to modify it to restore the quadratic convergence? For f(x) = (x - x*)^2 h(x), let g(x) = x - 2 f(x)/f'(x); then g'(x*) = 0. The algorithm becomes
x_{k+1} = x_k - 2 f(x_k)/f'(x_k).
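A small comparison sketch on f(x) = (x - x*)^2 h(x) with x* = 1 and h(x) = x + 1 (my choice of test function), showing the linear versus quadratic behavior:

```python
f  = lambda x: (x - 1) ** 2 * (x + 1)       # double root at x* = 1
fp = lambda x: 3 * x ** 2 - 2 * x - 1       # derivative of x^3 - x^2 - x + 1

x = y = 2.0
for _ in range(5):
    x = x - f(x) / fp(x)                    # plain Newton: error roughly halves per step
    y = y - 2 * f(y) / fp(y)                # multiplicity-2 correction: quadratic
print(abs(x - 1), abs(y - 1))               # plain still ~0.04, modified near machine precision
```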

Secant method (LVF pp.60-65)
Newton's method requires the derivative at each step. f'(x_k) can be approximated by (f(x_{k-1}) - f(x_k))/(x_{k-1} - x_k), which makes
x_{k+1} = x_k - ((x_{k-1} - x_k)/(f(x_{k-1}) - f(x_k))) f(x_k).
Convergence conditions:
- f(x), f'(x), f''(x) are continuous near x*, and f'(x*) ≠ 0.
- Initial guesses x_0, x_1 are sufficiently close to x*:
max(M|x_0 - x*|, M|x_1 - x*|) < 1, where M = max|f''| / (2 min|f'|).
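A sketch of the iteration (the small-step stopping rule and the flat-secant guard are my assumptions):

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant method: Newton's step with f' replaced by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f0 == f1:                      # flat secant line: cannot proceed
            break
        x2 = x1 - (x0 - x1) / (f0 - f1) * f1
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

r = secant(lambda r: (1 + r) ** 10 - 2, 0.05, 0.1)
print(f"{r:.4%}")                         # about 7.1773%
```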

Convergence of the secant method
Let e_k = x_k - x*. Then
e_{k+1} = x_{k+1} - x* = x_k - ((x_{k-1} - x_k)/(f(x_{k-1}) - f(x_k))) f(x_k) - x*
        = ((x_k - x*) f(x_{k-1}) - (x_{k-1} - x*) f(x_k)) / (f(x_{k-1}) - f(x_k))
        = (e_k f(x_{k-1}) - e_{k-1} f(x_k)) / (f(x_{k-1}) - f(x_k)).
Using Taylor expansion and f(x*) = 0,
f(x_k) = f(x*) + e_k f'(x*) + e_k^2 f''(x*)/2 + O(e_k^3),
f(x_{k-1}) = f(x*) + e_{k-1} f'(x*) + e_{k-1}^2 f''(x*)/2 + O(e_{k-1}^3).

Then
f(x_{k-1}) - f(x_k) = (e_{k-1} - e_k) f'(x*) + (e_{k-1}^2 - e_k^2) f''(x*)/2 + O(e_{k-1}^3)
                    ≈ (e_{k-1} - e_k) f'(x*).
(We assume e_k is small enough that |e_k|^3 ≪ |e_k|^2 ≪ |e_k|.)
e_k f(x_{k-1}) - e_{k-1} f(x_k) = (e_k e_{k-1} - e_{k-1} e_k) f'(x*) + (e_k e_{k-1}^2 - e_{k-1} e_k^2) f''(x*)/2 + O(e_{k-1}^4)
                                = e_k e_{k-1} (e_{k-1} - e_k) f''(x*)/2.
Summarizing the above equations,
e_{k+1} = (e_k f(x_{k-1}) - e_{k-1} f(x_k)) / (f(x_{k-1}) - f(x_k))
        = (e_k e_{k-1} (e_{k-1} - e_k) f''(x*)/2) / ((e_{k-1} - e_k) f'(x*))
        = e_{k-1} e_k f''(x*) / (2 f'(x*)).

We want to prove e_{k+1} = C |e_k|^α for some α > 1:
|e_{k-1} e_k f''(x*)/(2 f'(x*))| = C |e_k|^α.
Recursively, |e_k| = C |e_{k-1}|^α, so
C |e_{k-1}|^{1+α} |f''(x*)/(2 f'(x*))| = C^{1+α} |e_{k-1}|^{α²},
|f''(x*)/(2 f'(x*))| = C^α |e_{k-1}|^{α² - α - 1}.
For |e_{k-1}|^{α² - α - 1} to equal a constant, α² - α - 1 = 0, so
α = (1 + √5)/2 ≈ 1.618, C = |f''(x*)/(2 f'(x*))|^{1/α} ≈ |f''(x*)/(2 f'(x*))|^{0.618}.
Superlinear convergence with λ = |f''(x*)/(2 f'(x*))|^{0.618}.

Muller's method (LVF pp.73-77)
Approximate f(x) by a parabola. The parabola passing through (x_1, f(x_1)), (x_2, f(x_2)), (x_3, f(x_3)) is
P(x) = f(x_3) + c_2 (x - x_3) + d_1 (x - x_3)(x - x_2),
where
c_1 = (f(x_1) - f(x_3))/(x_1 - x_3), c_2 = (f(x_2) - f(x_3))/(x_2 - x_3), d_1 = (c_1 - c_2)/(x_1 - x_2).
We want to find a solution close to x_3. Let y = x - x_3 and rewrite P(x) as a function of y:
P(x) = f(x_3) + c_2 (x - x_3) + d_1 (x - x_3)(x - x_2)
     = f(x_3) + c_2 (x - x_3) + d_1 (x - x_3)(x - x_3 + x_3 - x_2)
     = f(x_3) + c_2 y + d_1 y (y + x_3 - x_2)
     = f(x_3) + (c_2 + d_1 (x_3 - x_2)) y + d_1 y^2.

Let s = c_2 + d_1 (x_3 - x_2). The solutions are
y = (-s ± √(s² - 4 d_1 f(x_3))) / (2 d_1), i.e. x = x_3 + (-s ± √(s² - 4 d_1 f(x_3))) / (2 d_1).
Let x_4 be the solution closer to x_3:
x_4 = x_3 - (s - sign(s) √(s² - 4 d_1 f(x_3))) / (2 d_1),
which equals (in a more numerically stable form)
x_4 = x_3 - 2 f(x_3) / (s + sign(s) √(s² - 4 d_1 f(x_3))).
x_4 is a better approximation to x* than x_3. Use (x_2, f(x_2)), (x_3, f(x_3)), (x_4, f(x_4)) as the next three points, and continue the process until convergence.
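A sketch of the full iteration, using `cmath.sqrt` so complex roots are reachable from real starting points. The sign choice maximizes the magnitude of the denominator, which agrees with sign(s) for real s and extends it to the complex case (my assumption):

```python
import cmath

def muller(f, x1, x2, x3, tol=1e-12, maxit=50):
    """Muller's method: intersect the parabola through three points with the x-axis."""
    for _ in range(maxit):
        c1 = (f(x1) - f(x3)) / (x1 - x3)
        c2 = (f(x2) - f(x3)) / (x2 - x3)
        d1 = (c1 - c2) / (x1 - x2)
        s = c2 + d1 * (x3 - x2)
        disc = cmath.sqrt(s * s - 4 * d1 * f(x3))
        # pick the sign giving the larger denominator (the root closer to x3)
        denom = s + disc if abs(s + disc) >= abs(s - disc) else s - disc
        x4 = x3 - 2 * f(x3) / denom
        if abs(x4 - x3) < tol:
            return x4
        x1, x2, x3 = x2, x3, x4
    return x3

# Real starting points, complex root of x^2 + 1 = 0:
root = muller(lambda x: x * x + 1, 0.0, 1.0, 2.0)
print(root)           # a root of x^2 + 1, here 1j
```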

Properties of Muller's method
- No derivative needed.
- Can find complex roots.
- Fails if f(x_1) = f(x_2) = f(x_3), which can happen when x* is a repeated root.
- Superlinear convergence, p ≈ 1.84, with λ = |f''(x*)/(2 f'(x*))|^β, where β = (p - 1)/2. The proof is similar to the secant method's.

Accelerating convergence

Aitken's Δ² method
Accelerates the convergence of a linearly convergent sequence.
Suppose {p_k}, k = 0, 1, ..., converges to p linearly, and (p_{k+1} - p)/(p_k - p) > 0 for k > N, where N is some constant. Then the sequence
q_k = p_k - (p_{k+1} - p_k)² / (p_{k+2} - 2 p_{k+1} + p_k)
converges to p with a better convergence order than p_k:
lim_{k→∞} (q_k - p)/(p_k - p) = 0.
(LVF pp.197; also check last year's notes.)
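A sketch on a linearly convergent sequence. The test iteration p_{k+1} = cos(p_k) is my choice; it converges linearly to the fixed point p ≈ 0.739085:

```python
import math

def aitken(p):
    """Aitken's Δ² transform: q_k = p_k - (Δp_k)² / (Δ²p_k)."""
    return [p[k] - (p[k + 1] - p[k]) ** 2 / (p[k + 2] - 2 * p[k + 1] + p[k])
            for k in range(len(p) - 2)]

p = [1.0]
for _ in range(8):
    p.append(math.cos(p[-1]))          # fixed-point iteration for cos(x) = x
q = aitken(p)
fixed = 0.7390851332151607             # the fixed point of cos, to double precision
print(abs(p[-1] - fixed), abs(q[-1] - fixed))   # q's error is much smaller
```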

Sketch of the proof
Since lim_{k→∞} (p_{k+1} - p)/(p_k - p) = λ > 0, for large k
(p_{k+1} - p)/(p_k - p) ≈ (p_{k+2} - p)/(p_{k+1} - p).
Expanding the terms and solving for p yields
p ≈ p_k - (p_{k+1} - p_k)² / (p_{k+2} - 2 p_{k+1} + p_k) = q_k.
Comparing q_k - p and p_k - p for large k gives
lim_{k→∞} (q_k - p)/(p_k - p) = 0.