Gao and Cao Journal of Inequalities and Applications (2018) 2018:108

RESEARCH · Open Access

A class of derivative-free trust-region methods with interior backtracking technique for nonlinear optimization problems subject to linear inequality constraints

Jing Gao¹ and Jian Cao²*

*Correspondence: evercall@163.com. ²School of Information Technology and Media, Beihua University, Jilin, P.R. China. Full list of author information is available at the end of the article.

© The Author(s) 2018. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Abstract

This paper focuses on a class of nonlinear optimization problems subject to linear inequality constraints whose objective derivatives are not available. We propose a derivative-free trust-region method with an interior backtracking technique for this class of problems. The proposed algorithm has four properties. First, a derivative-free strategy is applied to reduce the algorithm's requirement for first- or second-order derivative information. Second, an interior backtracking technique not only reduces the number of iterations for solving the trust-region subproblem but also guarantees global convergence to standard stationary points. Third, the local convergence rate is analyzed under some reasonable assumptions. Finally, numerical experiments demonstrate that the new algorithm is effective.

MSC: 49M37; 65K05; 90C30; 90C51

Keywords: Affine scaling; Trust-region method; Inequality constraints; Derivative-free optimization; Interior backtracking technique

1 Introduction

In this paper, we analyze the solution of the following nonlinear optimization problem:

  min f(x)  s.t.  Ax ≥ b,  (1)

where f(x) is a nonlinear, twice continuously differentiable function whose first- and second-order derivatives are not explicitly available, A := [a_1, a_2, ..., a_m]ᵀ ∈ ℝ^{m×n} with a_i ∈ ℝⁿ, and b := [b_1, b_2, ..., b_m]ᵀ ∈ ℝᵐ. The feasible set of (1) is denoted by Ω := {x ∈ ℝⁿ | Ax ≥ b}, and the strict interior of the feasible set is int(Ω) := {x ∈ ℝⁿ | Ax > b}.
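As a point of reference for this notation, here is a minimal NumPy sketch of the feasible set Ω and its strict interior; the constraint data A and b below are illustrative only, not from the paper:

```python
import numpy as np

# Illustrative data: three constraints a_i^T x >= b_i in R^2, i.e., Ax >= b.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [-1.0, -1.0]])
b = np.array([0.0, 0.0, -4.0])

def in_feasible_set(x):
    """x lies in Omega = {x : Ax >= b}."""
    return bool(np.all(A @ x - b >= 0))

def in_strict_interior(x):
    """x lies in int(Omega) = {x : Ax > b}, where the interior method operates."""
    return bool(np.all(A @ x - b > 0))

print(in_feasible_set(np.array([1.0, 1.0])),
      in_strict_interior(np.array([0.0, 1.0])))  # True False
```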

1.1 Affine-scaling matrix for inequality constraints

The KKT system of (1) is

  ∇f(x*) − Aᵀλ_f* = 0,
  diag{Ax* − b} λ_f* = 0,  (2)
  Ax* − b ≥ 0,  λ_f* ≥ 0,

where λ_f* ∈ ℝᵐ. A feasible point x* is said to be a stationary point of problem (1) if there exists a vector 0 ≤ λ_f* ∈ ℝᵐ such that the KKT system (2) holds. To solve this KKT system, several effective affine-scaling algorithms have been designed. Reference [1] proposed an affine-scaling trust-region method with an interior-point technique for bound-constrained semismooth equations. Reference [2] introduced affine-scaling interior-point Newton methods for bound-constrained nonlinear optimization. In particular, [3] proved the superlinear and quadratic convergence of affine-scaling interior-point Newton methods for bound-constrained optimization problems without the strict complementarity assumption. Different affine-scaling matrices give rise to different algorithms. In [4], the Dikin affine scaling was defined by

  D(x) := diag{Ax − b}  and  D_k := D(x_k).  (3)

Moreover, the diagonal matrix C_k^f := diag{λ_k^f} was presented in [4]. Here λ_k^f can be obtained as a least-squares Lagrangian multiplier approximation, computed as the least-squares solution of

  [Aᵀ; D_k⁻¹] λ_k^f ≈ [∇f_k; 0],  (4)

where [Aᵀ; D_k⁻¹] denotes the (n + m) × m matrix obtained by stacking Aᵀ on top of D_k⁻¹. One efficient affine-scaling interior-point trust-region model is the one presented in [5] and [6], written in the form

  min q_k^f(p) = ∇f_kᵀ p + ½ pᵀ H_k^f p + ½ pᵀ Aᵀ D_k⁻¹ C_k^f A p
  subject to ‖[p; D_k⁻¹ A p]‖ ≤ Δ_k,  (5)

where ∇f(x_k) is the gradient of f(x) at the current iterate and H_k^f is either ∇²f(x_k) or an approximation of it. Furthermore, ‖∇f_kᵀ h_k^f‖ ≤ ε, where

  h_k^f = −(∇f(x_k) − Aᵀ λ_k^f)  (6)

and ε is a small enough constant, is usually taken as the termination criterion in this class of algorithms.

Motivation The above discussion illustrates that the affine-scaling interior-point trust-region method is an effective way to solve nonlinear optimization problems with inequality constraints, and the trust-region framework guarantees stable numerical performance. However, in Eqs. (4)–(6) the first- and second-order derivatives play important roles in the computational process, so these methods may fail on optimization problems like (1).
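A sketch of the scaling matrix (3) and the least-squares multiplier (4), with NumPy's lstsq standing in for the least-squares solve; the function names are ours, for illustration only:

```python
import numpy as np

def dikin_scaling(A, b, x):
    """D(x) = diag{Ax - b} of (3)."""
    return np.diag(A @ x - b)

def ls_multiplier(A, b, x, grad):
    """Least-squares multiplier of (4): minimize
    ||A^T lam - grad||^2 + ||D^{-1} lam||^2 over lam."""
    m = A.shape[0]
    slack = A @ x - b                                  # positive when x is strictly interior
    stacked = np.vstack([A.T, np.diag(1.0 / slack)])   # [A^T; D^{-1}], (n+m) x m
    rhs = np.concatenate([grad, np.zeros(m)])
    lam, *_ = np.linalg.lstsq(stacked, rhs, rcond=None)
    return lam
```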

If both the feasibility and the stability of the algorithm are to be guaranteed, we should consider derivative-free trust-region methods.

1.2 Derivative-free technique for the trust-region subproblem

Since the first- or second-order derivatives of the objective function are not explicitly available, derivative-free optimization algorithms have long been favored by researchers. The application forms of derivative-free theory are diverse [7, 8] and widely applied. Reference [9] proposed a derivative-free algorithm for least-squares minimization, whose local convergence was proved in [10]. Reference [11] presented a derivative-free approach to constrained multiobjective nonsmooth optimization. Reference [12] presented a higher-order contingent derivative of perturbation maps in multiobjective optimization. In [13], Conn et al. proposed an unconstrained derivative-free trust-region method. They constructed the trust-region subproblem

  min_{s ∈ B(0; Δ_k)} m_k = m(x_k + s) = m(x_k) + sᵀ g_k + ½ sᵀ H_k^m s

by a polynomial interpolation technique, where ∇m(x_k) = g_k and ∇²m(x_k) = H_k^m. Following this idea, we consider an interpolation sample set Y = {y⁰, y¹, ..., yᵗ} around the current iterate x_k and construct the trust-region subproblem

  min q_k^m(p) = g_kᵀ p + ½ pᵀ H_k^m p + ½ pᵀ Aᵀ D_k⁻¹ C_k^m A p
  s.t. ‖[p; D_k⁻¹ A p]‖ ≤ Δ_k,  (7)

where C_k^m := diag{λ_k^m} with λ_k^m obtained as the least-squares solution of

  [Aᵀ; D_k⁻¹] λ_k^m ≈ [g_k; 0],  (8)

and

  h_k^m = −(g_k − Aᵀ λ_k^m).  (9)

We should note that the gradients and Hessians in (5) and (7), in (4) and (8), and in (6) and (9) are different. Meanwhile, since the algorithm in this paper uses both the descent direction p_k and the stepsize α_k to update the iterate, we give a new definition of the error bounds between the objective function f(x + αp) and the approximation function m(x + αp) to ensure global convergence. We give the details after assumption (A1).

Assumption (A1) Suppose that a level set L(x₀) and a maximal radius Δ_max are given. Assume that f is twice continuously differentiable with Lipschitz continuous Hessian in an appropriate open domain containing the Δ_max-neighborhood ⋃_{x ∈ L(x₀)} B(x, Δ_max) of the set L(x₀).

Definition 1 Let f satisfy (A1), and let M = {m : ℝⁿ → ℝ, m ∈ C²} be a set of model functions. Suppose there exist positive constants κ_ef, κ_eg, κ_eh, and κ_blh such that, for any x ∈ L(x₀), Δ ∈ (0, Δ_max], and α ∈ (0, 1], there is a model function m(x + αp) ∈ M, with Lipschitz continuous Hessian and corresponding Lipschitz constant bounded by κ_blh, such that:

(1) the error between the Hessian of the model m(x + αp) and the Hessian of the function f(x + αp) satisfies

  ‖∇²f(x + αp) − ∇²m(x + αp)‖ ≤ κ_eh αΔ,  ∀p ∈ B(0, Δ);  (10)

(2) the error between the gradient of the model m(x + αp) and the gradient of the function f(x + αp) satisfies

  ‖∇f(x + αp) − ∇m(x + αp)‖ ≤ κ_eg α²Δ²,  ∀p ∈ B(0, Δ);  (11)

(3) the error between the model m(x + αp) and the function f(x + αp) satisfies

  |f(x + αp) − m(x + αp)| ≤ κ_ef α³Δ³,  ∀p ∈ B(0, Δ).  (12)

Such a model m is called fully quadratic on B(x, Δ).

In this paper, we aim to present a class of derivative-free trust-region methods for nonlinear programming with linear inequality constraints. The main features of this paper are the following. We use the derivatives of the approximation function m(x + αp) in place of the derivatives of the objective function f(x + αp), which removes the algorithm's need for the gradient and Hessian at the iterates. We solve an affine-scaling trust-region subproblem to find a feasible search direction in each iteration: in the kth iteration, a feasible search direction p_k is obtained from the affine-scaling trust-region subproblem, while an interior backtracking skill is applied both to determine the stepsize α_k and to guarantee the feasibility of the iterates. We show that the iterates generated by the proposed algorithm converge to optimal points of (1), and local convergence is established under some reasonable assumptions.

This paper is organized as follows. We describe a class of derivative-free trust-region methods in Sect. 2. The main results, including the global convergence property and the local convergence rate, are discussed in Sect. 3. Numerical results are illustrated in Sect. 4. Finally, we give some conclusions.

Notation In this paper, ‖·‖ is the 2-norm for a vector and the induced 2-norm for a matrix. B ⊂ ℝⁿ is a closed ball, and B(x, Δ) is the closed ball centered at x with radius Δ > 0. Y is a sample set, and L(x₀) = {x ∈ ℝⁿ | f(x) ≤ f(x₀), Ax ≥ b} is the level set of the objective function f. We use the subscripts f and m to distinguish quantities of the original function from those of the approximate function; for example, H_k^f is the Hessian of f at the kth iteration and H_k^m is the Hessian of m at the kth iteration.

2 A derivative-free trust-region method with interior backtracking technique

To solve the optimization problem (1), whose first- or second-order derivatives are not all available, we design a derivative-free trust-region method. The affine-scaling matrix for the linear inequality constraints is given by (3). We choose a stepsize α_k satisfying the following inequalities:

  f(x_k + α_k p_k) ≤ f(x_k) + α_k κ₁ g_kᵀ p_k,  (13a)
  x_k + α_k p_k ∈ Ω.  (13b)
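A minimal sketch of this backtracking loop: step 5 of Algorithm 1 below tries α_k = 1, α, α², ... until (13a) and (13b) hold. The values of κ₁ and the shrink factor here are illustrative placeholders, not prescribed by the paper:

```python
import numpy as np

def interior_backtrack(f, A, b, x, p, g, kappa1=1e-4, shrink=0.2, max_tries=50):
    """Return the first alpha in {1, shrink, shrink^2, ...} satisfying
    the sufficient decrease (13a) and the feasibility (13b)."""
    fx, slope, alpha = f(x), g @ p, 1.0
    for _ in range(max_tries):
        trial = x + alpha * p
        feasible = np.all(A @ trial - b >= 0)                 # (13b): trial in Omega
        decrease = f(trial) <= fx + alpha * kappa1 * slope    # (13a)
        if feasible and decrease:
            return alpha
        alpha *= shrink
    return alpha
```

The factor θ_k of (14) below then pulls the accepted point into the strict interior whenever it lands on the boundary.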

Moreover, set

  θ_k = 1 if x_k + α_k p_k ∈ int(Ω),  θ_k = 1 − O(‖p_k‖) otherwise,  (14)

where θ_k ∈ (θ₀, 1] for some 0 < θ₀ < 1. The factor θ_k ensures that the iterates generated by the algorithm remain strictly interior. Combined with (13a) and (13b), this interior backtracking technique guarantees the feasibility of the iterates. The algorithm possesses the trust-region property, and the derivative-free technique is reflected in the trust-region subproblem (7), since the gradient g_k and the Hessian H_k^m come from the approximation function, unlike ∇f_k and H_k^f in (5), and satisfy the error bounds (11) and (12). We adopt ‖g_kᵀ h_k^m‖ as the termination criterion. We now present the derivative-free trust-region method in detail (see Algorithm 1).

Remark 1 We add a backtracking interior line-search technique to the algorithm. It is helpful in reducing the number of iterations. Inequality (13a) is used to guarantee the descent property of f(x), and (13b) ensures the feasibility of x_k + α_k p_k.

Remark 2 The scalar Γ_k denotes the maximal stepsize along p_k to the boundary (13b) of the linear inequality constraints,

  Γ_k := min{ −(a_iᵀ x_k − b_i)/(a_iᵀ p_k) : (a_iᵀ x_k − b_i)/(a_iᵀ p_k) < 0, i = 1, 2, ..., m } > 0,  (15)

with Γ_k := +∞ if (a_iᵀ x_k − b_i)/(a_iᵀ p_k) ≥ 0 for all i = 1, 2, ..., m. A key property of the scalar Γ_k is that any step α_k p_k with α_k ≤ Γ_k leads to a point x_k + α_k p_k that does not violate the linear inequality constraints.

Remark 3 Let

  M_k^m = [ H_k^m  0 ; 0  C_k^m ].  (16)

The first-order necessary conditions for (7) imply that there exists ν_k^m ≥ 0 such that

  (M_k^m + ν_k^m I)[p_k; p̂_k] = −[g_k; 0] + [Aᵀ; D_k⁻¹] λ_{k+1}^m,
  with ν_k^m (Δ_k − ‖[p_k; p̂_k]‖) = 0,  (17)

where p̂_k = D_k⁻¹ A p_k.

In order to obtain a suitable approximation function, Algorithm 1 updates the objective function of the trust-region subproblem when necessary. The model-improvement algorithm is applied only if ‖g_kᵀ h_k^m‖ ≤ ε and at least one of the following holds: the model m(x + αp) is not certifiably fully quadratic on B(x_k, Δ_k), or Δ_k > ι‖g_kᵀ h_k^m‖. It improves the current approximate function m(x + αp) to meet the requirements of the error bounds, so that the model function becomes fully quadratic. We display the model-improvement mechanism in Algorithm 2 below, which follows the same principle as the corresponding algorithm proposed in [14], with a constant ω ∈ (0, 1).
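The boundary stepsize Γ_k of (15) in Remark 2 is cheap to compute; a sketch in the same illustrative NumPy setting as above:

```python
import numpy as np

def step_to_boundary(A, b, x, p):
    """Gamma_k of (15): the largest t with A(x + t p) >= b along p,
    or +inf if no constraint blocks the direction."""
    slack = A @ x - b            # a_i^T x - b_i, positive in int(Omega)
    rate = A @ p                 # a_i^T p
    blocking = rate < 0          # exactly the indices with ratio < 0 in (15)
    if not np.any(blocking):
        return np.inf
    return float(np.min(-slack[blocking] / rate[blocking]))
```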

Algorithm 1 Derivative-free trust-region method with interior backtracking technique

1. Initialization: Given x₀, Δ_max > 0, Δ₀ ∈ (0, Δ_max], constants 0 ≤ η₀ ≤ η₁ < 1 (η₁ ≠ 0) and θ₀ ∈ (0, 1). Set k := 0.
2. Construct model function: Let y_k⁰ = x_k and obtain an interpolation point set Y = {y⁰, ..., yᵗ}. Construct a quadratic function m(x_k + αp) and obtain its data g_k, H_k^m. Calculate D_k, λ_k^m, and h_k^m from (3), (8), and (9).
3. Termination criterion: If ‖g_kᵀ h_k^m‖ > ε, go to step 4. Otherwise, consider the model m on B(x_k, Δ_k); two cases can occur:
   (a) If m is not fully quadratic on B(x_k, Δ_k) or Δ_k > ι‖g_kᵀ h_k^m‖ holds, construct another model to modify m, set Δ_k = min{max{Δ_k, β‖g_kᵀ h_k^m‖}, Δ_max}, and go to step 4.
   (b) Otherwise, we have reached an optimal point: stop.
4. Trust-region subproblem: Solve the trust-region subproblem (7) to find a descent direction p_k.
5. Stepsize: Choose α_k = 1, α, α², ..., until the inequalities (13a) and (13b) are satisfied.
6. Set s_k = α_k θ_k p_k with θ_k given by (14).
7. Calculate

  Pred(s_k) = m(x_k) − m(x_k + s_k),  Ared(s_k) = f(x_k) − f(x_k + s_k),  ρ_k = Ared(s_k)/Pred(s_k).

   (a) If ρ_k ≥ η₁, or if both ρ_k ≥ η₀ and m is fully quadratic on B(x_k, Δ_k), then set y_{k+1}⁰ = x_{k+1} = x_k + s_k and go to step 9.
   (b) Otherwise, x_{k+1} = x_k.
8. If ρ_k < η₁, call Algorithm 2 to guarantee that m is fully quadratic. (a) If m is not fully quadratic, improve the model. (b) Otherwise, set m_{k+1} = m_k.
9. Trust-region radius update: Set

  Δ_{k+1} ∈ { {min{ςΔ_k, Δ_max}}        if ρ_k ≥ η₁ and Δ_k < ι‖g_kᵀ h_k^m‖,
             [Δ_k, min{ςΔ_k, Δ_max}]   if ρ_k ≥ η₁ and Δ_k ≥ ι‖g_kᵀ h_k^m‖,
             {ζΔ_k}                    if ρ_k < η₁ and m is fully quadratic,
             {Δ_k}                     if ρ_k < η₁ and m is not fully quadratic.

10. Set k := k + 1 and go to step 2.

Algorithm 2 Model-improvement mechanism

1. Initialization: Set i = 0, m^(0) = m_k.
2. Repeat: i = i + 1; Δ_k = ω^{i−1}Δ_k; ensure that m^{(i−1)} satisfies the error bounds (10)–(12) of Definition 1 on B(x_k, ω^{i−1}Δ_k).
   Until Δ_k ≤ ι‖(g_kᵀ h_k^m)^{(i)}‖.
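Step 2 of Algorithm 1 builds the quadratic model from the sample set Y. The paper's interpolation scheme follows [13, 14]; the least-squares fit below is only a self-contained illustration of the idea, and all names in it are ours:

```python
import numpy as np
from itertools import combinations_with_replacement

def fit_quadratic_model(Y, fvals, x):
    """Fit m(x + s) = c + g^T s + 0.5 s^T H s to the samples y in Y with
    values fvals, in the least-squares sense; returns (c, g, H)."""
    n = len(x)
    pairs = list(combinations_with_replacement(range(n), 2))  # (i, j), i <= j
    rows = []
    for y in Y:
        s = y - x
        quad = [s[i] * s[j] * (0.5 if i == j else 1.0) for i, j in pairs]
        rows.append(np.concatenate(([1.0], s, quad)))
    coef, *_ = np.linalg.lstsq(np.array(rows), np.asarray(fvals), rcond=None)
    c, g = coef[0], coef[1 : n + 1]
    H = np.zeros((n, n))
    for (i, j), v in zip(pairs, coef[n + 1 :]):
        H[i, j] = H[j, i] = v
    return c, g, H      # g plays the role of g_k and H of H_k^m in (7)
```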

3 Main results and discussion

In this section, we discuss properties of the proposed algorithm, including the error bounds, the sufficient-descent property, and the global and local convergence properties. First of all, we make some necessary assumptions.

Assumptions
(A2) The level set L(x₀) is bounded.
(A3) There exist positive constants κ_gf and κ_gm such that ‖∇f_k‖ ≤ κ_gf and ‖g_k‖ ≤ κ_gm, respectively, for all x_k ∈ L(x₀).
(A4) There exist positive constants κ_Hf and κ_Hm such that ‖H_k^f‖ ≤ κ_Hf and ‖H_k^m‖ ≤ κ_Hm, respectively, for all x_k ∈ L(x₀).
(A5) [Aᵀ; D_k⁻¹] has full rank for all x_k ∈ L(x₀).

3.1 Error bounds

Observe first that some error bounds hold immediately.

Lemma 1 Suppose that (A1)–(A5), the error bounds (10)–(12), and the fact that Δ_k ≤ Δ_max hold. If m is a fully quadratic model on B(x_k, Δ_k), then the following bound is true:

  ‖h_k^f − h_k^m‖ ≤ κ_h α_k Δ_k.  (18)

Proof Using the theory of matrix perturbation analysis and Eqs. (4) and (8), we obtain

  ‖λ_k^f − λ_k^m‖ ≤ ‖(AAᵀ + D_k⁻²)⁻¹ A‖ ‖∇f(x_k) − g_k‖,  (19)

where AAᵀ is a positive definite matrix and D_k is a diagonal matrix depending on x_k ∈ L(x₀). By (A2), there exists a constant κ_λ > 0 such that ‖(AAᵀ + D_k⁻²)⁻¹ A‖ ≤ κ_λ. Thus, from (6), (9), and the error bound (11), one has

  ‖h_k^f − h_k^m‖ = ‖∇f(x_k) − g_k − Aᵀ(λ_k^f − λ_k^m)‖
   ≤ ‖∇f(x_k) − g_k‖ + κ_λ‖A‖‖∇f(x_k) − g_k‖
   = (1 + κ_λ‖A‖)‖∇f(x_k) − g_k‖
   ≤ (1 + κ_λ‖A‖) κ_eg α_k² Δ_k²
   ≤ (1 + κ_λ‖A‖) κ_eg Δ_max α_k Δ_k.

Clearly, the conclusion holds with κ_h = (1 + κ_λ‖A‖) κ_eg Δ_max. □

Lemma 2 Suppose that (A1)–(A5), the error bounds (10)–(12), and the fact that Δ_k ≤ Δ_max hold. If m is a fully quadratic model on B(x_k, Δ_k), then, for some constant κ₂, one has

  ‖∇f(x_k)ᵀ h_k^f − g_kᵀ h_k^m‖ ≤ κ₂ α_k Δ_k.  (20)

Proof Using the triangle inequality, the Cauchy–Schwarz inequality, (18), (A3), the error bounds (10)–(12), and the facts that α_k ∈ (0, 1] and Δ_k ≤ Δ_max successively, we obtain

  ‖∇f(x_k)ᵀ h_k^f − g_kᵀ h_k^m‖

   ≤ ‖∇f(x_k) − g_k‖‖h_k^f‖ + ‖g_k‖‖h_k^m − h_k^f‖
   ≤ ‖∇f(x_k) − g_k‖‖h_k^f‖ + κ_h α_k Δ_k ‖g_k‖
   ≤ (κ_eg κ_gf Δ_max + κ_gm κ_h) α_k Δ_k,

which implies that inequality (20) holds with κ₂ = κ_eg κ_gf Δ_max + κ_gm κ_h. □

Lemma 3 Suppose that (A1)–(A5), the error bounds (10)–(12), and the fact that Δ_k ≤ Δ_max hold. If ‖∇f_kᵀ h_k^f‖ ≠ 0, then step 3 of Algorithm 1 will stop after a finite number of improvement steps.

Proof We prove that ‖∇f_kᵀ h_k^f‖ must be zero if the loop of Algorithm 2 is infinite. In fact, two cases can cause Algorithm 2 to be invoked: either m is not fully quadratic, or the radius satisfies Δ_k > ι‖g_kᵀ h_k^m‖. Then set m^(0) = m_k and improve the model to be fully quadratic on B(x_k, Δ_k); denote the result by m^(1). If the value (g_kᵀ h_k^m)^(1) of m^(1) satisfies the inequality Δ_k ≤ ι‖(g_kᵀ h_k^m)^(1)‖, Algorithm 2 stops. Otherwise, ι‖(g_kᵀ h_k^m)^(1)‖ < Δ_k holds, and Algorithm 2 improves the model on B(x_k, ωΔ_k); the resulting model is denoted by m^(2). If m^(2) satisfies ι‖(g_kᵀ h_k^m)^(2)‖ ≥ ωΔ_k, the procedure stops. If not, the radius is multiplied by ω again, Algorithm 2 improves the model on B(x_k, ω²Δ_k), and so on. The only case in which Algorithm 2 is infinite is if

  ι‖(g_kᵀ h_k^m)^(i)‖ < ω^{i−1} Δ_k for all i ≥ 1,

which implies lim_{i→+∞} ‖(g_kᵀ h_k^m)^(i)‖ = 0. By the bound (20),

  ‖∇f_kᵀ h_k^f‖ − ‖(g_kᵀ h_k^m)^(i)‖ ≤ ‖∇f_kᵀ h_k^f − (g_kᵀ h_k^m)^(i)‖ ≤ κ₂ ω^{i−1} Δ_k α_k,

so

  ‖∇f_kᵀ h_k^f‖ ≤ ‖∇f_kᵀ h_k^f − (g_kᵀ h_k^m)^(i)‖ + ‖(g_kᵀ h_k^m)^(i)‖
   ≤ κ₂ ω^{i−1} Δ_k α_k + (ω^{i−1}/ι) Δ_k
   ≤ (κ₂ + 1/ι) ω^{i−1} Δ_k

for all i ≥ 1. By the choice of ω ∈ (0, 1), the above inequality forces ‖∇f_kᵀ h_k^f‖ = 0. This shows that step 3 stops after a finite number of improvements. □

3.2 Sufficient-descent property

In order to guarantee the global convergence property of the proposed algorithm, it is necessary to show that a sufficient-descent condition is satisfied at the kth iteration. It was

shown in [6] that if the step p_k is the optimal solution of the trust-region subproblem (7), then there is a constant κ₃ > 0 such that

  −g_kᵀ p_k ≥ ½ κ₃ ‖g_kᵀ h_k^m‖ min{Δ_k, ‖g_kᵀ h_k^m‖/‖M_k^m‖}.  (21)

Lemma 4 Suppose that (A1)–(A5) and the error bounds (10)–(12) hold, and let p_k be the solution of the trust-region subproblem (7). Then there must exist an appropriate α_k > 0 which satisfies inequality (13a).

Proof We start by considering the maximal step length along the descent direction of the trust-region subproblem that preserves sufficient decrease in the sense of (13a). Successively using the mean value theorem and (11), we obtain

  f(x_k) − f(x_k + αp_k) = −α ∇f(x_k)ᵀ p_k − ½ α² p_kᵀ ∇²f(ξ_k) p_k
   ≥ −κ_eg α³ Δ_k³ − α g_kᵀ p_k − ½ α² p_kᵀ ∇²f(ξ_k) p_k
   = −α κ₁ g_kᵀ p_k − κ_eg α³ Δ_k³ − α(1 − κ₁) g_kᵀ p_k − ½ α² p_kᵀ ∇²f(ξ_k) p_k,  (22)

where ξ_k ∈ (x_k, x_k + s_k). Two cases may be considered. The first is p_kᵀ ∇²f(ξ_k) p_k ≤ 0. By dropping the last term of (22) and using (21), the bound ‖g_kᵀ h_k^m‖ ≥ Δ_k κ_λm κ_Hm for k large enough, and the fact that Δ_k ≤ Δ_max, it is thus easy to see that there exists an

  α_k = [κ₃(1 − κ₁) κ_Hm κ_λm / (κ_eg Δ_max)]^{1/2} > 0

such that (13a) holds. The second case is p_kᵀ ∇²f(ξ_k) p_k > 0. Using the Cauchy–Schwarz inequality and the facts that α ∈ (0, 1] and Δ_k ≤ Δ_max, we deduce that

  f(x_k) − f(x_k + αp_k) + α κ₁ g_kᵀ p_k
   ≥ −κ_eg α³ Δ_k³ − α(1 − κ₁) g_kᵀ p_k − ½ α² κ_Hf ‖p_k‖²
   ≥ −κ_eg α² Δ_max Δ_k² − α(1 − κ₁) g_kᵀ p_k − ½ α² κ_Hf Δ_k²
   = α[ −(κ_eg Δ_max + ½ κ_Hf) α Δ_k² − (1 − κ₁) g_kᵀ p_k ] ≥ 0,

when

  α_k = κ₃(1 − κ₁) κ_Hm κ_λm / (κ_eg Δ_max + ½ κ_Hf) > 0.

Thus the final conclusion is obtained. □

We therefore see that it is reasonable to design the line-search criterion of step 5, which provides a nonincreasing sequence {f(x_k)}.

Lemma 5 Let the step p_k be the solution of the trust-region subproblem (7), and suppose that (A1)–(A5) hold. Then there exists a positive constant κ₄ such that the step s_k satisfies the following

sufficient-descent condition:

  Pred(s_k) ≥ ½ κ₄ α_k θ_k ‖g_kᵀ h_k^m‖ min{Δ_k, ‖g_kᵀ h_k^m‖/‖M_k^m‖},  (23)

for all g_k, h_k^m, M_k^m, and Δ_k.

Proof Combining (7), (17), Lemma 4, θ_k ∈ (θ₀, 1], and the fact that α_k ≤ 1, we get

  Pred(s_k) = m_k − m(x_k + s_k)
   = −α_k θ_k g_kᵀ p_k − ½ (α_k θ_k)² [p_k; p̂_k]ᵀ M_k^m [p_k; p̂_k]
   =(17) −α_k θ_k g_kᵀ p_k + ½ (α_k θ_k)² g_kᵀ p_k + ½ (α_k θ_k)² ν_k^m ‖[p_k; p̂_k]‖²
   ≥ −(α_k θ_k − ½ (α_k θ_k)²) g_kᵀ p_k
   ≥ ½ α_k θ_k · ½ κ₃ ‖g_kᵀ h_k^m‖ min{Δ_k, ‖g_kᵀ h_k^m‖/‖M_k^m‖}
   = ½ κ₄ α_k θ_k ‖g_kᵀ h_k^m‖ min{Δ_k, ‖g_kᵀ h_k^m‖/‖M_k^m‖}. □

3.3 Global convergence

Every iterate of the (k + 1)th iteration is chosen in the region B(x_k, α_kΔ_k). In the following lemma one first shows that the current iteration must be successful if α_kΔ_k is small enough.

Lemma 6 Suppose that (A1)–(A5) and the error bounds (10)–(12) hold, m_k is fully quadratic on B(x_k, Δ_k), ‖g_kᵀ h_k^m‖ ≠ 0, and

  α_k Δ_k ≤ min{ 1/(κ_Hm κ_λm), κ₄(1 − η₁)/(κ_ef Δ_max) } ‖g_kᵀ h_k^m‖,

where κ_λm is the bound of ‖C_k^m‖ for all x_k ∈ L(x₀). Then the kth iteration is successful.

Proof We note that, for all k and the model function m_k, one has f(x_k) = m(x_k). Let M_k^f = [H_k^f 0; 0 C_k^f]. From (16) and (A3), we know that ‖M_k^m‖ ≤ κ_Hm κ_λm. Thus, combining Δ_k ≤ ‖g_kᵀ h_k^m‖/(κ_Hm κ_λm) ≤ ‖g_kᵀ h_k^m‖/‖M_k^m‖ with the sufficient-decrease condition (23), we immediately get

  Pred(s_k) ≥ ½ κ₄ α_k θ_k ‖g_kᵀ h_k^m‖ min{Δ_k, ‖g_kᵀ h_k^m‖/‖M_k^m‖} = ½ κ₄ α_k θ_k ‖g_kᵀ h_k^m‖ Δ_k.

Using Eqs. (12), (23), and the facts that α_k ∈ (0, 1] and θ_k ∈ (0, 1], we have

  |ρ_k − 1| = |(f(x_k) − f(x_{k+1}))/(m(x_k) − m(x_{k+1})) − 1|

   ≤ (|f(x_k) − m(x_k)| + |m(x_{k+1}) − f(x_{k+1})|) / |m(x_k) − m(x_{k+1})|
   ≤ κ_ef α_k³ θ_k³ Δ_k³ / (½ κ₄ α_k θ_k ‖g_kᵀ h_k^m‖ Δ_k)
   ≤ 2 κ_ef Δ_max θ_k² α_k Δ_k / (κ₄ ‖g_kᵀ h_k^m‖)
   ≤ 1 − η₁,

using f(x_k) = m(x_k) and θ_k ≤ 1. Thus ρ_k ≥ η₁ and the iteration is successful. □

Lemma 7 Suppose that (A1)–(A5) and the error bounds (10)–(12) hold. If the number of successful iterations is finite, then

  lim_{k→+∞} ‖∇f(x_k)ᵀ h_k^f‖ = 0.

Proof Consider that all the model-improving iterations before m_k becomes fully quadratic number fewer than a constant N. Suppose that the current iteration comes after the last successful one. This means that an infinite number of iterations are unsuccessful or model-improving, and in these two cases Δ_k is shrinking. Furthermore, Δ_k is reduced by a factor ζ at least once every N iterations, which implies Δ_k → 0. For the jth iteration, denote the ith iteration after j by the index i_j; then

  ‖x_j − x_{i_j}‖ ≤ N Δ_j → 0,  j → +∞.

Using the triangle inequality, we obtain

  ‖∇f(x_j)ᵀ h_j^f‖ ≤ ‖∇f(x_j)ᵀ h_j^f − ∇f(x_j)ᵀ h_{i_j}^f‖ + ‖∇f(x_j)ᵀ h_{i_j}^f − ∇f(x_{i_j})ᵀ h_{i_j}^f‖
   + ‖∇f(x_{i_j})ᵀ h_{i_j}^f − g_{i_j}ᵀ h_{i_j}^f‖ + ‖g_{i_j}ᵀ h_{i_j}^f − g_{i_j}ᵀ h_{i_j}^m‖ + ‖g_{i_j}ᵀ h_{i_j}^m‖.

The remaining work is to show that all the terms on the right-hand side converge to zero. By the Lipschitz continuity of ∇f and the fact that ‖x_{i_j} − x_j‖ → 0, the first and second terms converge to zero. The inequalities (10) and (11) imply that the third and fourth terms on the right-hand side converge to zero. According to Lemma 3, if ‖g_{i_j}ᵀ h_{i_j}^m‖ ≠ 0 for Δ_{i_j} small enough, iteration i_j would be successful, which yields a contradiction. Thus the last term converges to zero. □

Lemma 8 Suppose that (A1)–(A5), the error bounds (10)–(12), and (23) hold. Suppose furthermore that the strict complementarity of problem (1) holds. Then

  lim inf_{k→+∞} ‖g_kᵀ h_k^m‖ = 0.

Proof The key is to find a contradiction with the fact that {f(x_k)} is a nonincreasing bounded sequence unless x_k is a stationary point. We thus have to verify that there exists some ε̃ > 0 such that {f(x_k)} is not convergent under the assumption ‖g_kᵀ h_k^m‖ ≥ ε̃.

We observe from (13a), Lemma 4, and (21) that

  f(x_k) − f(x_k + α_k p_k) ≥ −α_k κ₁ g_kᵀ p_k
   ≥ ½ α_k κ₁ κ₃ ‖g_kᵀ h_k^m‖ min{Δ_k, ‖g_kᵀ h_k^m‖/‖M_k^m‖}
   ≥ ½ α_k κ₁ κ₃ ε̃ min{Δ_k, ε̃/‖M_k^m‖} ≥ 0.  (24)

Thus, in view of (24), two claims should be established next, namely

  lim inf_{k→+∞} α_k ≠ 0  (25)

and

  lim_{k→+∞} Δ_k ≠ 0.  (26)

We now start with the proof of (25). On one hand, suppose α_k is determined by (13b), the boundary of the inequality constraints along p_k. From Eq. (15),

  Γ_k = min{ −(a_iᵀ x_k − b_i)/(a_iᵀ p_k) : (a_iᵀ x_k − b_i)/(a_iᵀ p_k) < 0, i = 1, 2, ..., m } > 0,

with Γ_k = +∞ if (a_iᵀ x_k − b_i)/(a_iᵀ p_k) ≥ 0 for all i = 1, 2, ..., m, together with p̂_k = D_k⁻¹ A p_k and (17), we know that there exists λ_{k+1}^m such that

  a_iᵀ p_k = −(a_iᵀ x_k − b_i) (λ_{k+1}^m)_i / (ν_k^m + (λ_{k+1}^m)_i),

where p̂_k^i and (λ_{k+1}^m)_i are the ith components of the vectors p̂_k and λ_{k+1}^m, respectively. Hence there exists j ∈ {1, ..., m} such that

  Γ_k = (a_jᵀ x_k − b_j)/(−a_jᵀ p_k) = (ν_k^m + (λ_{k+1}^m)_j)/(λ_{k+1}^m)_j.  (27)

From (17) we have

  [Aᵀ; D_k⁻¹] λ_{k+1}^m = [g_k; 0] + (M_k^m + ν_k^m I)[p_k; p̂_k].

Since [Aᵀ; D_k⁻¹] has full rank for all x_k ∈ L(x₀), λ_k^m is bounded, and m(x) is twice continuously differentiable, there exist κ₅ > 0 and κ₆ > 0 such that

  ‖λ_{k+1}^m‖ ≤ κ₅ + (κ₆ + ν_k^m) Δ_k.

Using the fact that ν_k^m (Δ_k − ‖(p_k; p̂_k)‖) = 0 and taking norms on both sides of (17), we deduce that

  ν_k^m Δ_k = ν_k^m ‖(p_k; p̂_k)‖ ≥ ‖(g_k − Aᵀ λ_{k+1}^m) − D_k⁻¹ λ_{k+1}^m‖ − ‖M_k^m‖ ‖(p_k; p̂_k)‖
   ≥ ‖g_kᵀ h_k^m‖ − ‖M_k^m‖ ‖(p_k; p̂_k)‖,

and, noting that ‖(p_k; p̂_k)‖ ≤ Δ_k, we can obtain

  ν_k^m ≥ ‖g_kᵀ h_k^m‖/Δ_k − ‖M_k^m‖.

Combining the assumption ‖g_kᵀ h_k^m‖ ≥ ε̃ with Δ_k → 0, as deduced from (24), it is clear from the fact ‖M_k^m‖ ≤ κ_λm κ_Hm that, for k → ∞, lim ν_k^m = +∞. Thus (27) implies that the stepsize accepted by (13b) does not tend to 0. Furthermore, Δ_k → 0 means lim ‖p_k‖ = 0, from which we deduce that, for some 0 < θ₀ < 1 and θ_k − 1 = O(‖p_k‖), the strictly feasible stepsize θ_k ∈ (θ₀, 1] tends to 1.

From the above, we have already seen that (25) holds in the case that α_k is determined by (13b). There is another case, namely that α_k is determined by (13a). In this case, we are able to verify that α_k = 1 is acceptable when k is sufficiently large. If not,

  κ₁ g_kᵀ p_k < f(x_k + p_k) − f(x_k)

must hold. Applying the Taylor series, (10)–(11), (A3), and the fact that Δ_k ≤ Δ_max, we deduce that

  κ₁ g_kᵀ p_k < f(x_k + p_k) − f(x_k) = ∇f(x_k)ᵀ p_k + ½ p_kᵀ ∇²f(ξ_k) p_k
   ≤ (κ_eg Δ_max + ½ κ_eh Δ_max + ½ κ_Hm) Δ_k² + g_kᵀ p_k,

where ξ_k ∈ (x_k, x_k + s_k). This inequality is equivalent to the form

  (1 − κ₁) g_kᵀ p_k + (κ_eg Δ_max + ½ κ_eh Δ_max + ½ κ_Hm) Δ_k² > 0.

Moreover, (21) and ‖g_kᵀ h_k^m‖ ≥ ε̃ imply that

  −½ (1 − κ₁) κ₃ ε̃ min{Δ_k, ε̃/κ_Hm} + (κ_eg Δ_max + ½ κ_eh Δ_max + ½ κ_Hm) Δ_k² > 0.

Thus, if (1 − κ₁) κ₃ ε̃ / (2κ_eg Δ_max + κ_eh Δ_max + κ_Hm) ≤ ε̃/κ_Hm, we deduce from the inequality

  Δ_k [ (κ_eg Δ_max + ½ κ_eh Δ_max + ½ κ_Hm) Δ_k − ½ (1 − κ₁) κ₃ ε̃ ] > 0

that

  Δ_k > (1 − κ₁) κ₃ ε̃ / (2κ_eg Δ_max + κ_eh Δ_max + κ_Hm).

Clearly, a contradiction appears here, since Δ_k → 0. It implies that α_k = 1 for k sufficiently large. Therefore (25) always holds.

On the other hand, we should prove that (26) is true. From step 3 of Algorithm 1, we know that Δ_k ≤ ι‖g_kᵀ h_k^m‖; by the assumption ‖g_kᵀ h_k^m‖ ≥ ε̃, the criticality step allows Δ_k up to ιε̃. Whenever Δ_k falls below a constant κ₇ given by

  κ₇ = min{ ε̃/(κ_Hm κ_λm), κ₄ ε̃ (1 − η₁)/(κ_ef Δ_max) },

the kth iteration is either successful or model-improving, and hence, from step 9, we are able to deduce both that Δ_{k+1} ≥ Δ_k and that Δ_{k+1} ≥ ζΔ_k. Combining this with the rules of step 9, we conclude that Δ_{k+1} ≥ min{ιε̃, ζκ₇}. It means that Δ_k does not tend to 0 if ‖g_kᵀ h_k^m‖ ≥ ε̃.

In conclusion, (24) then forces a fixed positive decrease of f infinitely often, so the sequence {f(x_k)} is not convergent if we suppose that ‖g_kᵀ h_k^m‖ ≥ ε̃, which contradicts the fact that {f(x_k)} is a nonincreasing bounded sequence. It implies that lim inf_{k→+∞} ‖g_kᵀ h_k^m‖ = 0. □

Lemma 9 For any subsequence {k_i} such that

  lim_{i→+∞} ‖g_{k_i}ᵀ h_{k_i}^m‖ = 0,  (28)

we also have

  lim_{i→+∞} ‖∇f_{k_i}ᵀ h_{k_i}^f‖ = 0.  (29)

Proof First, we note that, by (28), ‖g_{k_i}ᵀ h_{k_i}^m‖ ≤ ε when i is sufficiently large. Thus the criticality step ensures that the model m_{k_i} is fully quadratic on the ball B(x_{k_i}, Δ_{k_i}), with Δ_{k_i} ≤ ι‖g_{k_i}ᵀ h_{k_i}^m‖, for all i sufficiently large (if ∇f_{k_i}ᵀ h_{k_i}^f ≠ 0). Then, using the bound (20) on the error between the termination measures of the function and of the model, we have

  ‖∇f_{k_i}ᵀ h_{k_i}^f − g_{k_i}ᵀ h_{k_i}^m‖ ≤ κ₂ α_{k_i} Δ_{k_i} ≤ κ₂ ι α_{k_i} ‖g_{k_i}ᵀ h_{k_i}^m‖ ≤ κ₂ ι ‖g_{k_i}ᵀ h_{k_i}^m‖.

As a consequence, we have

  ‖∇f_{k_i}ᵀ h_{k_i}^f‖ ≤ ‖∇f_{k_i}ᵀ h_{k_i}^f − g_{k_i}ᵀ h_{k_i}^m‖ + ‖g_{k_i}ᵀ h_{k_i}^m‖
   ≤ (κ₂ α_{k_i} ι + 1) ‖g_{k_i}ᵀ h_{k_i}^m‖ ≤ (κ₂ ι + 1) ‖g_{k_i}ᵀ h_{k_i}^m‖

for all i sufficiently large. But ‖g_{k_i}ᵀ h_{k_i}^m‖ → 0 implies that (29) holds. □

We then obtain the global convergence, derived from Lemmas 8 and 9.

Theorem 1 Suppose that (A1)–(A5), the error bounds (10)–(12), and (23) hold. Suppose furthermore that the strict complementarity of problem (1) holds. Let {x_k} ⊂ ℝⁿ be the sequence generated by Algorithm 1. Then

  lim inf_{k→+∞} ‖∇f_kᵀ h_k^f‖ = 0.

The above theorem shows that there exists a limit point that is first-order critical. In fact, we are able to prove that all limit points of the sequence of iterates are first-order critical.

Theorem 2 Suppose that (A1)–(A5), the error bounds (10)–(12), and (23) hold. Suppose furthermore that the strict complementarity of problem (1) holds. Let {x_k} ⊂ ℝⁿ be the sequence generated by Algorithm 1. Then

  lim_{k→+∞} ‖∇f_kᵀ h_k^f‖ = 0.

Proof We first obtain from Lemma 7 that the theorem holds in the case when the set S of successful iterations is finite. Hence, we will assume that S is infinite. For the purpose of deriving a contradiction, we suppose that there exists a subsequence {k_i} of successful or acceptable iterations such that

  ‖∇f_{k_i}ᵀ h_{k_i}^f‖ ≥ ε₁ > 0  (30)

for some ε₁ > 0 and for all i. Then, because of Lemma 9, we obtain ‖g_{k_i}ᵀ h_{k_i}^m‖ ≥ ε₂ > 0 for some ε₂ > 0 and for all i sufficiently large. Without loss of generality, we pick ε₂ such that

  ε₂ ≤ min{ ε₁/(2(2 + κ_eg ι)), ε }.  (31)

Lemma 8 then ensures the existence, for each k_i in the subsequence, of a first iteration l_i > k_i such that ‖g_{l_i}ᵀ h_{l_i}^m‖ < ε₂. By removing elements from {k_i}, without loss of generality and without a change of notation, we thus see that there exists another subsequence indexed by {l_i} such that

  ‖g_kᵀ h_k^m‖ ≥ ε₂ for k_i ≤ k < l_i  and  ‖g_{l_i}ᵀ h_{l_i}^m‖ < ε₂,  (32)

for i sufficiently large, with inequality (30) being retained.

We now restrict our attention to the set K corresponding to the subsequence of iterations whose indices are in the set

  ⋃_{i ∈ ℕ₀} {k ∈ ℕ₀ : k_i ≤ k < l_i},

where k_i and l_i belong to the two subsequences given above in (32). We know that ‖g_kᵀ h_k^m‖ ≥ ε₂ for k ∈ K. From Lemma 8, lim_{k→+∞, k∈K} α_k Δ_k = 0, and by Lemma 5 we conclude that, for any large enough k ∈ K, the iteration is either successful, if the model is fully quadratic, or model-improving otherwise. Moreover, for each k ∈ K ∩ S we have

  f(x_k) − f(x_k + s_k) ≥ η₁ [m(x_k) − m(x_k + s_k)]
   ≥ ½ η₁ κ₄ α_k θ_k ‖g_kᵀ h_k^m‖ min{Δ_k, ‖g_kᵀ h_k^m‖/‖M_k^m‖},

and, for any such large enough k, Δ_k ≤ ε₂/(κ_Hm κ_λm). Hence we have

  α_k θ_k Δ_k ≤ 2 [f(x_k) − f(x_k + s_k)]/(η₁ κ₄ ε₂)

for k ∈ K ∩ S sufficiently large. Since for any k ∈ K large enough the iteration is either successful or model-improving, and since for a model-improving iteration x_{k+1} = x_k, we have, for all i sufficiently large,

  ‖x_{k_i} − x_{l_i}‖ ≤ Σ_{j=k_i, j∈K∩S}^{l_i−1} ‖x_j − x_{j+1}‖ ≤ Σ_{j=k_i, j∈K∩S}^{l_i−1} α_j θ_j Δ_j ≤ (2/(η₁ κ₄ ε₂)) [f(x_{k_i}) − f(x_{l_i})].

Because the sequence {f(x_k)} is bounded below and monotonically decreasing, the right-hand side of this inequality must converge to zero, and we therefore obtain

  lim_{i→+∞} ‖x_{k_i} − x_{l_i}‖ = 0.

Now,

  ‖∇f(x_{k_i})ᵀ h_{k_i}^f‖ ≤ ‖∇f(x_{k_i})ᵀ h_{k_i}^f − ∇f(x_{l_i})ᵀ h_{l_i}^f‖ + ‖∇f(x_{l_i})ᵀ h_{l_i}^f − g_{l_i}ᵀ h_{l_i}^m‖ + ‖g_{l_i}ᵀ h_{l_i}^m‖.

Since ∇f is Lipschitz continuous, the first term of the above inequality tends to 0 and is bounded by ε₂ for i sufficiently large. Equation (32) implies that the third term satisfies ‖g_{l_i}ᵀ h_{l_i}^m‖ ≤ ε₂. From (31) we see that m_{l_i} is a fully quadratic function on B(x_{l_i}, ι‖g_{l_i}ᵀ h_{l_i}^m‖). Using (11) and (32), we deduce that

  ‖∇f(x_{l_i})ᵀ h_{l_i}^f − g_{l_i}ᵀ h_{l_i}^m‖ ≤ κ_eg ι ε₂

for i sufficiently large. Combining these bounds, we obtain the consequence that

  ‖∇f_{k_i}ᵀ h_{k_i}^f‖ ≤ (2 + κ_eg ι) ε₂ ≤ ½ ε₁

for i large enough. This result contradicts (30), which implies that the initial assumption is false and the theorem follows. □

3.4 Local convergence

Having proved the global convergence, we now focus on the speed of the local convergence. For this purpose, more assumptions are given as follows.

Assumptions
(A6) x* is the solution of problem (1), and it satisfies the strong second-order sufficient condition; that is, if the columns of Z_k denote an orthogonal basis for the null space of [A, −D_k] (so that [p_k; p̂_k] lies in this null space), then there exists ϖ > 0 such that

  dᵀ (Z_kᵀ M_k^f Z_k) d ≥ ϖ ‖d‖²,  ∀d.  (33)

(A7) Let

  lim_{k→∞} ‖(M_k^m − M_k^f) Z_k p_k‖ / ‖p_k‖ = 0.  (34)

This means that, for large k,

  p_kᵀ (Z_kᵀ M_k^m Z_k) p_k = p_kᵀ (Z_kᵀ M_k^f Z_k) p_k + o(‖p_k‖²).

Theorem 3 Suppose that (A1)–(A7), the error bounds (10)–(12), and (23) hold, and let {x_k} be a sequence generated by Algorithm 1. Suppose furthermore that the strict complementarity of problem (1) holds. Then, for sufficiently large k, the stepsize α_k = 1, and there exists Δ̂ > 0 such that Δ_k ≥ Δ̂ for all k ≥ K, where K is a large enough index.

Proof According to the algorithm, the boundary stepsize is given in (15):

  Γ_k = min{ −(a_iᵀ x_k − b_i)/(a_iᵀ p_k) : (a_iᵀ x_k − b_i)/(a_iᵀ p_k) < 0, i = 1, 2, ..., m }.

From p̂_k = D_k⁻¹ A p_k and (17), there exists λ_{k+1}^m such that

  a_iᵀ p_k = −(a_iᵀ x_k − b_i) (λ_{k+1}^m)_i / (ν_k^m + (λ_{k+1}^m)_i),  (35)

where p̂_k^i and (λ_{k+1}^m)_i are the ith components of the vectors p̂_k and λ_{k+1}^m, respectively. If ‖p_k‖ < Δ_k, then ν_k^m = 0. Since the strict complementarity of problem (1) holds at every limit point of {x_k}, i.e., (λ_{k+1}^m)_j + a_jᵀ x_k − b_j > 0 for all large k, we have λ_{k+1}^m > 0 when ν_k^m = 0, so (λ_{k+1}^m)_j > 0, and from (35) it is clear that lim_{k→∞} α_k = 1. If ‖p_k‖ = Δ_k, then ν_k^m ≥ 0, and from (35),

  Γ_k = (a_jᵀ x_k − b_j)/(−a_jᵀ p_k) = (ν_k^m + (λ_{k+1}^m)_j)/(λ_{k+1}^m)_j ≥ 1.

From the above, we find that if ‖g_kᵀ h_k^m‖ ≥ ε holds and Δ_k → 0, then lim_{k→∞} Γ_k = +∞ and lim_{k→∞} θ_k = 1.

Further, by the condition θ_k − 1 = O(‖p_k‖) on the strictly feasible stepsize and lim_{k→∞} ‖p_k‖ = 0, we have lim_{k→∞} θ_k = 1. We can conclude from the above that lim_{k→∞} Γ_k = +∞ when the boundary stepsize is given in (15) along p_k. It means that, if α_k is determined by (13b), then α_k = 1 for sufficiently large k. Thus

  f(x_k + p_k) = f(x_k) + ∇f_kᵀ p_k + ½ p_kᵀ H_k^f p_k + o(‖p_k‖²)
   = f(x_k) + κ₁ g_kᵀ p_k + (1 − κ₁) g_kᵀ p_k + (∇f_k − g_k)ᵀ p_k + ½ p_kᵀ H_k^m p_k + ½ p_kᵀ (H_k^f − H_k^m) p_k + o(‖p_k‖²).  (36)

The error bound (11) shows us that (g_k − ∇f_k)ᵀ p_k = o(‖p_k‖²). Hence we see from (36) that f(x_k + p_k) ≤ f(x_k) + κ₁ g_kᵀ p_k at the kth iteration. Combining this with the fact that p_kᵀ Aᵀ D_k⁻¹ C_k^m A p_k ≥ 0, we know that x_{k+1} = x_k + p_k. So

  f(x_k) − f(x_k + p_k) − m(x_k) + m(x_k + p_k)
   = −[g_kᵀ p_k + ½ p_kᵀ M_k^m p_k] + [∇f(x_k)ᵀ p_k + ½ p_kᵀ H_k^f p_k + o(‖p_k‖²)]
   = (∇f_k − g_k)ᵀ p_k + ½ p_kᵀ (H_k^f − H_k^m) p_k + o(‖p_k‖²)
   =(10),(11) o(‖p_k‖²).

By assumptions (A1)–(A7), we can obtain

  ρ_k − 1 = [f(x_k) − f(x_k + p_k) + m(x_k + p_k) − m(x_k)] / Pred(p_k)
   = ( −[g_kᵀ p_k + ½ p_kᵀ M_k^m p_k] + [∇f(x_k)ᵀ p_k + ½ p_kᵀ H_k^f p_k + o(‖p_k‖²)] ) / Pred(p_k)
   = o(‖p_k‖²) / Pred(p_k).  (37)

By (16) and (17), we get

  −g_kᵀ p_k = [p_k; p̂_k]ᵀ (M_k^m + ν_k^m I) [p_k; p̂_k] ≥ [p_k; p̂_k]ᵀ M_k^m [p_k; p̂_k].

Let the columns of Z_k denote an orthogonal basis for the null space of [A, −D_k]. We get

  −g_kᵀ p_k ≥ [p_k; p̂_k]ᵀ M_k^m [p_k; p̂_k] = p_kᵀ (Z_kᵀ M_k^m Z_k) p_k.

Therefore, from (33)–(34), we see that for

all large k,

  −g_kᵀ p_k ≥ ϖ ‖p_k‖² + o(‖p_k‖²).

Hence, one has

  Pred(p_k) = −g_kᵀ p_k − ½ p_kᵀ M_k^m p_k
   = −g_kᵀ p_k − ½ (p_kᵀ H_k^m p_k + p_kᵀ Aᵀ D_k⁻¹ C_k^m A p_k)
   = −g_kᵀ p_k − ½ p_kᵀ M_k^f p_k + o(‖p_k‖²)
   ≥ (ϖ/4) ‖p_k‖² + o(‖p_k‖²).  (38)

By a similar argument, we can obtain ‖p_k‖ → 0. Combining (37) with (38), one has the fact that ρ_k → 1. Hence there exists Δ̂ > 0 such that, when ‖p_k‖ ≤ Δ̂, ρ_k ≥ η₁ and Δ_{k+1} ≥ Δ_k. As p_k → 0, there exists an index K such that ‖p_k‖ ≤ Δ̂ whenever k ≥ K. Thus the conclusion holds. □

Theorem 3 implies that the local convergence rate of Algorithm 1 depends on the Hessian at x* and on the local convergence rate of p_k. Meanwhile, if p_k is a quasi-Newton step, then, for sufficiently large k, the sequence {x_k} reaches a superlinear local convergence rate to the optimal point x*.

4 Numerical experiments

We now demonstrate the experimental performance of the proposed derivative-free trust-region method.

Environment: The algorithms are written in MATLAB R2009a and run on a PC with a 2.66 GHz Intel(R) Core(TM)2 Quad CPU and 4 GB DDR2 memory.

Initialization: The values Δ₀ = 2, η₀ = 0.5, η₁ = 0.75, ζ = 0.5, ς = 1.5, ι = 0.5, β = 0.5, α = 0.2, ε = 10⁻⁸, and ω = 0.3 are used. Δ_max is set equal to 4, 6, and 8, respectively.

Termination criterion: ‖g_kᵀ h_k^m‖ ≤ ε.

Problems: We first test 20 linear inequality constrained optimization problems (listed in Table 1) from Test Examples for Nonlinear Programming Codes [15, 16]. It is worth noting that the assumptions (A2)–(A5) play very important roles in the theoretical proofs. Here (A2) is a general assumption in optimization problems, and (A5) can be satisfied as long as the iterates are not optimal.

Table 1 Test problems

  No. | Problem | Dim | x₀
  1 | HS21 | 2 | [−1, −1]
  2 | HS24 | 2 | [1, 0.5]
  3 | HS25 | 3 | [100, 12.5, 3]
  4 | HS35 | 3 | [0.5, 0.5, 0.5]
  5 | HS36 | 3 | [10, 10, 10]
  6 | HS37 | 3 | [10, 10, 10]
  7 | HS44 | 4 | [0, 0, 0, 0]
  8 | HS45 | 5 | [2, 2, 2, 2, 2]
  9 | HS76 | 4 | [0.5, 0.5, 0.5, 0.5]
  10 | HS224 | 2 | [0.1, 0.1]
  11 | HS231 | 2 | [−1.2, 1]
  12 | HS232 | 2 | [2, 0.5]
  13 | HS224 | 2 | [0.1, 0.1]
  14 | HS232 | 2 | [2, 0.5]
  15 | HS250 | 3 | [10, 10, 10]
  16 | HS251 | 3 | [10, 10, 10]
  17 | HS253 | 3 | [0, 2, 0]
  18 | HS268 | 5 | [1, 1, 1, 1, 1]
  19 | HS331 | 2 | [0.5, 0.1]
  20 | HS340 | 3 | [1, 1, 1]
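For concreteness, the radius update of step 9 with the constants above (η₁ = 0.75, ζ = 0.5, ς = 1.5, ι = 0.5; Δ_max = 6 as one of the tested bounds) can be sketched as follows; the choice inside the interval of the second case is ours:

```python
def update_radius(delta, rho, crit, fully_quadratic,
                  eta1=0.75, zeta=0.5, varsigma=1.5, iota=0.5, delta_max=6.0):
    """Step 9 of Algorithm 1; crit stands for ||g_k^T h_k^m||."""
    if rho >= eta1 and delta < iota * crit:
        return min(varsigma * delta, delta_max)   # successful: enlarge
    if rho >= eta1:
        return delta    # any value in [delta, min(varsigma*delta, delta_max)] is allowed
    if fully_quadratic:
        return zeta * delta                       # unsuccessful, good model: shrink
    return delta                                  # unsuccessful: improve the model instead
```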

According to the definitions of the error bounds in our algorithm, the gradient (or Hessian) of the model function must be bounded if there exists a constant bounding the gradient (or Hessian) norm of the objective function. Therefore, most of the above test problems satisfy the assumptions (A2)–(A5). For example, problem HS21 reads

  min f(x) = 0.01x₁² + x₂² − 100
  s.t. 10x₁ − x₂ ≥ 10,
     2 ≤ x₁ ≤ 50,  −50 ≤ x₂ ≤ 50;

  ∇f(x) = [0.02x₁; 2x₂],  ∇²f(x) = [0.02 0; 0 2].

Of course, we use the level set to limit the bound of ‖∇f(x)‖ during program execution, which is much smaller than the global bound. Even if the boundedness of the gradient and of the Hessian of the objective function cannot be satisfied at the same time, at least the boundedness within the level set can be guaranteed.

We use the tool of Dolan and Moré [17] to analyze the efficiency of the given algorithm. Figures 1 and 2 show that Algorithm 1 is feasible and has the robustness property.

[Figure 1: The total iteration number performance of Algorithm 1. Figure 2: The CPU time performance of Algorithm 1.]

Furthermore, we test five simple linear inequality constrained optimization problems from [16] and compare the experimental results for the different trust-region radius upper bounds Δ_max.
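For reference, HS21 written in the Ax ≥ b form of (1) (a sketch; the bound constraints become rows of A):

```python
import numpy as np

def f_hs21(x):
    """HS21 objective: 0.01*x1^2 + x2^2 - 100."""
    return 0.01 * x[0]**2 + x[1]**2 - 100.0

# 10x1 - x2 >= 10, x1 >= 2, -x1 >= -50, x2 >= -50, -x2 >= -50, i.e., Ax >= b.
A_hs21 = np.array([[10.0, -1.0],
                   [ 1.0,  0.0],
                   [-1.0,  0.0],
                   [ 0.0,  1.0],
                   [ 0.0, -1.0]])
b_hs21 = np.array([10.0, 2.0, -50.0, -50.0, -50.0])

x_star = np.array([2.0, 0.0])   # known optimum of HS21, f* = -99.96
print(f_hs21(x_star), bool(np.all(A_hs21 @ x_star - b_hs21 >= 0)))
```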

Table 2 Experimental results on linear inequality constrained optimization problems. For each of the five problems (with dimension n) and for each Δ_max ∈ {4, 6, 8}, the number of function evaluations nf and the CPU time CPUt are reported; entries marked F are runs terminated because the iteration count exceeded the maximum.

Table 2 shows the experimental results, where nf represents the number of function evaluations, n is the dimension of the test problem, and F means the algorithm terminated in the case that the iteration number exceeded the maximum. The CPU times of the test problems are reported. Table 2 indicates that Algorithm 1 is able to reach the optimal point. The choice Δ_max = 6 yields the most gratifying results. However, the results also show that the number of iterations may be higher than for derivative-based algorithms. We think the reason is that the derivatives of most of the test problems we chose are in fact available, and a derivative-free technique may increase the number of function evaluations, so higher iteration numbers are to be expected.

5 Conclusions

In this paper, we propose an affine-scaling derivative-free trust-region method for linear inequality constrained optimization.
(1) This algorithm is mainly designed to solve optimization problems in engineering whose derivatives are unavailable. The proposed algorithm adopts an interior backtracking technique and possesses the trust-region property.
(2) The global convergence is proved by using the definition of a fully quadratic model. It shows that the iterates generated by the proposed algorithm converge to the optimal points of (1). Meanwhile, we obtain the result that the local convergence rate of the proposed algorithm depends on p_k; if p_k becomes a quasi-Newton step, then the sequence {x_k} generated by the algorithm converges to x* superlinearly.
(3) The preliminary numerical experiments verify that the new algorithm we proposed is feasible and effective for solving linear inequality constrained optimization problems with unavailable derivatives.

Acknowledgements
This work is supported by the National Science Foundation of China under Grant No. , the 13th Five-Year Science and Technology Project of the Education Department of Jilin Province under Grant No. JJKH KJ, the PhD Start-up Fund of the Natural Science Foundation of Beihua University, and the Youth Training Project Foundation of Beihua University.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
All authors contributed equally and significantly to the writing of this article. All authors read and approved the final manuscript.

Author details
¹School of Mathematics and Statistics, Beihua University, Jilin, P.R. China. ²School of Information Technology and Media, Beihua University, Jilin, P.R. China.

Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Received: 17 January 2018  Accepted: 26 April 2018

References
1. Kanzow, C., Klug, A.: An interior-point affine-scaling trust-region method for semismooth equations with box constraints. Comput. Optim. Appl. 37(3) (2007)
2. Kanzow, C., Klug, A.: On affine-scaling interior-point Newton methods for nonlinear minimization with bound constraints. Comput. Optim. Appl. 35(2) (2006)
3. Heinkenschloss, M., Ulbrich, M., Ulbrich, S.: Superlinear and quadratic convergence of affine-scaling interior-point Newton methods for problems with simple bounds without strict complementarity assumption. Math. Program. 86(3) (1999)
4. Liuzzi, G., Lucidi, S., Sciandrone, M.: Sequential penalty derivative-free methods for nonlinear constrained optimization. SIAM J. Optim. 20(5) (2010)
5. Coleman, T.F., Li, Y.: A trust region and affine scaling interior point method for nonconvex minimization with linear inequality constraints. Math. Program. 88(1), 1–31 (1997)
6. Zhu, D.: A new affine scaling interior point algorithm for nonlinear optimization subject to linear equality and inequality constraints. J. Comput. Appl. Math. 161(1), 1–25 (2003)
7. Sahu, D.R., Yao, J.C.: A generalized hybrid steepest descent method and applications. J. Nonlinear Var. Anal. 1 (2017)
8. Gibali, A.: Two simple relaxed perturbed extragradient methods for solving variational inequalities in Euclidean spaces. J. Nonlinear Var. Anal. 2 (2018)
9. Zhang, H., Conn, A.R., Scheinberg, K.: A derivative-free algorithm for least-squares minimization. SIAM J. Optim. 20(6) (2010)
10. Zhang, H., Conn, A.R.: On the local convergence of a derivative-free algorithm for least-squares minimization. Comput. Optim. Appl. 51(2) (2012)
11. Liuzzi, G., Lucidi, S., Rinaldi, F.: A derivative-free approach to constrained multiobjective nonsmooth optimization. SIAM J. Optim. 26(4) (2016)
12. Tung, L.T.: Higher-order contingent derivative of perturbation maps in multiobjective optimization. J. Nonlinear Funct. Anal. 2015, 19 (2015)
13. Conn, A.R., Scheinberg, K., Vicente, L.N.: Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points. SIAM J. Optim. 20(1) (2006)
14. Jing, G., Zhu, D.: An affine scaling derivative-free trust region method with interior backtracking technique for bounded-constrained nonlinear programming. J. Syst. Sci. Complex. 27(3) (2014)
15. Hock, W., Schittkowski, K.: Test Examples for Nonlinear Programming Codes. Springer, Bayreuth (1987)
16. Schittkowski, K.: More Test Examples for Nonlinear Programming Codes (1987)
17. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91(2) (2002)


More information

Online Appendix Optimal Time-Consistent Government Debt Maturity D. Debortoli, R. Nunes, P. Yared. A. Proofs

Online Appendix Optimal Time-Consistent Government Debt Maturity D. Debortoli, R. Nunes, P. Yared. A. Proofs Online Appendi Optimal Time-Consistent Government Debt Maturity D. Debortoli, R. Nunes, P. Yared A. Proofs Proof of Proposition 1 The necessity of these conditions is proved in the tet. To prove sufficiency,

More information

Essays on Some Combinatorial Optimization Problems with Interval Data

Essays on Some Combinatorial Optimization Problems with Interval Data Essays on Some Combinatorial Optimization Problems with Interval Data a thesis submitted to the department of industrial engineering and the institute of engineering and sciences of bilkent university

More information

The Real Numbers. Here we show one way to explicitly construct the real numbers R. First we need a definition.

The Real Numbers. Here we show one way to explicitly construct the real numbers R. First we need a definition. The Real Numbers Here we show one way to explicitly construct the real numbers R. First we need a definition. Definitions/Notation: A sequence of rational numbers is a funtion f : N Q. Rather than write

More information

The Correlation Smile Recovery

The Correlation Smile Recovery Fortis Bank Equity & Credit Derivatives Quantitative Research The Correlation Smile Recovery E. Vandenbrande, A. Vandendorpe, Y. Nesterov, P. Van Dooren draft version : March 2, 2009 1 Introduction Pricing

More information

Stochastic Programming and Financial Analysis IE447. Midterm Review. Dr. Ted Ralphs

Stochastic Programming and Financial Analysis IE447. Midterm Review. Dr. Ted Ralphs Stochastic Programming and Financial Analysis IE447 Midterm Review Dr. Ted Ralphs IE447 Midterm Review 1 Forming a Mathematical Programming Model The general form of a mathematical programming model is:

More information

On Complexity of Multistage Stochastic Programs

On Complexity of Multistage Stochastic Programs On Complexity of Multistage Stochastic Programs Alexander Shapiro School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0205, USA e-mail: ashapiro@isye.gatech.edu

More information

Bounds on some contingent claims with non-convex payoff based on multiple assets

Bounds on some contingent claims with non-convex payoff based on multiple assets Bounds on some contingent claims with non-convex payoff based on multiple assets Dimitris Bertsimas Xuan Vinh Doan Karthik Natarajan August 007 Abstract We propose a copositive relaxation framework to

More information

Steepest descent and conjugate gradient methods with variable preconditioning

Steepest descent and conjugate gradient methods with variable preconditioning Ilya Lashuk and Andrew Knyazev 1 Steepest descent and conjugate gradient methods with variable preconditioning Ilya Lashuk (the speaker) and Andrew Knyazev Department of Mathematics and Center for Computational

More information

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization

On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization C. Cartis, N. I. M. Gould and Ph. L. Toint 22 September 2011 Abstract The (optimal) function/gradient

More information

A distributed Laplace transform algorithm for European options

A distributed Laplace transform algorithm for European options A distributed Laplace transform algorithm for European options 1 1 A. J. Davies, M. E. Honnor, C.-H. Lai, A. K. Parrott & S. Rout 1 Department of Physics, Astronomy and Mathematics, University of Hertfordshire,

More information

BOUNDS FOR THE LEAST SQUARES RESIDUAL USING SCALED TOTAL LEAST SQUARES

BOUNDS FOR THE LEAST SQUARES RESIDUAL USING SCALED TOTAL LEAST SQUARES BOUNDS FOR THE LEAST SQUARES RESIDUAL USING SCALED TOTAL LEAST SQUARES Christopher C. Paige School of Computer Science, McGill University Montreal, Quebec, Canada, H3A 2A7 paige@cs.mcgill.ca Zdeněk Strakoš

More information

Optimization for Chemical Engineers, 4G3. Written midterm, 23 February 2015

Optimization for Chemical Engineers, 4G3. Written midterm, 23 February 2015 Optimization for Chemical Engineers, 4G3 Written midterm, 23 February 2015 Kevin Dunn, kevin.dunn@mcmaster.ca McMaster University Note: No papers, other than this test and the answer booklet are allowed

More information

INTERIM CORRELATED RATIONALIZABILITY IN INFINITE GAMES

INTERIM CORRELATED RATIONALIZABILITY IN INFINITE GAMES INTERIM CORRELATED RATIONALIZABILITY IN INFINITE GAMES JONATHAN WEINSTEIN AND MUHAMET YILDIZ A. We show that, under the usual continuity and compactness assumptions, interim correlated rationalizability

More information

(Ir)rational Exuberance: Optimism, Ambiguity and Risk

(Ir)rational Exuberance: Optimism, Ambiguity and Risk (Ir)rational Exuberance: Optimism, Ambiguity and Risk Anat Bracha and Don Brown Boston FRB and Yale University October 2013 (Revised) nat Bracha and Don Brown (Boston FRB and Yale University) (Ir)rational

More information

A Note on Error Estimates for some Interior Penalty Methods

A Note on Error Estimates for some Interior Penalty Methods A Note on Error Estimates for some Interior Penalty Methods A. F. Izmailov 1 and M. V. Solodov 2 1 Moscow State University, Faculty of Computational Mathematics and Cybernetics, Department of Operations

More information

3 Arbitrage pricing theory in discrete time.

3 Arbitrage pricing theory in discrete time. 3 Arbitrage pricing theory in discrete time. Orientation. In the examples studied in Chapter 1, we worked with a single period model and Gaussian returns; in this Chapter, we shall drop these assumptions

More information

On the complexity of the steepest-descent with exact linesearches

On the complexity of the steepest-descent with exact linesearches On the complexity of the steepest-descent with exact linesearches Coralia Cartis, Nicholas I. M. Gould and Philippe L. Toint 9 September 22 Abstract The worst-case complexity of the steepest-descent algorithm

More information

Online Supplement: Price Commitments with Strategic Consumers: Why it can be Optimal to Discount More Frequently...Than Optimal

Online Supplement: Price Commitments with Strategic Consumers: Why it can be Optimal to Discount More Frequently...Than Optimal Online Supplement: Price Commitments with Strategic Consumers: Why it can be Optimal to Discount More Frequently...Than Optimal A Proofs Proof of Lemma 1. Under the no commitment policy, the indifferent

More information

Sublinear Time Algorithms Oct 19, Lecture 1

Sublinear Time Algorithms Oct 19, Lecture 1 0368.416701 Sublinear Time Algorithms Oct 19, 2009 Lecturer: Ronitt Rubinfeld Lecture 1 Scribe: Daniel Shahaf 1 Sublinear-time algorithms: motivation Twenty years ago, there was practically no investigation

More information

Sy D. Friedman. August 28, 2001

Sy D. Friedman. August 28, 2001 0 # and Inner Models Sy D. Friedman August 28, 2001 In this paper we examine the cardinal structure of inner models that satisfy GCH but do not contain 0 #. We show, assuming that 0 # exists, that such

More information

4 Reinforcement Learning Basic Algorithms

4 Reinforcement Learning Basic Algorithms Learning in Complex Systems Spring 2011 Lecture Notes Nahum Shimkin 4 Reinforcement Learning Basic Algorithms 4.1 Introduction RL methods essentially deal with the solution of (optimal) control problems

More information

Singular Stochastic Control Models for Optimal Dynamic Withdrawal Policies in Variable Annuities

Singular Stochastic Control Models for Optimal Dynamic Withdrawal Policies in Variable Annuities 1/ 46 Singular Stochastic Control Models for Optimal Dynamic Withdrawal Policies in Variable Annuities Yue Kuen KWOK Department of Mathematics Hong Kong University of Science and Technology * Joint work

More information

SYLLABUS AND SAMPLE QUESTIONS FOR MS(QE) Syllabus for ME I (Mathematics), 2012

SYLLABUS AND SAMPLE QUESTIONS FOR MS(QE) Syllabus for ME I (Mathematics), 2012 SYLLABUS AND SAMPLE QUESTIONS FOR MS(QE) 2012 Syllabus for ME I (Mathematics), 2012 Algebra: Binomial Theorem, AP, GP, HP, Exponential, Logarithmic Series, Sequence, Permutations and Combinations, Theory

More information

Online Shopping Intermediaries: The Strategic Design of Search Environments

Online Shopping Intermediaries: The Strategic Design of Search Environments Online Supplemental Appendix to Online Shopping Intermediaries: The Strategic Design of Search Environments Anthony Dukes University of Southern California Lin Liu University of Central Florida February

More information

Laurence Boxer and Ismet KARACA

Laurence Boxer and Ismet KARACA SOME PROPERTIES OF DIGITAL COVERING SPACES Laurence Boxer and Ismet KARACA Abstract. In this paper we study digital versions of some properties of covering spaces from algebraic topology. We correct and

More information

Infinite Reload Options: Pricing and Analysis

Infinite Reload Options: Pricing and Analysis Infinite Reload Options: Pricing and Analysis A. C. Bélanger P. A. Forsyth April 27, 2006 Abstract Infinite reload options allow the user to exercise his reload right as often as he chooses during the

More information

Viability, Arbitrage and Preferences

Viability, Arbitrage and Preferences Viability, Arbitrage and Preferences H. Mete Soner ETH Zürich and Swiss Finance Institute Joint with Matteo Burzoni, ETH Zürich Frank Riedel, University of Bielefeld Thera Stochastics in Honor of Ioannis

More information

THE OPTIMAL ASSET ALLOCATION PROBLEMFOR AN INVESTOR THROUGH UTILITY MAXIMIZATION

THE OPTIMAL ASSET ALLOCATION PROBLEMFOR AN INVESTOR THROUGH UTILITY MAXIMIZATION THE OPTIMAL ASSET ALLOCATION PROBLEMFOR AN INVESTOR THROUGH UTILITY MAXIMIZATION SILAS A. IHEDIOHA 1, BRIGHT O. OSU 2 1 Department of Mathematics, Plateau State University, Bokkos, P. M. B. 2012, Jos,

More information

Forecast Horizons for Production Planning with Stochastic Demand

Forecast Horizons for Production Planning with Stochastic Demand Forecast Horizons for Production Planning with Stochastic Demand Alfredo Garcia and Robert L. Smith Department of Industrial and Operations Engineering Universityof Michigan, Ann Arbor MI 48109 December

More information

Collinear Triple Hypergraphs and the Finite Plane Kakeya Problem

Collinear Triple Hypergraphs and the Finite Plane Kakeya Problem Collinear Triple Hypergraphs and the Finite Plane Kakeya Problem Joshua Cooper August 14, 006 Abstract We show that the problem of counting collinear points in a permutation (previously considered by the

More information

Ellipsoid Method. ellipsoid method. convergence proof. inequality constraints. feasibility problems. Prof. S. Boyd, EE392o, Stanford University

Ellipsoid Method. ellipsoid method. convergence proof. inequality constraints. feasibility problems. Prof. S. Boyd, EE392o, Stanford University Ellipsoid Method ellipsoid method convergence proof inequality constraints feasibility problems Prof. S. Boyd, EE392o, Stanford University Challenges in cutting-plane methods can be difficult to compute

More information

CSCI 1951-G Optimization Methods in Finance Part 00: Course Logistics Introduction to Finance Optimization Problems

CSCI 1951-G Optimization Methods in Finance Part 00: Course Logistics Introduction to Finance Optimization Problems CSCI 1951-G Optimization Methods in Finance Part 00: Course Logistics Introduction to Finance Optimization Problems January 26, 2018 1 / 24 Basic information All information is available in the syllabus

More information

Stochastic Proximal Algorithms with Applications to Online Image Recovery

Stochastic Proximal Algorithms with Applications to Online Image Recovery 1/24 Stochastic Proximal Algorithms with Applications to Online Image Recovery Patrick Louis Combettes 1 and Jean-Christophe Pesquet 2 1 Mathematics Department, North Carolina State University, Raleigh,

More information

Lecture 5: Iterative Combinatorial Auctions

Lecture 5: Iterative Combinatorial Auctions COMS 6998-3: Algorithmic Game Theory October 6, 2008 Lecture 5: Iterative Combinatorial Auctions Lecturer: Sébastien Lahaie Scribe: Sébastien Lahaie In this lecture we examine a procedure that generalizes

More information

Exact shape-reconstruction by one-step linearization in EIT

Exact shape-reconstruction by one-step linearization in EIT Exact shape-reconstruction by one-step linearization in EIT Bastian von Harrach harrach@ma.tum.de Department of Mathematics - M1, Technische Universität München, Germany Joint work with Jin Keun Seo, Yonsei

More information

First-Order Methods. Stephen J. Wright 1. University of Wisconsin-Madison. IMA, August 2016

First-Order Methods. Stephen J. Wright 1. University of Wisconsin-Madison. IMA, August 2016 First-Order Methods Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) First-Order Methods IMA, August 2016 1 / 48 Smooth

More information

AMH4 - ADVANCED OPTION PRICING. Contents

AMH4 - ADVANCED OPTION PRICING. Contents AMH4 - ADVANCED OPTION PRICING ANDREW TULLOCH Contents 1. Theory of Option Pricing 2 2. Black-Scholes PDE Method 4 3. Martingale method 4 4. Monte Carlo methods 5 4.1. Method of antithetic variances 5

More information

Interior-Point Algorithm for CLP II. yyye

Interior-Point Algorithm for CLP II.   yyye Conic Linear Optimization and Appl. Lecture Note #10 1 Interior-Point Algorithm for CLP II Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/

More information

On the Lower Arbitrage Bound of American Contingent Claims

On the Lower Arbitrage Bound of American Contingent Claims On the Lower Arbitrage Bound of American Contingent Claims Beatrice Acciaio Gregor Svindland December 2011 Abstract We prove that in a discrete-time market model the lower arbitrage bound of an American

More information

Pricing Problems under the Markov Chain Choice Model

Pricing Problems under the Markov Chain Choice Model Pricing Problems under the Markov Chain Choice Model James Dong School of Operations Research and Information Engineering, Cornell University, Ithaca, New York 14853, USA jd748@cornell.edu A. Serdar Simsek

More information

Solving dynamic portfolio choice problems by recursing on optimized portfolio weights or on the value function?

Solving dynamic portfolio choice problems by recursing on optimized portfolio weights or on the value function? DOI 0.007/s064-006-9073-z ORIGINAL PAPER Solving dynamic portfolio choice problems by recursing on optimized portfolio weights or on the value function? Jules H. van Binsbergen Michael W. Brandt Received:

More information

Stability in geometric & functional inequalities

Stability in geometric & functional inequalities Stability in geometric & functional inequalities A. Figalli The University of Texas at Austin www.ma.utexas.edu/users/figalli/ Alessio Figalli (UT Austin) Stability in geom. & funct. ineq. Krakow, July

More information

Econ 582 Nonlinear Regression

Econ 582 Nonlinear Regression Econ 582 Nonlinear Regression Eric Zivot June 3, 2013 Nonlinear Regression In linear regression models = x 0 β (1 )( 1) + [ x ]=0 [ x = x] =x 0 β = [ x = x] [ x = x] x = β it is assumed that the regression

More information

Convergence Analysis of Monte Carlo Calibration of Financial Market Models

Convergence Analysis of Monte Carlo Calibration of Financial Market Models Analysis of Monte Carlo Calibration of Financial Market Models Christoph Käbe Universität Trier Workshop on PDE Constrained Optimization of Certain and Uncertain Processes June 03, 2009 Monte Carlo Calibration

More information

Approximate Composite Minimization: Convergence Rates and Examples

Approximate Composite Minimization: Convergence Rates and Examples ISMP 2018 - Bordeaux Approximate Composite Minimization: Convergence Rates and S. Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi MLO Lab, EPFL, Switzerland sebastian.stich@epfl.ch July 4, 2018

More information

MS-E2114 Investment Science Lecture 5: Mean-variance portfolio theory

MS-E2114 Investment Science Lecture 5: Mean-variance portfolio theory MS-E2114 Investment Science Lecture 5: Mean-variance portfolio theory A. Salo, T. Seeve Systems Analysis Laboratory Department of System Analysis and Mathematics Aalto University, School of Science Overview

More information

HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR 1D PARABOLIC EQUATIONS. Ahmet İzmirlioğlu. BS, University of Pittsburgh, 2004

HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR 1D PARABOLIC EQUATIONS. Ahmet İzmirlioğlu. BS, University of Pittsburgh, 2004 HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR D PARABOLIC EQUATIONS by Ahmet İzmirlioğlu BS, University of Pittsburgh, 24 Submitted to the Graduate Faculty of Art and Sciences in partial fulfillment of

More information

Regret Minimization and Correlated Equilibria

Regret Minimization and Correlated Equilibria Algorithmic Game heory Summer 2017, Week 4 EH Zürich Overview Regret Minimization and Correlated Equilibria Paolo Penna We have seen different type of equilibria and also considered the corresponding price

More information

MATH 121 GAME THEORY REVIEW

MATH 121 GAME THEORY REVIEW MATH 121 GAME THEORY REVIEW ERIN PEARSE Contents 1. Definitions 2 1.1. Non-cooperative Games 2 1.2. Cooperative 2-person Games 4 1.3. Cooperative n-person Games (in coalitional form) 6 2. Theorems and

More information

Option Pricing under Delay Geometric Brownian Motion with Regime Switching

Option Pricing under Delay Geometric Brownian Motion with Regime Switching Science Journal of Applied Mathematics and Statistics 2016; 4(6): 263-268 http://www.sciencepublishinggroup.com/j/sjams doi: 10.11648/j.sjams.20160406.13 ISSN: 2376-9491 (Print); ISSN: 2376-9513 (Online)

More information