DASC: A DECOMPOSITION ALGORITHM FOR MULTISTAGE STOCHASTIC PROGRAMS WITH STRONGLY CONVEX COST FUNCTIONS
Vincent Guigues
School of Applied Mathematics, FGV, Praia de Botafogo, Rio de Janeiro, Brazil
vguigues@fgv.br

Abstract. We introduce DASC, a decomposition method akin to Stochastic Dual Dynamic Programming (SDDP) which solves some multistage stochastic optimization problems having strongly convex cost functions. Similarly to SDDP, DASC approximates cost-to-go functions by a maximum of lower bounding functions called cuts. However, contrary to SDDP where cuts are affine functions, the cuts computed with DASC are quadratic functions. We also prove the convergence of DASC.

Keywords: Strongly convex value function; Monte-Carlo sampling; Stochastic Programming; SDDP.

AMS subject classifications: 90C15, 90C.

1. Introduction

Stochastic Dual Dynamic Programming (SDDP), introduced in [13], is a sampling-based extension of the Nested Decomposition algorithm [1] which builds policies for some multistage stochastic optimization problems. It has been used to solve many real-life problems, and several extensions of the method have been considered, such as DOASA [15], CUPPS [3], ReSA [11], AND [2], and more recently risk-averse [8], [9], [14], [5], [16], [17], [12] or inexact [7] variants. SDDP builds approximations of the cost-to-go functions which take the form of a maximum of affine functions called cuts. We propose an extension of this algorithm called DASC, which is a Decomposition Algorithm for multistage stochastic programs having Strongly Convex cost functions. Similarly to SDDP, at each iteration the algorithm computes in a forward pass a sequence of trial points which are used in a backward pass to build lower bounding functions called cuts. However, contrary to SDDP where cuts are affine functions, the cuts computed with DASC are quadratic functions, and therefore the cost-to-go functions are approximated by a maximum of quadratic functions.
The outline of the study is as follows. In Section 2, we give in Proposition 2.3 a simple condition ensuring that the value function of a convex optimization problem is strongly convex. In Section 3, we introduce the class of optimization problems to which DASC applies and the necessary assumptions. The DASC algorithm, which is based on Proposition 2.3, is given in Section 4, while convergence of the algorithm is shown in Section 5.

2. Strong convexity of the value function

Let $\|\cdot\|$ be a norm on $\mathbb{R}^m$ and let $f : X \to \mathbb{R}$ be a function defined on a convex subset $X \subseteq \mathbb{R}^m$.

Definition 2.1 (Strongly convex functions). $f$ is strongly convex on $X \subseteq \mathbb{R}^m$ with constant of strong convexity $\alpha > 0$ with respect to norm $\|\cdot\|$ iff
$$f(tx + (1-t)y) \leq t f(x) + (1-t) f(y) - \frac{\alpha}{2}\, t(1-t)\, \|y - x\|^2$$
for all $0 \leq t \leq 1$ and all $x, y \in X$.

We have the following equivalent characterization of strongly convex functions:

Proposition 2.2. Let $X \subseteq \mathbb{R}^m$ be a convex set. Function $f : X \to \mathbb{R}$ is strongly convex on $X$ with constant of strong convexity $\alpha > 0$ with respect to norm $\|\cdot\|$ iff
$$(2.1)\qquad f(y) \geq f(x) + s^{T}(y - x) + \frac{\alpha}{2}\, \|y - x\|^2, \quad \forall x, y \in X,\ \forall s \in \partial f(x).$$
Let $X \subseteq \mathbb{R}^m$ and $Y \subseteq \mathbb{R}^n$ be two nonempty convex sets. Let $A$ be a $p \times n$ real matrix, let $B$ be a $p \times m$ real matrix, let $f : Y \times X \to \mathbb{R}$, and let $g : Y \times X \to \mathbb{R}^q$. For $b \in \mathbb{R}^p$, we define the value function
$$(2.2)\qquad Q(x) = \inf \big\{ f(y, x) : y \in S(x) := \{ y \in Y : Ay + Bx = b,\ g(y, x) \leq 0 \} \big\}.$$
The DASC algorithm is based on Proposition 2.3 below, which gives conditions ensuring that $Q$ is strongly convex:

Proposition 2.3. Consider the value function $Q$ given by (2.2). Assume that (i) $X$, $Y$ are nonempty and convex sets such that $X \subseteq \mathrm{dom}(Q)$ and $Y$ is closed, and (ii) $f$, $g$ are lower semicontinuous and the components $g_i$ of $g$ are convex functions. If additionally $f$ is strongly convex on $Y \times X$ with constant of strong convexity $\alpha$ with respect to norm $\|\cdot\|$ on $\mathbb{R}^{m+n}$, then $Q$ is strongly convex on $X$ with constant of strong convexity $\alpha$ with respect to norm $\|\cdot\|$ on $\mathbb{R}^m$.

Proof. Take $x_1, x_2 \in X$. Since $X \subseteq \mathrm{dom}(Q)$, the sets $S(x_1)$ and $S(x_2)$ are nonempty. Our assumptions imply that there are $y_1 \in S(x_1)$ and $y_2 \in S(x_2)$ such that $Q(x_1) = f(y_1, x_1)$ and $Q(x_2) = f(y_2, x_2)$. Then for every $0 \leq t \leq 1$, by convexity arguments we have $t y_1 + (1-t) y_2 \in S(t x_1 + (1-t) x_2)$ and therefore
$$\begin{aligned}
Q(t x_1 + (1-t) x_2) &\leq f\big(t y_1 + (1-t) y_2,\ t x_1 + (1-t) x_2\big)\\
&\leq t f(y_1, x_1) + (1-t) f(y_2, x_2) - \frac{\alpha}{2}\, t(1-t)\, \big\|(y_2, x_2) - (y_1, x_1)\big\|^2\\
&\leq t Q(x_1) + (1-t) Q(x_2) - \frac{\alpha}{2}\, t(1-t)\, \|x_2 - x_1\|^2,
\end{aligned}$$
which completes the proof.

3. Problem formulation and assumptions

We consider multistage stochastic optimization problems of the form
$$(3.3)\qquad \begin{array}{l}
\displaystyle \inf_{x_1, \ldots, x_T}\ \mathbb{E}_{\xi_2, \ldots, \xi_T}\left[\sum_{t=1}^{T} f_t\big(x_t(\xi_1, \xi_2, \ldots, \xi_t),\, x_{t-1}(\xi_1, \xi_2, \ldots, \xi_{t-1}),\, \xi_t\big)\right]\\[2mm]
x_t(\xi_1, \xi_2, \ldots, \xi_t) \in X_t\big(x_{t-1}(\xi_1, \xi_2, \ldots, \xi_{t-1}),\, \xi_t\big)\ \text{a.s.},\quad x_t\ \mathcal{F}_t\text{-measurable},\ t = 1, \ldots, T,
\end{array}$$
where $x_0$ is given, $\xi_1$ is deterministic, $(\xi_t)_{t=2}^{T}$ is a stochastic process, $\mathcal{F}_t$ is the sigma-algebra $\mathcal{F}_t := \sigma(\xi_j,\ j \leq t)$, and $X_t(x_{t-1}, \xi_t)$, $t = 1, \ldots, T$, can be of two types:

(S1) $X_t(x_{t-1}, \xi_t) = \{ x_t \in \mathbb{R}^n : x_t \in \mathcal{X}_t,\ x_t \geq 0,\ A_t x_t + B_t x_{t-1} = b_t \}$; in this case, for short, we say that $X_t$ is of type S1;

(S2) $X_t(x_{t-1}, \xi_t) = \{ x_t \in \mathbb{R}^n : x_t \in \mathcal{X}_t,\ g_t(x_t, x_{t-1}, \xi_t) \leq 0,\ A_t x_t + B_t x_{t-1} = b_t \}$; in this case, for short, we say that $X_t$ is of type S2.
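Proposition 2.3 can also be illustrated numerically. In the sketch below (my own example, with a hypothetical unconstrained choice of $f$), $f(y, x) = y^2 + (y - x)^2$ is jointly strongly convex with constant $\alpha = 3 - \sqrt{5}$ (the smallest eigenvalue of its Hessian), and the value function $Q(x) = \inf_y f(y, x) = x^2/2$ inherits strong convexity with at least that constant:

```python
import numpy as np

def f(y, x):
    # Jointly strongly convex: Hessian [[4, -2], [-2, 2]], eigenvalues 3 +/- sqrt(5)
    return y ** 2 + (y - x) ** 2

ys = np.linspace(-10.0, 10.0, 40001)  # grid over the inner variable y

def Q(x):
    # Value function Q(x) = inf_y f(y, x), approximated by grid minimization
    return float(np.min(f(ys, x)))

alpha = 3.0 - np.sqrt(5.0)  # strong convexity constant passed on to Q (Prop. 2.3)
x1, x2, t = -2.0, 3.0, 0.4
lhs = Q(t * x1 + (1.0 - t) * x2)
rhs = t * Q(x1) + (1.0 - t) * Q(x2) - 0.5 * alpha * t * (1.0 - t) * (x2 - x1) ** 2
print(lhs <= rhs + 1e-6)  # True; here Q(x) = x**2 / 2 in closed form
```

The grid minimization stands in for the exact inner infimum; for this $f$ the minimizer is $y = x/2$, giving $Q(x) = x^2/2$, which is strongly convex with constant 1 and hence with the smaller constant $\alpha$.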
For both kinds of constraints, $\xi_t$ contains in particular the random elements in the matrices $A_t$, $B_t$, and the vector $b_t$. Note that a mix of these types of constraints is allowed: for instance, we can have $X_1$ of type S1 and $X_2$ of type S2. We make the following assumption on $(\xi_t)$:

(H0) $(\xi_t)$ is interstage independent and for $t = 2, \ldots, T$, $\xi_t$ is a random vector taking values in $\mathbb{R}^K$ with a discrete distribution and a finite support $\Theta_t = \{\xi_{t1}, \ldots, \xi_{tM}\}$ with $p_{ti} = \mathbb{P}(\xi_t = \xi_{ti}) > 0$, $i = 1, \ldots, M$, while $\xi_1$ is deterministic.¹

We will denote by $A_{tj}$, $B_{tj}$, and $b_{tj}$ the realizations of respectively $A_t$, $B_t$, and $b_t$ in $\xi_{tj}$. For this problem, we can write Dynamic Programming equations: assuming that $\xi_1$ is deterministic, the first stage problem is
$$(3.4)\qquad \mathcal{Q}_1(x_0) = \inf_{x_1 \in \mathbb{R}^n} \big\{ F_1(x_1, x_0, \xi_1) := f_1(x_1, x_0, \xi_1) + \mathcal{Q}_2(x_1) : x_1 \in X_1(x_0, \xi_1) \big\}$$
for $x_0$ given, and for $t = 2, \ldots, T$, $\mathcal{Q}_t(x_{t-1}) = \mathbb{E}_{\xi_t}[Q_t(x_{t-1}, \xi_t)]$ with
$$(3.5)\qquad Q_t(x_{t-1}, \xi_t) = \inf_{x_t \in \mathbb{R}^n} \big\{ F_t(x_t, x_{t-1}, \xi_t) := f_t(x_t, x_{t-1}, \xi_t) + \mathcal{Q}_{t+1}(x_t) : x_t \in X_t(x_{t-1}, \xi_t) \big\},$$

¹ To alleviate notation and without loss of generality, we have assumed that the number $M$ of possible realizations of $\xi_t$, the size $K$ of $\xi_t$, and the size $n$ of $x_t$ do not depend on $t$.
with the convention that $\mathcal{Q}_{T+1} \equiv 0$. We set $\mathcal{X}_0 = \{x_0\}$ and make the following assumptions (H1) on the problem data: for $t = 1, \ldots, T$,

(H1-a) for every $x_t, x_{t-1} \in \mathbb{R}^n$, the function $f_t(x_t, x_{t-1}, \cdot)$ is measurable, and for every $j = 1, \ldots, M$, the function $f_t(\cdot, \cdot, \xi_{tj})$ is strongly convex on $\mathcal{X}_t \times \mathcal{X}_{t-1}$ with constant of strong convexity $\alpha_{tj} > 0$ with respect to norm $\|\cdot\|_2$;

(H1-b) $\mathcal{X}_t$ is nonempty, convex, and compact;

(H1-c) there exists $\varepsilon_t > 0$ such that for every $j = 1, \ldots, M$ and every $x_{t-1} \in \mathcal{X}_{t-1}^{\varepsilon_t}$, the set $X_t(x_{t-1}, \xi_{tj}) \cap \mathrm{ri}(\mathcal{X}_t)$ is nonempty.

If $X_t$ is of type S2, we additionally assume that:

(H1-d) for $t = 1, \ldots, T$, there exists $\varepsilon_t > 0$ such that for every $j = 1, \ldots, M$, each component $g_{ti}(\cdot, \cdot, \xi_{tj})$, $i = 1, \ldots, p$, of the function $g_t(\cdot, \cdot, \xi_{tj})$ is convex on $\mathcal{X}_t \times \mathcal{X}_{t-1}^{\varepsilon_t}$;

(H1-e) for $t = 2, \ldots, T$ and $j = 1, \ldots, M$, there exists $(\bar{x}_{tj,t-1}, \bar{x}_{tj,t}) \in \mathcal{X}_{t-1} \times \mathrm{ri}(\mathcal{X}_t)$ such that $A_{tj}\bar{x}_{tj,t} + B_{tj}\bar{x}_{tj,t-1} = b_{tj}$ and $(\bar{x}_{tj,t-1}, \bar{x}_{tj,t}) \in \mathrm{ri}\big(\{g_t(\cdot, \cdot, \xi_{tj}) \leq 0\}\big)$.

Remark 3.1. For a problem of form (3.3) where the strong convexity assumption on the functions $f_t(\cdot, \cdot, \xi_{tj})$ fails to hold, if for every $t, j$ the function $f_t(\cdot, \cdot, \xi_{tj})$ is convex and the columns of the matrix $(A_{tj}\ B_{tj})$ are independent, then we may reformulate the problem by pushing and penalizing the linear coupling constraints in the objective, ending up with the strongly convex cost function $f_t(\cdot, \cdot, \xi_{tj}) + \rho_t \|A_{tj} x_t + B_{tj} x_{t-1} - b_{tj}\|_2^2$ in the variables $(x_t, x_{t-1})$ for stage $t$ and realization $\xi_{tj}$, for some well chosen penalization $\rho_t > 0$.

4. DASC Algorithm

Due to Assumption (H0), the $M^{T-1}$ realizations of $(\xi_t)_{t=1}^{T}$ form a scenario tree of depth $T + 1$, where the root node $n_0$ (associated to a stage 0 with decision $x_0$ taken at that node) has one child node $n_1$ associated to the first stage, with $\xi_1$ deterministic.
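To make this tree structure concrete, here is a minimal sketch (my own illustrative code, not from the paper; the names `Node` and `build_tree` are hypothetical) of the scenario tree induced by (H0), taking $M$ equiprobable realizations per stage for simplicity:

```python
from dataclasses import dataclass, field

# Minimal scenario-tree sketch under (H0): interstage-independent noise with
# M equiprobable realizations per stage (the uniform probabilities are an
# illustrative assumption; in general the p_tj may differ).
@dataclass
class Node:
    stage: int
    prob: float                 # transition probability from the parent
    children: list = field(default_factory=list)

def build_tree(T, M):
    root = Node(stage=0, prob=1.0)        # node n_0 carrying x_0
    first = Node(stage=1, prob=1.0)       # node n_1, xi_1 deterministic
    root.children.append(first)
    frontier = [first]
    for t in range(2, T + 1):
        nxt = []
        for n in frontier:
            for _ in range(M):
                child = Node(stage=t, prob=1.0 / M)
                n.children.append(child)
                nxt.append(child)
        frontier = nxt
    return root, frontier

root, leaves = build_tree(T=4, M=3)
print(len(leaves))  # 27 = M**(T-1) leaf nodes, one per scenario
```

The number of leaves matches the $M^{T-1}$ scenario count above: the root has a single first-stage child, and branching by $M$ occurs only from stage 2 onwards.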
We denote by $\mathcal{N}$ the set of nodes and by $\mathrm{Nodes}(t)$ the set of nodes for stage $t$, and for a node $n$ of the tree, we define:

- $C(n)$: the set of children nodes (the empty set for the leaves);
- $x_n$: a decision taken at that node;
- $p_n$: the transition probability from the parent node of $n$ to $n$;
- $\xi_n$: the realization of process $(\xi_t)$ at node $n$²: for a node $n$ of stage $t$, this realization $\xi_n$ contains in particular the realizations $b_n$ of $b_t$, $A_n$ of $A_t$, and $B_n$ of $B_t$;
- $\xi_{[n]}$: the history of the realizations of process $(\xi_t)$ from the first stage node $n_1$ to node $n$: for a node $n$ of stage $t$, the $i$-th component of $\xi_{[n]}$ is $\xi_{\mathcal{P}^{t-i}(n)}$ for $i = 1, \ldots, t$, where $\mathcal{P} : \mathcal{N} \to \mathcal{N}$ is the function associating to a node its parent node (the empty set for the root node).

Similarly to SDDP, at iteration $k$, trial points $x_n^k$ are computed in a forward pass for all nodes $n$ of the scenario tree, replacing the recourse functions $\mathcal{Q}_{t+1}$ by the approximations $\mathcal{Q}_{t+1}^{k-1}$ available at the beginning of this iteration. In a backward pass, we then select a set of nodes $n_1^k, n_2^k, \ldots, n_T^k$ with $n_1^k = n_1$ and, for $t \geq 2$, $n_t^k$ a node of stage $t$, child of node $n_{t-1}^k$, corresponding to a sample $\tilde{\xi}_1^k, \tilde{\xi}_2^k, \ldots, \tilde{\xi}_T^k$ of $\xi_1, \xi_2, \ldots, \xi_T$. For $t = 2, \ldots, T$, a cut
$$(4.6)\qquad C_t^k(x_{t-1}) = \theta_t^k + \big\langle \beta_t^k,\ x_{t-1} - x_{n_{t-1}^k}^k \big\rangle + \frac{\alpha_t}{2}\, \big\| x_{t-1} - x_{n_{t-1}^k}^k \big\|_2^2$$

² The same notation $\xi_{\mathrm{Index}}$ is used to denote the realization of the process at node Index of the scenario tree and the value of the process $\xi_t$ for stage Index. The context will allow us to know which concept is being referred to. In particular, the letters $n$ and $m$ will only be used to refer to nodes, while $t$ will be used to refer to stages.
is computed for $\mathcal{Q}_t$ at $x_{n_{t-1}^k}^k$, where
$$\alpha_t = \sum_{j=1}^{M} p_{tj}\, \alpha_{tj},$$
and where the computation of the coefficients $\theta_t^k$, $\beta_t^k$ is given below. We show in Section 5 that the cut $C_t^k$ is a lower bounding function for $\mathcal{Q}_t$. Contrary to SDDP, where cuts are affine functions, our cuts are quadratic functions. At the end of iteration $k$, we obtain the lower approximations $\mathcal{Q}_t^k$ of $\mathcal{Q}_t$, $t = 2, \ldots, T + 1$, given by
$$\mathcal{Q}_t^k(x_{t-1}) = \max_{1 \leq l \leq k} C_t^l(x_{t-1}),$$
which take the form of a maximum of quadratic functions. The detailed steps of the DASC algorithm are given below.

DASC, Step 1: Initialization. For $t = 2, \ldots, T$, take as initial approximations $\mathcal{Q}_t^0 \equiv -\infty$. Set $x_{n_0}^1 = x_0$, set the iteration count $k$ to 1, and set $\mathcal{Q}_{T+1}^0 \equiv 0$.

DASC, Step 2: Forward pass. The forward pass performs the following computations:
For $t = 1, \ldots, T$,
  For every node $n$ of stage $t - 1$,
    For every child node $m$ of node $n$, compute an optimal solution $x_m^k$ of
$$(4.8)\qquad \underline{Q}_t^{k-1}(x_n^k, \xi_m) = \inf_{x_m} \big\{ F_t^{k-1}(x_m, x_n^k, \xi_m) := f_t(x_m, x_n^k, \xi_m) + \mathcal{Q}_{t+1}^{k-1}(x_m) : x_m \in X_t(x_n^k, \xi_m) \big\},$$
    where $x_{n_0}^k = x_0$.
    End For
  End For
End For

DASC, Step 3: Backward pass. We select a set of nodes $n_1^k, n_2^k, \ldots, n_T^k$ with $n_t^k$ a node of stage $t$ ($n_1^k = n_1$ and, for $t \geq 2$, $n_t^k$ a child node of $n_{t-1}^k$), corresponding to a sample $\tilde{\xi}_1^k, \tilde{\xi}_2^k, \ldots, \tilde{\xi}_T^k$ of $\xi_1, \xi_2, \ldots, \xi_T$. Set $\theta_{T+1}^k = \alpha_{T+1} = 0$ and $\beta_{T+1}^k = 0$, which defines $C_{T+1}^k \equiv 0$.
For $t = T, \ldots, 2$,
  For every child node $m$ of $n = n_{t-1}^k$, compute an optimal solution $x_m^{Bk}$ of
$$(4.9)\qquad \underline{Q}_t^{k}(x_n^k, \xi_m) = \inf_{x_m} \big\{ F_t^{k}(x_m, x_n^k, \xi_m) := f_t(x_m, x_n^k, \xi_m) + \mathcal{Q}_{t+1}^{k}(x_m) : x_m \in X_t(x_n^k, \xi_m) \big\}.$$
For the problem above, if $X_t$ is of type S1, we define the Lagrangian
$$L(x_m, \lambda) = F_t^k(x_m, x_n^k, \xi_m) + \lambda^T (A_m x_m + B_m x_n^k - b_m)$$
and take optimal Lagrange multipliers $\lambda_m^k$. If $X_t$ is of type S2, we define the Lagrangian
$$L(x_m, \lambda, \mu) = F_t^k(x_m, x_n^k, \xi_m) + \lambda^T (A_m x_m + B_m x_n^k - b_m) + \mu^T g_t(x_m, x_n^k, \xi_m)$$
and take optimal Lagrange multipliers $(\lambda_m^k, \mu_m^k)$.
If $X_t$ is of type S1, denoting by $SG_{f_t(x_m^{Bk}, \cdot, \xi_m)}(x_n^k)$ a subgradient of the convex function $f_t(x_m^{Bk}, \cdot, \xi_m)$ at $x_n^k$, we compute
$$\theta_t^{km} = \underline{Q}_t^{k}(x_n^k, \xi_m) \quad \text{and} \quad \beta^{km} = SG_{f_t(x_m^{Bk}, \cdot, \xi_m)}(x_n^k) + B_m^T \lambda_m^k.$$
If $X_t$ is of type S2, denoting by $SG_{g_{ti}(x_m^{Bk}, \cdot, \xi_m)}(x_n^k)$ a subgradient of the convex function $g_{ti}(x_m^{Bk}, \cdot, \xi_m)$ at $x_n^k$, we compute
$$\theta_t^{km} = \underline{Q}_t^{k}(x_n^k, \xi_m) \quad \text{and} \quad \beta^{km} = SG_{f_t(x_m^{Bk}, \cdot, \xi_m)}(x_n^k) + B_m^T \lambda_m^k + \sum_{i=1}^{p} \mu_{mi}^k\, SG_{g_{ti}(x_m^{Bk}, \cdot, \xi_m)}(x_n^k).$$
End For
The new cut $C_t^k$ is obtained by computing
$$(4.10)\qquad \theta_t^k = \sum_{m \in C(n)} p_m\, \theta_t^{km} \quad \text{and} \quad \beta_t^k = \sum_{m \in C(n)} p_m\, \beta^{km}.$$
End For

DASC, Step 4: Do $k \leftarrow k + 1$ and go to Step 2.

Remark 4.1. In DASC, decisions are computed at every iteration for all the nodes of the scenario tree in the forward pass. However, in practice, sampling will be used in the forward pass to compute at iteration $k$ decisions only for the nodes $n_1^k, \ldots, n_T^k$ and their children nodes. The variant of DASC written above is convenient for the convergence analysis of the method, presented in the next section. From this convergence analysis, it is possible to show the convergence of the variant of DASC which uses sampling in the forward pass (see also Remark 5.4 in [7] and Remark 4.3 in [10]).

5. Convergence analysis

In Theorem 5.2 below, we show the convergence of DASC making the following additional assumption:

(H2) The samples in the backward passes are independent: $(\tilde{\xi}_2^k, \ldots, \tilde{\xi}_T^k)$ is a realization of $\xi^k = (\xi_2^k, \ldots, \xi_T^k) \sim (\xi_2, \ldots, \xi_T)$, and $\xi^1, \xi^2, \ldots$ are independent.

We will make use of the following lemma:

Lemma 5.1. Let Assumptions (H0) and (H1) hold. Then for $t = 2, \ldots, T + 1$, the function $\mathcal{Q}_t$ is convex and Lipschitz continuous on $\mathcal{X}_{t-1}$.

Proof. The proof is analogous to the proofs of Lemma 3.2 in [6] and Lemma 2.2 in [4].

Theorem 5.2. Consider the sequences of stochastic decisions $x_n^k$ and of recourse functions $\mathcal{Q}_t^k$ generated by DASC. Let Assumptions (H0), (H1), and (H2) hold. Then

(i) almost surely, for $t = 2, \ldots, T + 1$, the following holds:
$$\mathcal{H}(t):\quad \forall n \in \mathrm{Nodes}(t-1),\quad \lim_{k \to +\infty} \mathcal{Q}_t(x_n^k) - \mathcal{Q}_t^k(x_n^k) = 0.$$

(ii) Almost surely, the limit of the sequence $\big(F_1^{k-1}(x_{n_1}^k, x_0, \xi_1)\big)_k$ of the approximate first stage optimal values and of the sequence $\big(\underline{Q}_1^{k}(x_0, \xi_1)\big)_k$ is the optimal value $\mathcal{Q}_1(x_0)$ of (3.3). Let $\Omega = \Theta_2 \times \cdots \times \Theta_T$ be the sample space of all possible sequences of scenarios equipped with the product $\mathbb{P}$ of the corresponding probability measures. Define on $\Omega$ the random variable $x^* = (x_1^*, \ldots, x_T^*)$ as follows.
For $\omega \in \Omega$, consider the corresponding sequence of decisions $(x_n^k(\omega))_{n \in \mathcal{N},\, k \geq 1}$ computed by DASC. Take any accumulation point $(x_n^*(\omega))_{n \in \mathcal{N}}$ of this sequence. If $Z_t$ is the set of $\mathcal{F}_t$-measurable functions, define $x_1^*(\omega), \ldots, x_T^*(\omega)$ taking $x_t^*(\omega) : Z_t \to \mathbb{R}^n$ given by $x_t^*(\omega)(\xi_1, \ldots, \xi_t) = x_m^*(\omega)$, where $m$ is given by $\xi_{[m]} = (\xi_1, \ldots, \xi_t)$, for $t = 1, \ldots, T$. Then
$$\mathbb{P}\big((x_1^*, \ldots, x_T^*)\ \text{is an optimal solution to (3.3)}\big) = 1.$$

Proof. Let us prove (i). We first check, by induction on $k$ and backward induction on $t$, that for all $k \geq 0$, for all $t = 2, \ldots, T + 1$, for any node $n$ of stage $t - 1$ and any decision $x_n$ taken at that node, we have
$$(5.11)\qquad \mathcal{Q}_t(x_n) \geq C_t^k(x_n), \quad \text{almost surely}.$$
For any fixed $k$, relation (5.11) holds for $t = T + 1$, and if it holds until iteration $k$ for $t + 1$ with $t \in \{2, \ldots, T\}$, we deduce that for any node $n$ of stage $t - 1$ and decision $x_n$ taken at that node we have $\mathcal{Q}_{t+1}(x_n) \geq \mathcal{Q}_{t+1}^k(x_n)$, and therefore $Q_t(x_n, \xi_m) \geq \underline{Q}_t^k(x_n, \xi_m)$ for any child node $m$ of $n$. Now note that the function $(x_m, x_n) \mapsto \mathcal{Q}_{t+1}^k(x_m)$ is convex as a maximum of convex functions and, recalling that $(x_m, x_n) \mapsto f_t(x_m, x_n, \xi_m)$ is strongly convex with constant of strong convexity $\alpha_{tm}$, the function $(x_m, x_n) \mapsto f_t(x_m, x_n, \xi_m) + \mathcal{Q}_{t+1}^k(x_m)$ is also strongly convex with the same constant of strong convexity. Using Proposition 2.3, it follows that $\underline{Q}_t^k(\cdot, \xi_m)$ is strongly convex with constant of strong convexity $\alpha_{tm}$.
Using Lemma 2.1 in [6], we have that $\beta^{km} \in \partial\, \underline{Q}_t^k(\cdot, \xi_m)(x_{n_{t-1}^k}^k)$. Recalling characterization (2.1) of strongly convex functions (see Proposition 2.2), we get for any $x_n \in \mathcal{X}_{t-1}$:
$$\underline{Q}_t^k(x_n, \xi_m) \geq \underline{Q}_t^k(x_{n_{t-1}^k}^k, \xi_m) + \big\langle \beta^{km},\ x_n - x_{n_{t-1}^k}^k \big\rangle + \frac{\alpha_{tm}}{2}\, \big\| x_n - x_{n_{t-1}^k}^k \big\|_2^2,$$
and therefore for any node $n$ of stage $t - 1$ and decision $x_n$ taken at that node we have
$$(5.12)\qquad \begin{aligned}
\mathcal{Q}_t(x_n) &= \sum_{m \in C(n)} p_m\, Q_t(x_n, \xi_m) \geq \sum_{m \in C(n)} p_m\, \underline{Q}_t^k(x_n, \xi_m)\\
&\geq \sum_{m \in C(n)} p_m \left[ \underline{Q}_t^k(x_{n_{t-1}^k}^k, \xi_m) + \big\langle \beta^{km},\ x_n - x_{n_{t-1}^k}^k \big\rangle + \frac{\alpha_{tm}}{2}\, \big\| x_n - x_{n_{t-1}^k}^k \big\|_2^2 \right]\\
&= \theta_t^k + \big\langle \beta_t^k,\ x_n - x_{n_{t-1}^k}^k \big\rangle + \frac{\alpha_t}{2}\, \big\| x_n - x_{n_{t-1}^k}^k \big\|_2^2 = C_t^k(x_n).
\end{aligned}$$
This completes the induction step and shows (5.11) for every $t, k$.

Let $\Omega_1$ be the event on the sample space $\Omega$ of sequences of scenarios such that every scenario is sampled an infinite number of times. Due to (H2), this event has probability one. Take an arbitrary realization $\omega$ of DASC in $\Omega_1$. To simplify notation, we will use $x_n^k, \mathcal{Q}_t^k, \theta_t^k, \beta_t^k$ instead of $x_n^k(\omega), \mathcal{Q}_t^k(\omega), \theta_t^k(\omega), \beta_t^k(\omega)$. We want to show that $\mathcal{H}(t)$, $t = 2, \ldots, T + 1$, hold for that realization. The proof is by backward induction on $t$. For $t = T + 1$, $\mathcal{H}(t)$ holds by definition of $\mathcal{Q}_{T+1}$, $\mathcal{Q}_{T+1}^k$. Now assume that $\mathcal{H}(t + 1)$ holds for some $t \in \{2, \ldots, T\}$. We want to show that $\mathcal{H}(t)$ holds. Take an arbitrary node $n \in \mathrm{Nodes}(t-1)$. For this node, we define $S_n = \{k \geq 1 : n_{t-1}^k = n\}$, the set of iterations such that the sampled scenario passes through node $n$. Observe that $S_n$ is infinite because the realization of DASC is in $\Omega_1$. We first show that
$$\lim_{k \to +\infty,\ k \in S_n} \mathcal{Q}_t(x_n^k) - \mathcal{Q}_t^k(x_n^k) = 0.$$
For $k \in S_n$ we have $n_{t-1}^k = n$, i.e., $x_n^k = x_{n_{t-1}^k}^k$, which implies, using (5.11), that
$$(5.13)\qquad \mathcal{Q}_t(x_n^k) \geq \mathcal{Q}_t^k(x_n^k) \geq C_t^k(x_n^k) = \theta_t^k = \sum_{m \in C(n)} p_m\, \theta_t^{km} = \sum_{m \in C(n)} p_m\, \underline{Q}_t^k(x_n^k, \xi_m)$$
by definition of $C_t^k$ and $\theta_t^k$.
It follows that for any $k \in S_n$ we have
$$(5.14)\qquad \begin{aligned}
0 \leq \mathcal{Q}_t(x_n^k) - \mathcal{Q}_t^k(x_n^k) &\leq \sum_{m \in C(n)} p_m \big[ Q_t(x_n^k, \xi_m) - \underline{Q}_t^k(x_n^k, \xi_m) \big]\\
&\leq \sum_{m \in C(n)} p_m \big[ Q_t(x_n^k, \xi_m) - \underline{Q}_t^{k-1}(x_n^k, \xi_m) \big]\\
&= \sum_{m \in C(n)} p_m \big[ Q_t(x_n^k, \xi_m) - F_t^{k-1}(x_m^k, x_n^k, \xi_m) \big]\\
&= \sum_{m \in C(n)} p_m \big[ Q_t(x_n^k, \xi_m) - f_t(x_m^k, x_n^k, \xi_m) - \mathcal{Q}_{t+1}^{k-1}(x_m^k) \big]\\
&= \sum_{m \in C(n)} p_m \big[ Q_t(x_n^k, \xi_m) - F_t(x_m^k, x_n^k, \xi_m) + \mathcal{Q}_{t+1}(x_m^k) - \mathcal{Q}_{t+1}^{k-1}(x_m^k) \big]\\
&\leq \sum_{m \in C(n)} p_m \big[ \mathcal{Q}_{t+1}(x_m^k) - \mathcal{Q}_{t+1}^{k-1}(x_m^k) \big],
\end{aligned}$$
where for the last inequality we have used the definition of $Q_t$ and the fact that $x_m^k \in X_t(x_n^k, \xi_m)$. Next, recall that $\mathcal{Q}_{t+1}$ is convex; by Lemma 5.1 the functions $(\mathcal{Q}_{t+1}^k)_k$ are Lipschitz continuous; and for all $k \geq 1$ we have $\mathcal{Q}_{t+1}^k \leq \mathcal{Q}_{t+1}^{k+1} \leq \mathcal{Q}_{t+1}$ on the compact set $\mathcal{X}_t$. Therefore, the induction hypothesis
$$\lim_{k \to +\infty} \mathcal{Q}_{t+1}(x_m^k) - \mathcal{Q}_{t+1}^k(x_m^k) = 0$$
implies, using Lemma A.1 in [4], that
$$(5.15)\qquad \lim_{k \to +\infty} \mathcal{Q}_{t+1}(x_m^k) - \mathcal{Q}_{t+1}^{k-1}(x_m^k) = 0.$$
Plugging (5.15) into (5.14), we obtain
$$(5.16)\qquad \lim_{k \to +\infty,\ k \in S_n} \mathcal{Q}_t(x_n^k) - \mathcal{Q}_t^k(x_n^k) = 0.$$
It remains to show that
$$(5.17)\qquad \lim_{k \to +\infty,\ k \notin S_n} \mathcal{Q}_t(x_n^k) - \mathcal{Q}_t^k(x_n^k) = 0.$$
The relation above can be proved using Lemma 5.4 in [10], which can be applied since (A) relation (5.16) holds (convergence was shown for the iterations in $S_n$), (B) the sequence $(\mathcal{Q}_t^k)_k$ is monotone, i.e., $\mathcal{Q}_t^k \geq \mathcal{Q}_t^{k-1}$ for all $k \geq 1$, (C) Assumption (H2) holds, and (D) $\tilde{\xi}_{t-1}^k$ is independent of $(x_n^j)_{j=1,\ldots,k}$ and $(\mathcal{Q}_t^j)_{j=1,\ldots,k-1}$.³ Therefore, we have shown (i). Item (ii) can be proved as Theorem 5.3-(ii) in [7], using (i).

Acknowledgments

The author's research was partially supported by an FGV grant, CNPq grant /2016-9, and FAPERJ grant E-26/ /2014.

References

[1] J.R. Birge. Decomposition and partitioning methods for multistage stochastic linear programs. Oper. Res., 33.
[2] J.R. Birge and C.J. Donohue. The abridged nested decomposition method for multistage stochastic linear programs with relatively complete recourse. Algorithmic Operations Research, 1:20–30.
[3] Z.L. Chen and W.B. Powell. Convergent cutting-plane and partial-sampling algorithm for multistage stochastic linear programs with recourse. J. Optim. Theory Appl., 102.
[4] P. Girardeau, V. Leclere, and A.B. Philpott. On the convergence of decomposition methods for multistage stochastic convex programs. Mathematics of Operations Research, 40.
[5] V. Guigues. SDDP for some interstage dependent risk-averse problems and application to hydro-thermal planning. Computational Optimization and Applications, 57.
[6] V. Guigues. Convergence analysis of sampling-based decomposition methods for risk-averse multistage stochastic convex programs. SIAM Journal on Optimization, 26.
[7] V. Guigues. Inexact decomposition methods for solving deterministic and stochastic convex dynamic programming equations. arXiv.
[8] V. Guigues and W. Römisch. Sampling-based decomposition methods for multistage stochastic programs based on extended polyhedral risk measures. SIAM J. Optim., 22.
[9] V. Guigues and W. Römisch. SDDP for multistage stochastic linear programs based on spectral risk measures. Operations Research Letters, 40.
[10] V. Guigues, W. Tekaya, and M. Lejeune. Regularized decomposition methods for deterministic and stochastic convex optimization and application to portfolio selection with direct transaction and market impact costs. Optimization Online.
[11] M. Hindsberger and A.B. Philpott. ReSA: A method for solving multi-stage stochastic linear programs. SPIX Stochastic Programming Symposium.
[12] V. Kozmik and D.P. Morton. Evaluating policies in risk-averse multi-stage stochastic programming. Mathematical Programming, 152.
[13] M.V.F. Pereira and L.M.V.G. Pinto. Multi-stage stochastic optimization applied to energy planning. Math. Program., 52.
[14] A. Philpott and V. de Matos. Dynamic sampling algorithms for multi-stage stochastic programs with risk aversion. European Journal of Operational Research, 218.
[15] A.B. Philpott and Z. Guan. On the convergence of stochastic dual dynamic programming and related methods. Oper. Res. Lett., 36.
[16] A. Shapiro. Analysis of stochastic dual dynamic programming method. European Journal of Operational Research, 209:63–72.

³ Lemma 5.4 in [10] is similar to the end of the proof of Theorem 4.1 in [6] and uses the Strong Law of Large Numbers. This lemma itself applies the ideas of the end of the convergence proof of SDDP given in [4], which was given with a different, more general sampling scheme in the backward pass.
[17] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia.
More information2.1 Mathematical Basis: Risk-Neutral Pricing
Chapter Monte-Carlo Simulation.1 Mathematical Basis: Risk-Neutral Pricing Suppose that F T is the payoff at T for a European-type derivative f. Then the price at times t before T is given by f t = e r(t
More informationGAME THEORY. Department of Economics, MIT, Follow Muhamet s slides. We need the following result for future reference.
14.126 GAME THEORY MIHAI MANEA Department of Economics, MIT, 1. Existence and Continuity of Nash Equilibria Follow Muhamet s slides. We need the following result for future reference. Theorem 1. Suppose
More informationDecomposition Methods
Decomposition Methods separable problems, complicating variables primal decomposition dual decomposition complicating constraints general decomposition structures Prof. S. Boyd, EE364b, Stanford University
More informationA class of coherent risk measures based on one-sided moments
A class of coherent risk measures based on one-sided moments T. Fischer Darmstadt University of Technology November 11, 2003 Abstract This brief paper explains how to obtain upper boundaries of shortfall
More informationExponential utility maximization under partial information
Exponential utility maximization under partial information Marina Santacroce Politecnico di Torino Joint work with M. Mania AMaMeF 5-1 May, 28 Pitesti, May 1th, 28 Outline Expected utility maximization
More informationDynamic Risk Management in Electricity Portfolio Optimization via Polyhedral Risk Functionals
Dynamic Risk Management in Electricity Portfolio Optimization via Polyhedral Risk Functionals A. Eichhorn and W. Römisch Humboldt-University Berlin, Department of Mathematics, Germany http://www.math.hu-berlin.de/~romisch
More informationLecture Notes 1
4.45 Lecture Notes Guido Lorenzoni Fall 2009 A portfolio problem To set the stage, consider a simple nite horizon problem. A risk averse agent can invest in two assets: riskless asset (bond) pays gross
More informationMultistage Stochastic Demand-side Management for Price-Making Major Consumers of Electricity in a Co-optimized Energy and Reserve Market
Multistage Stochastic Demand-side Management for Price-Making Major Consumers of Electricity in a Co-optimized Energy and Reserve Market Mahbubeh Habibian Anthony Downward Golbon Zakeri Abstract In this
More information1 Overview. 2 The Gradient Descent Algorithm. AM 221: Advanced Optimization Spring 2016
AM 22: Advanced Optimization Spring 206 Prof. Yaron Singer Lecture 9 February 24th Overview In the previous lecture we reviewed results from multivariate calculus in preparation for our journey into convex
More informationRevenue Management Under the Markov Chain Choice Model
Revenue Management Under the Markov Chain Choice Model Jacob B. Feldman School of Operations Research and Information Engineering, Cornell University, Ithaca, New York 14853, USA jbf232@cornell.edu Huseyin
More informationLECTURE 2: MULTIPERIOD MODELS AND TREES
LECTURE 2: MULTIPERIOD MODELS AND TREES 1. Introduction One-period models, which were the subject of Lecture 1, are of limited usefulness in the pricing and hedging of derivative securities. In real-world
More informationMarkov Decision Processes II
Markov Decision Processes II Daisuke Oyama Topics in Economic Theory December 17, 2014 Review Finite state space S, finite action space A. The value of a policy σ A S : v σ = β t Q t σr σ, t=0 which satisfies
More informationLecture 8: Asset pricing
BURNABY SIMON FRASER UNIVERSITY BRITISH COLUMBIA Paul Klein Office: WMC 3635 Phone: (778) 782-9391 Email: paul klein 2@sfu.ca URL: http://paulklein.ca/newsite/teaching/483.php Economics 483 Advanced Topics
More informationPricing Problems under the Markov Chain Choice Model
Pricing Problems under the Markov Chain Choice Model James Dong School of Operations Research and Information Engineering, Cornell University, Ithaca, New York 14853, USA jd748@cornell.edu A. Serdar Simsek
More informationLecture 17: More on Markov Decision Processes. Reinforcement learning
Lecture 17: More on Markov Decision Processes. Reinforcement learning Learning a model: maximum likelihood Learning a value function directly Monte Carlo Temporal-difference (TD) learning COMP-424, Lecture
More informationStochastic Proximal Algorithms with Applications to Online Image Recovery
1/24 Stochastic Proximal Algorithms with Applications to Online Image Recovery Patrick Louis Combettes 1 and Jean-Christophe Pesquet 2 1 Mathematics Department, North Carolina State University, Raleigh,
More informationarxiv: v1 [math.pr] 6 Apr 2015
Analysis of the Optimal Resource Allocation for a Tandem Queueing System arxiv:1504.01248v1 [math.pr] 6 Apr 2015 Liu Zaiming, Chen Gang, Wu Jinbiao School of Mathematics and Statistics, Central South University,
More informationGLOBAL CONVERGENCE OF GENERAL DERIVATIVE-FREE TRUST-REGION ALGORITHMS TO FIRST AND SECOND ORDER CRITICAL POINTS
GLOBAL CONVERGENCE OF GENERAL DERIVATIVE-FREE TRUST-REGION ALGORITHMS TO FIRST AND SECOND ORDER CRITICAL POINTS ANDREW R. CONN, KATYA SCHEINBERG, AND LUíS N. VICENTE Abstract. In this paper we prove global
More informationLecture l(x) 1. (1) x X
Lecture 14 Agenda for the lecture Kraft s inequality Shannon codes The relation H(X) L u (X) = L p (X) H(X) + 1 14.1 Kraft s inequality While the definition of prefix-free codes is intuitively clear, we
More informationSupport Vector Machines: Training with Stochastic Gradient Descent
Support Vector Machines: Training with Stochastic Gradient Descent Machine Learning Spring 2018 The slides are mainly from Vivek Srikumar 1 Support vector machines Training by maximizing margin The SVM
More informationEquivalence between Semimartingales and Itô Processes
International Journal of Mathematical Analysis Vol. 9, 215, no. 16, 787-791 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/1.12988/ijma.215.411358 Equivalence between Semimartingales and Itô Processes
More informationScenario tree generation for stochastic programming models using GAMS/SCENRED
Scenario tree generation for stochastic programming models using GAMS/SCENRED Holger Heitsch 1 and Steven Dirkse 2 1 Humboldt-University Berlin, Department of Mathematics, Germany 2 GAMS Development Corp.,
More informationConvergence Analysis of Monte Carlo Calibration of Financial Market Models
Analysis of Monte Carlo Calibration of Financial Market Models Christoph Käbe Universität Trier Workshop on PDE Constrained Optimization of Certain and Uncertain Processes June 03, 2009 Monte Carlo Calibration
More informationLecture 8: Introduction to asset pricing
THE UNIVERSITY OF SOUTHAMPTON Paul Klein Office: Murray Building, 3005 Email: p.klein@soton.ac.uk URL: http://paulklein.se Economics 3010 Topics in Macroeconomics 3 Autumn 2010 Lecture 8: Introduction
More informationChapter 5 Finite Difference Methods. Math6911 W07, HM Zhu
Chapter 5 Finite Difference Methods Math69 W07, HM Zhu References. Chapters 5 and 9, Brandimarte. Section 7.8, Hull 3. Chapter 7, Numerical analysis, Burden and Faires Outline Finite difference (FD) approximation
More information3 Arbitrage pricing theory in discrete time.
3 Arbitrage pricing theory in discrete time. Orientation. In the examples studied in Chapter 1, we worked with a single period model and Gaussian returns; in this Chapter, we shall drop these assumptions
More informationMEASURING OF SECOND ORDER STOCHASTIC DOMINANCE PORTFOLIO EFFICIENCY
K Y BERNETIKA VOLUM E 46 ( 2010), NUMBER 3, P AGES 488 500 MEASURING OF SECOND ORDER STOCHASTIC DOMINANCE PORTFOLIO EFFICIENCY Miloš Kopa In this paper, we deal with second-order stochastic dominance (SSD)
More informationGame Theory: Normal Form Games
Game Theory: Normal Form Games Michael Levet June 23, 2016 1 Introduction Game Theory is a mathematical field that studies how rational agents make decisions in both competitive and cooperative situations.
More informationCalibration Estimation under Non-response and Missing Values in Auxiliary Information
WORKING PAPER 2/2015 Calibration Estimation under Non-response and Missing Values in Auxiliary Information Thomas Laitila and Lisha Wang Statistics ISSN 1403-0586 http://www.oru.se/institutioner/handelshogskolan-vid-orebro-universitet/forskning/publikationer/working-papers/
More informationLecture 4: Divide and Conquer
Lecture 4: Divide and Conquer Divide and Conquer Merge sort is an example of a divide-and-conquer algorithm Recall the three steps (at each level to solve a divideand-conquer problem recursively Divide
More informationMultirate Multicast Service Provisioning I: An Algorithm for Optimal Price Splitting Along Multicast Trees
Mathematical Methods of Operations Research manuscript No. (will be inserted by the editor) Multirate Multicast Service Provisioning I: An Algorithm for Optimal Price Splitting Along Multicast Trees Tudor
More informationLecture 7: Bayesian approach to MAB - Gittins index
Advanced Topics in Machine Learning and Algorithmic Game Theory Lecture 7: Bayesian approach to MAB - Gittins index Lecturer: Yishay Mansour Scribe: Mariano Schain 7.1 Introduction In the Bayesian approach
More informationProgressive Hedging for Multi-stage Stochastic Optimization Problems
Progressive Hedging for Multi-stage Stochastic Optimization Problems David L. Woodruff Jean-Paul Watson Graduate School of Management University of California, Davis Davis, CA 95616, USA dlwoodruff@ucdavis.edu
More informationMATH 5510 Mathematical Models of Financial Derivatives. Topic 1 Risk neutral pricing principles under single-period securities models
MATH 5510 Mathematical Models of Financial Derivatives Topic 1 Risk neutral pricing principles under single-period securities models 1.1 Law of one price and Arrow securities 1.2 No-arbitrage theory and
More informationOptimal stopping problems for a Brownian motion with a disorder on a finite interval
Optimal stopping problems for a Brownian motion with a disorder on a finite interval A. N. Shiryaev M. V. Zhitlukhin arxiv:1212.379v1 [math.st] 15 Dec 212 December 18, 212 Abstract We consider optimal
More informationOptimal construction of a fund of funds
Optimal construction of a fund of funds Petri Hilli, Matti Koivu and Teemu Pennanen January 28, 29 Introduction We study the problem of diversifying a given initial capital over a finite number of investment
More informationA Stochastic Levenberg-Marquardt Method Using Random Models with Application to Data Assimilation
A Stochastic Levenberg-Marquardt Method Using Random Models with Application to Data Assimilation E Bergou Y Diouane V Kungurtsev C W Royer July 5, 08 Abstract Globally convergent variants of the Gauss-Newton
More informationThe Correlation Smile Recovery
Fortis Bank Equity & Credit Derivatives Quantitative Research The Correlation Smile Recovery E. Vandenbrande, A. Vandendorpe, Y. Nesterov, P. Van Dooren draft version : March 2, 2009 1 Introduction Pricing
More informationInformation Acquisition under Persuasive Precedent versus Binding Precedent (Preliminary and Incomplete)
Information Acquisition under Persuasive Precedent versus Binding Precedent (Preliminary and Incomplete) Ying Chen Hülya Eraslan March 25, 2016 Abstract We analyze a dynamic model of judicial decision
More informationStochastic Optimal Control
Stochastic Optimal Control Lecturer: Eilyan Bitar, Cornell ECE Scribe: Kevin Kircher, Cornell MAE These notes summarize some of the material from ECE 5555 (Stochastic Systems) at Cornell in the fall of
More informationAMH4 - ADVANCED OPTION PRICING. Contents
AMH4 - ADVANCED OPTION PRICING ANDREW TULLOCH Contents 1. Theory of Option Pricing 2 2. Black-Scholes PDE Method 4 3. Martingale method 4 4. Monte Carlo methods 5 4.1. Method of antithetic variances 5
More informationMultistage Stochastic Mixed-Integer Programs for Optimizing Gas Contract and Scheduling Maintenance
Multistage Stochastic Mixed-Integer Programs for Optimizing Gas Contract and Scheduling Maintenance Zhe Liu Siqian Shen September 2, 2012 Abstract In this paper, we present multistage stochastic mixed-integer
More informationThe ruin probabilities of a multidimensional perturbed risk model
MATHEMATICAL COMMUNICATIONS 231 Math. Commun. 18(2013, 231 239 The ruin probabilities of a multidimensional perturbed risk model Tatjana Slijepčević-Manger 1, 1 Faculty of Civil Engineering, University
More informationOptimal Stopping. Nick Hay (presentation follows Thomas Ferguson s Optimal Stopping and Applications) November 6, 2008
(presentation follows Thomas Ferguson s and Applications) November 6, 2008 1 / 35 Contents: Introduction Problems Markov Models Monotone Stopping Problems Summary 2 / 35 The Secretary problem You have
More informationMath-Stat-491-Fall2014-Notes-V
Math-Stat-491-Fall2014-Notes-V Hariharan Narayanan December 7, 2014 Martingales 1 Introduction Martingales were originally introduced into probability theory as a model for fair betting games. Essentially
More informationA Trust Region Algorithm for Heterogeneous Multiobjective Optimization
A Trust Region Algorithm for Heterogeneous Multiobjective Optimization Jana Thomann and Gabriele Eichfelder 8.0.018 Abstract This paper presents a new trust region method for multiobjective heterogeneous
More informationOnline Appendix Optimal Time-Consistent Government Debt Maturity D. Debortoli, R. Nunes, P. Yared. A. Proofs
Online Appendi Optimal Time-Consistent Government Debt Maturity D. Debortoli, R. Nunes, P. Yared A. Proofs Proof of Proposition 1 The necessity of these conditions is proved in the tet. To prove sufficiency,
More informationThe Limiting Distribution for the Number of Symbol Comparisons Used by QuickSort is Nondegenerate (Extended Abstract)
The Limiting Distribution for the Number of Symbol Comparisons Used by QuickSort is Nondegenerate (Extended Abstract) Patrick Bindjeme 1 James Allen Fill 1 1 Department of Applied Mathematics Statistics,
More informationOptimal energy management and stochastic decomposition
Optimal energy management and stochastic decomposition F. Pacaud P. Carpentier J.P. Chancelier M. De Lara JuMP-dev workshop, 2018 ENPC ParisTech ENSTA ParisTech Efficacity 1/23 Motivation We consider a
More informationIEOR E4602: Quantitative Risk Management
IEOR E4602: Quantitative Risk Management Risk Measures Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com Reference: Chapter 8
More informationOn the Optimality of a Family of Binary Trees Techical Report TR
On the Optimality of a Family of Binary Trees Techical Report TR-011101-1 Dana Vrajitoru and William Knight Indiana University South Bend Department of Computer and Information Sciences Abstract In this
More informationScenario Generation and Sampling Methods
Scenario Generation and Sampling Methods Güzin Bayraksan Tito Homem-de-Mello SVAN 2016 IMPA May 9th, 2016 Bayraksan (OSU) & Homem-de-Mello (UAI) Scenario Generation and Sampling SVAN IMPA May 9 1 / 30
More informationPart 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)
Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective
More informationSYLLABUS AND SAMPLE QUESTIONS FOR MSQE (Program Code: MQEK and MQED) Syllabus for PEA (Mathematics), 2013
SYLLABUS AND SAMPLE QUESTIONS FOR MSQE (Program Code: MQEK and MQED) 2013 Syllabus for PEA (Mathematics), 2013 Algebra: Binomial Theorem, AP, GP, HP, Exponential, Logarithmic Series, Sequence, Permutations
More informationProblem Set 3. Thomas Philippon. April 19, Human Wealth, Financial Wealth and Consumption
Problem Set 3 Thomas Philippon April 19, 2002 1 Human Wealth, Financial Wealth and Consumption The goal of the question is to derive the formulas on p13 of Topic 2. This is a partial equilibrium analysis
More informationOptimizing Portfolios
Optimizing Portfolios An Undergraduate Introduction to Financial Mathematics J. Robert Buchanan 2010 Introduction Investors may wish to adjust the allocation of financial resources including a mixture
More information