On-demand, Spot, or Both: Dynamic Resource Allocation for Executing Batch Jobs in the Cloud


Ishai Menache (Microsoft Research), Ohad Shamir (Weizmann Institute), Navendu Jain (Microsoft Research)

Abstract

Cloud computing provides an attractive computing paradigm in which computational resources are rented on-demand to users with zero capital and maintenance costs. Cloud providers offer different pricing options to meet the computing requirements of a wide variety of applications. An attractive option for batch computing is spot instances, which allows users to place bids for spare computing instances and rent them at a (often) substantially lower price compared to the fixed on-demand price. However, this raises three main challenges for users: how many instances to rent at any time? what type (on-demand, spot, or both)? and what bid value to use for spot instances? In particular, renting on-demand risks high costs while renting spot instances risks job interruption and delayed completion when the spot market price exceeds the bid. This paper introduces an online learning algorithm for resource allocation to address this fundamental tradeoff between computation cost and performance. Our algorithm dynamically adapts resource allocation by learning from its performance on prior job executions while incorporating the history of spot prices and workload characteristics. We provide theoretical bounds on its performance and prove that the average regret of our approach (compared to the best policy in hindsight) vanishes to zero with time. Evaluation on traces from a large datacenter cluster shows that our algorithm outperforms greedy allocation heuristics and quickly converges to a small set of best performing policies.

1 Introduction

This paper presents an online learning approach that allocates resources for executing batch jobs on cloud platforms by adaptively managing the tradeoff between the cost of renting compute instances and the user-centric utility of finishing jobs by their specified due dates. Cloud computing is revolutionizing computing as a service due to its cost-efficiency and flexibility. By allowing multiplexing of large resource pools among users, the cloud enables agility: the ability to dynamically scale-out and scale-in application instances across hosting servers. Major cloud computing providers include Amazon EC2, Microsoft's Windows Azure, Google AppEngine, and IBM's Smart Business cloud offerings.

Figure 1: The variation in Amazon EC2 spot market prices for large computing instances in the US East-coast region: Linux (left) and Windows (right). The fixed on-demand price for Linux and Windows instances is 0.34 and 0.48, respectively.

The common cloud pricing schemes are (i) reserved, (ii) on-demand, and (iii) spot. Reserved instances offer users to make a one-time payment for reserving instances over 1-3 years and then receive discounted hourly pricing on usage. On-demand instances allow users to pay for instances by the hour without any long-term commitment. Spot instances, offered by Amazon EC2, allow users to bid for spare instances and to run them as long as their bid price is above the spot market price. For batch applications with flexibility on when they can run (e.g., Monte Carlo simulations, software testing, image processing, web crawling), renting spot instances can significantly reduce the execution costs. Indeed, several enterprises claim to save 50%-66% in computing costs by using spot instances over on-demand instances, or their combination [3]. Reserved instances are most beneficial for hosting long running services (e.g., web applications), and may also be used for batch jobs, especially if future load can be predicted [9].

The focus of this work, however, is on managing the choice between on-demand and spot instances, which are suitable for batch jobs that perform computation for a bounded period.

Customers face a fundamental challenge of how to combine on-demand and spot instances to execute their jobs. On one hand, always renting on-demand incurs high costs. On the other hand, spot instances with a low bid price risk high delay before the job gets started (till the bid is accepted), or frequent interruption during its execution (when the spot market price exceeds the bid). Figure 1 shows the variation in Amazon EC2 spot prices for their US East-coast region for Linux and Windows instances of type "large". We observe that spot market prices exhibit significant fluctuation, and at times exceed even the on-demand price. For batch jobs requiring strict completion deadlines, this fluctuation can directly impact the result quality. For example, web search requires frequent crawling and update of the search index, as the freshness of this data affects the end-user experience, product purchases, and advertisement revenues [2].

Unfortunately, most customers resort to simple heuristics to address these issues while renting computing instances; we exemplify this observation by analyzing several case studies reported on the Amazon EC2 website [3]. Litmus [6] offers testing tools to marketing professionals for their web site designs and email campaigns. Its heuristic for resource allocation is to first launch spot instances and then on-demand instances if spot instances do not get allocated within 20 minutes. Their bid price is set to be above the on-demand price to improve the probability of their bid getting accepted. Similarly, BrowserMob [7], a startup that provides website load testing and monitoring services, attempts to launch spot instances first at a low bid price. If instances do not launch within 7 minutes, it switches to on-demand. Other companies manually assign delay-sensitive jobs to on-demand instances, and delay-tolerant ones to spot instances. In general, these schemes do not provide any payoff guarantees or indicate how far they operate from the optimal cost vs. performance point. Further, as expected, these approaches are limited in terms of explored policies, which account for only a small portion of the state space. Note that a strawman of simply waiting for the spot instances at the lowest price and purchasing in bulk risks delayed job completion, insufficient resources (due to the limit on spot instances and job parallelism constraints), or both. Therefore, given fluctuating and unpredictable spot prices (Fig. 1), users do not have an effective way of reinforcing the better performing policies.

In this paper, we propose an online learning approach for automated resource allocation for batch applications, which balances the fundamental tradeoff between cloud computing costs and job due dates. Intuitively, given a set of jobs and resource allocation policies, our algorithm continuously adjusts per-policy weights based on their performance on job executions, in order to reinforce the best performing policies. In addition, the learning method takes into account the prior history of spot prices and the characteristics of input jobs to adapt policy weights. Finally, to prevent overfitting to only a small set of policies, our approach allows defining a broad range of parameterized policy combinations (based on discussions with users and cloud operators) such as (a) rent on-demand, spot instances, or both; (b) vary spot bid prices in a predefined range; and (c) choose the bid value based on past spot market prices.
Note that these policy combinations are illustrative, not comprehensive, in the sense that additional parameterized families of policies can be defined and integrated into our framework. Likewise, our learning approach can incorporate other resource allocation parameters being provided by cloud platforms, e.g., Virtual Machine (VM) instance type, datacenter/region.

Our proposed algorithm is based on machine learning approaches (e.g., [8]), which aim to learn good performing policies given a set of candidate policies. While these schemes provide performance guarantees with respect to the optimal policy in hindsight, they are not applicable as-is to our problem. In particular, they require a payoff value per execution step to measure how well a policy is performing and to tune the learning process. However, in batch computing, the performance of a policy can only be calculated after the job has completed. Thus, these schemes do not explicitly address the issue of delay in getting feedback on how well a particular policy performed in executing jobs. Our online learning algorithm handles bounded delay and provides formal guarantees on its performance, which scale with the amount of delay and the total number of jobs to be processed.

We evaluate our algorithms via simulations on a job trace from a datacenter cluster and Amazon EC2 spot market prices. We show that our approach outperforms greedy resource allocation heuristics in terms of total payoff; in particular, the average regret of our approach (compared to the best policy in hindsight) vanishes to zero with time. Further, it provides fast convergence while only using a small amount of training data. Finally, our algorithm enables interpreting the allocation strategy of the output policies, allowing users to apply them directly in practice.

2 Background and System Model

In this section we first provide background on the online learning framework and then describe the problem setup and the parameterized set of policies for resource allocation.

Regret-minimizing online learning. Our online learning framework is based on the substantial body of work on learning algorithms that make repeated decisions while aiming to minimize regret. The regret of an algorithm is defined as the difference between the cumulative performance of the sequence of its decisions and the cumulative performance of the best fixed decision in hindsight. We present only a brief overview of these algorithms due to space constraints. In general, an online decision problem can be formulated as a repeated game between a learner (or decision maker) and the environment. The game proceeds in rounds. In each round j, the environment (possibly controlled by an adversary) assigns a reward f_j(a) to each possible action a, which is not revealed beforehand to the learner. The learner then chooses one of the actions a_j, possibly in a randomized manner. The average payoff of an action a is the average of its rewards, (1/J) Σ_{j=1}^{J} f_j(a), over the time horizon J, and the learner's average payoff is the average received reward (1/J) Σ_{j=1}^{J} f_j(a_j) over the time horizon. The average regret of the learner is defined as max_a (1/J) Σ_{j=1}^{J} f_j(a) − (1/J) Σ_{j=1}^{J} f_j(a_j), namely the difference between the average payoff of the best action and that of the learner's sequence of actions. The goal of the learner is to minimize the average regret, and approach the average gain of the best action. Several learning algorithms have been proposed that approach zero average regret as the time horizon J approaches infinity, even against a fully adaptive adversary [8].

Our problem of allocating between on-demand and spot instances can be cast as a problem of repeated decision making, in which the resource allocation algorithm must decide in a repeated fashion which policies to use for meeting job due dates while minimizing job execution costs. However, our problem also differs from standard online learning, in that the payoff of each policy is not revealed immediately after it is chosen, but only after some delay (due to the time it takes to process a job). This requires us to develop a modified online algorithm and analysis.

Problem Setup. Our problem setup focuses on a single enterprise whose batch jobs arrive over time. Jobs may arrive at any point in time, however job arrival is monitored every fixed time interval of L minutes, e.g., L = 5. For simplicity, we assume that each hour is evenly divided into a fixed number of such time intervals (namely, 60/L). We refer to this fixed time interval as a time slot (or slot); the time slots are indexed by t = 1, 2, ...

Jobs. Each job j is characterized by five parameters: (i) Arrival slot A_j: if job j arrives at a time in [L(t−1), Lt), then A_j = t. (ii) Due date d_j ∈ N (measured in hours): if the job is not completed within d_j time units since its arrival A_j, it becomes invalid and further execution yields zero value. (iii) Job size z_j (measured in CPU instance hours to be executed): note that for many batch jobs, such as parameter sweep applications and software testing, z_j is known in advance; otherwise, a small bounded over-estimate of z_j suffices. (iv) Parallelism constraint c_j: the maximal degree of parallelism, i.e., the upper bound on the number of instances that can be simultaneously assigned to the job. (v) Value function V_j : N → R_+, which is a monotonically non-increasing function with V_j(τ) = 0 for all τ > d_j. Thus, job j is described by the tuple {A_j, d_j, z_j, c_j, V_j}. The job j is said to be active at time slot τ if less than d_j hours have passed since its arrival A_j, and the total instance hours assigned so far are less than z_j.
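To fix notation, the following is a minimal Python sketch of the job tuple {A_j, d_j, z_j, c_j, V_j} and of the "active job" condition described above; the field names and the deadline-style value function are illustrative choices of this sketch, not part of the paper's formal model.

# Illustrative encoding of the job model {A_j, d_j, z_j, c_j, V_j};
# names are assumptions of this sketch.
from dataclasses import dataclass

@dataclass
class Job:
    arrival_slot: int            # A_j
    deadline_hours: int          # d_j
    size_instance_hours: float   # z_j
    parallelism: int             # c_j
    value: float                 # v_j, for a deadline-style value function

    def value_at(self, hours_elapsed: int) -> float:
        # Deadline value function: fixed value v_j up to d_j, zero afterwards.
        return self.value if hours_elapsed <= self.deadline_hours else 0.0

    def is_active(self, hours_since_arrival: int, work_done: float) -> bool:
        return (hours_since_arrival < self.deadline_hours
                and work_done < self.size_instance_hours)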
Allocation updates. Each job j is allocated computing instances during its execution. Given the existing cloud pricing model of charging based on hourly boundaries, the instance allocation of each active job is updated every hour. The i-th allocation update for job j is formally defined as a triplet of the form (o^i_j, s^i_j, b^i_j): o^i_j denotes the number of assigned on-demand instances, s^i_j denotes the number of assigned spot instances, and b^i_j denotes their bid value. The parallelism constraint translates to o^i_j + s^i_j ≤ c_j. Note that a NOP decision, i.e., allocating zero resources to a job, is handled by setting o^i_j and s^i_j to zero.

Spot instances. The spot instances assigned to a job operate until the spot market price exceeds the bid price. However, as Figure 1 shows, the spot prices may change unpredictably, implying that spot instances can get terminated at any time. Formally, consider some job j; let us normalize the hour interval to the closed interval [0,1]. Let y^i_j ∈ [0,1] be the point in time at which the spot price exceeded the i-th bid for job j; formally, y^i_j = inf_{y∈[0,1]} {p_s(y) > b^i_j}, where p_s(·) is the spot price, and y^i_j = 1 if the spot price does not exceed the bid. Then the cost of utilizing spot instances for job j for its i-th allocation is given by s^i_j · p̂^i_j, where p̂^i_j = ∫_0^{y^i_j} p_s(y) dy, and the total amount of work carried out for this job by spot instances is s^i_j · y^i_j (with the exception of the time slot in which the job is completed, for which the total amount of work is smaller). Note that under spot pricing, the instance is charged for the full hour even if the job finishes earlier. However, if the instance is terminated due to the market price exceeding the bid, the user is not charged for the last partial hour of execution. Further, we assume that the cloud platform provides advance notification of the instance revocation in this scenario. We note that [23] studies dynamic checkpointing strategies for scenarios where customers might incur substantial overheads due to an out-of-bid situation; for simplicity, we do not model such scenarios in this paper, but the techniques developed in [23] are complementary and can be applied in conjunction with our online learning framework.
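For illustration, the per-allocation spot cost p̂^i_j and the fraction of the hour worked can be computed from a discretized price trace as in the sketch below; the minute-level discretization and the helper names are assumptions of this sketch, not part of the model above.

# Illustrative computation of the spot cost and work for one hourly allocation:
# the instances run until the first moment the spot price exceeds the bid.

def spot_hour_outcome(spot_prices, bid, num_spot_instances):
    """spot_prices: per-interval prices covering one hour (e.g., one per minute),
    with the hour normalized to length 1. Returns (cost, work_done)."""
    dt = 1.0 / len(spot_prices)
    cost_per_instance = 0.0
    active_fraction = 0.0   # y^i_j: fraction of the hour the instances survived
    for price in spot_prices:
        if price > bid:      # out-of-bid: instances terminated for the rest of the hour
            break
        cost_per_instance += price * dt
        active_fraction += dt
    return num_spot_instances * cost_per_instance, num_spot_instances * active_fraction

# Example: bid 0.30 against a price that spikes above the bid mid-hour.
prices = [0.25] * 30 + [0.35] * 30   # one sample per minute
print(spot_hour_outcome(prices, bid=0.30, num_spot_instances=4))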

Finally, as in Amazon EC2, our model allows spot instances to be persistent, in the sense that the user's bid will keep being submitted after each instance termination, until the job gets completed or the user cancels it.

On-demand instances. The price for an on-demand instance is fixed and is denoted by p (per unit per time interval). As above, the instance hour is paid entirely, even if the job finishes before the end of the hourly interval.

Utility. The utility for a user is defined as the difference between the overall value obtained from executing all its jobs and the total costs paid for their execution. Formally, let T_j be the number of hours for which job j is executed (the actual duration is rounded up to the next hour). Note that if the job did not complete by its lifetime d_j, we set T_j = d_j + 1 and the allocation a^{T_j}_j = (0, 0, 0). The utility for job j is given by:

U_j(a^1_j, ..., a^{T_j}_j) = V_j(T_j) − Σ_{i=1}^{T_j} { p̂^i_j s^i_j + p o^i_j }     (1)

The overall user utility is then simply the sum of job utilities: U(a) = Σ_j U_j(a^1_j, ..., a^{T_j}_j). The objective of our online learning algorithm is to maximize the total user utility. For simplicity, we restrict attention to deadline value functions, which are value functions of the form V_j(i) = v_j for all i ∈ [1, ..., d_j] and V_j(i) = 0 otherwise, i.e., completing job j by its due date has a fixed positive value [2]. Note that our learning approach can be easily extended to handle general value functions.

Remark. We make an implicit assumption that a user immediately gets the amount of instances it requests if the price is right (i.e., if it pays the required price for on-demand instances, or if its bid is higher than the market price for spot instances). In practice, however, a user might exhibit delays in getting all the required instances, especially if it requires a large amount of simultaneous instances. While we could seamlessly incorporate such delays into our model and solution framework, we ignore this aspect here in order to keep the exposition simple.

Resource Allocation Policies. Our algorithmic framework allows defining a broad range of policies for allocating resources to jobs, and the objective of our online learning algorithm is to approach the performance of the best policy in hindsight. We describe the parameterized set of policies in this section, and present the learning algorithm that adapts these policies, in detail, in Section 3. For each active job, a policy takes as input the job specification and (possibly) the history of spot prices, and outputs an allocation. Formally, a policy π is a mapping of the form π : J × R_+ × R_+ × R^n_+ → A, which for every active job j at time τ takes as input: (i) the job specification of j, {A_j, d_j, z_j, c_j, V_j}; (ii) the remaining work for the job, z^τ_j; (iii) the total execution cost C^τ_j incurred for j up to time τ (namely, C^τ_j = Σ_{t=A_j}^{τ} ( s^t_j p̂^t_j + p o^t_j )); and (iv) the history sequence p_s(·) of past spot prices. In return, the policy outputs an allocation.

As expected, the set of possible policies defines an explosively large state space. In particular, we must carefully handle all possible instance types (spot, on-demand, both, or NOP), different spot bid prices, and their exponential number of combinations in all possible job execution states. Of course, no approach can do an exhaustive search of the policy state space in an efficient manner. Therefore, our framework follows a best-effort approach to tackle this problem by exploring as many policies as possible in the practical operating range; e.g., a spot bid price close to zero has a very low probability of being accepted; similarly, bidding is futile when the spot market price is above the on-demand price. We address this issue in detail in Section 3.
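As a concrete reading of Eq. (1) above, the following minimal Python sketch computes the utility of a single job from its per-hour allocations; the allocation record format is an assumption of this sketch.

# Illustrative computation of Eq. (1): job utility = value obtained minus the
# on-demand and spot costs summed over the hourly allocations.

def job_utility(value_function, allocations, on_demand_price):
    """allocations: list of (on_demand_count, spot_count, spot_cost_per_instance)
    tuples, one per executed hour; value_function(T) is V_j evaluated at the
    number of hours the job ran."""
    hours = len(allocations)
    cost = sum(spot_cost * spot + on_demand_price * on_demand
               for on_demand, spot, spot_cost in allocations)
    return value_function(hours) - cost

# Example with a deadline value function: value 10 if done within 3 hours.
v = lambda t: 10.0 if t <= 3 else 0.0
print(job_utility(v, [(2, 3, 0.12), (2, 3, 0.10)], on_demand_price=0.25))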
An elegant way to generate this practical set of policies is to describe them by a small number of control parameters, so that any particular choice of parameters defines a single policy. We consider two basic families of parameterized policies, which represent different ways to incorporate the tradeoff between on-demand instances and spot instances:

(1) Deadline-Centric. This family of policies is parameterized by a deadline threshold M. If the job's deadline is more than M time units away, the job attempts allocating only spot instances. Otherwise (i.e., the deadline is getting closer), it uses only on-demand instances. Further, it rejects jobs if they become non-profitable (i.e., the cost incurred exceeds the utility value) or if it cannot finish on time (since the deadline value function V_j will become zero).

(2) Rate-Centric. This family of policies is parameterized by a fixed rate σ of allocating on-demand instances per round. In each round, the policy attempts to assign c_j instances to job j as follows: it requests σ·c_j instances on-demand (for simplicity, we ignore rounding issues) at price p. It also requests (1−σ)·c_j spot instances, using a bid price strategy which will be described shortly. The policy monitors the amount of the job processed so far, and if there is a risk of not completing the job by its due date, it switches to on-demand only. As above, it rejects jobs if they become non-profitable or if it cannot finish on time.

A pseudocode implementing this intuition is presented in Algorithm 1. The pseudo-code for the deadline-centric family is similar and thus omitted for brevity; an illustrative sketch of that family is given below.
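Since the paper omits the deadline-centric pseudocode, here is a minimal Python sketch of that family under stated assumptions: the helper names are hypothetical, the profitability test mirrors the one used in Algorithm 1 below, and the bid argument is supplied by one of the bid-setting methods described next.

# Illustrative sketch of the deadline-centric policy family.

def deadline_centric_allocation(remaining_work, cost_so_far, value,
                                slots_to_deadline, parallelism,
                                on_demand_price, M, bid):
    """Return (on_demand_instances, spot_instances, bid) for the current slot."""
    # Drop the job if it can no longer finish on time or be completed profitably.
    if slots_to_deadline * parallelism < remaining_work:
        return (0, 0, 0.0)
    if cost_so_far + on_demand_price * remaining_work > value:
        return (0, 0, 0.0)

    request = min(remaining_work, parallelism)
    if slots_to_deadline > M:
        return (0, request, bid)   # deadline still far away: spot instances only
    return (request, 0, 0.0)       # deadline close: on-demand instances only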

We next describe two different methods to set the bids for the spot instances. Each of the policies above can use each of the methods described below: (i) Fixed bid. A fixed bid value b is used throughout. (ii) Variable bid. The bid price is chosen adaptively based on past spot market prices (which makes sense as long as the prices are not too fluctuating and unpredictable). The variable-bid method is parameterized by a weight γ and a safety parameter ε to handle small price variations. At each round, the bid price for spot instances is set as the weighted average of past spot prices (where the effective horizon is determined by the weight γ) plus ε. For brevity, we shall often use the terms fixed-bid policies or variable-bid policies, to indicate that a policy (either deadline-centric or rate-centric) uses the fixed-bid method or the variable-bid method, respectively. Observe that variable-bid policies represent one simple alternative for exploiting the knowledge about past spot prices. The design of more sophisticated policies that utilize price history, such as policies that incorporate potential seasonality variation, is left as an interesting direction for future work.

ALGORITHM 1: Rate-centric Policy
Parameters (with Fixed-Bid method): On-demand rate σ ∈ [0,1]; bid b ∈ R_+
Parameters (with Variable-Bid method): On-demand rate σ ∈ [0,1]; weight γ ∈ [0,1]; safety parameter ε ∈ R_+
Input: Job parameters {d_j, z_j, c_j, v_j}
If c_j d_j < z_j or p σ z_j > v_j, drop job   // Job too large or too expensive to handle profitably
for each time slot t in which the job is active do
    If job is done, return
    Let m be the number of remaining time slots till the job deadline (including the current one)
    Let r be the remaining job size
    Let q be the cost incurred so far in treating the job
    // Check if more on-demand instances are needed to ensure timely job completion
    if (σ + m − 1) min{r, c_j} < r then
        // Check if running the job just with on-demand is still worthwhile
        if p r + q < v_j then
            Request min{r, c_j} on-demand instances
        else
            Drop job
        end if
    else
        Request σ min{r, c_j} on-demand instances
        Request (1−σ) min{r, c_j} spot instances at price:
            Fixed-Bid method: bid price b
            Variable-Bid method: (1/Z) ∫_0^τ p_s(y) γ^{τ−y} dy + ε, where Z = ∫_0^τ γ^{τ−y} dy is a normalization constant
    end if
end for

Note that these policy sets include, as special cases, some simple heuristics that are used in practice [3]; for example, heuristics that place a fixed bid or choose a bid at random according to some distribution (both with the option of switching to on-demand instances at some point). These heuristics (and similar others) can be implemented by fixing the weights given to the different policies (e.g., to implement a policy which selects the bid uniformly at random, set equal weights for policies that use the fixed-bid method and zero weights for the policies that use the variable-bid method). The learning approach which we describe below is naturally more flexible and powerful, as it adapts the weights of the different policies based on performance. More generally, we emphasize that our framework can certainly include additional families of parameterized policies, while our focus on the above two families is for simplicity and proof of concept. In addition, our learning approach can incorporate other parameters for resource allocation that are provided by cloud platforms, e.g., VM instance type, datacenter/region. At the same time, some of these parameters may be set a priori based on user constraints; e.g., an extra-large instance may be fixed to accommodate the large working sets of an application in memory, and a datacenter may be fixed due to application data stored in that location.
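To make Algorithm 1 and the variable-bid method concrete, here is a minimal Python sketch under stated assumptions: it makes one allocation decision per slot, uses a discrete price history for the weighted-average bid, and the helper and parameter names are illustrative rather than the authors' exact implementation.

# Minimal sketch of the rate-centric policy (Algorithm 1) with both bid-setting methods.

def variable_bid(price_history, gamma, epsilon):
    """Weighted average of past spot prices (effective horizon set by gamma) plus a safety margin."""
    if not price_history:
        return epsilon
    n = len(price_history)
    weights = [gamma ** (n - 1 - i) for i in range(n)]  # most recent price gets weight 1
    z = sum(weights)
    return sum(w * p for w, p in zip(weights, price_history)) / z + epsilon

def rate_centric_allocation(remaining_work, cost_so_far, value, slots_left,
                            parallelism, on_demand_price, sigma,
                            fixed_bid=None, gamma=None, epsilon=None,
                            price_history=()):
    """Return (on_demand_instances, spot_instances, bid) for the current slot."""
    request = min(remaining_work, parallelism)
    # Not enough time left to finish at rate sigma: fall back to on-demand only,
    # provided the job is still worth finishing.
    if (sigma + slots_left - 1) * request < remaining_work:
        if on_demand_price * remaining_work + cost_so_far < value:
            return (request, 0, 0.0)
        return (0, 0, 0.0)   # drop the job
    # Otherwise split the request between on-demand and spot instances
    # (fractional counts are kept, as rounding is ignored in the model).
    bid = fixed_bid if fixed_bid is not None else variable_bid(price_history, gamma, epsilon)
    return (sigma * request, (1 - sigma) * request, bid)

Setting gamma to zero makes the variable bid track only the most recent spot price, which matches the behavior discussed for the highly dynamic price setting in Section 4.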
3 The Online Learning Algorithm

In this section we first give an overview of the algorithm, and then describe how the algorithm is derived and provide theoretical guarantees on its performance.

Algorithm Overview. The learning algorithm pseudocode is presented as Algorithm 2. The algorithm works by maintaining a distribution over the set of allocation policies (described in Section 2). When a job arrives, it picks a policy at random according to that distribution, and uses that policy to handle the job. After the job finishes execution, the performance of each policy on that job is evaluated, and its probability weight is modified in accordance with its performance. The update is such that high-performing policies (as measured by f_j(π)) are assigned relatively higher weight than low-performing policies. The multiplicative form of the update ensures strong theoretical guarantees (as shown later) and practical performance. The rate of modification is controlled by a step-size parameter η_j, which slowly decays throughout the algorithm's run.

Our algorithm also uses a parameter d, defined as an upper bound on the number of jobs that arrive during any single job's execution. Intuitively, d is a measure of the delay incurred between choosing which policy to treat a given job, till we can evaluate its performance on that job. Thus, d is closely related to the job lifetimes d_j defined in Section 2. Note that while d_j is measured in time units (e.g., hours), d measures the number of new jobs arriving during a given job's execution.
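For concreteness, the following minimal sketch shows one way the bound d could be estimated from a historical trace of job arrival and completion times; this is an illustrative assumption of this sketch, not a procedure specified by the paper.

# Illustrative estimate of the delay bound d: the largest number of jobs that
# arrive while some single job is still executing (names are hypothetical).

def estimate_delay_bound(arrival_times, completion_times):
    """arrival_times[i], completion_times[i]: start/end times of job i (same units)."""
    d = 0
    for i, (start, end) in enumerate(zip(arrival_times, completion_times)):
        arrivals_during_execution = sum(
            1 for j, a in enumerate(arrival_times)
            if j != i and start < a <= end
        )
        d = max(d, arrivals_during_execution)
    return d

# Example: job 0 runs over [0, 5]; jobs arriving at t=1, 2, 3 overlap it.
print(estimate_delay_bound([0, 1, 2, 3], [5, 2, 3, 4]))  # -> 3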

We again emphasize that this delay is what sets our setting apart from standard online learning, where the feedback on each policy's performance is immediate, and it necessitates a modified algorithm and analysis.

The running time of the algorithm scales linearly with the number of policies, and thus our framework can deal with (polynomially) large sets of policies. It should be mentioned that there exist online learning techniques which can efficiently handle exponentially large policy sets by taking the set structure into account (e.g., [8], Chapter 5). Incorporating these techniques here remains an interesting direction for future work.

We assume, without loss of generality, that the payoff for each job is bounded in the range [0,1]. If this does not hold, then one can simply feed the algorithm with normalized values of the payoffs f_j(π). In practice, it is enough for the payoffs to be on the order of ±1 on average for the algorithm to work well, as shown in our experiments in Section 4.

ALGORITHM 2: Online Learning Algorithm
Input: Set of n policies π parameterized by {1,...,n}, upper bound d on the jobs' lifetimes
Initialize w_1 = (1/n, 1/n, ..., 1/n)
for j = 1,...,J do
    Receive job j
    Pick a policy π with probability w_{j,π}, and apply it to job j
    if j ≤ d then
        w_{j+1} := w_j
    else
        η_j := √( log(n) / (2d(j−d)) )
        for π = 1,...,n do
            Compute f_j(π) to be the utility for job j−d, assuming we used policy π
            w_{j+1,π} := w_{j,π} exp( η_j f_j(π) )
        end for
        for π = 1,...,n do
            w_{j+1,π} := w_{j+1,π} / Σ_{r=1}^{n} w_{j+1,r}
        end for
    end if
end for

Derivation of the Algorithm. Next we provide a formal derivation of the algorithm as well as theoretical guarantees. The setting of our learning framework can be abstracted as follows: we divide time into rounds such that round j starts when job j arrives. At each such round, we make some choice on how to deal with the arriving job. The choice is made by picking a policy π_j from a fixed set of n policies, which will be parameterized by {1,...,n}. However, initially, we do not know the utility of our policy choice, as future spot prices are unknown. We can eventually compute this utility in retrospect, but only after d rounds have elapsed and the relevant spot prices are revealed. Let f_j(π_{j−d}) denote the utility function of the policy choice π_{j−d} made in round j−d. Note that according to our model, this function can be evaluated given the spot prices till round j. Thus, Σ_j f_j(π_{j−d}) is our total payoff from all the jobs we handled. We measure the algorithm's performance in terms of average regret with respect to any fixed choice in hindsight, i.e.,

max_π (1/J) Σ_j f_j(π) − (1/J) Σ_j f_j(π_{j−d}).

Generally speaking, online learning algorithms attempt to minimize this regret, and ensure that as J increases the average regret converges to 0, hence the algorithm's performance converges to that of the single best policy in hindsight. A crucial advantage of online learning is that this can be attained without any statistical assumptions on the job characteristics or the price fluctuations.

When d = 0, this problem reduces to the standard setting of online learning, where we immediately obtain feedback on the chosen policy's performance. However, as discussed in Section 1, this setting does not apply here because the function f_j does not depend on the learner's current policy choice π_j, but rather on its choice at an earlier round, π_{j−d}. Hence, there is a delay between the algorithm's decision and the feedback on the decision's outcome.

Our algorithm is based on the following randomized approach. The learner first picks an n-dimensional distribution vector w_1 = (1/n,...,1/n), whose entries are indexed by the policies π. At every round j, the learner chooses a policy π_j ∈ {1,...,n} with probability w_{j,π_j}.
If j ≤ d, the learner lets w_{j+1} = w_j. Otherwise, it updates the distribution according to

w_{j+1,π} = w_{j,π} exp( η_j f_j(π) ) / Σ_{i=1}^{n} w_{j,i} exp( η_j f_j(i) ),

where η_j is a step-size parameter. Again, this form of update puts more weight on higher-performing policies, as measured by f_j(π).
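To make the update rule concrete, here is a minimal, self-contained Python sketch of the multiplicative-weights update with delayed feedback described above; the policy-evaluation callback and the job sequence are illustrative assumptions about how the surrounding system would supply payoffs.

# Illustrative sketch of Algorithm 2: exponential weights with a feedback delay of d jobs.
import math
import random

def online_learning(policies, jobs, d, evaluate_utility):
    """policies: list of n candidate policies.
    jobs: sequence of J jobs, in arrival order.
    d: upper bound (>= 1) on the number of jobs arriving during any single job's execution.
    evaluate_utility(job, policy): payoff, normalized to [0, 1], of applying
        `policy` to `job`, computable once the relevant spot prices are known."""
    n = len(policies)
    weights = [1.0 / n] * n
    chosen = []   # policy index picked for each job

    for j, job in enumerate(jobs, start=1):
        # Pick a policy at random according to the current distribution.
        idx = random.choices(range(n), weights=weights)[0]
        chosen.append(idx)

        if j <= d:
            continue   # payoff of job j-d not available yet; keep weights unchanged

        # Delayed feedback: job j-d has finished, so every policy can be scored on it.
        eta = math.sqrt(math.log(n) / (2 * d * (j - d)))
        finished_job = jobs[j - d - 1]
        payoffs = [evaluate_utility(finished_job, p) for p in policies]

        weights = [w * math.exp(eta * f) for w, f in zip(weights, payoffs)]
        total = sum(weights)
        weights = [w / total for w in weights]

    return weights, chosen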

Theoretical Guarantees. The following result quantifies the regret of the algorithm, as well as the (theoretically optimal) choice of the step-size parameter η_j. The theorem shows that the average regret of the algorithm scales with the jobs' lifetime bound d, and decays to zero with the number of jobs J. Specifically, as J increases, the performance of our algorithm converges to that of the best-performing policy in hindsight. This behavior is to be expected from a learning algorithm, and crucially, it occurs without any statistical assumptions on the job characteristics or the price fluctuations. The performance also depends, but very weakly, on the size n of our set of policies. From a machine learning perspective, the result shows that the multiplicative-update mechanism that we build upon can indeed be adapted to a delayed-feedback setting, by adapting the step-size to the delay bound, thus retaining its simplicity and scalability.

Theorem 1. Suppose (without loss of generality) that f_j for all j = 1,...,J is bounded in [0,1]. For the algorithm described above, suppose we pick η_j = √( log(n) / (2d(j−d)) ). Then for any δ ∈ (0,1), it holds with probability at least 1−δ over the algorithm's randomness that

max_π (1/J) Σ_{j=1}^{J} f_j(π) − (1/J) Σ_{j=1}^{J} f_j(π_{j−d}) ≤ 9 √( 2d log(n/δ) / J ).

To prove Theorem 1, we will use the following two lemmas.

Lemma 1. Consider the sequence of distribution vectors w_{1+d},...,w_J defined by w_{1+d} = (1/n,...,1/n) and, for all π ∈ {1,...,n},

w_{j+1,π} = w_{j,π} exp( η_j f_j(π) ) / Σ_{π'=1}^{n} w_{j,π'} exp( η_j f_j(π') ),

where η_j = a √( log(n)/(j−d) ) for some a ∈ (0,2]. Then it holds that

max_{π∈{1,...,n}} Σ_j f_j(π) − Σ_j Σ_{π=1}^{n} w_{j,π} f_j(π) ≤ (4/a) √( J log(n) ).

Proof. For any j = 1,...,J, let g_j(π) = 1 − f_j(π). Then the update step specified in the lemma can be equivalently written as, for all π ∈ {1,...,n},

w_{j+1,π} = w_{j,π} exp( −η_j g_j(π) ) / Σ_{π'=1}^{n} w_{j,π'} exp( −η_j g_j(π') ).

The initialization of w_{1+d} and the update step specified above are identical to the exponentially-weighted average forecaster of [8], also known as the Hedge algorithm [9]. Using the proof of Theorem 2.3 from [8], we have that for any parameter a, if we pick η_j = a √( log(n)/j ), then Σ_j Σ_{π=1}^{n} w_{j,π} g_j(π) − min_{π∈{1,...,n}} Σ_j g_j(π) is upper bounded by a sum of terms of order a √( J log(n) ) and √( (J+1) log(n) ) / a. Since √( (J+1) log(n) ) / a ≤ √( J log(n) ) / a + √( log(n) ) / a, and since a ∈ (0,2], the expression above is at most (4/a) √( J log(n) ), so we get

Σ_j Σ_{π=1}^{n} w_{j,π} g_j(π) − min_{π∈{1,...,n}} Σ_j g_j(π) ≤ (4/a) √( J log(n) ).

The result stated in the lemma follows by re-substituting g_j(π) = 1 − f_j(π), and using the fact that Σ_π w_{j,π} = 1.

Lemma 2. Let a_1,...,a_n ∈ [0,1] and η > 0 be fixed. For any distribution vector w in the n-simplex, if we define w' to be the new distribution vector with, for all π ∈ {1,...,n},

w'_π = w_π exp( −η a_π ) / Σ_{r=1}^{n} w_r exp( −η a_r ),

then ‖w − w'‖_1 ≤ 4 min{1, η}.

Proof. If η > 1/2, the bound is trivial, since for any two distribution vectors w, w' it holds that ‖w − w'‖_1 ≤ 2. Thus, let us assume that η ≤ 1/2. We have

‖w − w'‖_1 = Σ_{π=1}^{n} |w_π − w'_π| = Σ_{π=1}^{n} w_π | 1 − exp( −η a_π ) / Σ_{r=1}^{n} w_r exp( −η a_r ) |.

Since Σ_π w_π = 1, we can apply Hölder's inequality, and upper bound the above by

max_π | 1 − exp( −η a_π ) / Σ_{r=1}^{n} w_r exp( −η a_r ) |.     (2)

Using the inequality 1 − x ≤ exp(−x) ≤ 1 for all x ≥ 0, we have that

1 − η a_π ≤ exp( −η a_π ) ≤ 1   and   1 − η ≤ Σ_{r=1}^{n} w_r exp( −η a_r ) ≤ 1,

so we can upper bound Eq. (2) by

max_π max{ 1 − (1 − η a_π), 1/(1−η) − 1 } ≤ max{ η, η/(1−η) } ≤ 2η,

using our assumption that a_π ≤ 1 for all π, and that η ≤ 1/2.

Using these lemmas, we are now ready to prove Theorem 1.

Proof. Our goal is to upper bound

Σ_j f_j(π) − Σ_j f_j(π_{j−d})     (3)

for any fixed π. We note that π_1,...,π_J are independent random variables, since their randomness stems only from the independent sampling of each π_j from each w_j.

Thus, the regret can be seen as a function over J independent random variables. Moreover, for any choice of π_1,...,π_J, if we replace π_j by any other π'_j, the regret expression will change by at most 1. Invoking McDiarmid's inequality [8], which captures how close to their expectation such stable random functions are, it follows that Eq. (3) is at most

E[ Σ_j f_j(π) − Σ_j f_j(π_{j−d}) ] + √( log(1/δ) J )     (4)

with probability at least 1−δ. We now turn to bound the expectation in Eq. (4). We have

E[ Σ_j f_j(π) − Σ_j f_j(π_{j−d}) ] = Σ_j f_j(π) − Σ_j E_{π_{j−d} ∼ w_{j−d}}[ f_j(π_{j−d}) ].     (5)

On the other hand, by Lemma 1, we have that for any fixed π,

Σ_j f_j(π) − Σ_j E_{π ∼ w_j}[ f_j(π) ] ≤ (4/a) √( J log(n) ),     (6)

where we assume η_j = a √( log(n)/(j−d) ). Thus, the main component of the proof is to upper bound

Σ_j ( E_{π ∼ w_j}[ f_j(π) ] − E_{π_{j−d} ∼ w_{j−d}}[ f_j(π_{j−d}) ] ) = Σ_j Σ_{π=1}^{n} ( w_{j−d,π} − w_{j,π} ) f_j(π).     (7)

By Hölder's inequality and the triangle inequality, this is at most

Σ_j ‖ w_{j−d} − w_j ‖_1 ≤ Σ_j Σ_{i=1}^{d} ‖ w_{j−i+1} − w_{j−i} ‖_1,

which by Lemma 2 is at most 4 Σ_j Σ_{i=1}^{d} min{1, η_{j−i}}, where we take η_{j−i} = 0 if j−i < 1+d (this refers to the distribution vectors w_1,...,w_d, which don't change, and hence the norm of their difference is zero). This in turn can be bounded by

4 Σ_j d min{ 1, η_{j−d} } = 4d Σ_j min{ 1, a √( log(n)/(j−d) ) }.

Combining this upper bound on Eq. (7) with Eq. (6), we can upper bound Eq. (5) by

(4/a) √( J log(n) ) + 4d Σ_j min{ 1, a √( log(n)/(j−d) ) } ≤ (4/a) √( J log(n) ) + 4d a Σ_j √( log(n)/(j−d) ) ≤ (4/a) √( J log(n) ) + 8d a √( J log(n) ).

Picking a = 1/√(2d), we get that Eq. (5) is at most 8 √( 2d log(n) J ). Combining this with Eq. (4), and noting that it is a probabilistic upper bound on Eq. (3), we get that

Σ_j f_j(π) − Σ_j f_j(π_{j−d}) ≤ 8 √( 2d log(n) J ) + √( log(1/δ) J ) ≤ 8 √( 2d log(n/δ) J ) + √( log(n/δ) J ) ≤ ( 8√(2d) + 1 ) √( log(n/δ) J ) ≤ 9 √( 2d log(n/δ) J ).

Dividing by J, and noting that the inequality holds simultaneously with respect to any fixed π, the theorem follows.

4 Evaluation

In this section we evaluate the performance of our learning algorithm via simulations on synthetic job data as well as a real dataset from a large batch computing cluster. The benefit of using synthetic datasets is that it allows the flexibility to evaluate our approach under a wide range of workloads. Before continuing, we would like to emphasize that the contribution of our paper is beyond the design of particular sets of policies; there are many other policies which can potentially be designed for our task. What we provide is a meta-algorithm which can work on any possible policy set, and in our experiments we intend to exemplify this on plausible policy sets which can be easily understood and interpreted.

Throughout this section, the parameters of the different policies are set such that the entire range of plausible policies is covered (up to the limitation of discretization). For example, the spot-price time series in Section 4.2 ranges between 0.12 and 0.68 (see Fig. 6(a)). Accordingly, we allow the fixed bids b to range between 0.15 and 0.7 with 5-cent resolution. Bids higher than 0.7 perform exactly as the 0.7 bid, hence can be excluded; bids of 0.1 or lower will always be rejected, hence can be excluded as well.
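As an illustration of how such a discretized policy set can be enumerated, the sketch below builds the full parameter grid; the specific parameter values mirror those listed for the synthetic experiments in Section 4.1 and are assumptions of this sketch rather than a canonical list.

# Illustrative enumeration of the parameterized policy grid used in the evaluation.
from itertools import product

deadline_thresholds = range(0, 6)                    # M for deadline-centric policies
on_demand_rates = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]     # sigma for rate-centric policies
fixed_bids = [0.10, 0.15, 0.20, 0.25]                # fixed-bid method
gammas = [0.0, 0.2, 0.4, 0.6, 0.8]                   # variable-bid weight
epsilons = [0.0, 0.02, 0.04, 0.06, 0.08, 0.10]       # variable-bid safety parameter

policies = []
for family, knobs in [("deadline", deadline_thresholds), ("rate", on_demand_rates)]:
    for k in knobs:
        for b in fixed_bids:
            policies.append((family, k, "fixed", b))
        for g, e in product(gammas, epsilons):
            policies.append((family, k, "variable", (g, e)))

print(len(policies))   # 2 * 6 * (4 + 5*6) = 408 candidate policies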

4.1 Simulations on Synthetic Data

Setup: For all the experiments on synthetic data, we use the following setup. Job arrivals are generated according to a Poisson distribution with mean 10 minutes; the job size z_j (in instance-hours) is chosen uniformly and independently at random up to a maximum size of 100, and the parallelism constraint c_j was fixed at 20 instance-hours. Job values scale with the job size and the instance prices. More precisely, we generate the value as x p z_j, where x is a uniform random variable in [0.5, 2] and p is the on-demand price. Similarly, job deadlines also scale with size and are chosen to be x z_j / c_j, where x is uniformly random on [1, 2]. As discussed in Section 3, the on-demand and spot prices are normalized to ensure that the average payoff per job is on the order of ±1. The on-demand price is 0.25 per hour, while spot prices are updated every 5 minutes (the way we generate spot prices varies across experiments).

Resource allocation policies. We generate a parameterized set of policies. Specifically, we use 204 deadline-centric policies, and the same number of rate-centric policies. These policy sets use six values for M (M ∈ {0,...,5}) and σ (σ ∈ {0, 0.2, 0.4, 0.6, 0.8, 1}), respectively. For either policy set, we have policies that use the fixed-bid method (b ∈ {0.1, 0.15, 0.2, 0.25}), and policies that use the variable-bid method (weight γ ∈ {0, 0.2, 0.4, 0.6, 0.8}, and safety parameter ε ∈ {0, 0.02, 0.04, 0.06, 0.08, 0.1}).

Simulation results: Experiment 1. In the first experiment, we compare the total payoff across 10k jobs of all the 408 policies to our algorithm. Spot prices are chosen independently and randomly as 0.15 + 0.05x, where x is a standard Gaussian random variable (negative values were clipped to 0). The results presented below pertain to a single run of the algorithm, as they were virtually identical across independent runs. Figure 2 shows the total payoff for the 408 policies for this dataset. The first 204 policies are rate-centric policies, while the remaining 204 are deadline-centric policies. The performance of our algorithm is marked using a dashed line. As can be seen, our algorithm performs close to the best policies in hindsight. Further, it is interesting to note that we have both deadline-centric and rate-centric policies among the best policies, indicating that one needs to consider both sets as candidate policies.

Figure 2: Total payoff for processing 10k jobs across each of the 408 resource allocation policies (the algorithm's payoff is shown as a dashed black line). The first 204 policies are rate-centric, and the last 204 policies are deadline-centric.

We perform three additional experiments with a similar setup to the above, in order to obtain insights on the properties and inner workings of the algorithm. To be able to dive deeper into the analysis, we use only the 204 rate-centric policies. The only element that we modify across experiments is the statistical properties of the spot-price sequence.

Figure 3: Evaluation under a stationary spot-price distribution (mean spot price of 0.1): the probability assigned per policy after executing 500, 1,000, 2,500 and 5,000 jobs.

Experiment 2. Spot prices are generated as above, except that we use 0.1 as their mean (as opposed to 0.15 above). After executing 10,000 jobs, our algorithm performs close to that of the best policy, as it assigns probability close to 1 to that policy, while outperforming 199 out of the total 204 policies. Further, its average regret is only 0.3, as opposed to 7.5 on average across all policies. Note that the upper bound on the delay in this experiment is d = 66, i.e., up to 66 jobs are being processed while a single job finishes execution. This shows that our approach can handle a significant delay in getting feedback, while still performing close to the best policy.
In this experiment, the best policy in hindsight uses a fixed bid of 0.25. This can be explained by considering the parameters of our simulation: since the on-demand price is 0.25 and the spot price is always relatively lower, a bid of 0.25 always yields an allocation of spot instances for the entire hour. This result also highlights the easy interpretation of the resource allocation strategy of the best policy.

Figure 3 shows the probability assignment for each policy over time by our algorithm, after executing 500, 1,000, 2,500 and 5,000 jobs. We observe that as the number of processed jobs increases, our algorithm provides performance close to the best policy in hindsight.

Experiment 3. In the next experiment, the spot prices are set as above for the first 10% of the jobs, and then the mean is increased to 0.2 (rather than 0.1) during the execution of the last 90% of the jobs.

This setup corresponds to a non-stationary distribution: a learning algorithm which simply attempts to find the best policy at the beginning and stick to it will be severely penalized when the dynamics of spot prices change. Figure 4 shows the evaluation results. We observe that our online algorithm is able to adapt to the changing dynamics and converges to a probability weight distribution different from the previous setting. Overall, our algorithm attains an average regret of only 0.5, as opposed to 4.8 on average across the 204 baseline policies. Note that in this setting, the best policies are those which rely purely on on-demand instances instead of spot instances. This is expected because the spot prices tend to be only slightly lower than the on-demand price, and their dynamic volatility makes them unattractive in comparison. This result demonstrates that there are indeed scenarios where the dilemma between choosing on-demand vs. spot instances is important and can significantly impact performance, and that no single instance type is always suitable.

Figure 4: Evaluation under a non-stationary distribution (mean spot price of 0.2): (a) total payoff for executing 10k jobs across each of the 204 policies (the algorithm's payoff is shown as a dashed black line), and (b) the final probability assigned per policy by our learning algorithm.

Experiment 4. This time we set the spot price to alternate between 0.3 for one hour and zero in the next. This variation is favorable for variable-bid policies with a small γ, which use a small history of spot prices to determine their next bid. Such policies quickly adapt when the spot price drops. In contrast, fixed-bid policies and variable-bid policies with a large γ suffer, as their bid price is not sufficiently adaptive. Figure 5 shows the results. We find that the group of highest-payoff policies are those for which γ = 0, i.e., they use the last spot price to choose a bid for the current round, and thus quickly adapt to changing spot prices. Further, our algorithm quickly detects and adapts to the best policies in this setting. The average regret obtained by our algorithm is 0.8, compared to 4.5 on average for our baseline policies. Moreover, the algorithm's overall performance is better than that of 192 out of the 204 policies.

Figure 5: Evaluation under a highly dynamic distribution (hourly spot prices alternate between 0.3 and zero): (a) total payoff for processing 10k jobs across each of the 204 policies (the algorithm's payoff is shown as a dashed black line), and (b) the final probability assigned per policy by our learning algorithm.

Figure 6: Evaluation on the real dataset: (a) Amazon EC2 spot pricing data (a subset of the data from Figure 1) for Linux instances of type "large"; the fixed on-demand price is 0.34. (b) Total payoff for processing 20k jobs across each of the 504 resource allocation policies (the algorithm's payoff is shown as a dashed black line).

4.2 Evaluation on Real Datasets

Setup: Workload data. We use job traces from a large batch computing cluster for two days, consisting of about 600 MapReduce jobs. Each MapReduce job comprises multiple phases of execution, where the next phase can start only after all tasks in the previous phase have completed. The trace includes the runtime of the job in server CPU hours (totcpuhours), the total number of servers allocated to it (totservers), and the maximum number of servers allocated to the job per phase (maxserversperphase).
Since our job model differs from the MapReduce model in terms of phase dependency, we construct the parallelism constraint from the trace as follows: since the average running time of a server is totcpuhours/totservers, we set the parallelism bound c_j for each job to be c_j = maxserversperphase · totcpuhours/totservers. Note that this bound is in terms of CPU hours, as required. Since deadline values per job are not specified, we use the job completion time as its deadline. For assigning values per job, we generate them using the same approach as for the synthetic datasets. Specifically, we assign a random value for each job j equal to its total size (in CPU hours) times the on-demand price times B = (α + N_j), where α = 5 and N_j ∈ [0,1] is drawn uniformly at random. The job trace is replicated to generate 20k jobs.
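The following minimal sketch spells out this per-job construction from the trace fields named above; the record format, the example numbers, and the use of α = 5 as a plain constant are assumptions of this sketch.

# Illustrative derivation of per-job parameters from the MapReduce trace fields.
import random

ON_DEMAND_PRICE = 0.34   # large Linux instance, as in Figure 6(a)
ALPHA = 5

def job_from_trace(totcpuhours, totservers, maxserversperphase, completion_hours):
    avg_server_runtime = totcpuhours / totservers
    parallelism_bound = maxserversperphase * avg_server_runtime
    deadline = completion_hours   # completion time used as the deadline
    value = totcpuhours * ON_DEMAND_PRICE * (ALPHA + random.random())
    return {"size": totcpuhours, "parallelism": parallelism_bound,
            "deadline": deadline, "value": value}

print(job_from_trace(totcpuhours=120.0, totservers=40,
                     maxserversperphase=25, completion_hours=6))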

Spot prices. We use a subset of the historical spot prices from Amazon EC2, as shown in Figure 1, for large Linux instances. Figure 6(a) shows the selected sample of spot price history, showing significant price variation over time. Intuitively, we expect that overall, policies that use a large ratio of spot instances will perform better, since on average the spot price is about half of the on-demand price.

Resource allocation policies. We generated a total of 504 policies, half rate-centric and half deadline-centric. In each half, the first 72 are fixed-bid policies (i.e., policies that use the fixed-bid method) in increasing order of (on-demand rate, bid price). The remaining 180 variable-bid policies are in increasing order of (on-demand rate, weight, safety parameter). The possible values for the different parameters are as described for the synthetic data experiments, with the exception that we allow more options for the fixed bid price, b ∈ {0.15, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7}.

Evaluating our online algorithm on the real trace poses several new challenges compared to the synthetic datasets in Section 4.1. First, job sizes and hence their values are highly variable, to the effect that the difference in size between small and large jobs can be of six orders of magnitude. Second, spot prices can exhibit high variability, or alternatively be almost stable towards the end, as exemplified in Figure 6(a).

Simulation results: Figure 6(b) shows the results for a typical run of this experiment. Notably, the payoff of our algorithm outperforms the performance of most of the individual policies, and obtains comparable performance to the best individual policies (which are a subset of the rate-centric policies). We repeated the experiment 20 times; on average, the regret per job of our learning algorithm is around 34 times smaller than the average regret across the individual policies.

Figure 7 shows the evolution of the policy weights over time for a typical run, until converging to the final policy weights (after handling the entire 20k jobs). We observe that our algorithm evolves from preferring a relatively large subset of both deadline-centric and rate-centric policies (at around 500 jobs) to preferring only rate-centric policies, both fixed-bid and variable-bid (at around 2,000 jobs). Eventually, the algorithm converges to a single rate-centric policy with a fixed bid. This behavior can be explained based on the spot pricing data in Figure 6(a): due to the initially high variability in spot prices, our algorithm alternates between fixed-bid policies and variable-bid policies, which try to learn from past prices. However, since the prices show little variability for the remaining two thirds of the data, the algorithm progressively adapts its weight towards the fixed-bid policy, which is commensurate with the almost stable pricing curve.

Figure 7: Evaluation on the real dataset: the probability assigned per policy by our learning algorithm after processing 500, 1,000, 3,000 and 5,000 jobs. The algorithm converges to a single policy (a fixed-bid rate-centric policy), marked by an arrow.

5 Related Literature

While there exist other potential approaches to our problem, we considered an online learning approach due to its lack of any stochastic assumptions, its online (rather than offline) nature, its capability to work on arbitrary policy sets, and its ability to adapt to delayed feedback. The idea of applying online learning algorithms to sequential decision-making tasks is well known ([9]), and there are quite a few papers which study various engineering applications (e.g., [1, 5, 10, 15]).
However, these efforts do not deal with the problem of delayed feedback, as it violates the standard framework of online learning. The issue of delay has been previously considered (see [4] and references therein), but these works are either not in the context of the online techniques we are using, or propose less practical solutions, such as running many copies of the algorithm in parallel. In any case, we are not aware of any prior study of delay-tolerant online learning procedures for our application domain.

The launch of commercial cloud computing offerings has motivated the systems research community to investigate how to exploit this market for efficient resource allocation and cost reductions. Some solution concepts are borrowed from earlier works on executing jobs in multiple grids (e.g., [2] and references therein). However, new techniques are required in the cloud computing context, which directly incorporate cost considerations and the variety of instance renting options. There have been numerous works in this context dealing with different provider and customer scenarios. One branch of papers considers the auto-scaling problem, where an application owner has to decide on the right number and type of VMs to be purchased, and dynamically adapt resources as a function of changing workload conditions (see, e.g., [7, 6] and references therein).

We focus the remainder of our literature survey on cloud resource management papers that include spot instances as one of the allocation options. Some papers focus on building statistical models for spot prices, which can then be used to decide when to purchase EC2 spot instances (see, e.g., [3, 1]). Similarly, [24] examines the statistical properties of customer workload with the objective of helping the cloud determine how much resources to allocate for spot instances. In the context of large-scale batch applications, [4] proposes a probabilistic model for bidding in spot prices while taking into account job termination probabilities. However, [4] focuses on pre-computation of a fixed (non-adaptive) bid, which is determined greedily based on existing market conditions; moreover, the suggested framework does not support an automatic selection between on-demand and spot instances. [22] uses a genetic algorithm to quickly approximate the Pareto set of makespan and cost for a bag of tasks; each underlying resource configuration consists of a different mix of on-demand and spot instances. The setting in [22] is fundamentally different than ours, since [22] optimizes a global makespan objective, while we assume that jobs have individual deadlines. Finally, [2] proposes near-optimal bidding strategies for cloud service brokers that utilize the spot instance market to reduce the computational cost while maximizing the profit. Our work differs from [2] in two main aspects. First, unlike [2], our online learning framework does not require any distributional assumptions on the spot price evolution (or the job model). Second, our model may associate a different value and deadline with each job, whereas in [2] the value is only a function of job size, and deadlines are not explicitly treated.

6 Conclusion

In this paper we design and evaluate an online learning algorithm for automated and adaptive resource allocation for executing batch jobs over cloud computing platforms. Our basic model can be extended to solve other resource allocation problems in cloud domains, such as renting small vs. medium vs. large instances, choosing computing regions, and different bundling options in terms of CPU, memory, network and storage. We expect that the learning framework developed here would be useful in addressing these extensions. An interesting direction for future research is incorporating reserved instances, for long-term handling of multiple jobs. This makes the algorithm stateful, in the sense that its actions affect the payoffs of policies chosen in the future. This does not accord with our current theoretical framework, but may be handled using different tools from competitive analysis.

Acknowledgements. We thank our shepherd Alexandru Iosup and the ICAC reviewers for the useful feedback.

References

[1] AGMON BEN-YEHUDA, O., BEN-YEHUDA, M., SCHUSTER, A., AND TSAFRIR, D. Deconstructing Amazon EC2 spot instance pricing. ACM Transactions on Economics and Computation 1, 3 (2013), 16.

[2] ALIZADEH, M., GREENBERG, A., MALTZ, D., PADHYE, J., PATEL, P., PRABHAKAR, B., SENGUPTA, S., AND SRIDHARAN, M. Data center TCP (DCTCP). In ACM SIGCOMM Computer Communication Review (2010), vol. 40, ACM, pp. 63-74.

[3] AWS Case Studies. http://aws.amazon.com/solutions/case-studies/.

[4] ANDRZEJAK, A., KONDO, D., AND YI, S. Decision model for cloud computing under SLA constraints. In MASCOTS (2010).

[5] ARI, I., AMER, A., GRAMACY, R., MILLER, E., BRANDT, S., AND LONG, D. ACME: Adaptive caching using multiple experts.
In WDAS (2002).

[6] AZAR, Y., BEN-AROYA, N., DEVANUR, N. R., AND JAIN, N. Cloud scheduling with setup cost. In Proceedings of the 25th ACM Symposium on Parallelism in Algorithms and Architectures (2013), ACM.

[7] Browsermob. http://aws.amazon.com/solutions/case-studies/browsermob/.

[8] CESA-BIANCHI, N., AND LUGOSI, G. Prediction, Learning, and Games. Cambridge University Press, 2006.

[9] FREUND, Y., AND SCHAPIRE, R. E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. 55, 1 (1997), 119-139.

[10] GRAMACY, R., WARMUTH, M., BRANDT, S., AND ARI, I. Adaptive caching by refetching. In NIPS (2002).


More information

Effects of Entry Restriction on Free Entry General Competitive Equilibrium. Mitsuo Takase

Effects of Entry Restriction on Free Entry General Competitive Equilibrium. Mitsuo Takase CAES Working Pper Series Effects of Entry Restriction on Free Entry Generl Competitive Euilirium Mitsuo Tkse Fculty of Economics Fukuok University WP-2018-006 Center for Advnced Economic Study Fukuok University

More information

Chapter 2: Relational Model. Chapter 2: Relational Model

Chapter 2: Relational Model. Chapter 2: Relational Model Chpter : Reltionl Model Dtbse System Concepts, 5 th Ed. See www.db-book.com for conditions on re-use Chpter : Reltionl Model Structure of Reltionl Dtbses Fundmentl Reltionl-Algebr-Opertions Additionl Reltionl-Algebr-Opertions

More information

Open Space Allocation and Travel Costs

Open Space Allocation and Travel Costs Open Spce Alloction nd Trvel Costs By Kent Kovcs Deprtment of Agriculturl nd Resource Economics University of Cliforni, Dvis kovcs@priml.ucdvis.edu Pper prepred for presenttion t the Americn Agriculturl

More information

Reinforcement Learning. CS 188: Artificial Intelligence Fall Grid World. Markov Decision Processes. What is Markov about MDPs?

Reinforcement Learning. CS 188: Artificial Intelligence Fall Grid World. Markov Decision Processes. What is Markov about MDPs? CS 188: Artificil Intelligence Fll 2010 Lecture 9: MDP 9/2/2010 Reinforcement Lerning [DEMOS] Bic ide: Receive feedbck in the form of rewrd Agent utility i defined by the rewrd function Mut (lern to) ct

More information

Chapter 4. Profit and Bayesian Optimality

Chapter 4. Profit and Bayesian Optimality Chpter 4 Profit nd Byesin Optimlity In this chpter we consider the objective of profit. The objective of profit mximiztion dds significnt new chllenge over the previously considered objective of socil

More information

Roadmap of This Lecture

Roadmap of This Lecture Reltionl Model Rodmp of This Lecture Structure of Reltionl Dtbses Fundmentl Reltionl-Algebr-Opertions Additionl Reltionl-Algebr-Opertions Extended Reltionl-Algebr-Opertions Null Vlues Modifiction of the

More information

Buckling of Stiffened Panels 1 overall buckling vs plate buckling PCCB Panel Collapse Combined Buckling

Buckling of Stiffened Panels 1 overall buckling vs plate buckling PCCB Panel Collapse Combined Buckling Buckling of Stiffened Pnels overll uckling vs plte uckling PCCB Pnel Collpse Comined Buckling Vrious estimtes hve een developed to determine the minimum size stiffener to insure the plte uckles while the

More information

ASYMMETRIC SWITCHING COSTS CAN IMPROVE THE PREDICTIVE POWER OF SHY S MODEL

ASYMMETRIC SWITCHING COSTS CAN IMPROVE THE PREDICTIVE POWER OF SHY S MODEL Document de trvil 2012-14 / April 2012 ASYMMETRIC SWITCHIG COSTS CA IMPROVE THE PREDICTIVE POWER OF SHY S MODEL Evens Slies OFCE-Sciences-Po Asymmetric switching costs cn improve the predictive power of

More information

Addition and Subtraction

Addition and Subtraction Addition nd Subtrction Nme: Dte: Definition: rtionl expression A rtionl expression is n lgebric expression in frction form, with polynomils in the numertor nd denomintor such tht t lest one vrible ppers

More information

Pillar 3 Quantitative Disclosure

Pillar 3 Quantitative Disclosure Pillr 3 Quntittive Disclosure In complince with the requirements under Bsel Pillr 3 nd the Monetry Authority of Singpore (MAS) Notice 637 Public Disclosure, vrious dditionl quntittive nd qulittive disclosures

More information

164 CHAPTER 2. VECTOR FUNCTIONS

164 CHAPTER 2. VECTOR FUNCTIONS 164 CHAPTER. VECTOR FUNCTIONS.4 Curvture.4.1 Definitions nd Exmples The notion of curvture mesures how shrply curve bends. We would expect the curvture to be 0 for stright line, to be very smll for curves

More information

Pillar 3 Quantitative Disclosure

Pillar 3 Quantitative Disclosure Pillr 3 Quntittive Disclosure In complince with the requirements under Bsel Pillr 3 nd the Monetry Authority of Singpore (MAS) Notice 637 Public Disclosure, vrious dditionl quntittive nd qulittive disclosures

More information

Information Acquisition and Disclosure: the Case of Differentiated Goods Duopoly

Information Acquisition and Disclosure: the Case of Differentiated Goods Duopoly Informtion Acquisition nd Disclosure: the Cse of Differentited Goods Duopoly Snxi Li Jinye Yn Xundong Yin We thnk Dvid Mrtimort, Thoms Mriotti, Ptrick Rey, Wilfried Snd-Zntmn, Frnces Xu nd Yongsheng Xu

More information

Cache CPI and DFAs and NFAs. CS230 Tutorial 10

Cache CPI and DFAs and NFAs. CS230 Tutorial 10 Cche CPI nd DFAs nd NFAs CS230 Tutoril 10 Multi-Level Cche: Clculting CPI When memory ccess is ttempted, wht re the possible results? ccess miss miss CPU L1 Cche L2 Cche Memory L1 cche hit L2 cche hit

More information

What Makes a Better Annuity?

What Makes a Better Annuity? Wht Mkes Better Annuity? Json S. Scott, John G. Wtson, nd Wei-Yin Hu My 2009 PRC WP2009-03 Pension Reserch Council Working Pper Pension Reserch Council The Whrton School, University of Pennsylvni 3620

More information

International Monopoly under Uncertainty

International Monopoly under Uncertainty Interntionl Monopoly under Uncertinty Henry Ary University of Grnd Astrct A domestic monopolistic firm hs the option to service foreign mrket through export or y setting up plnt in the host country under

More information

Voluntary provision of threshold public goods with continuous contributions: experimental evidence

Voluntary provision of threshold public goods with continuous contributions: experimental evidence Journl of Public Economics 71 (1999) 53 73 Voluntry provision of threshold public goods with continuous contributions: experimentl evidence Chrles Brm Cdsby *, Elizbeth Mynes, b Deprtment of Economics,

More information

The IndoDairy Smallholder Household Survey From Farm-to-Fact

The IndoDairy Smallholder Household Survey From Farm-to-Fact The Centre for Glol Food nd Resources The IndoDiry Smllholder Household Survey From Frm-to-Fct Fctsheet 7: Diry Frming Costs, Revenue nd Profitility Bckground This fctsheet uilds on the informtion summrised

More information

Hedging the volatility of Claim Expenses using Weather Future Contracts

Hedging the volatility of Claim Expenses using Weather Future Contracts Mrshll School of Business, USC Business Field Project t Helth Net, Inc. Investment Deprtment Hedging the voltility of Clim Epenses using Wether Future Contrcts by Arm Gbrielyn MSBA Cndidte co written by

More information

Rates of Return of the German PAYG System - How they can be measured and how they will develop

Rates of Return of the German PAYG System - How they can be measured and how they will develop Rtes of Return of the Germn PAYG System - How they cn be mesured nd how they will develop Christin Benit Wilke 97-2005 me Mnnheimer Forschungsinstitut Ökonomie und Demogrphischer Wndel Gebäude L 13, 17_D-68131

More information

PERSONAL FINANCE Grade Levels: 9-12

PERSONAL FINANCE Grade Levels: 9-12 PERSONAL FINANCE Grde Levels: 9-12 Personl Finnce llows the student to explore personl finncil decision-mking. It lso helps individuls use skills in money mngement, record-keeping, bnking, nd investing.

More information

PRICING CONVERTIBLE BONDS WITH KNOWN INTEREST RATE. Jong Heon Kim

PRICING CONVERTIBLE BONDS WITH KNOWN INTEREST RATE. Jong Heon Kim Kngweon-Kyungki Mth. Jour. 14 2006, No. 2, pp. 185 202 PRICING CONVERTIBLE BONDS WITH KNOWN INTEREST RATE Jong Heon Kim Abstrct. In this pper, using the Blck-Scholes nlysis, we will derive the prtil differentil

More information

CHAPTER-IV PRE-TEST ESTIMATOR OF REGRESSION COEFFICIENTS: PERFORMANCE UNDER LINEX LOSS FUNCTION

CHAPTER-IV PRE-TEST ESTIMATOR OF REGRESSION COEFFICIENTS: PERFORMANCE UNDER LINEX LOSS FUNCTION CHAPTER-IV PRE-TEST ESTIMATOR OF REGRESSION COEFFICIENTS: PERFORMANCE UNDER LINEX LOSS FUNCTION 4.1 INTRODUCTION It hs lredy been demonstrted tht the restricted lest squres estimtor is more efficient thn

More information

Problem Set for Chapter 3: Simple Regression Analysis ECO382 Econometrics Queens College K.Matsuda

Problem Set for Chapter 3: Simple Regression Analysis ECO382 Econometrics Queens College K.Matsuda Problem Set for Chpter 3 Simple Regression Anlysis ECO382 Econometrics Queens College K.Mtsud Excel Assignments You re required to hnd in these Excel Assignments by the due Mtsud specifies. Legibility

More information

Preference Cloud Theory: Imprecise Preferences and Preference Reversals Oben Bayrak and John Hey

Preference Cloud Theory: Imprecise Preferences and Preference Reversals Oben Bayrak and John Hey Preference Cloud Theory: Imprecise Preferences nd Preference Reversls Oben Byrk nd John Hey This pper presents new theory, clled Preference Cloud Theory, of decision-mking under uncertinty. This new theory

More information

Research Article Existence of Positive Solution to Second-Order Three-Point BVPs on Time Scales

Research Article Existence of Positive Solution to Second-Order Three-Point BVPs on Time Scales Hindwi Publishing Corportion Boundry Vlue Problems Volume 2009, Article ID 685040, 6 pges doi:10.1155/2009/685040 Reserch Article Existence of Positive Solution to Second-Order hree-point BVPs on ime Scles

More information

Market uncertainty, macroeconomic expectations and the European sovereign bond spreads.

Market uncertainty, macroeconomic expectations and the European sovereign bond spreads. Mrket uncertinty, mcroeconomic expecttions nd the Europen sovereign bond spreds. Dimitris A. Georgoutsos Athens University of Economics & Business, Deprtment of Accounting & Finnce 76, Ptission str., 434,

More information

Smart Investment Strategies

Smart Investment Strategies Smrt Investment Strtegies Risk-Rewrd Rewrd Strtegy Quntifying Greed How to mke good Portfolio? Entrnce-Exit Exit Strtegy: When to buy? When to sell? 2 Risk vs.. Rewrd Strtegy here is certin mount of risk

More information

checks are tax current income.

checks are tax current income. Humn Short Term Disbility Pln Wht is Disbility Insurnce? An esy explntion is; Disbility Insurnce is protection for your pycheck. Imgine if you were suddenly disbled, unble to work, due to n ccident or

More information

21 th October 2008 Glasgow eprints Service

21 th October 2008 Glasgow eprints Service Hirst, I. nd Dnbolt, J. nd Jones, E. (2008) Required rtes of return for corporte investment pprisl in the presence of growth opportunities. Europen Finncil Mngement 14(5):pp. 989-1006. http://eprints.gl.c.uk/4644/

More information

A Static Model for Voting on Social Security

A Static Model for Voting on Social Security A Sttic Model for Voting on Socil Security Henning Bohn Deprtment of Economics University of Cliforni t Snt Brbr Snt Brbr, CA 93106, USA; nd CESifo Phone: 1-805-893-4532; Fx: 1-805-893-8830. E-mil: bohn@econ.ucsb.edu

More information

Technical Report Global Leader Dry Bulk Derivatives. FIS Technical - Grains And Ferts. Highlights:

Technical Report Global Leader Dry Bulk Derivatives. FIS Technical - Grains And Ferts. Highlights: Technicl Report Technicl Anlyst FIS Technicl - Grins And Ferts Edwrd Hutn 44 20 7090 1120 Edwrdh@freightinvesr.com Highlights: SOY The weekly schstic is wrning slowing momentum in the mrket. USD 966 ¼

More information

Outline. CSE 326: Data Structures. Priority Queues Leftist Heaps & Skew Heaps. Announcements. New Heap Operation: Merge

Outline. CSE 326: Data Structures. Priority Queues Leftist Heaps & Skew Heaps. Announcements. New Heap Operation: Merge CSE 26: Dt Structures Priority Queues Leftist Heps & Skew Heps Outline Announcements Leftist Heps & Skew Heps Reding: Weiss, Ch. 6 Hl Perkins Spring 2 Lectures 6 & 4//2 4//2 2 Announcements Written HW

More information

MARKET POWER AND MISREPRESENTATION

MARKET POWER AND MISREPRESENTATION MARKET POWER AND MISREPRESENTATION MICROECONOMICS Principles nd Anlysis Frnk Cowell Note: the detil in slides mrked * cn only e seen if you run the slideshow July 2017 1 Introduction Presenttion concerns

More information

The Combinatorial Seller s Bid Double Auction: An Asymptotically Efficient Market Mechanism*

The Combinatorial Seller s Bid Double Auction: An Asymptotically Efficient Market Mechanism* The Combintoril Seller s Bid Double Auction: An Asymptoticlly Efficient Mret Mechnism* Rhul Jin IBM Wtson Reserch Hwthorne, NY rhul.jin@us.ibm.com Prvin Vriy EECS Deprtment University of Cliforni, Bereley

More information

OPEN BUDGET QUESTIONNAIRE UKRAINE

OPEN BUDGET QUESTIONNAIRE UKRAINE Interntionl Budget Prtnership OPEN BUDGET QUESTIONNAIRE UKRAINE September 28, 2007 Interntionl Budget Prtnership Center on Budget nd Policy Priorities 820 First Street, NE Suite 510 Wshington, DC 20002

More information

International Budget Partnership OPEN BUDGET QUESTIONNAIRE POLAND

International Budget Partnership OPEN BUDGET QUESTIONNAIRE POLAND Interntionl Budget Prtnership OPEN BUDGET QUESTIONNAIRE POLAND September 28, 2007 Interntionl Budget Prtnership Center on Budget nd Policy Priorities 820 First Street, NE Suite 510 Wshington, DC 20002

More information

INF 4130 Exercise set 4

INF 4130 Exercise set 4 INF 4130 Exercise set 4 Exercise 1 List the order in which we extrct the nodes from the Live Set queue when we do redth first serch of the following grph (tree) with the Live Set implemented s LIFO queue.

More information

A comparison of quadratic discriminant function with discriminant function based on absolute deviation from the mean

A comparison of quadratic discriminant function with discriminant function based on absolute deviation from the mean A comprison of qudrtic discriminnt function with discriminnt function bsed on bsolute devition from the men S. Gneslingm 1, A. Nnthkumr Siv Gnesh 1, 1 Institute of Informtion Sciences nd Technology College

More information

Does Population Aging Represent a Crisis for Rich Societies?

Does Population Aging Represent a Crisis for Rich Societies? First drft Does Popultion Aging Represent Crisis for Rich Societies? by Gry Burtless THE BROOKINGS INSTITUTION Jnury 2002 This pper ws prepred for session of the nnul meetings of the Americn Economic Assocition

More information

Technical Report Global Leader Dry Bulk Derivatives

Technical Report Global Leader Dry Bulk Derivatives Soybens Mrch 17 - Weekly Soybens Mrch 17 - Dily Weekly Close US$ 1,054 ½ RSI 59 MACD Bullish The hisgrm is widening S1 US$ 1,016 ½ S2 US$ 993 R1 US$ 1,071 R2 US$ 1,096 Dily Close US$ 1,030 RSI 60 MACD

More information

POLICY BRIEF 11 POTENTIAL FINANCING OPTIONS FOR LARGE CITIES

POLICY BRIEF 11 POTENTIAL FINANCING OPTIONS FOR LARGE CITIES POTENTIAL FINANCING OPTIONS FOR LARGE CITIES EXECUTIVE SUMMARY In South Afric lrge cities fce myrid of chllenges including rpid urbnistion, poverty, inequlity, unemployment nd huge infrstructure needs.

More information

FINANCIAL ANALYSIS I. INTRODUCTION AND METHODOLOGY

FINANCIAL ANALYSIS I. INTRODUCTION AND METHODOLOGY Dhk Wter Supply Network Improvement Project (RRP BAN 47254003) FINANCIAL ANALYSIS I. INTRODUCTION AND METHODOLOGY A. Introduction 1. The Asin Development Bnk (ADB) finncil nlysis of the proposed Dhk Wter

More information

CH 71 COMPLETING THE SQUARE INTRODUCTION FACTORING PERFECT SQUARE TRINOMIALS

CH 71 COMPLETING THE SQUARE INTRODUCTION FACTORING PERFECT SQUARE TRINOMIALS CH 7 COMPLETING THE SQUARE INTRODUCTION I t s now time to py our dues regrding the Qudrtic Formul. Wht, you my sk, does this men? It mens tht the formul ws merely given to you once or twice in this course,

More information

Announcements. CS 188: Artificial Intelligence Fall Recap: MDPs. Recap: Optimal Utilities. Practice: Computing Actions. Recap: Bellman Equations

Announcements. CS 188: Artificial Intelligence Fall Recap: MDPs. Recap: Optimal Utilities. Practice: Computing Actions. Recap: Bellman Equations CS 188: Artificil Intelligence Fll 2009 Lecture 10: MDP 9/29/2009 Announcement P2: Due Wednedy P3: MDP nd Reinforcement Lerning i up! W2: Out lte thi week Dn Klein UC Berkeley Mny lide over the coure dpted

More information

Gridworld Values V* Gridworld: Q*

Gridworld Values V* Gridworld: Q* CS 188: Artificil Intelligence Mrkov Deciion Procee II Intructor: Dn Klein nd Pieter Abbeel --- Univerity of Cliforni, Berkeley [Thee lide were creted by Dn Klein nd Pieter Abbeel for CS188 Intro to AI

More information

Continuous Optimal Timing

Continuous Optimal Timing Srlnd University Computer Science, Srbrücken, Germny My 6, 205 Outline Motivtion Preliminries Existing Algorithms Our Algorithm Empiricl Evlution Conclusion Motivtion Probbilistic models unrelible/unpredictble

More information

Recap: MDPs. CS 188: Artificial Intelligence Fall Optimal Utilities. The Bellman Equations. Value Estimates. Practice: Computing Actions

Recap: MDPs. CS 188: Artificial Intelligence Fall Optimal Utilities. The Bellman Equations. Value Estimates. Practice: Computing Actions CS 188: Artificil Intelligence Fll 2008 Lecture 10: MDP 9/30/2008 Dn Klein UC Berkeley Recp: MDP Mrkov deciion procee: Stte S Action A Trnition P(,) (or T(,, )) Rewrd R(,, ) (nd dicount γ) Strt tte 0 Quntitie:

More information

DYNAMIC PROGRAMMING REINFORCEMENT LEARNING. COGS 202 : Week 7 Presentation

DYNAMIC PROGRAMMING REINFORCEMENT LEARNING. COGS 202 : Week 7 Presentation DYNAMIC PROGRAMMING REINFORCEMENT LEARNING COGS 202 : Week 7 Preenttion OUTLINE Recp (Stte Vlue nd Action Vlue function) Computtion in MDP Dynmic Progrmming (DP) Policy Evlution Policy Improvement Policy

More information

Asymptotic Stability of a Rate Control System. with Communication Delays

Asymptotic Stability of a Rate Control System. with Communication Delays Asymptotic Stbility of Rte Control System with Communiction Delys Richrd J. L nd Priy Rnjn University of Mrylnd, College Prk {hyongl, priy}@eng.umd.edu 1 Abstrct We study the issue of symptotic stbility

More information

1. Detailed information about the Appellant s and Respondent s personal information including mobile no. and -id are to be furnished.

1. Detailed information about the Appellant s and Respondent s personal information including mobile no. and  -id are to be furnished. Revised Form 36 nd Form 36A for filing ppel nd cross objection respectively before income tx ppellte tribunl (Notifiction No. 72 dted 23.10.2018) Bckground CBDT issued drft notifiction vide press relese

More information

Outline. CS 188: Artificial Intelligence Spring Speeding Up Game Tree Search. Minimax Example. Alpha-Beta Pruning. Pruning

Outline. CS 188: Artificial Intelligence Spring Speeding Up Game Tree Search. Minimax Example. Alpha-Beta Pruning. Pruning CS 188: Artificil Intelligence Spring 2011 Lecture 8: Gme, MDP 2/14/2010 Pieter Abbeel UC Berkeley Mny lide dpted from Dn Klein Outline Zero-um determinitic two plyer gme Minimx Evlution function for non-terminl

More information

Inequality and the GB2 income distribution

Inequality and the GB2 income distribution Working Pper Series Inequlity nd the GB2 income distribution Stephen P. Jenkins ECINEQ WP 2007 73 ECINEC 2007-73 July 2007 www.ecineq.org Inequlity nd the GB2 income distribution Stephen P. Jenkins* University

More information

PROPOSAL FOR RULES CHANGE

PROPOSAL FOR RULES CHANGE PROPOSAL FOR RULES CHANGE S/NO.315 Rule Cnge Title: Mrket Rules Modifiction for LNG Vesting Sceme Submitted By : Compny: Dte: Telepone No. Ms Nerine Teo Energy Mrket Compny Pte Ltd 25 September 2012 67793000

More information

This paper is not to be removed from the Examination Halls

This paper is not to be removed from the Examination Halls This pper is not to be remove from the Exmintion Hlls UNIVESITY OF LONON FN3092 ZA (279 0092) BSc egrees n iploms for Grutes in Economics, Mngement, Finnce n the Socil Sciences, the iploms in Economics

More information

FINANCIAL ANALYSIS I. INTRODUCTION AND METHODOLOGY

FINANCIAL ANALYSIS I. INTRODUCTION AND METHODOLOGY West Bengl Drinking Wter Sector Improvement Project (RRP IND 49107-006) FINANCIAL ANALYSIS I. INTRODUCTION AND METHODOLOGY A. Introduction 1. A finncil nlysis hs been conducted for the proposed West Bengl

More information

Incentives from stock option grants: a behavioral approach

Incentives from stock option grants: a behavioral approach Incentives from stock option grnts: behviorl pproch Hmz Bhji To cite this version: Hmz Bhji. Incentives from stock option grnts: behviorl pproch. 6th Interntionl Finnce Conference (IFC)- Tunisi, Mr 2011,

More information

Life & Health teach-in. Conference call Zurich, 09 December 2010

Life & Health teach-in. Conference call Zurich, 09 December 2010 Life & Helth tech-in Conference cll Zurich, 09 December 2010 Tody s gend Introduction L&H overview nd underwriting Reporting nd performnce mesurement Susn Hollidy, Hed IR George Quinn, CFO Robyn Wytt,

More information

PSAKUIJIR Vol. 4 No. 2 (July-December 2015)

PSAKUIJIR Vol. 4 No. 2 (July-December 2015) Resonble Concession Period for Build Operte Trnsfer Contrct Projects: A Cse Study of Theun-Hiboun Hydropower Dm Project nd Ntionl Rod No. 14 A Project Pnysith Vorsing * nd Dr.Sounthone Phommsone ** Abstrct

More information

OPEN BUDGET QUESTIONNAIRE

OPEN BUDGET QUESTIONNAIRE Interntionl Budget Prtnership OPEN BUDGET QUESTIONNAIRE SOUTH KOREA September 28, 2007 Interntionl Budget Prtnership Center on Budget nd Policy Priorities 820 First Street, NE Suite 510 Wshington, DC 20002

More information

Swiss Re reports nine months 2017 loss of USD 468 million after large insurance claims from recent natural catastrophe events

Swiss Re reports nine months 2017 loss of USD 468 million after large insurance claims from recent natural catastrophe events News relese Swiss Re reports nine months 2017 loss of USD 468 million fter lrge insurnce clims from recent nturl ctstrophe events Group net loss of USD 468 million for the first nine months 2017; impcted

More information

Announcements. CS 188: Artificial Intelligence Fall Reinforcement Learning. Markov Decision Processes. Example Optimal Policies.

Announcements. CS 188: Artificial Intelligence Fall Reinforcement Learning. Markov Decision Processes. Example Optimal Policies. CS 188: Artificil Intelligence Fll 2008 Lecture 9: MDP 9/25/2008 Announcement Homework olution / review eion: Mondy 9/29, 7-9pm in 2050 Vlley LSB Tuedy 9/0, 6-8pm in 10 Evn Check web for detil Cover W1-2,

More information

Controlling a population of identical MDP

Controlling a population of identical MDP Controlling popultion of identicl MDP Nthlie Bertrnd Inri Rennes ongoing work with Miheer Dewskr (CMI), Blise Genest (IRISA) nd Hugo Gimert (LBRI) Trends nd Chllenges in Quntittive Verifiction Mysore,

More information

Non-Deterministic Search. CS 188: Artificial Intelligence Markov Decision Processes. Grid World Actions. Example: Grid World

Non-Deterministic Search. CS 188: Artificial Intelligence Markov Decision Processes. Grid World Actions. Example: Grid World CS 188: Artificil Intelligence Mrkov Deciion Procee Non-Determinitic Serch Dn Klein, Pieter Abbeel Univerity of Cliforni, Berkeley Exmple: Grid World Grid World Action A mze-like problem The gent live

More information

Insurance: Mathematics and Economics

Insurance: Mathematics and Economics Insurnce: Mthemtics nd Economics 43 008) 303 315 Contents lists vilble t ScienceDirect Insurnce: Mthemtics nd Economics journl homepge: www.elsevier.com/locte/ime he design of equity-indexed nnuities Phelim

More information

Annual EVM results Zurich, 15 March 2013

Annual EVM results Zurich, 15 March 2013 Zurich, 15 Mrch 2013 EVM methodology An integrted economic vlution nd ccounting frmework for business plnning, pricing, reserving, steering Shows direct connection between risk-tking nd vlue cretion Provides

More information

PSAS: Government transfers what you need to know

PSAS: Government transfers what you need to know PSAS: Government trnsfers wht you need to know Ferury 2018 Overview This summry will provide users with n understnding of the significnt recognition, presenttion nd disclosure requirements of the stndrd.

More information

arxiv: v1 [cs.lg] 23 Jan 2019

arxiv: v1 [cs.lg] 23 Jan 2019 Robust temporl difference lerning for criticl domins rxiv:1901.08021v1 [cs.lg] 23 Jn 2019 Richrd Klim University of Liverpool, UK richrd.klim@liverpool.c.uk Michel Kisers Centrum Wiskunde & Informtic,

More information

"Multilateralism, Regionalism, and the Sustainability of 'Natural' Trading Blocs"

Multilateralism, Regionalism, and the Sustainability of 'Natural' Trading Blocs "Multilterlism, Regionlism, nd the Sustinility of 'Nturl' Trding Blocs" y Eric Bond Deprtment of Economics Penn Stte June, 1999 Astrct: This pper compres the mximum level of world welfre ttinle in n incentive

More information

Public Trustee Trust Funds. Fo he dec Vie.i i [ 31, 116

Public Trustee Trust Funds. Fo he dec Vie.i i [ 31, 116 Fo he dec Vie.i i [ 31, 116 Mngement s Responsibility for the Finncil Sttements Mngement is responsible for the integrity of the finncil informtion reported by the Public Trustee of Nov Scoti. Fulfilling

More information

The Combinatorial Seller s Bid Double Auction: An Asymptotically Efficient Market Mechanism*

The Combinatorial Seller s Bid Double Auction: An Asymptotically Efficient Market Mechanism* The Combintoril Seller s Bid Double Auction: An Asymptoticlly Efficient Mret Mechnism* Rhul Jin nd Prvin Vriy EECS Deprtment University of Cliforni, Bereley (rjin,vriy)@eecs.bereley.edu We consider the

More information

Fractal Analysis on the Stock Price of C to C Electronic Commerce Enterprise Ming Chen

Fractal Analysis on the Stock Price of C to C Electronic Commerce Enterprise Ming Chen 6th Interntionl Conference on Electronic, Mechnicl, Informtion nd Mngement (EMIM 2016) Frctl Anlysis on the Stock Price of C to C Electronic Commerce Enterprise Ming Chen Soochow University, Suzhou, Chin

More information

This paper is not to be removed from the Examination Halls UNIVERSITY OF LONDON

This paper is not to be removed from the Examination Halls UNIVERSITY OF LONDON ~~FN3092 ZA 0 his pper is not to be remove from the Exmintion Hlls UNIESIY OF LONDON FN3092 ZA BSc egrees n Diploms for Grutes in Economics, Mngement, Finnce n the Socil Sciences, the Diploms in Economics

More information

Swiss Solvency Test (SST)

Swiss Solvency Test (SST) Swiss Solvency Test (SST) George Quinn, CFO 1 Key messges Swiss Solvency Test (SST) Cpitl mesures re becoming more consistent nd economic, with convergence between Swiss Re s internl model, SST nd Swiss

More information

Third Quarter 2017 Results & Outlook October 26, 2017 CMS MODEL: CONSISTENT PAST WITH A SUSTAINABLE FUTURE

Third Quarter 2017 Results & Outlook October 26, 2017 CMS MODEL: CONSISTENT PAST WITH A SUSTAINABLE FUTURE Third Qurter 2017 Results & Outlook October 26, 2017 CMS MODEL: CONSISTENT PAST WITH A SUSTAINABLE FUTURE This presenttion is mde s of the dte hereof nd contins forwrd-looking sttements s defined in Rule

More information

Grain Marketing: Using Balance Sheets

Grain Marketing: Using Balance Sheets 1 Fct Sheet 485 Grin Mrketing: Using Blnce Sheets Introduction Grin lnce sheets re estimtes of supply nd demnd. They re the key to understnding the grin mrkets. A grin frmer who understnds how to interpret

More information