Generalizing Demand Response Through Reward Bidding


Hongyao Ma, Harvard University, Cambridge, MA, US
David C. Parkes, Harvard University, Cambridge, MA, US
Valentin Robu, Heriot-Watt University, Edinburgh, UK

ABSTRACT

Demand-side response (DR) is emerging as a crucial technology to assure stability of modern power grids. The uncertainty about the cost agents face for reducing consumption imposes challenges in achieving reliable, coordinated response. In recent work, Ma et al. [13] introduce DR as a mechanism design problem and solve it for a setting where an agent has a binary preparation decision and where, contingent on preparation, the probability an agent will be able to reduce demand and the cost to do so are fixed. We generalize this model to allow uncertainty in agents' costs of responding, and also multiple levels of effort agents can exert in preparing. For both cases, the design of contingent payments now affects the probability of response. We design a new, truthful and reliable mechanism that uses a reward-bidding approach rather than the penalty-bidding approach. It has good performance when compared to natural benchmarks. The mechanism also extends to handle multiple units of demand response from each agent.

Keywords: mechanism design; demand response; reliability bounds

1. INTRODUCTION

The task of maintaining an exact balance of supply and demand in power systems is increasingly challenging, due to the increasing penetration of intermittent renewable generation [24, 25] and the presence of more volatile types of loads, such as those from electric vehicle charging [21]. This has led to an increasing interest in demand-side response (DR), in which consumers commit to temporarily reduce or shift consumption away from periods where generation capacity does not meet the aggregate demand [15]. In contrast to the operating reserves on the supply side, where the cost and ability for a generator to increase power output can be known with high precision when planning one day ahead, consumers on the demand side face uncertainty about their future costs for reducing consumption.
Consider an industrial factory which uses electricity for the production line, transporting raw materials, and cooling. Its ability to respond to a DR event may depend on the production procedure, the time of day when called for DR, customer requests, and weather conditions, and is thus highly uncertain. This imposes challenges on selecting and incentivizing a subset of the consumers to meet a total reduction target with high probability (what we call the global reliability constraint), without selecting too many consumers to prepare or leading to excessive economic disruption.

Appears in: Proceedings of the 16th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017), S. Das, E. Durfee, K. Larson, M. Winikoff (eds.), May 8-12, 2017, São Paulo, Brazil. Copyright © 2017, International Foundation for Autonomous Agents and Multiagent Systems. All rights reserved.

Ma et al. [13] introduce reliable DR as a two-period mechanism design problem, where the planner is the electricity grid (or DR aggregator) and the agents are consumers interested in offering DR services. In the planning period (period zero), consumers may opt in to a DR scheme and make reports to the mechanism based on their probabilistic information on their costs and abilities to respond. A subset of these consumers are selected and asked to prepare for demand reduction. Later, in the event DR is required (period one), based on the resolved uncertainty, selected consumers can decide whether to follow through and respond to receive a reward, or not to respond and pay a penalty. Two truthful and reliable penalty-bidding mechanisms with fixed rewards are proposed in [13], where agents are selected in decreasing order of their maximum acceptable penalties in the event of non-response.
The model of agents' types (each agent can choose to prepare for DR in period zero at a fixed preparation cost, and if so, in period one she will be able to reduce, with some fixed probability, one unit of consumption at a fixed opportunity cost), however, does not reflect the reality that with higher rewards and penalties agents will be incentivized to respond with higher probabilities. We generalize the model in the following ways: (i) Uncertain costs: having prepared in period zero, agents are still uncertain about the costs they will face, and will decide whether to respond in period one after the actual costs are realized. Agents are more likely to respond when the rewards and penalties are high. (ii) Multi-effort-level: agents may have multiple levels of effort they can exert when preparing to respond. Higher rewards and penalties may induce a higher preparation level, resulting in a higher probability of responding. (iii) Multi-unit: each agent is able to reduce a varying amount of consumption, and has probabilistic information about its values for different consumption levels. The dependence of the global reliability constraint on both agents' types and the payments contingent on agents' responses introduces tensions among selecting a small set of agents, satisfying the reliability constraint, and truthfully eliciting information from the agents. We will see that the

penalty-bidding mechanisms [13] do not generalize to achieve reliable, dominant-strategy demand response. Our main contribution is to design a new, truthful and reliable mechanism for this generalized setting that uses a reward-bidding approach. In particular, the mechanism adopts a fixed penalty for non-response for all selected agents, and agents are selected in increasing order of their minimum acceptable rewards given this penalty. Thus, the mechanism implements the idea of reward bidding. The reward offered to a selected agent is large enough that it will choose to prepare to reduce demand, and follow through with high probability. This is not possible with penalty-bidding mechanisms, because of a subtle interaction between incentive constraints and the need to select a set of agents such that the reliability constraint will be satisfied. For the multi-unit scenario, we generalize the mechanism by introducing linear payment schedules where rewards and penalties are defined per unit of consumption. We also show that we can handle the possibility of multiple levels of preparation effort in both the single- and multi-unit scenarios. We demonstrate in simulation that the reward-bidding mechanism achieves close to the first best (i.e., assuming the mechanism knows agent types and therefore how reliable they would be given certain payments) with regard to the number of selected agents. We also benchmark against a spot auction in which a reverse auction is used to achieve a required reduction in consumption. The reward-bidding mechanism achieves the same reliability with lower expected total payments, and much less variance in payments.

1.1 Related Work

Some prior work that considers uncertainty in agent types includes: research on promoting utilization of shared resources in the context of coordination problems [12], maximizing social welfare in a setting with uncertainty about agent actions [18], and maximizing an airline's expected revenue in a setting where passengers have uncertainty about whether or not they may fly and thus refund menus can be useful [7].
Also related is work on dynamic mechanism design with dynamic agent types [3, 4, 5, 16]. But none of this prior work has the objective to satisfy a probabilistic constraint on the joint actions taken by the agents. Most closely related is the paper that introduced the problem of mechanism design for reliable demand response [13]. The present work significantly generalizes this by allowing for uncertain opportunity costs, multiple effort levels, and varying units of possible consumption reduction. A number of works on demand response have discussed the concept of aggregation of multiple agents, both aggregation of small intermittent generators and aggregation of uncertain demands [25, 24, 22, 20]. Some of these works propose the use of scoring rules to incentivize truthful reports about expected future generation or consumption [23, 1, 20]. Unlike a scoring-rule approach, in this work the rewards and penalties of selected agents are determined by the market, from the reports made by other agents. This guarantees that rewards are set such that the selected subset of agents will guarantee the system-wide reliability constraint. Other prior works on demand response markets (e.g., [11, 9, 17]) consider agents bidding using supply curves, and study the market equilibria for these settings. They do not, however, take a mechanism design perspective or guarantee truthful reporting. Pricing mechanisms to incentivize load shifting have also been studied in [2, 19, 10]. We focus on achieving reliable DR in one future period, whereas analyzing load shifting requires modeling of agents' uncertain valuations for different consumption profiles over extended periods of time.

2. PRELIMINARIES

We now model the single-unit DR problem in which each agent can reduce the same amount of consumption, and defer the model and mechanism for multi-unit DR to Section 4.

Uncertain Costs. Let N = {1, 2, ..., n} denote the set of agents, each of which can prepare for demand response ahead of time at a cost of c_i > 0.
If an agent prepares for demand reduction, her cost for reducing one unit of consumption will be a random variable V_i with non-negative support, finite expectation and cumulative distribution function (CDF) F_i. V_i represents the uncertain opportunity cost for the loss of electricity, the exact value of which is not realized until later. The pair θ_i = (c_i, F_i) defines agent i's type and is agent i's private information. Let θ = (θ_1, ..., θ_n) denote a type profile. We assume in our model that an agent can only respond if she first prepares, and that the opportunity costs of agents are independently distributed. Note that the discrete single-unit (v_i, p_i, c_i) model proposed in [13], where an agent can reduce one unit of consumption with probability p_i at a cost of v_i if she prepares at the cost of c_i, is a special case of the uncertain-cost model, where V_i = v_i with probability p_i and V_i = ∞ (representing the hard constraint) with probability 1 − p_i.

Reliability Target. Denote by M ∈ N_+ the target capacity reduction that needs to be achieved. The objective of the planner, the electricity grid or a DR aggregator, is to select a small set of agents to prepare for DR ahead of time and set the proper incentive schemes such that the target reduction is met with probability at least τ ∈ (0, 1). (M, τ) is the system-wide reliability target. We make a deep market assumption that there are enough agents in the economy such that if all are paid a high enough reward, the reliability target can be met. This holds for most real DR markets.

Two-period Mechanisms. We consider demand response mechanisms that run over two periods with the following timeline.

Period 0: Agents report information to the mechanism, with knowledge of their types. The mechanism determines for each selected agent the period-one reward r_i ≥ 0 for reducing consumption and penalty z_i ≥ 0 in case of non-response. With the knowledge of r_i and z_i, each selected agent decides whether to prepare for demand response.
Period 1: The opportunity costs for responding are realized, and each agent decides whether or not to do so based on r_i, z_i and the realized value of V_i. For each selected agent, the mechanism pays r_i upon response, and charges z_i otherwise.

We call the pair of action-contingent payments (r_i, z_i) a payment schedule for demand response. Note that the mechanism is unable to observe selected agents' choices on preparation or their realized opportunity costs for reducing. A demand-response mechanism is dominant-strategy incentive compatible (DSIC) if truthful reporting maximizes each agent's expected utility regardless of the reports of other agents, and conditioned on the agent making rational decisions (see Section 2.1). A demand-response mechanism is individually rational (IR) if each agent's expected utility for (truthful) participation is non-negative. Informally, a DSIC mechanism is truthful, and we can say that an IR mechanism ensures that agents will choose to participate.

[Figure 1: panels (i) expected utility u_i(r_i, z_i) and (ii) effective reliability, for (a) varying r_i and (b) varying z_i.]

Reward and Penalty Bidding. Consider agent i facing a fixed penalty for non-response. If her reward for response is zero, she loses in expectation and thus will be unwilling to accept the payment schedule. However, if she is offered a million dollars for responding, there is no reason to reject. Intuitively, for any penalty there is a minimum acceptable reward and, similarly, fixing any reward, there is a maximum acceptable penalty, for the agent to be willing to accept the DR payment schedule. We now informally state the reward-bidding mechanism that we design in this paper, and also the penalty-bidding mechanism [13]:

Definition (Reward-bidding, informal). Fixing a uniform penalty z, the reward-bidding mechanism selects agents in increasing order of their minimum acceptable rewards until the reliability target is met, and pays each agent the highest minimum acceptable reward that she can claim and still be selected.

Definition (Penalty-bidding, informal). Fixing a uniform reward r, the penalty-bidding mechanism selects agents in decreasing order of their maximum acceptable penalties until the reliability target is met, and charges each agent the lowest maximum acceptable penalty that she can report and still be selected.
Fixing one of r_i and z_i is essential for selecting more reliable agents, computing critical payments for truthful information elicitation, and incentivizing higher response probabilities. These are not easily achievable in the general two-dimensional payment space where both r_i and z_i depend on agents' reports. We defer the detailed discussion to a full version of this paper. We now proceed with the analysis of agents' rational decisions, expected utilities, minimum acceptable rewards and reliability in the following section.

2.1 Agents' Decisions, Utilities and Reliability

We first analyze a selected agent i's rational decisions on preparation and response when she faces a DR payment schedule (r_i, z_i). Consider the following cases:

1. If the agent does not prepare, she is unable to respond and will be charged the penalty, thus her utility is −z_i;
2. If the agent does prepare at a cost of c_i and decides to respond, she gets paid reward r_i but incurs an opportunity cost of V_i, thus her utility is r_i − V_i − c_i;
3. If the agent did prepare but decides not to respond, she will be charged the penalty, thus her utility is −z_i − c_i.

Figure 1: Expected utility and effective reliability as functions of the reward r_i and the penalty z_i.

We can see that conditioned on preparation, the utility-maximizing decision in period one is to respond if and only if (breaking ties in favor of responding)

    r_i − V_i − c_i ≥ −z_i − c_i  ⟺  V_i ≤ r_i + z_i.

Defining the reliability of this agent given the payment schedule as the probability with which the agent responds, we have

    p_i(r_i, z_i) ≜ P[V_i ≤ r_i + z_i].  (1)

Intuitively, a prepared agent responds only if the opportunity cost is small in comparison with the reward and penalty, and a higher reward or a higher penalty may increase the probability with which the agent responds. The expected utility of a prepared agent at the end of period zero is:

    u_i(r_i, z_i) = E[(r_i − V_i) 1{V_i ≤ r_i + z_i}] − z_i P[V_i > r_i + z_i] − c_i,  (2)

where 1{·} is the indicator function. Fixing z_i, the expected utility as a function of r_i is as illustrated in Figure 1(a)(i).

Minimum Acceptable Rewards. The following lemma states useful properties of the expected utility function.
The proofs are straightforward and thus omitted due to the space limit.

Lemma 1. Fixing z_i ≥ 0, the expected utility function u_i(r_i, z_i) satisfies:
1. u_i(0, z_i) = −E[V_i 1{V_i ≤ z_i}] − z_i P[V_i > z_i] − c_i < 0.
2. lim_{r_i→+∞} u_i(r_i, z_i) = +∞.
3. ∂u_i(r_i, z_i)/∂r_i = P[V_i ≤ r_i + z_i] = p_i(r_i, z_i).
4. u_i(r_i, z_i) is monotonically increasing and convex in r_i.
5. There exists a unique zero-crossing r_i^0(z_i) s.t. u_i(r_i^0, z_i) = 0; see Figure 1(a)(i).

Intuitively, Lemma 1 shows that if an agent is charged a fixed penalty but paid no reward, her expected utility from preparing for DR is negative. As the reward increases, her expected utility continuously increases and crosses zero at some point r_i^0(z_i). This is the minimum acceptable reward that the agent needs to be paid for her to be willing to prepare for DR and to pay a penalty of z_i for non-response. Technically, r_i^0(z_i) is a function of z_i, but we omit the argument when it is obvious from the context. With fixed reward r_i, we can prove parallel properties of u_i(r_i, z_i) as a function of the penalty z_i (see Figure 1(b)(i)) but omit the formal statements due to the space limit: u_i(r_i, z_i)

is continuously decreasing and convex in z_i, has partial derivative ∂u_i(r_i, z_i)/∂z_i = p_i(r_i, z_i) − 1, and has a unique zero-crossing z_i^0(r_i) representing the agent's maximum acceptable penalty given reward r_i.

Figure 2: Expected utilities as functions of r_i for the agents in Example 1.

Preparation Decisions and Effective Reliability. Fixing z_i > 0, if the agent faces a reward smaller than her minimum acceptable reward, r_i < r_i^0(z_i) (or, equivalently, if an agent faces penalty z_i > z_i^0(r_i) for some reward r_i), her expected utility, preparing or not, would be negative. Thus an agent offered such a payment schedule would not accept and would never reduce consumption. When agent i faces r_i ≥ r_i^0(z_i) (or, equivalently, z_i ≤ z_i^0(r_i)), her expected utility from preparing is non-negative, thus she will accept the payment schedule and choose in period zero to prepare. Let X_i(r_i, z_i) be a random variable indicating the number of units reduced by agent i, if she is offered the payment schedule (r_i, z_i). X_i(r_i, z_i) is Bernoulli distributed:

    X_i(r_i, z_i) ∼ Bernoulli(p̄_i(r_i, z_i)),  (3)
    p̄_i(r_i, z_i) ≜ p_i(r_i, z_i) 1{r_i ≥ r_i^0(z_i)},  (4)

since she reduces consumption by one unit with probability p_i(r_i, z_i) (see (1)) if and only if she accepts the payment schedule and prepares, which happens when r_i ≥ r_i^0(z_i). We call p̄_i(r_i, z_i) the effective reliability of agent i if offered the payment schedule (r_i, z_i). p̄_i(r_i, z_i) as a function of the reward r_i and penalty z_i is illustrated in Figure 1(ii). An important observation from part 3 of Lemma 1 is that p_i(r_i, z_i) relates to the partial derivatives of u_i(r_i, z_i): the more reliable an agent is, the more likely the agent is to be paid the reward (and not to pay the penalty), thus the faster u_i(r_i, z_i) increases as r_i increases (and the slower u_i(r_i, z_i) decreases as z_i increases). Thus, an agent's effective reliability is fully determined by her expected utility u_i(r_i, z_i), and u_i(r_i, z_i) fully characterizes the part of an agent's type that is relevant to the DR problem.
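To make eq. (2) and the zero-crossing of Lemma 1 concrete, the following sketch computes u_i(r, z) in closed form for a uniformly distributed opportunity cost V_i ∼ U[0, b] and finds r_i^0(z) by bisection, which is valid because u_i is increasing in r_i. The uniform-cost assumption and the particular numbers are illustrative only, not part of the mechanism.

```python
# Sketch (illustrative assumptions): expected utility under eq. (2) for
# V ~ U[0, b], and the minimum acceptable reward r^0(z) from Lemma 1, part 5.

def expected_utility(r, z, b, c):
    """u(r, z) = E[(r - V) 1{V <= r+z}] - z P[V > r+z] - c for V ~ U[0, b]."""
    thresh = min(r + z, b)                    # the agent responds iff V <= r + z
    gain = (r * thresh - thresh ** 2 / 2) / b  # integral_0^thresh (r - v)/b dv
    penalty = z * (1 - thresh / b)             # z * P[V > r + z]
    return gain - penalty - c

def min_acceptable_reward(z, b, c, lo=0.0, hi=1e6, tol=1e-9):
    """Bisection for the unique zero-crossing r^0(z) of the increasing u(., z)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if expected_utility(mid, z, b, c) < 0:
            lo = mid
        else:
            hi = mid
    return hi

# Two illustrative agents with fixed penalty z = 1:
r0_1 = min_acceptable_reward(z=1, b=8, c=2)   # solves r^2 + 2r - 47 = 0, approx 5.93
r0_2 = min_acceptable_reward(z=1, b=20, c=1)  # solves r^2 + 2r - 79 = 0, approx 7.94
```

These two agents are exactly those of the uniform-cost example that follows, so the bisection reproduces the minimum acceptable rewards quoted there.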
Before proceeding to the mechanisms, we look at an example of two agents with uniformly distributed costs.

Example 1. Consider an economy with two agents whose opportunity costs are uniformly distributed, V_1 ∼ U[0, 8], V_2 ∼ U[0, 20], and let the preparation costs be c_1 = 2, c_2 = 1. Fixing the penalty z = 1 for both agents, the expected utilities computed according to (2) are as illustrated in Figure 2. Solving u_i(r, z) = 0 we get the minimum acceptable rewards for the two agents: r_1^0(z) = 5.93 and r_2^0(z) = 7.94. From the distributions of V_i of the two agents, we know that for any common reward and penalty the probability that agent 1 responds is higher. This corresponds to the steeper slope of u_1(r_1, z). In general, agents with smaller minimum acceptable rewards are more likely to have a steeper slope, which corresponds to higher reliability.

3. THE REWARD-BIDDING MECHANISM

We now design a truthful and reliable mechanism for demand response, the reward-bidding mechanism, which fixes a uniform penalty for non-response and selects agents in increasing order of their minimum acceptable rewards. Note that reward-bidding is a direct-revelation mechanism, where an agent's critical reward payment is determined using not only the minimum acceptable rewards but also the reliability information reported by the rest of the agents. We first provide notation. Consider a posted-price mechanism where every agent is offered the same payment schedule (r, z). The random variable for the total reduction by all agents given (r, z) is Σ_{i∈N} X_i(r, z), where X_i(r, z) is the number of units reduced by agent i if offered (r, z), as defined in (3). We know from the deep market assumption and the monotonicity of the effective reliability p̄_i(r_i, z_i) that for any fixed z, there exists a minimum uniform reward r^(N)(z) such that the reliability target (M, τ) is met. Formally,

    r^(N)(z) ≜ min{ r ∈ R_+ s.t. P[ Σ_{i∈N} X_i(r, z) ≥ M ] ≥ τ }.  (5)

Similarly, for each agent i, we define the minimum sub-economy uniform reward r^(N\{i})(z) as the minimum amount to offer to all agents but i to achieve the reliability target:

    r^(N\{i})(z) ≜ min{ r ∈ R_+ s.t. P[ Σ_{j∈N\{i}} X_j(r, z) ≥ M ] ≥ τ }.

Note that both r^(N)(z) and r^(N\{i})(z) depend on (M, τ), but we omit the arguments when it is clear from the context.

Definition 1. (Reward-Bidding Mechanism with Penalty z) The reward-bidding mechanism collects the reported type profile θ̂ = (θ̂_1, ..., θ̂_n), computes for each agent the minimum acceptable reward r̂_i^0(z), and, for reliability target (M, τ), the minimum uniform reward r^(N)(z) given z, and the minimum sub-economy rewards r^(N\{i})(z). Then:

Selection rule (period zero): select all agents that accept the minimum uniform reward r^(N)(z), i.e., x_i(θ̂) = 1 if r̂_i^0(z) ≤ r^(N)(z), and x_i(θ̂) = 0 otherwise.

Payment rule (evaluated in period zero, payments made in period one): for selected agents, pay reward r_i(θ̂) = r^(N\{i})(z) upon demand reduction and charge penalty z_i(θ̂) = z for non-response. No payment to or from unselected agents.

We now examine the outcome of the reward-bidding mechanism for the economy introduced in Example 1 to show how the reward-bidding mechanism works.

Example 1. (continued) Consider a reliability target M = 1, τ = 0.9, and assume agents report truthfully to the reward-bidding mechanism with penalty z = 1. If agent 1 is offered reward r = 6.2 > r_1^0(z), which agent 2 is unwilling to accept, agent 1 accepts the payment schedule, prepares, and reduces with probability p_1(6.2, 1) = 7.2/8 = 0.9, meeting the reliability target. Thus r^(N)(z) = 6.2 and agent 1 is the only selected agent. In the sub-economy N\{1}, we can compute that agent 2 needs to be paid at least r^(N\{1})(z) = 17 to satisfy the reliability constraint, thus agent 1's reward determined by the mechanism is r_1 = r^(N\{1})(z) = 17. With r_1 = 17 and z_1 = 1, agent 1 actually always reduces consumption, thus M = 1 is achieved with probability one.
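The selection and payment computations of Example 1 can be sketched end to end. The snippet below is an illustrative implementation for uniform opportunity costs and target M = 1 only (so that the reliability P[Σ X_i ≥ 1] has the simple product form 1 − Π(1 − p̄_i)); the bisection tolerances are implementation choices, not part of the mechanism.

```python
# Sketch (illustrative assumptions): reward-bidding on the two-agent uniform-cost
# economy of Example 1, with penalty z = 1 and target (M, tau) = (1, 0.9).

def u(r, z, b, c):
    """Expected utility, eq. (2), for a prepared agent with V ~ U[0, b]."""
    t = min(r + z, b)
    return (r * t - t * t / 2) / b - z * (1 - t / b) - c

def r0(z, b, c):
    """Minimum acceptable reward: zero-crossing of the increasing u(., z)."""
    lo, hi = 0.0, 1e6
    while hi - lo > 1e-9:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if u(mid, z, b, c) < 0 else (lo, mid)
    return hi

def reliability(r, z, agents):
    """P[sum_i X_i >= 1] using effective reliabilities, eqs. (1), (3), (4)."""
    prob_none = 1.0
    for b, c in agents:
        p = min((r + z) / b, 1.0) if r >= r0(z, b, c) else 0.0
        prob_none *= 1.0 - p
    return 1.0 - prob_none

def min_uniform_reward(z, agents, tau):
    """r^(N)(z) for M = 1, eq. (5): reliability is nondecreasing in r."""
    lo, hi = 0.0, 1e6
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if reliability(mid, z, agents) < tau else (lo, mid)
    return hi

agents, z, tau = [(8, 2), (20, 1)], 1, 0.9      # (b, c) per agent
r_N = min_uniform_reward(z, agents, tau)        # approx 6.2: only agent 1 accepts
r_pay = min_uniform_reward(z, [agents[1]], tau) # r^(N\{1}) approx 17: agent 1's reward
```

As in the example, agent 1 is selected (r_1^0 ≈ 5.93 ≤ 6.2) while agent 2 is not, and agent 1's critical reward is the sub-economy threshold 17.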

Theorem 1. The reward-bidding mechanism is DSIC, IR and always satisfies the reliability target.

Proof. We first prove DSIC and IR. Fix an agent i. For all possible reports i can make, there are two possible outcomes: to be selected and face payments (r^(N\{i}), z), or not to be selected and face payment zero. Since z is fixed and r^(N\{i}) depends only on the reports from the rest of the agents, all payments are independent of i's own report (i.e., agent-independence). To prove DSIC, we only need to show that the mechanism chooses the better of the two outcomes for all agents (i.e., agent-maximization; see [14]). Observe that r^(N\{i})(z) ≥ r^(N)(z) for all i, since for any r, P[Σ_{j∈N} X_j(r, z) ≥ M] ≥ P[Σ_{j∈N\{i}} X_j(r, z) ≥ M] always holds. For i s.t. r_i^0(z) ≤ r^(N)(z), the expected utility from the payment schedule (r^(N\{i})(z), z) is therefore non-negative, thus getting selected is agent-maximizing. For i s.t. r_i^0(z) > r^(N)(z), r^(N\{i})(z) = r^(N)(z) holds, since agent i does not accept (r^(N)(z), z), thus P[Σ_{j∈N} X_j(r^(N), z) ≥ M] = P[Σ_{j∈N\{i}} X_j(r^(N), z) ≥ M]. Her expected utility from being selected and facing (r^(N\{i})(z), z) is negative, therefore not being selected and getting utility zero is agent-maximizing for her. IR also follows, since every agent gets at least the expected utility of not being selected, which is zero.

What is left to prove is that the mechanism always guarantees the reliability target. This is straightforward, observing that 1) for i s.t. x_i(θ̂) = 0, r_i^0(z) > r^(N), thus p̄_i(r^(N), z) = 0; 2) P[Σ_{i∈N} X_i(r^(N), z) ≥ M] = P[Σ_{i∈S} X_i(r^(N), z) ≥ M], where S = {i ∈ N s.t. x_i(θ̂) = 1} is the set of all selected agents; 3) p̄_i(r_i, z) is increasing in r_i, thus so is the global reliability P[Σ_{i∈S} X_i(r_i, z) ≥ M]; and finally 4) r_i = r^(N\{i}) ≥ r^(N) for i ∈ S. Thus the probability of achieving the reliability target M can be bounded by:

    P[ Σ_{i∈S} X_i(r_i, z) ≥ M ] = P[ Σ_{i∈S} X_i(r^(N\{i}), z) ≥ M ]
        ≥ P[ Σ_{i∈S} X_i(r^(N), z) ≥ M ] = P[ Σ_{i∈N} X_i(r^(N), z) ≥ M ] ≥ τ.

This completes the proof of the theorem.
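As a quick numerical sanity check of the reliability guarantee (illustrative only, not part of the proof), one can simulate period-one behavior in the Example 1 economy under the mechanism's outcome, where agent 1 is selected with reward r_1 = 17 and penalty z = 1:

```python
# Monte Carlo sketch: with r_1 = 17 and z = 1, agent 1's response condition
# V_1 <= r_1 + z = 18 always holds since V_1 ~ U[0, 8], so the target M = 1
# is met with probability one (exceeding tau = 0.9).
import random

random.seed(0)
trials, hits = 100_000, 0
r1, z = 17.0, 1.0
for _ in range(trials):
    v1 = random.uniform(0, 8)   # realized opportunity cost of agent 1
    if v1 <= r1 + z:            # rational period-one response decision
        hits += 1
print(hits / trials)            # 1.0: agent 1 always responds
```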
3.1 On Penalty Bidding

We now provide some intuition on why penalty-bidding [13] does not generalize to achieve the reliability target in a truthful manner in the uncertain-cost scenario. From the above discussion, we know that the reward-bidding mechanism with penalty z selects the smallest set of agents necessary to satisfy the reliability target, in the case that all agents are offered the same penalty z and reward r. Each selected agent is then rewarded the highest minimum acceptable reward that she can report and still be selected: assuming agent i reports some type θ̂_i such that r̂_i^0(z) > r^(N\{i}), the r^(N) computed based on the new reports becomes r^(N\{i}), and agent i is therefore no longer selected. What is crucial is that the probability of meeting the target is monotone in the varying part of the payment schedule: for uncertain costs, fixing z and increasing r_i, the effective reliability p̄_i(r_i, z) weakly increases for all agent types. But whereas this is also the case for fixed reward and varying penalty under the simple fixed-cost model of [13] (i.e., the (v_i, p_i, c_i) model: fixing r and decreasing z, more agents opt in and none of them becomes less reliable), it is not the case for penalty-bidding under the uncertain-cost model. As is illustrated in Figure 1(b), the effective reliability p̄_i(r_i, z_i) is first increasing in z_i; however, once z_i exceeds z_i^0(r_i), the agent no longer accepts the DR payment schedule and the effective reliability drops to zero. To get a penalty-bidding mechanism to satisfy the global reliability constraint without selecting too many agents, we would need to set the penalty z_i for a selected agent high enough that the effective reliability is high, but low enough that the payment schedule would not be rejected. This range cannot be easily determined without using agent i's own report, but that would, in turn, violate agent-independence and lose incentive compatibility. In contrast, for reward-bidding there is no such non-monotonicity in the effective reliability.
Thus, we only need to guarantee that a large enough set of agents are offered high enough rewards, and this can be achieved without using agent i's report to determine her own payments.

3.2 Computation of Reliability and Payments

We now briefly discuss the evaluation of the reliability and the computation of the minimum rewards in (5). Let S be the set of agents s.t. p̄_i(r, z) > 0. The total reduction Σ_{i∈N} X_i(r, z) = Σ_{i∈S} X_i(r, z) is a Poisson-binomial distributed random variable with CDF

    P[ Σ_{i∈S} X_i(r, z) ≤ k ] = Σ_{l=0}^{k} Σ_{A∈S_l} Π_{i∈A} p̄_i(r, z) Π_{j∈A^c} (1 − p̄_j(r, z)),

where S_l is the set of all subsets of S of cardinality l, and A^c = S\A [6]. We refer readers to [13] for polynomial algorithms for the exact evaluation of the Poisson-binomial CDF. Upper bounds on r^(N)(z) and also r^(N\{i})(z) can be computed to arbitrary precision by doing a binary search, starting with some very small and very large r. The reliability target is always achieved, and this approximation does not affect the incentives of the agents: though the computation is not exact, the approximation process is still independent of agent i's own report.

4. MULTI-UNIT GENERALIZATION

The reward-bidding mechanism applies when each agent can reduce multiple but a fixed number of units of consumption. We now generalize agents' type model to the scenario where agents may prefer to reduce a varying amount of consumption depending on the realized values and the payments, and generalize our mechanism to truthfully achieve the reliability target using a linear incentive scheme. The preparation cost and multiple levels of preparation are not modeled, for simplicity of notation; however, the model can be generalized without requiring modifications of the mechanism.

Uncertain Value Functions. In order to analyze agents' decisions on reducing a varying amount of consumption, we need to consider agents' values for consuming different quantities. Let Ω_i be the set of possible world states of agent i, e.g., the set of all possible orders for cakes for an agent i which is a bakery. We assume Ω_i is finite for all i for simplicity of notation.
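The Poisson-binomial CDF from Section 3.2 can be evaluated exactly without enumerating subsets; one standard polynomial-time approach (a sketch — not necessarily the specific algorithm of [13]) is the dynamic program that adds one agent's Bernoulli variable at a time, giving an O(|S|·k) evaluation:

```python
# Sketch: exact Poisson-binomial distribution of the total reduction
# sum_i X_i, where X_i ~ Bernoulli(p_i) independently (eq. (3)).

def poisson_binomial_pmf(probs):
    """pmf[m] = P[exactly m of the independent Bernoulli(p_i) respond]."""
    pmf = [1.0]
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for m, q in enumerate(pmf):
            nxt[m] += q * (1 - p)      # agent does not respond
            nxt[m + 1] += q * p        # agent responds
        pmf = nxt
    return pmf

def meets_target(probs, M, tau):
    """Check the reliability constraint P[sum_i X_i >= M] >= tau."""
    pmf = poisson_binomial_pmf(probs)
    return sum(pmf[M:]) >= tau
```

Combined with a binary search over the uniform reward r (the effective reliabilities, and hence P[Σ X_i ≥ M], are nondecreasing in r), this yields the approximations of r^(N)(z) and r^(N\{i})(z) described above.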
A world state ω_i ∈ Ω_i is realized with probability f_i(ω_i) ∈ (0, 1), and when this is the case, the value agent i derives from consuming q ∈ R_+ units of electricity is v_i^(ω_i)(q). We assume that for all i ∈ N, ω_i ∈ Ω_i and q ≥ 0, v_i^(ω_i)(q)

is (A1) weakly increasing, (A2) right continuous, and (A3) bounded from above by some constant W > 0 and by qT for some T > 0. Intuitively, (A1) means excessive electricity can be burnt free of cost; (A2) allows the value functions to be discontinuous (e.g., the agent gets some value only if she can turn on a machine which burns at least some threshold number of units of electricity); and (A3) prevents agents from being willing to consume infinite amounts of electricity or to pay infinite prices for each unit of electricity. The set of value functions and the distribution of world states, θ_i = {v_i^(ω_i)(q), f_i(ω_i)}_{ω_i∈Ω_i}, determines an agent's type and is agent i's private information. Each agent knows her type in period zero, but the actual world state, and the value of consuming electricity, is not realized until period one. We assume that the realizations of ω_i are independent among agents and cannot be observed by the planner.

Fix the price of each unit of electricity as t > 0. In period one, when the realized value function for agent i is v_i^(ω_i)(q), the utility of agent i for consuming q units of electricity is v_i^(ω_i)(q) − qt, thus the optimal consumption decision in period one is q_i^(ω_i)(t) ∈ argmax_{q∈R_+} v_i^(ω_i)(q) − qt. The distribution of ω_i induces a distribution of consumption by agent i. Let Q_i(t) be the random variable indicating the number of units consumed by agent i when the price of electricity is t. We know that Q_i(t) takes value q_i^(ω_i)(t) with probability f_i(ω_i). Let q̄_i(t) be the expected value of Q_i(t) and σ_i(t) the standard deviation of Q_i(t). We assume that the grid knows q̄_i(t) and σ_i(t) from historical data.

Linear Incentive Payment Schedules. Consider a linear payment schedule (t, r, z), where electricity costs t + z per unit; however, an agent is paid r per unit by which her consumption is below q̄_i(t). Similar to the above discussion, for any agent i with realized world state ω_i facing the payment schedule (t, r, z), there is an optimal amount of consumption (denoted q_i^(ω_i)(t, r, z)) that gives agent i the highest utility u_i^(ω_i)(t, r, z, q_i^(ω_i)(t, r, z)).
By consuming this amount in every realized world state, the expected utility agent i gets under the linear payment schedule is

    u_i(t, r, z) = Σ_{ω_i∈Ω_i} f_i(ω_i) u_i^(ω_i)(t, r, z, q_i^(ω_i)(t, r, z)).

Similar to Lemma 1, we can prove parallel properties of q_i^(ω_i)(t, r, z) and u_i(t, r, z): agents consume less energy as z and r increase, and the expected utilities are increasing in r and decreasing in z. Moreover, there exists a minimum acceptable reward r_i^0(z) such that agent i is willing to accept the additional per-unit cost z instead of getting the standard price schedule (t, 0, 0). Facing a DR payment schedule (t, r_i, z_i), an agent decides to take it iff r_i ≥ r_i^0(z_i); thus the random variable indicating agent i's consumption, Q_i(t, r_i, z_i), is equal to Q_i(t) whenever r_i < r_i^0, but takes value q_i^(ω_i)(t, r_i, z_i) with probability f_i(ω_i) if r_i ≥ r_i^0(z_i).

4.1 Multi-Unit DR Mechanism

For a multi-unit DR mechanism that offers each agent i the choice between a DR payment schedule (t, r_i, z_i) and the flat rate (t), to achieve the reliability target (M, τ) we need:

    P[ Σ_{i∈N} ( q̄_i(t) − Q_i(t, r_i, z_i) ) ≥ M ] ≥ τ.  (6)

We now analyze the minimum uniform rewards r^(N)(z) and the sub-economy minimum rewards r^(N\{i})(z) to meet the target when the penalty z is fixed. Define r^(N)(z) as the minimum r s.t. (6) can be met when z_i = z for all i. However, defining r^(N\{i})(z) as the minimum r such that P[Σ_{j∈N\{i}} (q̄_j(t) − Q_j(t, r, z)) ≥ M] ≥ τ does not guarantee P[Σ_{j∈N\{i}} (q̄_j(t) − Q_j(t, r, z)) + (q̄_i(t) − Q_i(t)) ≥ M] ≥ τ, because of the uncertainty in Q_i(t). In order to compute r^(N\{i})(z) independent of agent i's own reported information, we need to find the minimum reward r such that for all possible distributions of some random variable Y s.t. E[Y] = q̄_i(t) and std[Y] = σ_i(t), P[Σ_{j∈N\{i}} (q̄_j(t) − Q_j(t, r, z)) + (q̄_i(t) − Y) ≥ M] ≥ τ always holds. We can prove that such an r^(N\{i})(z) is no smaller than r^(N)(z) for all i. We now define the mechanism.

Definition 2.
(Multi-Unit DR Mechanism with Penalty z) The multi-unit DR mechanism collects agents' types, computes r_i^0(z), r^(N)(z) and r^(N\{i})(z), and offers to each agent i a flat rate (t) and a DR payment schedule (t, r_i, z) where r_i = r^(N\{i})(z). Given the offered contracts, each agent selects her preferred contract and decides on the preparation effort and consumption level, and then the mechanism pays rewards and charges penalties accordingly.

Theorem 2. The multi-unit DR mechanism is DSIC, IR and always guarantees the reliability target.

The proof of the theorem is similar to that of Theorem 1, and is omitted due to space limitations.

4.2 Computation of Threshold Payments

We now discuss the computation of the minimum sub-economy reward r^(N\{i})(z). Technically, we are looking for the minimum reward r such that

    min_{F_Y} P[ Σ_{j∈N\{i}} ( q̄_j(t) − Q_j(t, r, z) ) + ( q̄_i(t) − Y ) ≥ M ] ≥ τ
    s.t. E[Y] = q̄_i(t) and std[Y] = σ_i(t).

The exact computation is not easy, since we would need to analyze the distribution of a summation of several random variables and solve a constrained optimization problem in a functional space (i.e., the space of all valid distributions). However, we can bound the probability by applying Chebyshev's inequality [8] and show that for all ɛ > 0,

    P[ Σ_{j∈N\{i}} ( q̄_j(t) − Q_j(t, r, z) ) + ( q̄_i(t) − Y ) ≥ M ]
        ≥ P[ Σ_{j∈N\{i}} ( q̄_j(t) − Q_j(t, r, z) ) ≥ M + ɛ ] · P[ Y ≤ q̄_i(t) + ɛ ]
        ≥ ( 1 − σ_i²(t)/ɛ² ) · P[ Σ_{j∈N\{i}} ( q̄_j(t) − Q_j(t, r, z) ) ≥ M + ɛ ].

By setting ɛ(τ) s.t. 1 − σ_i²(t)/ɛ(τ)² = √τ and looking for the minimum r s.t. P[Σ_{j∈N\{i}} (q̄_j(t) − Q_j(t, r, z)) ≥ M + ɛ(τ)] ≥ √τ, we find an upper bound r̄^(N\{i})(z) on r^(N\{i})(z) s.t. the reliability constraint is guaranteed to be achieved by offering (t, r̄^(N\{i})(z), z) to all agents other than i. We can

7 set r to be r (N\{}) (z) n the mult-unt DR mechansm, and know that the relablty constrant can always be met. 5. MULTIPLE EFFORT LEVELS We can also allow for agents who can exert multple levels of preparaton effort (these affectng the dstrbuton on perod one values). We do ths by reducng the mult-effortlevel model to the sngle level model. In ths way, the rewardbddng mechansm can be drectly appled. Ths can be done for both the unt-response and mult-unt response scenaros, but because of space lmtatons we only llustrate the dea n the sngle-unt case. Let K be the total number of levels of effort that agent can choose to exert durng preparaton. If agent prepares at level k at a cost of c (k), her opportunty cost would be dstrbuted accordng to F (k). Agent s type s descrbed by θ = (θ (1),..., θ K ) where θ (k) = (c (k), F (k) ). Ths subsumes the mult-level dscrete model, where each agent can prepare at cost c (k), whch enables her to respond wth probablty p (k) at the cost of v (k) for k = 1,..., K. Wth the same analyss, we know that gven agent prepares at level k, her expected utlty s of the form: u (k) (r, z ) =E V F (k) [(r V ) 1{V r + z }] z P V F (k) [V > r + z ] c (k). Snce each agent s nformed of the payment schedule (r, z ) at perod zero, she wll choose the preparaton effort level that maxmzes her expected utlty at (r, z ). The equvalent expected utlty for agent facng payment schedule (r, z ) s therefore: ū (r, z ) max u (k) (r, z ). k=1,...,k As the upper envelope of a set of ncreasng and convex functons, ū (r, z ) s also ncreasng and convex. A payment schedule (r, z ) nduces an optmal effort k = argmax k=1,...,k u (k) (r, z ) and the relablty stll corresponds to the slope of ū (r, z ): p (r, z ) = F (k ) (r + z ) = u (k ) (r, z ) = ū (r, z ). r r snce ū (r, z ) = u (k ) (r, z ) n a small neghborhood of r. 
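For the discrete multi-level model, the reduction amounts to taking the pointwise maximum of the per-level utilities $u^{(k)}(r, z) = p^{(k)}(r - v^{(k)}) - (1 - p^{(k)})z - c^{(k)}$. A minimal sketch (our illustration, not code from the paper; the two levels below are hypothetical numbers for concreteness):

```python
# Sketch of the multi-effort reduction for the discrete model.
# Each level k is a tuple (c, p, v): preparation cost, response
# probability, and opportunity cost. Numbers below are hypothetical.

def utility(r, z, c, p, v):
    """Expected utility u^(k)(r, z) = p*(r - v) - (1 - p)*z - c of one level."""
    return p * (r - v) - (1 - p) * z - c

def effective_utility_and_reliability(r, z, levels):
    """Upper envelope u_bar(r, z) = max_k u^(k)(r, z), together with the
    response probability of the induced optimal effort level k*."""
    utils = [utility(r, z, c, p, v) for (c, p, v) in levels]
    k_star = max(range(len(levels)), key=lambda k: utils[k])
    return utils[k_star], levels[k_star][1]

# A two-level agent: low effort (c=1, p=0.5, v=2), high effort (c=4, p=0.9, v=2).
levels = [(1.0, 0.5, 2.0), (4.0, 0.9, 2.0)]
u_bar, rel = effective_utility_and_reliability(6.0, 1.0, levels)
```

At a moderate reward the low-effort level maximizes utility and the effective reliability is its response probability; at a high enough reward the envelope switches to the high-effort level.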
This implies that the five properties we proved in Lemma 1 also hold for $\bar{u}_i(r_i, z_i)$, which can be considered the effective expected utility function of agent $i$ and fully determines the effective reliability of the agent. Therefore, the multi-effort-level scenario can be reduced to the single-effort-level case by setting $u_i(r_i, z_i) = \bar{u}_i(r_i, z_i)$.

Example 2. Consider an agent with two possible effort levels. If she exerts the lower effort level at a cost of $c^{(1)} = 1$, she is able to respond with probability $p^{(1)} = 0.5$ at an opportunity cost of $v^{(1)} = 2$. If she exerts the higher level at a cost $c^{(2)} = 4$, the opportunity cost stays the same but her probability of being able to respond is boosted to $p^{(2)} = 0.9$. The expected utilities corresponding to the two effort levels when the penalty is fixed at $z = 1$, and the effective expected utility $\bar{u}_i(r, z)$, are as illustrated in Figure 3.

Figure 3: Expected utilities for different effort levels and the upper envelope $\bar{u}_i(r, z)$ for Example 2.

We know from $\bar{u}_i(r, z)$ that with $r < (r_i^0)^{(1)}(z) = 5$, the agent does not accept the payment schedule. For $5 \leq r < 8.5$, where $u_i^{(2)}(r, z)$ crosses $u_i^{(1)}(r, z)$, the agent takes the lower effort level, and therefore responds with probability $p_i(r, z) = 0.5$. For $r \geq 8.5$, the higher effort level is taken, and thus the agent responds with probability $p_i(r, z) = 0.9$.

6. SIMULATION RESULTS

In this section we compare, through numerical simulation, the performance of the reward-bidding mechanism against the best possible outcome (i.e., the first best without private information) and a natural alternative mechanism, the spot auction, in which demand reduction is purchased from agents when needed.

6.1 Comparison with the First Best

We compare the number of agents selected by the reward-bidding mechanism with the first best, which assumes that the mechanism knows the types of the agents and therefore how reliable they would be given certain payments. Throughout this section, we consider agents whose types follow the exponential model.
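The two thresholds reported in Example 2 can be reproduced numerically under the discrete-model utility $u^{(k)}(r, z) = p^{(k)}(r - v^{(k)}) - (1 - p^{(k)})z - c^{(k)}$; a sketch (the bisection helper is our illustration, not the paper's code):

```python
# Reproduce the thresholds of Example 2 for penalty z = 1.
def u(r, z, c, p, v):
    """Discrete-model expected utility of one effort level."""
    return p * (r - v) - (1 - p) * z - c

z = 1.0
lvl1 = dict(c=1.0, p=0.5, v=2.0)   # lower effort level
lvl2 = dict(c=4.0, p=0.9, v=2.0)   # higher effort level

def bisect(f, lo, hi, tol=1e-9):
    """Root of f on [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Minimum acceptable reward (r_0)^(1)(z): smallest r with u^(1)(r, z) = 0.
r0 = bisect(lambda r: u(r, z, **lvl1), 0.0, 20.0)
# Crossing point where the higher effort level overtakes the lower one.
r_cross = bisect(lambda r: u(r, z, **lvl2) - u(r, z, **lvl1), 0.0, 20.0)
```

This recovers the values 5 and 8.5 quoted in the example: below 5 the schedule is rejected, between 5 and 8.5 the low effort level is optimal, and above 8.5 the high level is.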
Each agent $i$ faces a fixed preparation cost $c_i$, and contingent on preparation, the opportunity cost $V_i$ is exponentially distributed with parameter $\lambda_i$, s.t. $\mathbb{E}[V_i] = \lambda_i^{-1}$. Facing payment schedule $(r_i, z_i)$, a prepared agent responds with probability $1 - e^{-\lambda_i (r_i + z_i)}$; thus the reliability of each agent can be boosted infinitely close to one, and the minimum number of agents needed in the first best equals $M$.

Let the total number of agents be $n = 500$ and the types be i.i.d. uniformly distributed: $c_i \sim U[0, 1]$ and $\lambda_i^{-1} \sim U[0, 2]$. We first assume that the grid charges a penalty $z = 1$ in the reward-bidding mechanism and would like to achieve a target reduction $M = 100$. With $\tau$ varying from 0.9 to 0.999, the average number of selected agents over 1000 randomly generated economies is as shown in Figure 4(a). The horizontal axis, the log risk $\log_{10}(1 - \tau)$, translates $\tau = 0.9$ to $-1$ and $\tau = 0.999$ to $-3$. We can see that more agents are selected when the probability target $\tau$ increases, and that the mechanism does well in comparison with the first best. Fixing $\tau = 0.98$, the number of agents selected by the reward-bidding mechanism for different $z$ is as shown in Figure 4(b). The number of agents selected decreases as $z$ increases, since a higher penalty, and the resulting higher rewards (agents have higher minimum acceptable rewards when $z$ increases), improve the reliability of each selected agent.

Figure 4: Comparison between the number of selected agents in reward-bidding and the first best. (a) Varying $\tau$. (b) Varying $z$.

Figure 5: Comparison between the average and standard deviation of total costs under reward-bidding and the spot auction. (a) Average total cost. (b) Std of total cost.

6.2 Comparison with the Spot Auction

We now compare the reward-bidding mechanism with a benchmark proposed in [13], the spot auction. Without pre-selection in period zero of which agents should invest effort and prepare, the spot auction purchases demand reduction from agents in period one, in the event that DR is required, using a simple (M+1)st-price auction with a reserve price $r$; i.e., the reserve sets an upper bound on the reward payment.

Pure Nash Equilibrium on Preparation. For an agent who prepared, it is a dominant strategy for her to bid in period one the realization of her opportunity cost $V_i$, since the preparation cost $c_i$ is sunk and the (M+1)st-price auction is truthful. What is not straightforward is deciding whether to prepare in period zero. For this, we study economies with exponential-type agents where $c_i = c$ for all $i \in N$, and assume w.l.o.g. $\lambda_1^{-1} \leq \cdots \leq \lambda_n^{-1}$, i.e., agents with smaller indexes face smaller opportunity costs in expectation. Further, we assume these type distributions are known, and study the performance of the spot auction under a (complete-information) Nash equilibrium of the preparation game. By analyzing a threshold structure of the equilibrium, we prove that for any reserve $r \geq 0$ there exists a pure Nash equilibrium in which agents $i \leq m(r)$ prepare and agents $i > m(r)$ do not, for some $0 \leq m(r) \leq n$. The full proof is left for an extended version of the paper. To obtain the (asymmetric) pure-strategy preparation equilibrium, we compute via simulation (over one million realized cost profiles), for each reserve price $r$, how many agents prepare in equilibrium and the resulting probability of achieving the reduction target. The higher the reserve price $r$, the more agents prepare, the higher the global reliability, and the higher the total payment made to the agents.

Experimental Results.
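Before the numbers, the spot auction's payment rule can be made concrete with a short Monte-Carlo sketch (our illustration, not the paper's simulation code; the rate-1 cost distribution and the number of prepared agents are hypothetical, while the reserve 5.76 is the value discussed below):

```python
import random

def spot_auction_cost(costs, M, reserve):
    """Total payment of an (M+1)st-price auction with reserve: bids below
    the reserve are eligible, the (up to M) lowest bidders win, and each
    winner is paid min((M+1)st lowest bid, reserve); if at most M bids
    fall below the reserve, the reserve itself is paid."""
    bids = sorted(b for b in costs if b < reserve)
    winners = bids[:M]
    if len(bids) >= M + 1:
        price = min(bids[M], reserve)
    else:
        price = reserve  # rare event driving the high cost variance
    return price * len(winners)

random.seed(0)
# Hypothetical economy: 110 prepared agents with exponential (rate 1) costs.
n_prepared, M, reserve = 110, 100, 5.76
samples = [
    spot_auction_cost([random.expovariate(1.0) for _ in range(n_prepared)],
                      M, reserve)
    for _ in range(2000)
]
avg_cost = sum(samples) / len(samples)
```

The two branches of the payment rule are exactly the source of the cost variance discussed next: most draws pay the (M+1)st bid, but occasionally the reserve is paid to every winner.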
We now compare the equilibrium outcome of the spot auction with the truthful outcome of the reward-bidding mechanism. Consider a set of $n = 500$ agents with exponential-model types, where the preparation cost is $c_i = 2$, the expected opportunity cost is $\lambda_i^{-1} = i/100$, and the reduction target is $M = 100$. For the reward-bidding mechanism, we simply set the penalty $z = 1$. For the spot auction, for each $\tau$ ranging from 0.9 to 0.999 we choose the minimum $r$ such that enough agents prepare in equilibrium and the reliability target can be met. The mean and standard deviation (std) of the total costs (reward payments minus collected penalties) over one million instances of agents' realized opportunity costs are shown in Figure 5. Despite the unfair comparison (dominant strategies for reward-bidding vs. a Nash equilibrium with an optimized reserve $r$ for the spot auction), the total cost under reward-bidding is lower than that of the spot auction. Moreover, the standard deviation of the total costs under the spot auction is much higher. This is because under the spot auction the total cost is low most of the time (when the (M+1)st bid is paid); however, with small probability, when there are no more than $M$ bids below $r$, $r$ is paid to all agents, and this results in a huge variance in the total payments. A high reserve $r$ is needed in the spot auction because the number of agents preparing hardly increases as $r$ increases; thus $r$ has to be large enough that enough agents' bids fall below $r$ and their reductions are purchased, in order to meet the reliability target. When too many agents prepare, with high probability agents are paid only the (M+1)st bid instead of the high reserve price, and this may not be enough to cover the opportunity cost. As a comparison, the minimum reserve required to achieve $\tau = 0.999$ is $r = 5.76$, under which 101 agents prepare; under reward-bidding with $z = 1$, 103 agents are selected at a lower average reward (note that the critical rewards differ across agents). Both the reward and the penalty help boost the reliability of each individual agent.

7.
CONCLUSIONS

We studied the generalized demand response problem, where the design of contingent payments affects the probability of response, and where each agent may reduce multiple units of consumption. We designed a new, truthful and reliable mechanism that selects a small number of agents to prepare, and does so at low cost when compared to natural benchmarks. In future work, we plan to understand whether it is possible to (i) design indirect mechanisms with good performance, where there is no need for agents to communicate their full types, (ii) meet the reduction target with high probability without reducing too much beyond the target, (iii) optimize total welfare for both demand-side response and supply-side reserves at the same time, while retaining dominant-strategy equilibrium, and (iv) generalize the model and mechanism for demand-side response over multiple periods of time.

Acknowledgments

Parkes is supported in part by the SEAS TomKat fund, and Ma by the FAS Competitive Research Fund. Robu is supported by the EPSRC National Centre for Energy Systems Integration [EP/P001173/1].

REFERENCES

[1] C. Akasiadis and G. Chalkiadakis. Agent cooperatives for effective power consumption shifting. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013.
[2] C. Akasiadis, K. Panagidi, N. Panagiotou, P. Sernani, A. Morton, I. A. Vetsikas, L. Mavrouli, and K. Goutsias. Incentives for rescheduling residential electricity consumption to promote renewable energy usage. In SAI Intelligent Systems Conference (IntelliSys), 2015. IEEE, 2015.
[3] D. Bergemann and J. Välimäki. The dynamic pivot mechanism. Econometrica, 78, 2010.
[4] R. Cavallo, D. C. Parkes, and S. Singh. Optimal coordinated learning among self-interested agents in the multi-armed bandit problem. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence (UAI 2006), Cambridge, MA, 2006.
[5] R. Cavallo, D. C. Parkes, and S. Singh. Efficient mechanisms with dynamic populations and dynamic types. Technical report, Harvard University.
[6] S. X. Chen and J. S. Liu. Statistical applications of the Poisson-Binomial and Conditional Bernoulli distributions. Statistica Sinica, 7, 1997.
[7] P. Courty and H. Li. Sequential screening. The Review of Economic Studies, 67(4), 2000.
[8] G. Grimmett and D. Stirzaker. Probability and Random Processes. Oxford University Press.
[9] R. Johari and J. N. Tsitsiklis. Parameterized supply function bidding: Equilibrium and efficiency. Operations Research, 59(5), 2011.
[10] R. Kota, G. Chalkiadakis, V. Robu, A. Rogers, and N. R. Jennings. Cooperatives for demand side management. In Proc. 20th European Conference on AI (ECAI'12), 2012.
[11] N. Li, L. Chen, and M. Dahleh. Demand response using linear supply function bidding. IEEE Transactions on Smart Grid, 6(4), 2015.
[12] H. Ma, R. Meir, D. C. Parkes, and J. Zou. Contingent payment mechanisms to maximize resource utilization. arXiv preprint.
[13] H. Ma, V. Robu, N. Li, and D. C. Parkes. Incentivizing reliability in demand-side response. In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI'16), 2016.
[14] N. Nisan. Introduction to mechanism design (for computer scientists). In N. Nisan et al., editors, Algorithmic Game Theory. Cambridge University Press, 2007.
[15] P. Palensky and D. Dietrich. Demand side management: Demand response, intelligent energy systems, and smart loads. IEEE Transactions on Industrial Informatics, 7(3), 2011.
[16] D. C. Parkes and S. Singh. An MDP-based approach to online mechanism design. In Proc. 17th Annual Conf. on Neural Information Processing Systems (NIPS'03), 2003.
[17] PJM. Energy & ancillary services market operations. Technical report, PJM, August.
[18] R. Porter, A. Ronen, Y. Shoham, and M. Tennenholtz. Fault tolerant mechanism design. Artificial Intelligence, 172(15), 2008.
[19] S. D. Ramchurn, P. Vytelingum, A. Rogers, and N. Jennings. Agent-based control for decentralised demand side management in the smart grid. In The 10th International Conference on Autonomous Agents and Multiagent Systems, Volume 1. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
[20] V. Robu, G. Chalkiadakis, R. Kota, A. Rogers, and N. R. Jennings. Rewarding cooperative virtual power plant formation using scoring rules. Energy, 117, Part 1:19-28.
[21] V. Robu, E. H. Gerding, S. Stein, D. C. Parkes, A. Rogers, and N. R. Jennings. An online mechanism for multi-unit demand and its application to plug-in hybrid electric vehicle charging. J. Artif. Intell. Res. (JAIR), 48, 2013.
[22] V. Robu, M. Vinyals, A. Rogers, and N. R. Jennings. Efficient buyer groups for prediction-of-use electricity tariffs. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, July 27-31, 2014, Québec City, Québec, Canada, 2014.
[23] H. T. Rose, A. Rogers, and E. H. Gerding. A scoring rule-based mechanism for aggregate demand prediction in the smart grid. In International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2012, 2012.
[24] W. Su, J. Wang, and J. Roh. Stochastic energy scheduling in microgrids with intermittent renewable energy resources. IEEE Transactions on Smart Grid, 5(4), 2014.
[25] B. Zhang, R. Johari, and R. Rajagopal. Competition and coalition formation of renewable power producers. IEEE Transactions on Power Systems, 30(3), 2015.


More information

Introduction to PGMs: Discrete Variables. Sargur Srihari

Introduction to PGMs: Discrete Variables. Sargur Srihari Introducton to : Dscrete Varables Sargur srhar@cedar.buffalo.edu Topcs. What are graphcal models (or ) 2. Use of Engneerng and AI 3. Drectonalty n graphs 4. Bayesan Networks 5. Generatve Models and Samplng

More information

MgtOp 215 Chapter 13 Dr. Ahn

MgtOp 215 Chapter 13 Dr. Ahn MgtOp 5 Chapter 3 Dr Ahn Consder two random varables X and Y wth,,, In order to study the relatonshp between the two random varables, we need a numercal measure that descrbes the relatonshp The covarance

More information

Stochastic optimal day-ahead bid with physical future contracts

Stochastic optimal day-ahead bid with physical future contracts Introducton Stochastc optmal day-ahead bd wth physcal future contracts C. Corchero, F.J. Hereda Departament d Estadístca Investgacó Operatva Unverstat Poltècnca de Catalunya Ths work was supported by the

More information

Introduction. Chapter 7 - An Introduction to Portfolio Management

Introduction. Chapter 7 - An Introduction to Portfolio Management Introducton In the next three chapters, we wll examne dfferent aspects of captal market theory, ncludng: Brngng rsk and return nto the pcture of nvestment management Markowtz optmzaton Modelng rsk and

More information

Flight Delays, Capacity Investment and Welfare under Air Transport Supply-demand Equilibrium

Flight Delays, Capacity Investment and Welfare under Air Transport Supply-demand Equilibrium Flght Delays, Capacty Investment and Welfare under Ar Transport Supply-demand Equlbrum Bo Zou 1, Mark Hansen 2 1 Unversty of Illnos at Chcago 2 Unversty of Calforna at Berkeley 2 Total economc mpact of

More information

Finance 402: Problem Set 1 Solutions

Finance 402: Problem Set 1 Solutions Fnance 402: Problem Set 1 Solutons Note: Where approprate, the fnal answer for each problem s gven n bold talcs for those not nterested n the dscusson of the soluton. 1. The annual coupon rate s 6%. A

More information

Multifactor Term Structure Models

Multifactor Term Structure Models 1 Multfactor Term Structure Models A. Lmtatons of One-Factor Models 1. Returns on bonds of all maturtes are perfectly correlated. 2. Term structure (and prces of every other dervatves) are unquely determned

More information

Available online at ScienceDirect. Procedia Computer Science 24 (2013 ) 9 14

Available online at   ScienceDirect. Procedia Computer Science 24 (2013 ) 9 14 Avalable onlne at www.scencedrect.com ScenceDrect Proceda Computer Scence 24 (2013 ) 9 14 17th Asa Pacfc Symposum on Intellgent and Evolutonary Systems, IES2013 A Proposal of Real-Tme Schedulng Algorthm

More information

Quadratic Games. First version: February 24, 2017 This version: December 12, Abstract

Quadratic Games. First version: February 24, 2017 This version: December 12, Abstract Quadratc Games Ncolas S. Lambert Gorgo Martn Mchael Ostrovsky Frst verson: February 24, 2017 Ths verson: December 12, 2017 Abstract We study general quadratc games wth mult-dmensonal actons, stochastc

More information

The Efficiency of Uniform- Price Electricity Auctions: Evidence from Bidding Behavior in ERCOT

The Efficiency of Uniform- Price Electricity Auctions: Evidence from Bidding Behavior in ERCOT The Effcency of Unform- Prce Electrcty Auctons: Evdence from Bddng Behavor n ERCOT Steve Puller, Texas A&M (research jont wth Al Hortacsu, Unversty of Chcago) Tele-Semnar, March 4, 2008. 1 Outlne of Presentaton

More information

INTRODUCTION TO MACROECONOMICS FOR THE SHORT RUN (CHAPTER 1) WHY STUDY BUSINESS CYCLES? The intellectual challenge: Why is economic growth irregular?

INTRODUCTION TO MACROECONOMICS FOR THE SHORT RUN (CHAPTER 1) WHY STUDY BUSINESS CYCLES? The intellectual challenge: Why is economic growth irregular? INTRODUCTION TO MACROECONOMICS FOR THE SHORT RUN (CHATER 1) WHY STUDY BUSINESS CYCLES? The ntellectual challenge: Why s economc groth rregular? The socal challenge: Recessons and depressons cause elfare

More information

Optimization in portfolio using maximum downside deviation stochastic programming model

Optimization in portfolio using maximum downside deviation stochastic programming model Avalable onlne at www.pelagaresearchlbrary.com Advances n Appled Scence Research, 2010, 1 (1): 1-8 Optmzaton n portfolo usng maxmum downsde devaton stochastc programmng model Khlpah Ibrahm, Anton Abdulbasah

More information

A Constant-Factor Approximation Algorithm for Network Revenue Management

A Constant-Factor Approximation Algorithm for Network Revenue Management A Constant-Factor Approxmaton Algorthm for Networ Revenue Management Yuhang Ma 1, Paat Rusmevchentong 2, Ma Sumda 1, Huseyn Topaloglu 1 1 School of Operatons Research and Informaton Engneerng, Cornell

More information

Linear Combinations of Random Variables and Sampling (100 points)

Linear Combinations of Random Variables and Sampling (100 points) Economcs 30330: Statstcs for Economcs Problem Set 6 Unversty of Notre Dame Instructor: Julo Garín Sprng 2012 Lnear Combnatons of Random Varables and Samplng 100 ponts 1. Four-part problem. Go get some

More information

A Utilitarian Approach of the Rawls s Difference Principle

A Utilitarian Approach of the Rawls s Difference Principle 1 A Utltaran Approach of the Rawls s Dfference Prncple Hyeok Yong Kwon a,1, Hang Keun Ryu b,2 a Department of Poltcal Scence, Korea Unversty, Seoul, Korea, 136-701 b Department of Economcs, Chung Ang Unversty,

More information

Sequential equilibria of asymmetric ascending auctions: the case of log-normal distributions 3

Sequential equilibria of asymmetric ascending auctions: the case of log-normal distributions 3 Sequental equlbra of asymmetrc ascendng auctons: the case of log-normal dstrbutons 3 Robert Wlson Busness School, Stanford Unversty, Stanford, CA 94305-505, USA Receved: ; revsed verson. Summary: The sequental

More information

Financial mathematics

Financial mathematics Fnancal mathematcs Jean-Luc Bouchot jean-luc.bouchot@drexel.edu February 19, 2013 Warnng Ths s a work n progress. I can not ensure t to be mstake free at the moment. It s also lackng some nformaton. But

More information

/ Computational Genomics. Normalization

/ Computational Genomics. Normalization 0-80 /02-70 Computatonal Genomcs Normalzaton Gene Expresson Analyss Model Computatonal nformaton fuson Bologcal regulatory networks Pattern Recognton Data Analyss clusterng, classfcaton normalzaton, mss.

More information

Bayesian Budget Feasibility with Posted Pricing

Bayesian Budget Feasibility with Posted Pricing Bayesan Budget Feasblty wth Posted Prcng Erc Balkansk Harvard Unversty School of Engneerng and Appled Scences ercbalkansk@g.harvard.edu Jason D. Hartlne Northwestern Unversty EECS Department hartlne@eecs.northwestern.edu

More information

CHAPTER 3: BAYESIAN DECISION THEORY

CHAPTER 3: BAYESIAN DECISION THEORY CHATER 3: BAYESIAN DECISION THEORY Decson makng under uncertanty 3 rogrammng computers to make nference from data requres nterdscplnary knowledge from statstcs and computer scence Knowledge of statstcs

More information

Quadratic Games. First version: February 24, 2017 This version: August 3, Abstract

Quadratic Games. First version: February 24, 2017 This version: August 3, Abstract Quadratc Games Ncolas S. Lambert Gorgo Martn Mchael Ostrovsky Frst verson: February 24, 2017 Ths verson: August 3, 2018 Abstract We study general quadratc games wth multdmensonal actons, stochastc payoff

More information

2) In the medium-run/long-run, a decrease in the budget deficit will produce:

2) In the medium-run/long-run, a decrease in the budget deficit will produce: 4.02 Quz 2 Solutons Fall 2004 Multple-Choce Questons ) Consder the wage-settng and prce-settng equatons we studed n class. Suppose the markup, µ, equals 0.25, and F(u,z) = -u. What s the natural rate of

More information

Quiz on Deterministic part of course October 22, 2002

Quiz on Deterministic part of course October 22, 2002 Engneerng ystems Analyss for Desgn Quz on Determnstc part of course October 22, 2002 Ths s a closed book exercse. You may use calculators Grade Tables There are 90 ponts possble for the regular test, or

More information

Taxation and Externalities. - Much recent discussion of policy towards externalities, e.g., global warming debate/kyoto

Taxation and Externalities. - Much recent discussion of policy towards externalities, e.g., global warming debate/kyoto Taxaton and Externaltes - Much recent dscusson of polcy towards externaltes, e.g., global warmng debate/kyoto - Increasng share of tax revenue from envronmental taxaton 6 percent n OECD - Envronmental

More information

Wages as Anti-Corruption Strategy: A Note

Wages as Anti-Corruption Strategy: A Note DISCUSSION PAPER November 200 No. 46 Wages as Ant-Corrupton Strategy: A Note by dek SAO Faculty of Economcs, Kyushu-Sangyo Unversty Wages as ant-corrupton strategy: A Note dek Sato Kyushu-Sangyo Unversty

More information

Lecture Note 1: Foundations 1

Lecture Note 1: Foundations 1 Economcs 703 Advanced Mcroeconomcs Prof. Peter Cramton ecture Note : Foundatons Outlne A. Introducton and Examples B. Formal Treatment. Exstence of Nash Equlbrum. Exstence wthout uas-concavty 3. Perfect

More information

Jeffrey Ely. October 7, This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License.

Jeffrey Ely. October 7, This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. October 7, 2012 Ths work s lcensed under the Creatve Commons Attrbuton-NonCommercal-ShareAlke 3.0 Lcense. Recap We saw last tme that any standard of socal welfare s problematc n a precse sense. If we want

More information

ISE High Income Index Methodology

ISE High Income Index Methodology ISE Hgh Income Index Methodology Index Descrpton The ISE Hgh Income Index s desgned to track the returns and ncome of the top 30 U.S lsted Closed-End Funds. Index Calculaton The ISE Hgh Income Index s

More information

Robust Stochastic Lot-Sizing by Means of Histograms

Robust Stochastic Lot-Sizing by Means of Histograms Robust Stochastc Lot-Szng by Means of Hstograms Abstract Tradtonal approaches n nventory control frst estmate the demand dstrbuton among a predefned famly of dstrbutons based on data fttng of hstorcal

More information

3: Central Limit Theorem, Systematic Errors

3: Central Limit Theorem, Systematic Errors 3: Central Lmt Theorem, Systematc Errors 1 Errors 1.1 Central Lmt Theorem Ths theorem s of prme mportance when measurng physcal quanttes because usually the mperfectons n the measurements are due to several

More information

Multiobjective De Novo Linear Programming *

Multiobjective De Novo Linear Programming * Acta Unv. Palack. Olomuc., Fac. rer. nat., Mathematca 50, 2 (2011) 29 36 Multobjectve De Novo Lnear Programmng * Petr FIALA Unversty of Economcs, W. Churchll Sq. 4, Prague 3, Czech Republc e-mal: pfala@vse.cz

More information

Horizontal Decomposition-based Stochastic Day-ahead Reliability Unit Commitment

Horizontal Decomposition-based Stochastic Day-ahead Reliability Unit Commitment 1 Horzontal Decomposton-based Stochastc Day-ahead Relablty Unt Commtment Yngzhong(Gary) Gu, Student Member, IEEE, Xng Wang, Senor Member, IEEE, Le Xe, Member, IEEE Abstract Ths paper presents a progressve

More information

3/3/2014. CDS M Phil Econometrics. Vijayamohanan Pillai N. Truncated standard normal distribution for a = 0.5, 0, and 0.5. CDS Mphil Econometrics

3/3/2014. CDS M Phil Econometrics. Vijayamohanan Pillai N. Truncated standard normal distribution for a = 0.5, 0, and 0.5. CDS Mphil Econometrics Lmted Dependent Varable Models: Tobt an Plla N 1 CDS Mphl Econometrcs Introducton Lmted Dependent Varable Models: Truncaton and Censorng Maddala, G. 1983. Lmted Dependent and Qualtatve Varables n Econometrcs.

More information

Dynamic Analysis of Knowledge Sharing of Agents with. Heterogeneous Knowledge

Dynamic Analysis of Knowledge Sharing of Agents with. Heterogeneous Knowledge Dynamc Analyss of Sharng of Agents wth Heterogeneous Kazuyo Sato Akra Namatame Dept. of Computer Scence Natonal Defense Academy Yokosuka 39-8686 JAPAN E-mal {g40045 nama} @nda.ac.jp Abstract In ths paper

More information

Optimising a general repair kit problem with a service constraint

Optimising a general repair kit problem with a service constraint Optmsng a general repar kt problem wth a servce constrant Marco Bjvank 1, Ger Koole Department of Mathematcs, VU Unversty Amsterdam, De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands Irs F.A. Vs Department

More information

The Vickrey-Target Strategy and the Core in Ascending Combinatorial Auctions

The Vickrey-Target Strategy and the Core in Ascending Combinatorial Auctions The Vckrey-Target Strategy and the Core n Ascendng Combnatoral Auctons Ryuj Sano ISER, Osaka Unversty Prelmnary Verson December 26, 2011 Abstract Ths paper consders a general class of combnatoral auctons

More information

A Framework for Analyzing the Economics of a Market for Grid Services

A Framework for Analyzing the Economics of a Market for Grid Services A Framework for Analyzng the Economcs of a Market for Grd Servces Robn Mason 1, Costas Courcoubets 2, and Natala Mlou 2 1 Unversty of Southampton, Hghfeld, Southampton, Unted Kngdom 2 Athens Unversty of

More information

IND E 250 Final Exam Solutions June 8, Section A. Multiple choice and simple computation. [5 points each] (Version A)

IND E 250 Final Exam Solutions June 8, Section A. Multiple choice and simple computation. [5 points each] (Version A) IND E 20 Fnal Exam Solutons June 8, 2006 Secton A. Multple choce and smple computaton. [ ponts each] (Verson A) (-) Four ndependent projects, each wth rsk free cash flows, have the followng B/C ratos:

More information

Benefit-Cost Analysis

Benefit-Cost Analysis Chapter 12 Beneft-Cost Analyss Utlty Possbltes and Potental Pareto Improvement Wthout explct nstructons about how to compare one person s benefts wth the losses of another, we can not expect beneft-cost

More information

Optimal policy for FDI incentives: An auction theory approach

Optimal policy for FDI incentives: An auction theory approach European Research Studes, Volume XII, Issue (3), 009 Optmal polcy for FDI ncentves: An aucton theory approach Abstract: Israel Lusk*, Mos Rosenbom** A multnatonal corporaton s (MNC) entry nto a host country

More information

Money, Banking, and Financial Markets (Econ 353) Midterm Examination I June 27, Name Univ. Id #

Money, Banking, and Financial Markets (Econ 353) Midterm Examination I June 27, Name Univ. Id # Money, Bankng, and Fnancal Markets (Econ 353) Mdterm Examnaton I June 27, 2005 Name Unv. Id # Note: Each multple-choce queston s worth 4 ponts. Problems 20, 21, and 22 carry 10, 8, and 10 ponts, respectvely.

More information

Participation and unbiased pricing in CDS settlement mechanisms

Participation and unbiased pricing in CDS settlement mechanisms Partcpaton and unbased prcng n CDS settlement mechansms Ahmad Pevand February 2017 Abstract The centralzed market for the settlement of credt default swaps (CDS), whch governs more than $10 trllon s worth

More information

GOODS AND FINANCIAL MARKETS: IS-LM MODEL SHORT RUN IN A CLOSED ECONOMIC SYSTEM

GOODS AND FINANCIAL MARKETS: IS-LM MODEL SHORT RUN IN A CLOSED ECONOMIC SYSTEM GOODS ND FINNCIL MRKETS: IS-LM MODEL SHORT RUN IN CLOSED ECONOMIC SSTEM THE GOOD MRKETS ND IS CURVE The Good markets assumpton: The producton s equal to the demand for goods Z; The demand s the sum of

More information

Analysis of Decentralized Decision Processes in Competitive Markets: Quantized Single and Double-Side Auctions

Analysis of Decentralized Decision Processes in Competitive Markets: Quantized Single and Double-Side Auctions Analyss of Decentralzed Decson Processes n Compettve Marets: Quantzed Sngle and Double-Sde Auctons Peng Ja and Peter E. Canes Abstract In ths paper two decentralzed decson processes for compettve marets

More information

A Virtual Deadline Scheduler for Window-Constrained Service Guarantees

A Virtual Deadline Scheduler for Window-Constrained Service Guarantees A Vrtual Deadlne Scheduler for Wndow-Constraned Servce Guarantees Yutng Zhang, Rchard West and Xn Q Computer Scence Department Boston Unversty Boston, MA 02215 {danazh,rchwest,xq}@cs.bu.edu Abstract Ths

More information

Appendix for Solving Asset Pricing Models when the Price-Dividend Function is Analytic

Appendix for Solving Asset Pricing Models when the Price-Dividend Function is Analytic Appendx for Solvng Asset Prcng Models when the Prce-Dvdend Functon s Analytc Ovdu L. Caln Yu Chen Thomas F. Cosmano and Alex A. Hmonas January 3, 5 Ths appendx provdes proofs of some results stated n our

More information