Robust Boosting and its Relation to Bagging


Saharon Rosset
IBM T.J. Watson Research Center
P.O. Box 218
Yorktown Heights, NY

ABSTRACT

Several authors have suggested viewing boosting as a gradient descent search for a good fit in function space. At each iteration, observations are re-weighted using the gradient of the underlying loss function. We present an approach of "weight decay" for observation weights which is equivalent to robustifying the underlying loss function. At the extreme end of decay this approach converges to Bagging, which can be viewed as boosting with a linear underlying loss function. We illustrate the practical usefulness of weight decay for improving prediction performance and present an equivalence between one form of weight decay and "Huberizing", a statistical method for making loss functions more robust.

Categories and Subject Descriptors: G.3 [Mathematics of Computing]: Probability and Statistics; I.5.2 [Pattern Recognition]: Design Methodology

General Terms: Algorithms

Keywords: Boosting, Bagging, Robust Fitting

1. INTRODUCTION

Boosting [9, 8] and Bagging [3] are two approaches to combining "weak" models in order to build prediction models that are significantly better. Much has been written about the empirical success of these approaches in creating prediction models in actual modeling tasks [4, 1]. The theoretical discussions of these algorithms [4, 11, 12, 5, 16] have viewed them from various perspectives. The general theoretical and practical consensus, however, is that the weak learners for boosting should be really weak, while the weak learners for bagging should actually be strong. In tree terminology, one should use small trees when boosting and big trees when bagging. In intuitive bias-variance terms, we can say that bagging is mainly a variance reduction (or stabilization) operation, while boosting, in the way it flexibly combines models, is also a bias reduction operation, i.e., it adds flexibility to the representation beyond that of a single learner.

In this paper we present a view of boosting and bagging which allows us to connect them in a natural way, and in fact to view bagging as a form of boosting. This view is interesting because it facilitates the creation of families of intermediate algorithms, which offer a range of degrees of bias reduction between boosting and bagging. Our experiments indicate that, while bagging is always significantly inferior to boosting in terms of predictive performance, in some cases intermediate approaches can outperform standard boosting. Our exposition concentrates on 2-class classification, this being the most common application for both boosting and bagging, but the results mostly carry through to other learning domains where boosting and bagging have been used, such as multi-class classification, regression [10] and density estimation [17].

2. BOOSTING, BAGGING AND A CONNECTION

A view has emerged in the last few years of boosting as a gradient-based search for a good model in a large implicit feature space [16, 10]. More specifically, the components of a 2-class classification boosting algorithm are:

- A data sample $\{x_i, y_i\}_{i=1}^n$, with $x_i \in R^p$ and $y_i \in \{-1, +1\}$.
- A loss function $C(y, f)$. The two most commonly used are the exponential loss of AdaBoost [9] and the logistic log-likelihood of LogitBoost [11].
  We will assume throughout that $C(y, f)$ is a monotone decreasing and convex function of the margin $m = yf$, and so we will sometimes write it as $C(m)$.
- A dictionary $H$ of weak learners, where each $h \in H$ is a classification model, $h: R^p \to \{-1, +1\}$. The dictionary most commonly used in classification boosting is classification trees, i.e., $H$ is the set of all trees with up to $k$ splits.

Given these components, a boosting algorithm incrementally builds a "good" linear combination of the dictionary functions:

$$F(x) = \sum_{h \in H} \beta_h h(x)$$

where "good" is defined as making the empirical loss $\sum_i C(y_i F(x_i))$ small. The actual incremental algorithm is an exact or approximate coordinate descent algorithm. At iteration $t$ we have the current fit $F_t$, and we look for the weak learner $h_t$ which maximizes the first-order decrease in the loss, i.e., $h_t$ maximizes

$$-\frac{\partial \sum_i C(y_i, F(x_i))}{\partial \beta_h}\bigg|_{F=F_t}$$

or, equivalently and more clearly, it maximizes

$$-\sum_i \frac{\partial C(y_i, F(x_i))}{\partial F(x_i)}\bigg|_{F=F_t} h(x_i)$$

which in the case of two-class classification is easily shown to be equivalent to minimizing

$$\sum_i w_i I\{y_i \ne h(x_i)\} \qquad (1)$$

where $w_i = \left| \frac{\partial C(y_i, F(x_i))}{\partial F(x_i)} \right|_{F=F_t}$. So we seek a weak learner which minimizes the weighted error rate, with the weights being the gradient of the loss. If we use the exponential loss

$$C(y, f) = \exp(-yf) \qquad (2)$$

then it can be shown (e.g. [13]) that (1) is the exact classification task which AdaBoost [9], the original and most famous boosting algorithm, solves to find the next weak learner.

In their original AdaBoost implementation [8], Freund and Schapire suggested solving (1) at each iteration on a new training data set of size $n$, created by sampling from the training dataset with replacement, with probabilities proportional to $w_i$. This facilitates the use of methods for solving non-weighted classification problems to approximately solve (1). We will term this approach the "sampling boosting" algorithm. [10] has argued that sampling actually improves the performance of boosting algorithms by adding much-needed randomness (his approach is to solve the weighted version of (1) on a sub-sample, but the basic motivation applies to sampling boosting as well). Here is a formal description of a sampling boosting algorithm, given the inputs described above:

Algorithm 1. Generic gradient-based sampling boosting algorithm
1. Set $\beta_0 = 0$ (the dimension of $\beta$ is $|H|$).
2. For $t = 1 : T$:
   (a) Let $F_i = \beta_{t-1}^T h(x_i)$, $i = 1, \ldots, n$ (the current fit, where $h(x)$ is the vector of weak learner values at $x$).
   (b) Set $w_i = \left| \frac{\partial C(y_i, F_i)}{\partial F_i} \right|$, $i = 1, \ldots, n$.
   (c) Draw a sample $\{x_i^*, y_i^*\}_{i=1}^n$ of size $n$ by re-sampling with replacement from $\{x_i, y_i\}_{i=1}^n$ with probabilities proportional to $w_i$.
   (d) Identify $j_t = \arg\min_j \sum_i I\{y_i^* \ne h_j(x_i^*)\}$.
   (e) Set $\beta_{t,j_t} = \beta_{t-1,j_t} + \epsilon$ and $\beta_{t,k} = \beta_{t-1,k}$ for $k \ne j_t$.

Comments:
1. Implementation details include the determination of $T$ (or another stopping criterion) and the approach for finding the minimum in step 2(d).
2. We have fixed the step size to $\epsilon$ at each iteration (step 2(e)). While AdaBoost uses a line search to determine the step size, it can be argued that a fixed (usually small) $\epsilon$ step is theoretically preferable (see [10, 18] for details).
3. An important issue in designing boosting algorithms is the selection of the loss function $C(\cdot, \cdot)$. Extreme loss functions, such as the exponential loss of AdaBoost (2), are not robust against outliers and misspecified data, as they assign overwhelming weight to the observations which have the smallest margins. [11] have thus suggested replacing the exponential loss with the logistic log-likelihood loss:

$$C(y, f) = \log(1 + \exp(-yf)) \qquad (3)$$

but in many practical situations, in particular when the two classes in the training data are separable in $span(H)$, this loss can also be non-robust. See Section 2.1.
4. The algorithm as described here is not affected by positive affine transformations of the loss function, i.e., running Algorithm 1 using a loss function $C(m)$ is exactly the same as using $C^*(m) = aC(m) + b$ as long as $a > 0$.
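To make the algorithm concrete, here is a minimal Python sketch of Algorithm 1 (the paper provides no code, so this is an illustration under stated assumptions): it uses scikit-learn's DecisionTreeClassifier as the weak learner, fitting a fresh tree on the resample stands in for the argmin search over the dictionary $H$ in step 2(d), and the names sampling_boost, boost_predict and grad_weight are hypothetical choices for this example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def sampling_boost(X, y, grad_weight, n_iter=200, eps=0.1,
                   max_leaf_nodes=10, seed=0):
    """Sketch of Algorithm 1. y must take values in {-1, +1};
    grad_weight(m) returns the absolute loss gradient |C'(m)|
    as a function of the margin m = y * F(x)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    F = np.zeros(n)                                   # step (a): current fit F_t(x_i)
    learners = []
    for _ in range(n_iter):
        w = grad_weight(y * F)                        # step (b): gradient weights
        idx = rng.choice(n, size=n, p=w / w.sum())    # step (c): weighted resample
        h = DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
        h.fit(X[idx], y[idx])                         # step (d): unweighted fit on the resample
        F += eps * h.predict(X)                       # step (e): fixed-eps coordinate step
        learners.append(h)
    return learners

def boost_predict(learners, X, eps=0.1):
    """Sign of the ensemble fit F(x)."""
    return np.sign(sum(eps * h.predict(X) for h in learners))

# Exponential-loss boosting, eq. (2): |C'(m)| = exp(-m)
# learners = sampling_boost(X_train, y_train, grad_weight=lambda m: np.exp(-m))
```

Passing a constant grad_weight gives equal weights at every iteration, which, by Proposition 1 below, is exactly bagging.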
Bagging for classification [3] is a different model-combining approach. At each iteration it searches for the member of the dictionary $H$ which best classifies a bootstrap sample of the original training data, and then averages the discovered models to get a final "bagged" model. So, in fact, we can say that:

Proposition 1. Bagging implements Algorithm 1, using a linear loss, $C(y, f) = -yf$ (or any positive affine transformation of it).

Proof. From the definition of the loss we get, for every current fit $F_t$ and every observation $i$:

$$\frac{\partial\, (-y_i F(x_i))}{\partial F(x_i)}\bigg|_{F=F_t} = -y_i$$

so $w_i = |-y_i| = 1$. No matter what our current model is, all the gradients are always equal, hence the weights will be equal if we apply a sampling boosting iteration using the linear loss. When all weights are equal, the boosting sampling procedure described above reduces to bootstrap sampling. Hence the bagging algorithm is a sampling boosting algorithm with a linear loss.

Thus, we can consider Bagging as a boosting algorithm utilizing a very robust (linear) loss. This loss is so robust that it requires no "message passing" between iterations through re-weighting, since the gradient of the loss does not depend on the current margins.
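Continuing the hypothetical sketch from above, the proposition says bagging drops out as the constant-gradient special case:

```python
# Linear loss C(m) = -m has |C'(m)| = 1 for every observation, so the
# weighted resample in step (c) reduces to a plain bootstrap sample:
bagged = sampling_boost(X_train, y_train,
                        grad_weight=lambda m: np.ones_like(m, dtype=float))
```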

2.1 Robustness, convexity and boosting

The gradient of a linear loss does not emphasize low-margin data points over high-margin ("well predicted") ones. In fact, it is the most robust convex loss possible, in the following sense:

Proposition 2. Any loss which is a differentiable, convex and decreasing function of the margin has the property:

$$m_1 < m_2 \Rightarrow |C'(m_1)| \ge |C'(m_2)|$$

and a linear loss is the only one which attains equality for all $m_1, m_2$.

Proof. Immediate from convexity and monotonicity.

[16] have used generalization error bounds to argue that a good loss for boosting would be even more robust than the linear loss, and consequently non-convex. In particular, they argue that both high-margin and low-margin observations should have low weight, leading to a sigmoid-shaped loss function. Non-convex loss functions present a significant computational challenge, which [16] have solved for small dictionary examples. Although the idea of such outlier-tolerant loss functions is appealing, we limit our discussion to convex loss functions, which facilitate the use of standard fitting methodology, in particular boosting.

Our view of bagging as boosting with a linear loss allows us to interpret the similarity and difference between the two algorithms by looking at the loss functions they are boosting. The linear loss of bagging implies that it does not emphasize the badly predicted observations, but rather treats all data equally. Thus it is more robust against outliers and more stable, but less adaptable to the data, than boosting with an exponential or logistic loss.

3. BOOSTING WITH WEIGHT DECAY

The view of bagging as a boosting algorithm opens the door to creating boosting-bagging hybrids, by robustifying the loss functions used for boosting. These hybrids may combine the advantages of boosting and bagging to give us new and useful algorithms. There are two ways to go about creating these intermediate algorithms:

- Define a series of loss functions starting from a boosting loss (the exponential or logistic) and converging to the linear loss of bagging.
- Implicitly define the intermediate loss functions by decaying the weights $w_i$ given by the boosting algorithm using a boosting loss. The loss implied will be the one whose gradient corresponds to the decayed weights.

The two approaches are obviously equivalent through a differentiation or integration operation. We will adopt the weight decay approach, but will discuss the loss function implications of the different decay schemes.

3.1 Weight decay functions

We would like to change the loss $C(\cdot, \cdot)$ to be more robust, by first decaying the (gradient) weights $w_i$, then considering the implicit effect of the decay on the loss. In general, we assume that we have a decay function $v(p, w)$ which depends on a decay parameter $p \in [0, 1]$ and the observation weight $w \ge 0$. We require:

- $v(1, w) = w$, i.e., no decay;
- $v(0, w) = 1$, i.e., no weighting, which implicitly assumes a linear loss when considered as a boosting algorithm, and thus corresponds to Bagging;
- Monotonicity: $w_1 < w_2 \Rightarrow v(p, w_1) \le v(p, w_2)$ for all $p$;
- Continuity in both $p$ and $w$.

Once we specify a decay parameter $p$, the problem we solve in each boosting iteration, instead of (1), is to find the dictionary function $h$ which minimizes

$$\sum_i v(p, w_i) I\{y_i \ne h(x_i)\} \qquad (4)$$

which we solve approximately as a non-weighted problem, using the sampling boosting approach of Algorithm 1.

3.2 Windsorised weights and Huberized loss

For clarity and brevity, we concentrate in this paper on one decay function, the bounding or "Windsorising" operator:

$$v(p, w) = \begin{cases} 1 & \text{if } w > p \\ w/p & \text{otherwise} \end{cases} \qquad (5)$$

Since $w$ is the absolute gradient of the loss, bounding $w$ means that we are "huberizing" the loss function, i.e., continuing it linearly beyond some point (this operation was suggested, in the context of squared error loss for regression, by [14]).
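A direct transcription of (5), noting that the two cases collapse to $\min(w, p)/p$ (our own sketch; the name windsorize is ours):

```python
import numpy as np

def windsorize(p, w):
    """Bounding ('Windsorising') decay of eq. (5): v(p, w) = min(w, p) / p.
    v(1, w) = w (no decay); as p -> 0, every positive weight decays to 1."""
    return np.minimum(w, p) / p
```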
The loss function corresponding to the decayed weights is thus:

$$C^{(p)}(m) = \begin{cases} \dfrac{C(m)}{p} & \text{if } m > m^*(p) \\[2mm] \dfrac{C(m^*(p))}{p} + \big(m^*(p) - m\big) & \text{otherwise} \end{cases} \qquad (6)$$

where $m^*(p)$ is such that $|C'(m^*(p))| = p$ (unique because the loss is convex and monotone decreasing).

[Figure 1: Huberized logistic loss, for different values of $p$ (as a function of the margin); the plotted curves correspond to $p = 1$, $p = 0.4$, $p = 0.1$ and one smaller value of $p$.]

As we have mentioned, for the purpose of the sampling boosting algorithm only the relative sizes of the weights are important, and thus the multiplication of the loss function by a constant of $1/p$ (which we have done to get $C^{(p)}$ from $C$ and achieve the desired properties of $v(p, w)$) does not affect the algorithm for fixed $p$. Figure 1 illustrates the effect of bounding on the logistic log-likelihood loss function, for several values of $p$ (for presentation, this plot uses the non-scaled version of $C^{(p)}$, i.e., (6) multiplied by $p$).
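As an illustration of (6) for the logistic loss (3) (our own sketch, mirroring the curves of Figure 1): since $|C'(m)| = 1/(1 + e^m)$ for the logistic loss, the huberizing point solving $|C'(m^*(p))| = p$ is $m^*(p) = \log((1-p)/p)$, valid for $0 < p < 1$.

```python
import numpy as np

def huberized_logistic(m, p):
    """Huberized logistic loss C^(p)(m) of eq. (6), scaled by 1/p; 0 < p < 1."""
    C = lambda m: np.logaddexp(0.0, -m)            # logistic loss log(1 + exp(-m))
    m_star = np.log((1.0 - p) / p)                 # huberizing point: |C'(m*)| = p
    return np.where(m <= m_star,
                    C(m_star) / p + (m_star - m),  # linear (bagging-like) region
                    C(m) / p)                      # original-loss region
```

The two branches agree at $m = m^*(p)$, so the loss and its first derivative are continuous, which is exactly the Huber construction.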

There are many other interesting decay functions, such as the power transformation:

$$v(p, w) = w^p$$

This transformation is attractive because it does not entail an arbitrary threshold determination, but rather decays all weights, with the bigger ones decayed more. However, it is less interpretable in terms of its effect on the underlying loss. For the exponential loss $C(m) = \exp(-m)$, the power transformation does not change the loss, since $[\exp(-m)]^p = \exp(-mp)$. So the effect of this transformation is simply to slow the learning rate (exactly equivalent to decreasing $\epsilon$ in Algorithm 1).

4. PROPERTIES OF HUBERIZED BOOSTING LOSS FUNCTIONS

Our suggested huberizing transformation (6) of the original loss function (in our case, the exponential loss (2) or the logistic loss (3)) gives a modified loss which is linear for small margins, and then, as the margin increases, starts behaving like the original loss. We can therefore interpret boosting with the huberized loss function as "bagging for a while", until the margins become big enough to reach the non-huberized region. If we boost long enough with this loss, most of the data margins will lie in the non-huberized region, except for extreme outliers, and the resulting weighting scheme will revert to the standard boosting weights, except for extreme outliers. We can thus consider boosting with the huberized loss to be more robust in the following sense: by huberizing the loss function, we are allowing the boosting algorithm to assign relatively small weights to outliers and difficult examples for a number of iterations, until most observations have become "well explained" by attaining large margins.

This robustness argument is related to that of [11] and others for boosting with the logistic loss (3) rather than the exponential (2), since the logistic loss is approximately linear for negative margins (its second derivative vanishes as the margin goes to $-\infty$). However, the logistic loss is very similar to the exponential loss for non-negative margins, as shown in [18]. Huberizing, on the other hand, allows for a flexible and specific definition of a linear region which can be adapted to the modeling task at hand.

It is interesting to note that two desirable properties of boosting loss functions are maintained for their huberized versions:

The boosting property. This property states that if the hypothesis space of weak learners allows "weak learning" (that is, a weighted error rate of the weak learners of at most $1/2 - \lambda$ at each iteration), then the training margins will increase, and the generalization error will go to 0 for large enough training samples (see [20] for more details). Duffy and Helmbold ([7], Theorems 2-4) give sufficient conditions for this property to hold, which apply to the exponential and logistic loss functions. These conditions include strict convexity, and so do not directly apply to the huberized versions. However, closer inspection of their conditions exposes that we only need to re-state their Theorem 2, which states that all the (non-normalized) margins are guaranteed to move beyond a fixed point $U$ within $O(n^2/\lambda^2)$ iterations, and never cross back. We re-prove this result for huberized loss functions, taking $U = m^*(p)$ to be the huberizing point. Once all the margins have moved beyond $m^*(p)$, the loss function becomes the logistic or exponential, which are strictly convex, and Duffy and Helmbold's results prove the boosting property.¹

¹ Note that the generalization error property is proven for line-search boosting algorithms, not the $\epsilon$-boosting approach we employ in our implementation below. [7]'s results therefore imply that our suggested loss functions would also enjoy this property if embedded into line-search boosting implementations.

Theorem 3. Assume the weak learnability property for Algorithm 1, i.e.,

$$\frac{\sum_i w_i I\{h_t(x_i) \ne y_i\}}{\sum_i w_i} \le \frac{1}{2} - \lambda, \quad \forall t \qquad (7)$$

Then for the huberized version (6) of either the exponential loss or the logistic loss, we get that after at most

$$T = \frac{4 n^2\, C^{(p)}(0)}{C^{(p)}(m^*(p))\, \lambda^2}$$

iterations we achieve a total loss no bigger than $C^{(p)}(m^*(p))$. That is:

$$\sum_i C^{(p)}(y_i, F_t(x_i)) \le C^{(p)}(m^*(p)), \quad \forall t \ge T$$

and, consequently, all margins are guaranteed to be no smaller than $m^*(p)$ from that iteration on.
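Before the proof outline, a quick arithmetic check (ours, not from the paper) that the bound $T$ is consistent with the per-iteration improvement derived below:

```latex
% Initial loss is n C^{(p)}(0); each iteration improves the loss by at least
% C^{(p)}(m^*(p)) \lambda^2 / (4n) while the total loss still exceeds
% C^{(p)}(m^*(p)). Exhausting the initial loss therefore takes at most
\[
  T \;=\; n\,C^{(p)}(0) \Big/ \frac{C^{(p)}(m^*(p))\,\lambda^2}{4n}
    \;=\; \frac{4 n^2\, C^{(p)}(0)}{C^{(p)}(m^*(p))\,\lambda^2}
\]
% iterations, matching the statement of Theorem 3.
```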
Proof outline. Combining the definition of $w_i$ with (7), we get at each iteration:

$$-\frac{\partial \sum_i C^{(p)}(y_i, F(x_i))}{\partial \beta_{h_t}}\bigg|_{F=F_t} > 2\lambda \sum_i w_i \qquad (8)$$

If our overall loss is bigger than $C^{(p)}(m^*(p))$, then we can show that:

$$\sum_i w_i > \frac{C^{(p)}(m^*(p))}{2}$$

(for the exponential loss this is a direct result of the property $\frac{\partial C^{(p)}(m)}{\partial m} = -C^{(p)}(m)$ in the non-huberized region; for the logistic loss we also use Proposition 1 of [18]).

Next, we bound the second derivative of the change in loss:

$$\frac{\partial^2 \sum_i C^{(p)}(y_i, F(x_i))}{\partial \beta_{h_t}^2}\bigg|_{F=F_t} \le \sum_i \frac{\partial^2 C^{(p)}(m_i)}{\partial m_i^2}\bigg|_{F=F_t} \qquad (9)$$

$$\le 2 n\, C^{(p)}(m^*(p)) \qquad (10)$$

because the loss is convex and its second derivative is provably bounded by $2 C^{(p)}(m^*(p))$. Taking the three facts (8), (9), (10) combined, we get that at each boosting iteration, if we take a step of size $\frac{\lambda}{2n}$, we gain an improvement in the loss of at least

$$\frac{C^{(p)}(m^*(p))\, \lambda^2}{4n}$$

as long as the total loss exceeds $C^{(p)}(m^*(p))$.

The actual step which a line-search boosting algorithm takes can be bigger than $\frac{\lambda}{2n}$, but the improvement in loss is guaranteed to be at least that attained by taking this fixed step size. Now, we observe that our initial loss is $n C^{(p)}(0)$, and thus within

$$T = \frac{4 n^2\, C^{(p)}(0)}{C^{(p)}(m^*(p))\, \lambda^2}$$

iterations we are guaranteed to have a non-positive loss if we still have a loss bigger than $C^{(p)}(m^*(p))$. This obviously gives a contradiction, and we conclude that the total loss after $T$ iterations gives the desired result:

$$\sum_i C^{(p)}(y_i, F_T(x_i)) \le C^{(p)}(m^*(p))$$

Since the overall loss decreases at every iteration in line-search boosting, it can never exceed $C^{(p)}(m^*(p))$ in subsequent iterations. Since the loss function is non-negative and decreasing, no margin can ever be smaller than $m^*(p)$ in subsequent iterations.

The margin maximizing property. [19] give a sufficient condition for regularized loss functions to be $l_p$-margin maximizing. This condition holds for the logistic and exponential loss functions, and also for their huberized versions, since it only depends on the loss function's behavior as the non-normalized margin converges to $\infty$. [18] explain how this property extends, approximately, to boosting. In essence, it implies that the robust boosting algorithm seeks to maximize the $l_1$ margin of the data examples, and under quite general conditions will succeed in doing so.

5. EXPERIMENTS

We now discuss briefly some experiments to examine the usefulness of weight decay and the situations in which it may be beneficial. We use three datasets: the Spam and Waveform datasets, available from the UCI repository [2]; and the Digits handwritten digit recognition dataset, discussed in [15]. These were chosen because they are reasonably large and represent very different problem domains. Since we have limited the discussion here to 2-class models, we selected only two classes from the multi-class datasets: waveforms 1 and 2 from Waveform, and the digits 2 and 3 from Digits (these were selected to make the problem as challenging as possible). The resulting 2-class datasets are reasonably large (the smallest contains about 2000 observations). In all three cases we used only 25% of the data for training and 75% for evaluation, as our main goal is not to excel on the learning task, but rather to make it difficult and expose the differences between the models which the different algorithms build.

Our experiments consisted of running Algorithm 1 using various loss functions, all obtained by decaying the observation weights given by the exponential loss function (2). We used Windsorising decay as in (5), and thus the decayed versions correspond to huberized versions of (2). In all our experiments, bagging performed significantly worse than all versions of boosting, which is consistent with observations made by various researchers that well-implemented boosting algorithms almost invariably dominate bagging (see for example Breiman's own experiments in [4]). It should be clear, however, that this fact does not contradict the view that bagging has some desirable properties, in particular a greater stabilizing effect than boosting.

We present in Figure 2 the results of running Algorithm 1 with $p = 1$ (i.e., using the non-decayed loss function (2)) and with a decayed value of $p$ corresponding to huberizing the loss, as in (6), at around $m^*(p) = 9$, on the three datasets. We use two settings for the "weak learner": 10-node and 100-node trees. The learning rate parameter $\epsilon$ is fixed at 0.1. The results in Figure 2 represent averages over 20 random train-test splits, with estimated 2-standard-deviation confidence bounds.
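In terms of the earlier hypothetical sketches, one decayed run of this kind could be set up as follows (the dataset variables, the value of p and the iteration count are placeholders, not values from the paper):

```python
# Exponential-loss weights, Windsorised with decay parameter p (eqs. (2), (5)),
# 10-node trees, learning rate eps = 0.1, matching the setup described above:
decayed = sampling_boost(X_train, y_train, n_iter=500, eps=0.1,
                         max_leaf_nodes=10,
                         grad_weight=lambda m: windsorize(p, np.exp(-m)))
test_error = np.mean(boost_predict(decayed, X_test, eps=0.1) != y_test)
```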
These plots expose a few interesting observations:

1. Weight decay leads to a slower learning rate. From an intuitive perspective this is to be expected, as a more robust loss corresponds to less aggressive learning, by putting less emphasis on the "hardest" cases.
2. Weight decay is more useful when the weak learners are not weak, but rather strong, like a 100-node tree. This is particularly evident for the Spam dataset, where the performance of the non-decayed exponential boosting deteriorates significantly when we move from 10-node to 100-node learners, while that of the decayed version actually improves significantly. This phenomenon is also as we would expect, given the more robust huberized loss implied by the decay.
3. There is no consistent winner between the non-decayed and decayed versions. For the Spam and Waveform datasets it seems that if we chose the best settings, we would choose the non-decayed loss with small trees. For the Digits data, the decayed loss seems to produce consistently better prediction models.

Note that our examples did not have big robustness issues, in the sense of extreme or highly prevalent outliers in the predictors or the response. Rather, we examined the bias-variance tradeoff in employing more robust loss functions on uncorrupted real-life datasets, and the difficult cases they often contain. It is likely that in extreme situations with many and/or big outliers, the advantage of using robust loss functions would be more pronounced.

6. DISCUSSION

The gradient descent view of boosting allows us to design boosting algorithms for a variety of problems and to choose the loss functions which we deem most appropriate. [18] show that gradient boosting approximately follows a path of $l_1$-regularized solutions to the chosen loss function. Thus the selection of an appropriate loss is a critical issue in building useful algorithms. In this paper we have shown that the gradient boosting paradigm covers bagging as well, and used this as a rationale for considering new families of loss functions (hybrids of standard boosting loss functions and the linear loss of bagging) as candidates for gradient-based boosting. The results seem promising.

There are some natural extensions to this concept which may be even more promising, in particular the idea that the loss function does not have to remain fixed throughout the boosting iterations. Thus, we can design "dynamic" loss functions which change as the boosting iterations proceed, either as a function of the performance of the model or independently of it.
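As a toy illustration of this dynamic-loss idea (purely a speculation of ours, not an algorithm from the paper), the decay parameter itself could follow a schedule over the iterations:

```python
def p_schedule(t, n_iter, p_start=0.01):
    """Hypothetical schedule: start bagging-like (heavy decay, small p)
    and anneal linearly toward the undecayed boosting loss (p = 1)."""
    return p_start + (1.0 - p_start) * t / n_iter
```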

[Figure 2: Results of running sampling boosting with $p = 1$ (solid) and the decayed $p$ (dashed) on the three datasets. Panels: Spam, Waveform and Digits, each with 10-node and 100-node trees as the weak learners. The x-axis is the number of iterations, and the y-axis is the mean test-set error over 20 random training-test assignments. The dotted lines are 2-standard-deviation confidence bounds.]

It seems that there are a lot of interesting theoretical and practical questions that should come into consideration when we design such an algorithm, for example: Should the loss function become more or less robust as the boosting iterations proceed? Should the loss function become more or less robust if there are "problematic" data points?

In this context, it is interesting to note some previous work by [6] on bounding the boosting weights to achieve more robust performance. The main difference from our approach is that they operate on the re-normalized AdaBoost weights. Their approach thus lacks the gradient descent interpretation in terms of a new loss, since AdaBoost's re-normalization of the weights is equivalent to re-scaling the underlying loss. Thus, [6]'s approach amounts to huberizing the exponential loss at a different point in each iteration. Their algorithm also lacks a guaranteed boosting property like the one we proved in Section 4. However, it would be interesting to compare the empirical merit of their approach to ours, and perhaps draw conclusions with regard to using adaptive robust loss functions.

7. ACKNOWLEDGMENTS

Jerry Friedman, Trevor Hastie and Ji Zhu contributed to this paper through useful discussions and advice.

8. REFERENCES

[1] E. Bauer and R. Kohavi. An empirical comparison of voting classification algorithms: bagging, boosting and variants. Machine Learning, 36(1/2), 1999.
[2] C. Blake and C. Merz. UCI repository of machine learning databases. [http://www.ics.uci.edu/~mlearn/MLRepository.html]. Irvine, CA: University of California, Department of Information and Computer Science.
[3] L. Breiman. Bagging predictors. Machine Learning, 24(2), 1996.
[4] L. Breiman. Arcing classifiers. Annals of Statistics, 26(3), 1998.
[5] P. Buhlmann and B. Yu. Analyzing bagging. Annals of Statistics, 30(4), 2002.
[6] C. Domingo and O. Watanabe. MadaBoost: a modification of AdaBoost. In 13th Annual Conference on Computational Learning Theory, 2000.
[7] N. Duffy and D. Helmbold. Potential boosters? In Advances in Neural Information Processing Systems, 1999.
[8] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In International Conference on Machine Learning, 1996.
[9] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In European Conference on Computational Learning Theory, pages 23-37, 1995.
[10] J. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5), 2001.
[11] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28, 2000.
[12] J. H. Friedman and P. Hall. On bagging and nonlinear estimation. Preprint.
[13] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, New York, 2001.
[14] P. Huber. Robust estimation of a location parameter. Annals of Mathematical Statistics, 35(1), 1964.
[15] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard, and L. Jackel. Handwritten digit recognition with a back-propagation network. In Advances in Neural Information Processing Systems 2, 1990.
[16] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In Neural Information Processing Systems, volume 12, 1999.
[17] S. Rosset and E. Segal. Boosting density estimation. In Advances in Neural Information Processing Systems 15, 2002.
[18] S. Rosset, J. Zhu, and T. Hastie. Boosting as a regularized path to a maximum margin classifier. Journal of Machine Learning Research, 5, 2004.
[19] S. Rosset, J. Zhu, and T. Hastie. Margin maximizing loss functions. In Advances in Neural Information Processing Systems 16, 2003.
[20] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics, 26, 1998.
