Accuracy guarantees for $\ell_1$-recovery


Anatoli Juditsky
LJK, Université J. Fourier, B.P. 53, 38041 Grenoble Cedex 9, France
Anatoli.Juditsky@imag.fr

Arkadi Nemirovski
Georgia Institute of Technology, Atlanta, Georgia 30332, USA
nemirovs@isye.gatech.edu

November 2010

Abstract

We discuss two new methods of recovery of sparse signals from noisy observations based on $\ell_1$-minimization. They are closely related to well-known techniques such as Lasso and Dantzig Selector. However, these estimators come with efficiently verifiable guarantees of performance. By optimizing these bounds with respect to the method parameters we are able to construct estimators which possess better statistical properties than the commonly used ones. We also show how these techniques allow us to provide efficiently computable accuracy bounds for Lasso and Dantzig Selector. We link our performance estimates to well-known results of Compressive Sensing and justify our proposed approach with an oracle inequality which links the properties of the recovery algorithms and the best estimation performance attainable when the signal support is known. We also show how the estimates can be computed using the Non-Euclidean Matching Pursuit algorithm.

Key words: sparse recovery, linear estimation, oracle inequalities, estimation by convex optimization

AMS Subject Classification: 62G08, 90C25

1 Introduction

Recently, several methods of estimation and selection which rely on $\ell_1$-minimization have received much attention in the statistical literature. For instance, the Lasso estimator, which is the $\ell_1$-penalized least-squares method, is probably the most studied (a theoretical analysis of the Lasso estimator is provided in, e.g., [3, 9, 7, 8]; see also the references cited therein). Another statistical estimator, closely related to the Lasso, is the Dantzig Selector [7]. To be more precise, let us consider the following estimation problem. Assume that an observation

$y = Ax + \sigma\xi \in \mathbb{R}^m$   (1)

is available, where $x \in \mathbb{R}^n$ is an unknown signal and $A \in \mathbb{R}^{m\times n}$ is a known sensing matrix. We suppose that $\sigma\xi$ is a Gaussian disturbance with $\xi \sim N(0, I_m)$ (i.e., $\xi = (\xi_1, \ldots, \xi_m)^T$, where the $\xi_i$ are independent normal r.v.'s with zero mean and unit variance), and $\sigma > 0$ is a known deterministic noise level. Our focus is on the recovery of the unknown signal $x$.

* Research of the second author was supported by the Office of Naval Research grant # N000140811104.
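For concreteness, a minimal numpy sketch of drawing an observation from the model (1); the sizes $m, n, s$ and the noise level are illustrative placeholders, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s, sigma = 64, 256, 4, 1e-2                 # hypothetical problem sizes

A = rng.standard_normal((m, n)) / np.sqrt(m)      # a typical random sensing matrix
x = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x[support] = rng.standard_normal(s)               # an s-sparse signal
y = A @ x + sigma * rng.standard_normal(m)        # observation y = Ax + sigma*xi
```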

The Dantzig Selector estimator $\widehat{x}_{DS}$ of the signal $x$ is defined as follows [7]:

$\widehat{x}_{DS}(y) \in \mathop{\rm Argmin}_{v\in\mathbb{R}^n}\ \{\|v\|_1:\ \|A^T(Av - y)\|_\infty \le \rho\}$,

where $\rho = O(\sigma\sqrt{\ln n})$ is the algorithm's parameter. Since $\widehat{x}_{DS}$ is obtained as a solution of a linear program, it is very attractive due to its low computational cost. Accuracy bounds for this estimator are readily available. For instance, a well-known result about this estimator (cf. [7, Theorem 1.1]) is that if $\rho = O(\sigma\sqrt{\ln(n/\epsilon)})$, then

$\|\widehat{x}_{DS}(y) - x\|_2 \le K\sigma\sqrt{s\log(n\epsilon^{-1})}$

with probability $1 - \epsilon$, provided that (a) the signal $x$ is $s$-sparse, i.e., has at most $s$ non-vanishing components, and (b) the sensing matrix $A$ with unit columns possesses the Restricted Isometry Property RIP$(\delta, k)$ (see footnote 1) with parameters $0 < \delta < 1/(1+\sqrt{2})$ and $k \ge 3s$. Further, in this case one has $K = C(1-\delta)^{-1}$, where $C$ is a moderate absolute constant. This result is quite impressive, in part due to the fact (see, e.g., [5, 6]) that there exist $m \times n$ random matrices, with $m \ll n$, which possess the RIP with probability close to 1, with $\delta$ close to zero and the value of $k$ as large as $O(m/\ln(n/m))$. Similar performance guarantees are known for the Lasso recovery

$\widehat{x}_{lasso}(y) \in \mathop{\rm Argmin}_{v\in\mathbb{R}^n}\ \{\|v\|_1 + \bar\kappa\|Av - y\|_2^2\}$

with properly chosen penalty parameter $\bar\kappa$.

A drawback of the Dantzig Selector and Lasso recovering routines is that these algorithms are really tailored to comply with the Restricted Isometry Property, which we do not know how to verify efficiently. New accuracy bounds for Lasso and Dantzig Selector have been proposed recently which rely upon less restrictive assumptions on the sensing matrix, such as the Restricted Eigenvalue […] or Compatibility [3] conditions (a complete overview of those and several other assumptions, with a description of how they relate to each other, is provided in [9]). However, these assumptions share with the RIP the same important drawback: given a problem instance, they cannot be verified efficiently. The latter implies that there is currently no way to provide any guarantees (e.g., confidence sets) for the performance of the proposed procedures. A notable exception from this rule is the Mutual Incoherence assumption (see, e.g., […]), which can be used to compute accuracy bounds for recovery algorithms: a matrix $A$ with columns of unit $\ell_2$-norm and mutual incoherence $\mu(A)$ (see footnote 2) possesses RIP$(\delta, k)$ with $\delta = (k-1)\mu(A)$. Unfortunately, the latter relation implies that $\mu(A)$ should be very small to certify the possibility of accurate $\ell_1$-recovery of non-trivial sparse signals, so that performance guarantees based on mutual incoherence are very conservative. This theoretical observation is supported by numerical experiments: the practical guarantees which may be obtained using the mutual incoherence are generally quite poor even for problems with nice theoretical properties (cf. [5]).

Recently the authors have proposed a new approach to efficient computation of upper and lower bounds on the level of goodness of a sensing matrix $A$, i.e., the maximal $s$ such that the $\ell_1$-recovery of all signals with no more than $s$ non-vanishing components is accurate in the case where the measurement noise vanishes (see […]). In the present paper we aim to use the related verifiable sufficient conditions of goodness of a sensing matrix $A$ to provide efficiently computable bounds for the error of $\ell_1$-recovery procedures in the case when the observations are affected by random noise. The main body of the paper is organized as follows:

¹ Recall that RIP$(\delta, k)$, also called the uniform uncertainty principle, means that for any $v \in \mathbb{R}^n$ with at most $k$ nonzero entries,

$(1-\delta)\|v\|_2^2 \le \|Av\|_2^2 \le (1+\delta)\|v\|_2^2.$

This property essentially requires that every set of columns of $A$ with cardinality less than $k$ approximately behaves like an orthonormal system.
² The mutual incoherence $\mu(A)$ of a sensing matrix $A = [A_1, \ldots, A_n]$ is computed according to

$\mu(A) = \max_{i \neq j} \dfrac{|A_i^T A_j|}{A_i^T A_i}.$

Obviously, the mutual incoherence can be easily computed even for large matrices.
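Since $\mu(A)$ involves only the Gram matrix of the columns, it is indeed cheap to evaluate; a minimal numpy sketch (the function name is ours):

```python
import numpy as np

def mutual_incoherence(A: np.ndarray) -> float:
    """Mutual incoherence mu(A) = max_{i != j} |A_i^T A_j| / (A_i^T A_i)."""
    G = A.T @ A                      # Gram matrix of the columns
    d = np.diag(G)                   # A_i^T A_i
    R = np.abs(G) / d[:, None]       # |A_i^T A_j| / (A_i^T A_i), row-wise scaling
    np.fill_diagonal(R, 0.0)         # exclude i == j
    return float(R.max())
```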

1. We start with Section 2.1, where we formulate the sparse recovery problem and introduce our core assumption, a verifiable condition $\mathbf{H}_{s,\infty}(\kappa)$ linking the matrix $A \in \mathbb{R}^{m\times n}$ and a contrast matrix $H \in \mathbb{R}^{m\times n}$. In Sections 2.2, 2.3 we present two recovery routines with contrast matrices:

regular recovery: $\widehat{x}_{\rm reg}(y) \in \mathop{\rm Argmin}_{v\in\mathbb{R}^n}\{\|v\|_1:\ \|H^T(Av - y)\|_\infty \le \rho\}$,

penalized recovery: $\widehat{x}_{\rm pen}(y) \in \mathop{\rm Argmin}_{v\in\mathbb{R}^n}\{\|v\|_1 + 2\theta s\|H^T(Av - y)\|_\infty\}$

(here $s$ is our guess for the number of nonzero entries in the true signal, and $\theta > 0$ is the penalty parameter), along with their performance guarantees under the condition $\mathbf{H}_{s,\infty}(\kappa)$ with $\kappa < 1/2$, that is, explicit upper bounds on the confidence levels of the recovery errors $\|\widehat{x} - x\|_p$. The novelty here is that our bounds are of the form

$\mathrm{Prob}\{\|\widehat{x} - x\|_p \le O(s^{1/p}\sigma\sqrt{\ln(n/\epsilon)})\} \ge 1 - \epsilon$ for every $s$-sparse signal $x$ and all $1 \le p \le \infty$   (2)

(with hidden factors in $O(\cdot)$ independent of $\epsilon$, $\sigma$), and are valid in the entire range $1 \le p \le \infty$ of values of $p$. Note that similar error bounds for Dantzig Selector and Lasso are only known for $1 \le p \le 2$, whatever be the assumptions on the essentially nonsquare matrix $A$.

2. Our interest in the condition $\mathbf{H}_{s,\infty}(\kappa)$ stems from the fact that this condition, in contrast to the majority of the known sufficient conditions for the validity of $\ell_1$-based sparse recovery (e.g., Restricted Isometry/Eigenvalue/Compatibility), is efficiently verifiable. Moreover, it turns out that one can efficiently optimize the error bounds of the regular/penalized recovery routines associated with this verifiable condition over the contrast matrix $H$. The related issues are considered in Section 3. In Section 4 we provide some additional justification of the condition, in particular by linking it with the Mutual Incoherence and Restricted Isometry properties. This, in particular, implies that the condition $\mathbf{H}_{s,\infty}(\kappa)$ with, say, $\kappa = 1/3$ associated with randomly selected $m \times n$ matrices $A$ is feasible, with probability approaching 1 as $m$, $n$ grow, for $s$ as large as $O(\sqrt{m/\ln n})$. We also establish limits of performance of the condition; specifically, we show that unless $A$ is nearly square, $\mathbf{H}_{s,\infty}(\kappa)$ with $\kappa < 1/2$ can be feasible only when $s \le O(1)\sqrt{m}$, meaning that the tractability of the condition has a heavy price: when designing and validating $\ell_1$-minimization based sparse recovery routines, this condition can be useful only in a severely restricted range of the sparsity parameter $s$.

3. In Section 5 we show that the condition $\mathbf{H}_{s,\infty}(\kappa)$ is the strongest (and seemingly the only verifiable one) in a natural family of conditions $\mathbf{H}_{s,q}(\kappa)$ linking a sensing and a contrast matrix; here $s$ is the number of nonzeros in the sparse signal to be recovered and $q \in [1, \infty]$. We demonstrate that when a contrast matrix $H$ satisfies $\mathbf{H}_{s,q}(\kappa)$ with $\kappa < 1/2$, the associated regular and penalized $\ell_1$ recoveries admit error bounds similar to (2), but now in the restricted range $1 \le p \le q$ of values of $p$. We demonstrate also that feasibility of $\mathbf{H}_{s,q}(\kappa)$ with $\kappa < 1/2$ implies instructive (although slightly worse than those in (2)) error bounds for the Dantzig Selector and Lasso recovering routines.

4. In Section 6, we present numerical results on the comparison of regular/penalized $\ell_1$ recovery with the Dantzig Selector and Lasso algorithms. The conclusion suggested by these preliminary numerical results is that when the former procedures are applicable (i.e., when the techniques of Section 3 allow us to build a not too large contrast matrix satisfying the condition $\mathbf{H}_{s,\infty}(\kappa)$ with, say, $\kappa = 1/3$), our procedures outperform the Dantzig Selector significantly and work exactly as well as the Lasso algorithm with the ideal (unrealistic in actual applications) choice of the regularization parameter.³

³ With the theoretically optimal, rather than ideal, choice of the regularization parameter in Lasso, this algorithm is essentially worse than our algorithms utilizing the contrast matrix.

5. In the concluding Section 7 we present a Non-Euclidean Matching Pursuit algorithm (similar to the one presented in [5]) with the same performance characteristics as those of the regular/penalized $\ell_1$ recoveries; this algorithm, however, does not require optimization and can be considered a computationally cheap alternative to $\ell_1$ recoveries, especially when one needs to process a series of recovery problems with a common sensing matrix.

All proofs are placed in the Appendix.

2 Accuracy bounds for $\ell_1$-Recovery Routines

2.1 Problem statement

Notation. For a vector $x \in \mathbb{R}^n$ and $s \le n$ we denote by $x^s$ the vector obtained from $x$ by setting to 0 all but the $s$ largest in magnitude entries of $x$. Ties, if any, could be resolved arbitrarily; for the sake of definiteness, assume that among entries of equal magnitudes, those with smaller indexes have priority (e.g., with $x = [2; 1; 2; 3]$ one has $x^2 = [2; 0; 0; 3]$). $\|x\|_{s,p}$ stands for the usual $\ell_p$-norm of $x^s$ (so that $\|x\|_{s,\infty} = \|x\|_\infty$). We say that a vector $z$ is $s$-sparse if it has at most $s$ nonzero entries. Finally, for a set $I \subset \{1, \ldots, n\}$ we denote by $J$ its complement $\{1, \ldots, n\}\setminus I$; given $x \in \mathbb{R}^n$, we denote by $x_I$ the vector obtained from $x$ by zeroing the entries with indices outside of $I$, so that $x = x_I + x_J$. Given a norm $\nu(\cdot)$ on $\mathbb{R}^m$ and a matrix $H = [h_1, \ldots, h_N] \in \mathbb{R}^{m\times N}$, we set $\nu(H) = \max_{1\le i\le N} \nu(h_i)$.

The problem. We consider an observation $y \in \mathbb{R}^m$,

$y = Ax + u + \sigma\xi,$   (3)

where $x \in \mathbb{R}^n$ is an unknown signal and $A \in \mathbb{R}^{m\times n}$ is the sensing matrix. We suppose that $\sigma\xi$ is a Gaussian disturbance, where $\xi \sim N(0, I_m)$ (i.e., $\xi = (\xi_1, \ldots, \xi_m)^T$ with independent normal random variables $\xi_i$ with zero mean and unit variance), $\sigma > 0$ being known, and $u$ is a nuisance parameter known to belong to a given uncertainty set $U \subset \mathbb{R}^m$, which we suppose to be convex, compact and symmetric w.r.t. the origin. Our goal is to recover $x$ from $y$, provided that $x$ is nearly $s$-sparse. Specifically, we consider the sets

$X(s, \upsilon) = \{x \in \mathbb{R}^n : \|x - x^s\|_1 \le \upsilon\}$

of signals which admit an $s$-sparse approximation of $\ell_1$-accuracy $\upsilon$. Given $p$, $1 \le p \le \infty$, and a confidence level $1 - \epsilon$, $\epsilon \in (0,1)$, we quantify a recovery routine, that is, a Borel function $\mathbb{R}^m \ni y \mapsto \widehat{x}(y) \in \mathbb{R}^n$, by its worst-case, over $x \in X(s,\upsilon)$, confidence interval, taken w.r.t. the $\ell_p$-norm of the error. Specifically, we define the risks of a recovery routine as

$\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon, \sigma, s, \upsilon) = \inf\{\delta : \mathrm{Prob}\{\xi : \exists x \in X(s,\upsilon),\, u \in U : \|\widehat{x}(Ax + \sigma\xi + u) - x\|_p > \delta\} \le \epsilon\}.$

Equivalently: $\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon, \sigma, s, \upsilon) \le \delta$ if and only if there exists a set $\Xi$ of good realizations of $\xi$ with $\mathrm{Prob}\{\xi \in \Xi\} \ge 1 - \epsilon$ such that whenever $\xi \in \Xi$, one has $\|\widehat{x}(Ax + \sigma\xi + u) - x\|_p \le \delta$ for all $x \in X(s,\upsilon)$ and all $u \in U$.

Norm $\nu(\cdot)$. Given $\epsilon$ and $\sigma > 0$, let us denote

$\nu(v) = \nu_{\epsilon,\sigma,U}(v) = \sup_{u\in U} u^T v + \sigma\sqrt{2\ln(n/\epsilon)}\,\|v\|_2.$   (4)

Since $U$ is convex, closed and symmetric with respect to the origin, $\nu(\cdot)$ is a norm. Let $\nu_*$ be the norm on $\mathbb{R}^m$ conjugate to $\nu$: $\nu_*(u) = \max_v\{v^T u : \nu(v) \le 1\}$.
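A minimal sketch of evaluating the norm (4), assuming for illustration a box uncertainty set $U = \{u : |u_i| \le r_i\}$ (any convex, compact, origin-symmetric $U$ with a computable support function would do); the $\sqrt{2\ln(n/\epsilon)}$ factor follows our reconstruction of (4):

```python
import numpy as np

def nu_norm(v, r, sigma, n, eps):
    """nu_{eps,sigma,U}(v) = sup_{u in U} u^T v + sigma*sqrt(2*ln(n/eps))*||v||_2
    for the illustrative box U = {u : |u_i| <= r_i}."""
    support_term = np.sum(r * np.abs(v))   # sup over the box: u_i = r_i * sign(v_i)
    return support_term + sigma * np.sqrt(2.0 * np.log(n / eps)) * np.linalg.norm(v)
```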

Conditions $\mathbf{H}(\gamma)$ and $\mathbf{H}_{s,\infty}(\kappa)$. Let $\gamma = (\gamma_1, \ldots, \gamma_n) \in \mathbb{R}^n_+$. Given $A \in \mathbb{R}^{m\times n}$, consider the following condition on a matrix $H = [h_1, \ldots, h_n] \in \mathbb{R}^{m\times n}$:

$\mathbf{H}(\gamma)$: for all $x \in \mathbb{R}^n$ and $1 \le i \le n$ one has $|x_i| \le |h_i^T Ax| + \gamma_i \|x\|_1$.   (5)

Now let $s$ be a positive integer and $\kappa > 0$. Given $A \in \mathbb{R}^{m\times n}$, we say that a matrix $H = [h_1, \ldots, h_n] \in \mathbb{R}^{m\times n}$ satisfies the condition $\mathbf{H}_{s,\infty}(\kappa)$ if

$\forall x \in \mathbb{R}^n : \|x^s\|_\infty \le \|H^T Ax\|_\infty + s^{-1}\kappa\|x\|_1.$   (6)

The conditions we have introduced are closely related to each other:

Lemma 1. If $H$ satisfies $\mathbf{H}(\gamma)$, then $H$ satisfies $\mathbf{H}_{s,\infty}(s\|\gamma\|_\infty)$, and nearly vice versa: given $H \in \mathbb{R}^{m\times n}$ satisfying $\mathbf{H}_{s,\infty}(\kappa)$, one can build efficiently a matrix $H' \in \mathbb{R}^{m\times n}$ satisfying $\mathbf{H}(\gamma)$ with $\gamma = (\kappa/s)[1; \ldots; 1]$ (i.e., $\kappa = s\|\gamma\|_\infty$) and such that the columns of $H'$ are convex combinations of the columns of $H$ and $-H$, so that $\nu(H') \le \nu(H)$ for every norm $\nu(\cdot)$ on $\mathbb{R}^m$.

2.2 Regular $\ell_1$ Recovery

In this section we discuss the properties of the regular $\ell_1$-recovery $\widehat{x}_{\rm reg}$ given by

$\widehat{x}_{\rm reg} = \widehat{x}_{\rm reg}(y) \in \mathop{\rm Argmin}_{v\in\mathbb{R}^n}\ \{\|v\|_1 : |h_i^T(Av - y)| \le \rho_i,\ i = 1, \ldots, n\},$   (7)

where $y$ is as in (3), $h_i$, $i = 1, \ldots, n$, are some vectors in $\mathbb{R}^m$ and $\rho_i > 0$, $i = 1, \ldots, n$. We refer to the matrix $H = [h_1, \ldots, h_n]$ as the contrast matrix underlying the recovering procedure. The starting point of our developments is the following

Proposition 1. Given an $m\times n$ sensing matrix $A$, noise intensity $\sigma$, uncertainty set $U$ and a tolerance $\epsilon \in (0,1)$, let the matrix $H = [h_1, \ldots, h_n]$ from (7) satisfy the condition $\mathbf{H}(\gamma)$ for some $\gamma \in \mathbb{R}^n_+$, and let the $\rho_i$ in (7) satisfy the relations

$\rho_i \ge \nu_i := \nu(h_i), \quad i = 1, \ldots, n,$   (8)

where $\nu(\cdot)$ is given by (4). Then there exists a set $\Xi \subset \mathbb{R}^m$, $\mathrm{Prob}\{\xi \in \Xi\} \ge 1 - \epsilon$, of good realizations of $\xi$ such that

(i) Whenever $\xi \in \Xi$, for every $x \in \mathbb{R}^n$, every $u \in U$ and every subset $I \subset \{1, \ldots, n\}$ such that

$\gamma_I := \textstyle\sum_{i\in I} \gamma_i < 1,$   (9)

the regular $\ell_1$-recovery $\widehat{x}_{\rm reg}$ given by (7) satisfies:

(a) $\|\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x\|_1 \le \dfrac{2\|x_J\|_1 + \rho_I + \nu_I}{1 - \gamma_I}$;

(b) $|[\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x]_i| \le \rho_i + \nu_i + \gamma_i\|\widehat{x}_{\rm reg}(y) - x\|_1 \le \rho_i + \nu_i + \gamma_i\dfrac{2\|x_J\|_1 + \rho_I + \nu_I}{1 - \gamma_I},\ i = 1, \ldots, n,$   (10)

where $\rho_I = \sum_{i\in I}\rho_i$ and $\nu_I = \sum_{i\in I}\nu_i$.

(ii) In particular, when setting (see footnote 2')

$\|\rho\|_{s,1} = \|[\rho_1; \ldots; \rho_n]\|_{s,1},\ \ \|\nu\|_{s,1} = \|[\nu(h_1); \ldots; \nu(h_n)]\|_{s,1},\ \ \|\gamma\|_{s,1} = \|[\gamma_1; \ldots; \gamma_n]\|_{s,1},$
$\bar\rho = \|\rho\|_\infty = \max_i \rho_i,\ \ \nu(H) = \|\nu\|_\infty = \max_i \nu(h_i),\ \ \bar\gamma = \|\gamma\|_\infty = \max_i \gamma_i,$   (11)

²' The reason for this cumbersome, at first glance, notation will become clear later, in Section 5.

and assuming $\|\gamma\|_{s,1} < 1$, for every $x \in \mathbb{R}^n$, $\xi \in \Xi$ and $u \in U$ it holds that

$\|\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x\|_1 \le \dfrac{2\|x - x^s\|_1 + \|\rho\|_{s,1} + \|\nu\|_{s,1}}{1 - \|\gamma\|_{s,1}} \le \dfrac{2\|x - x^s\|_1}{1 - \|\gamma\|_{s,1}} + s\,\dfrac{\bar\rho + \nu(H)}{1 - \|\gamma\|_{s,1}};$

$\|\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x\|_\infty \le \dfrac{2\bar\gamma\|x - x^s\|_1}{1 - \|\gamma\|_{s,1}} + \Big[1 + \dfrac{s\bar\gamma}{1 - \|\gamma\|_{s,1}}\Big][\bar\rho + \nu(H)].$   (12)

Finally, assuming $s\bar\gamma < 1/2$, for every $\xi \in \Xi$, $x \in \mathbb{R}^n$ and $u \in U$ one has

$\|\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x\|_1 \le \dfrac{2\|x - x^s\|_1 + 2s[\bar\rho + \nu(H)]}{1 - 2s\bar\gamma};\quad \|\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x\|_\infty \le \dfrac{2s^{-1}\|x - x^s\|_1 + 2[\bar\rho + \nu(H)]}{1 - 2s\bar\gamma}.$

Corollary 1. Under the premise of Proposition 1, assume that $\|\gamma\|_{s,1} < 1$. Then for all $1 \le p \le \infty$ and $\upsilon \ge 0$:

$\mathrm{Risk}_p(\widehat{x}_{\rm reg}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le \Big[\dfrac{2\upsilon + \|\rho\|_{s,1} + \|\nu\|_{s,1}}{1 - \|\gamma\|_{s,1}}\Big]^{1/p}\Big[\dfrac{2\bar\gamma\upsilon + [1 - \|\gamma\|_{s,1}][\bar\rho + \nu(H)] + \bar\gamma[\|\nu\|_{s,1} + \|\rho\|_{s,1}]}{1 - \|\gamma\|_{s,1}}\Big]^{1-1/p}$   (13)

(for the notation, see (11)). Further, if $s\bar\gamma < 1/2$, we have also

$\mathrm{Risk}_p(\widehat{x}_{\rm reg}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le \dfrac{(2s)^{1/p}}{1 - 2s\bar\gamma}\big(2s^{-1}\upsilon + \bar\rho + \nu(H)\big).$   (14)

The next statement is similar to the case $\kappa := s\bar\gamma < 1/2$ in Proposition 1 and Corollary 1; the difference is that now we assume that $H$ satisfies $\mathbf{H}_{s,\infty}(\kappa)$, which, by Lemma 1, is a weaker requirement on $H$ than satisfying $\mathbf{H}(\gamma)$ with $s\|\gamma\|_\infty = \kappa$.

Proposition 2. Given an $m\times n$ sensing matrix $A$, noise intensity $\sigma$, uncertainty set $U$ and a tolerance $\epsilon \in (0,1)$, let the matrix $H = [h_1, \ldots, h_n]$ from (7) satisfy the condition $\mathbf{H}_{s,\infty}(\kappa)$ for some $\kappa < 1/2$, and let the $\rho_i$ in (7) satisfy relation (8). Then there exists a set $\Xi \subset \mathbb{R}^m$, $\mathrm{Prob}\{\xi \in \Xi\} \ge 1 - \epsilon$, of good realizations of $\xi$ such that whenever $\xi \in \Xi$, for every $x \in \mathbb{R}^n$ and every $u \in U$ one has

$\|\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x\|_1 \le \dfrac{2\|x - x^s\|_1 + 2s[\bar\rho + \nu(H)]}{1 - 2\kappa};\quad \|\widehat{x}_{\rm reg}(Ax + \sigma\xi + u) - x\|_\infty \le \dfrac{2s^{-1}\|x - x^s\|_1 + 2[\bar\rho + \nu(H)]}{1 - 2\kappa}.$   (15)

In particular,

$\mathrm{Risk}_p(\widehat{x}_{\rm reg}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le \dfrac{(2s)^{1/p}}{1 - 2\kappa}\big(2s^{-1}\upsilon + \bar\rho + \nu(H)\big).$   (16)

2.3 Penalized $\ell_1$ Recovery

Now consider the penalized $\ell_1$-recovery $\widehat{x}_{\rm pen}$ given by

$\widehat{x}_{\rm pen}(y) \in \mathop{\rm Argmin}_{v\in\mathbb{R}^n}\ \{\|v\|_1 + 2\theta s\|H^T(Av - y)\|_\infty\},$   (17)

where $y$ is as in (3), and an integer $s \le n$, a positive $\theta$, and a matrix $H$ are parameters of the construction.

Proposition 3. Given an $m\times n$ sensing matrix $A$, an integer $s \le n$, a matrix $H = [h_1, \ldots, h_n] \in \mathbb{R}^{m\times n}$ and positive reals $\gamma_i$, $i \le n$, satisfying the condition $\mathbf{H}(\gamma)$, and a $\theta > 0$, assume that

$\|\gamma\|_{s,1} < 1/2$   (18)

and

$\dfrac{1}{2(1 - \|\gamma\|_{s,1})} < \theta < \dfrac{1}{2\|\gamma\|_{s,1}}.$   (19)

Further, let $\sigma > 0$, $\epsilon \in (0,1)$, and let

$\nu_i = \nu_{\epsilon,\sigma,U}(h_i),\ i = 1, \ldots, n,\quad \nu(H) = \max_i \nu_i.$   (20)

Consider the penalized recovery $\widehat{x}_{\rm pen}(\cdot)$ associated with $H$, $s$, $\theta$. There exists a set $\Xi \subset \mathbb{R}^m$, $\mathrm{Prob}\{\xi \in \Xi\} \ge 1 - \epsilon$, of good realizations of $\xi$ such that

(i) Whenever $\xi \in \Xi$, for every signal $x \in \mathbb{R}^n$ and every $u \in U$ one has

(a) $\|\widehat{x}_{\rm pen}(Ax + \sigma\xi + u) - x\|_1 \le \dfrac{2\|x - x^s\|_1 + 2s\theta\nu(H)}{\min[2\theta(1 - \|\gamma\|_{s,1}) - 1,\ 1 - 2\theta\|\gamma\|_{s,1}]}$;

(b) $\|\widehat{x}_{\rm pen}(Ax + \sigma\xi + u) - x\|_\infty \le \big((2s\theta)^{-1} + \bar\gamma\big)\|\widehat{x}_{\rm pen}(Ax + \sigma\xi + u) - x\|_1 + 2\nu(H)$
$\le \big((2s\theta)^{-1} + \bar\gamma\big)\dfrac{2\|x - x^s\|_1 + 2s\theta\nu(H)}{\min[2\theta(1 - \|\gamma\|_{s,1}) - 1,\ 1 - 2\theta\|\gamma\|_{s,1}]} + 2\nu(H),$   (21)

where, as in Corollary 1, $\bar\gamma = \max_i \gamma_i$.

(ii) When $\theta = 1$ and $\bar\gamma < \frac{1}{2s}$, one has for every $x \in \mathbb{R}^n$, $u \in U$ and $\xi \in \Xi$:

(a) $\|\widehat{x}_{\rm pen}(Ax + \sigma\xi + u) - x\|_1 \le \dfrac{2\|x - x^s\|_1 + 2s\nu(H)}{1 - 2s\bar\gamma}$,
(b) $\|\widehat{x}_{\rm pen}(Ax + \sigma\xi + u) - x\|_\infty \le \dfrac{2s^{-1}\|x - x^s\|_1 + 2\nu(H)}{1 - 2s\bar\gamma}$,   (22)

whence for every $\upsilon \ge 0$ and $1 \le p \le \infty$:

$\mathrm{Risk}_p(\widehat{x}_{\rm pen}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le \dfrac{(2s)^{1/p}}{1 - 2s\bar\gamma}\big(2s^{-1}\upsilon + 2\nu(H)\big).$   (23)

The next statement stands in the same relation to Proposition 3 as Proposition 2 does to Proposition 1 and Corollary 1.

Proposition 4. Given an $m\times n$ sensing matrix $A$, noise intensity $\sigma$, uncertainty set $U$ and a tolerance $\epsilon \in (0,1)$, let the matrix $H = [h_1, \ldots, h_n]$ from (17) satisfy the condition $\mathbf{H}_{s,\infty}(\kappa)$ for some $\kappa < 1/2$, and let $\theta = 1$. Then there exists a set $\Xi \subset \mathbb{R}^m$, $\mathrm{Prob}\{\xi \in \Xi\} \ge 1 - \epsilon$, of good realizations of $\xi$ such that whenever $\xi \in \Xi$, for every $x \in \mathbb{R}^n$ and every $u \in U$ one has

$\|\widehat{x}_{\rm pen}(Ax + \sigma\xi + u) - x\|_1 \le \dfrac{2\|x - x^s\|_1 + 4s\nu(H)}{1 - 2\kappa};\quad \|\widehat{x}_{\rm pen}(Ax + \sigma\xi + u) - x\|_\infty \le \dfrac{2s^{-1}\|x - x^s\|_1 + 4\nu(H)}{1 - 2\kappa}.$   (24)

In particular,

$\mathrm{Risk}_p(\widehat{x}_{\rm pen}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le \dfrac{(2s)^{1/p}}{1 - 2\kappa}\big(2s^{-1}\upsilon + 4\nu(H)\big).$   (25)

Note that under the premise of Proposition 2, the smallest possible values of the $\rho_i$ are the quantities $\nu_i$, which results in $\bar\rho = \nu(H)$; with this choice of $\rho$, the risk bound for the regular recovery, as given by the right hand side of (16), coincides within factor 2 with the risk bound for the penalized recovery with $\theta = 1$ as given by (25); both bounds assume that $H$ satisfies $\mathbf{H}_{s,\infty}(\kappa)$ with $\kappa < 1/2$, and both imply that

$\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le \dfrac{4(2s)^{1/p}}{1 - 2\kappa}\big(s^{-1}\upsilon + \nu(H)\big),\quad 1 \le p \le \infty.$   (26)

When $\upsilon = 0$, the latter bound admits a quite transparent interpretation: everything is as if we were observing the sum of an unknown $s$-dimensional signal and an observation error of uniform norm $O(1)\nu(H)$.
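Both routines are linear programs once the sup-norm and the $\ell_1$-norm are linearized; a minimal sketch via scipy.optimize.linprog (function names ours; the $2\theta s$ weighting follows our reconstruction of (17)):

```python
import numpy as np
from scipy.optimize import linprog

def regular_recovery(A, y, H, rho):
    """Regular l1-recovery (7): min ||v||_1 s.t. |h_i^T(Av - y)| <= rho_i.
    LP in (v, t): min sum(t) s.t. -t <= v <= t, -rho <= H^T A v - H^T y <= rho."""
    m, n = A.shape
    HtA, Hty = H.T @ A, H.T @ y
    c = np.concatenate([np.zeros(n), np.ones(n)])        # objective: sum of t
    I = np.eye(n)
    A_ub = np.block([[ I, -I],                           #  v - t <= 0
                     [-I, -I],                           # -v - t <= 0
                     [ HtA, np.zeros((n, n))],           #  H^T A v <= rho + H^T y
                     [-HtA, np.zeros((n, n))]])          # -H^T A v <= rho - H^T y
    b_ub = np.concatenate([np.zeros(2 * n), rho + Hty, rho - Hty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
    return res.x[:n]

def penalized_recovery(A, y, H, s, theta=1.0):
    """Penalized l1-recovery (17): min ||v||_1 + 2*theta*s*||H^T(Av - y)||_inf,
    linearized with t (entrywise |v|) and a scalar w (the sup-norm)."""
    m, n = A.shape
    HtA, Hty = H.T @ A, H.T @ y
    c = np.concatenate([np.zeros(n), np.ones(n), [2.0 * theta * s]])
    I, z, ones = np.eye(n), np.zeros((n, 1)), np.ones((n, 1))
    A_ub = np.block([[ I, -I, z],                        #  v - t <= 0
                     [-I, -I, z],                        # -v - t <= 0
                     [ HtA, np.zeros((n, n)), -ones],    #  H^T(Av - y) <= w
                     [-HtA, np.zeros((n, n)), -ones]])   # -H^T(Av - y) <= w
    b_ub = np.concatenate([np.zeros(2 * n), Hty, -Hty])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (2 * n) + [(0, None)])
    return res.x[:n]
```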

3 Efficient construction of the contrast matrix H

In what follows, we fix $A$, the environment parameters $\epsilon$, $\sigma$, $U$ and the level of sparsity $s$ of the signals $x$ we intend to recover, and we are interested in building a contrast matrix $H = [h_1, \ldots, h_n]$ resulting in as small as possible an error bound (26). All we need to this end is to answer the following question (where we should specify the norm $\varphi(\cdot)$ as $\nu_{\epsilon,\sigma,U}(\cdot)$):

(?) Let $\varphi(\cdot)$ be a norm on $\mathbb{R}^m$, and let $s$ be a positive integer. What is the domain $G_s$ of pairs $(\omega, \kappa) \in \mathbb{R}^2_+$ such that $\kappa < 1/2$ and there exists a matrix $H = [h_1, \ldots, h_n] \in \mathbb{R}^{m\times n}$ satisfying the condition $\mathbf{H}_{s,\infty}(\kappa)$ and the relation $\varphi(H) := \max_i \varphi(h_i) \le \omega$? How do we find such an $H$, provided it exists?

Invoking Lemma 1, we can reformulate this question as follows:

(??) Let $\varphi(\cdot)$ and $s$ be as in (?). Given $(\omega, \kappa) \in \mathbb{R}^2_+$, how do we find vectors $h_i \in \mathbb{R}^m$, $1 \le i \le n$, satisfying

(a): $\varphi(h_i) \le \omega$;  (b): $|x_i| \le |h_i^T Ax| + s^{-1}\kappa\|x\|_1\ \ \forall x \in \mathbb{R}^n$   $(P_i)$

for every $i$, or detect correctly that no such collection of vectors exists?

Indeed, by Lemma 1, if $H$ satisfies $\mathbf{H}_{s,\infty}(\kappa)$ and $\varphi(H) \le \omega$, then there exists $H' = [h'_1, \ldots, h'_n]$ such that the $h'_i$ satisfy $(P_i.b)$ for all $i$ and $\varphi(H') \le \varphi(H) \le \omega$, so that the $h'_i$ satisfy $(P_i.a)$ for all $i$ as well. Vice versa, if the $h_i$ satisfy $(P_i)$, $1 \le i \le n$, then the matrix $H = [h_1, \ldots, h_n]$ clearly satisfies $\mathbf{H}_{s,\infty}(\kappa)$, and $\varphi(H) \le \omega$.

The answer to (??) is given by the following

Lemma 2. Given $\kappa > 0$, $\omega \ge 0$, and a positive integer $s$, let $\gamma = \kappa/s$. For every $1 \le i \le n$, the following three properties are equivalent to each other:

(i) There exists $h = h_i$ satisfying $(P_i)$;

(ii) The optimal value in the optimization problem

$\mathrm{Opt}_i(\gamma) = \min_h\ \{\varphi(h) : \|A^T h - e_i\|_\infty \le \gamma\}$   $(P^i_\gamma)$

where $e_i$ is the $i$-th standard basic orth in $\mathbb{R}^n$, is $\le \omega$;

(iii) One has

$\forall x \in \mathbb{R}^n : |x_i| \le \omega\varphi_*(Ax) + \gamma\|x\|_1,$   (27)

where $\varphi_*(u) = \max_{v:\varphi(v)\le 1} u^T v$ is the norm on $\mathbb{R}^m$ conjugate to $\varphi(\cdot)$.

Whenever one (and then all) of these properties takes place, problem $(P^i_\gamma)$ is solvable, and its optimal solution $h_i$ satisfies $(P_i)$.

3.1 Optimal contrasts for regular and penalized recoveries

As an immediate consequence of Lemma 2, we get the following description of the domain $G_s$ associated with the norm $\varphi(\cdot) = \nu_{\epsilon,\sigma,U}(\cdot)$:

(a) $G_s = \{(\omega, \kappa) : s^{-1}\kappa \ge \gamma_*,\ \omega \ge \omega_*(s^{-1}\kappa)\}$, where
(b) $\gamma_* = \max_i \min_h \|A^T h - e_i\|_\infty = \max_i \max_x\{x_i : \|x\|_1 \le 1,\ Ax = 0\},$
(c) $\omega_*(\gamma) = \max_i \mathrm{Opt}_i(\gamma),$   (28)

where $\varphi(\cdot)$ in $(P^i_\gamma)$ is specified as $\nu_{\epsilon,\sigma,U}(\cdot)$. Note that the second equality in (b) is given by Linear Programming duality. Indeed, by (b), $\gamma_*$ is the smallest $\gamma$ for which all the problems $(P^i_\gamma)$, $i = 1, \ldots, n$, are feasible, and thus, by Lemma 2, $(\omega, \kappa) \in G_s$ if and only if $\kappa/s \ge \gamma_*$ and $\omega \ge \omega_*(\kappa/s)$. Note that the quantity $\gamma_*$ depends solely on $A$, while $\omega_*(\cdot)$ depends on $\epsilon$, $\sigma$, $U$ as parameters, but is independent of $s$. The outlined results suggest the following scheme for building the contrast matrix $H$:

1. We compute $\gamma_*$ by solving the $n$ Linear Programming problems in (28.b); if $2s\gamma_* \ge 1$, then $G_s$ does not contain points $(\omega, \kappa)$ with $\kappa < 1/2$, so that our recovery routines are not applicable (or, at least, we cannot justify them theoretically).

2. When $2s\gamma_* < 1$, the set $G_s$ is nonempty, and its Pareto frontier (the set of pairs $(\omega, \kappa) \in G_s$ such that $(\omega', \kappa') \le (\omega, \kappa)$ with $(\omega', \kappa') \in G_s$ is possible if and only if $(\omega', \kappa') = (\omega, \kappa)$) is the curve $(\omega_*(\gamma), s\gamma)$, $\gamma_* \le \gamma < \frac{1}{2s}$. We choose a working point on this curve, that is, a point $\tilde\gamma \in [\gamma_*, \frac{1}{2s})$, and compute $\omega_*(\tilde\gamma)$ by solving the convex optimization programs $(P^i_{\tilde\gamma})$, $i = 1, \ldots, n$, with $\varphi(\cdot)$ specified as $\nu_{\epsilon,\sigma,U}(\cdot)$. Here $\omega_*(\tilde\gamma)$ is nothing but the maximum, over $i$, of the optimal values of these problems, and the optimal solutions $h_i$ to the problems induce the matrix $H = H(\tilde\gamma) = [h_1, \ldots, h_n]$, which satisfies $\mathbf{H}_{s,\infty}(s\tilde\gamma)$ and has $\nu(H) \le \omega_*(\tilde\gamma)$. By the reasoning which led us to (??),

$\nu(H(\tilde\gamma)) = \omega_*(\tilde\gamma) = \min_{H'}\{\nu(H') : H'\ \text{satisfies}\ \mathbf{H}_{s,\infty}(s\tilde\gamma)\},$

that is, $H = H(\tilde\gamma)$ is the best, for our purposes, of the contrast matrices satisfying $\mathbf{H}_{s,\infty}(s\tilde\gamma)$. With this contrast matrix, the error bound (26) for the regular/penalized $\ell_1$ recoveries (in the former, $\rho_i = \nu(h_i)$; in the latter, $\theta = 1$) reads

$\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le \dfrac{4(2s)^{1/p}}{1 - 2s\tilde\gamma}\big(s^{-1}\upsilon + \omega_*(\tilde\gamma)\big),\quad 1 \le p \le \infty.$   (29)

The outlined strategy does not explain how to choose $\tilde\gamma$. This issue could be resolved, e.g., as follows. We choose an upper bound on the sensitivity of the risk (29) to $\upsilon$, i.e., to the $\ell_1$-deviation of a signal to be recovered from the set of $s$-sparse signals. This sensitivity is proportional to $(1 - 2s\tilde\gamma)^{-1}$, so that an upper bound on the sensitivity translates into an upper bound $\gamma_+ < \frac{1}{2s}$ on $\tilde\gamma$. We can now choose $\tilde\gamma$ by minimizing the remaining term in the risk bound over $\tilde\gamma \in [\gamma_*, \gamma_+]$, which amounts to solving the optimization problem

$\max\{\tau : \tau\omega_*(\gamma) \le 1 - 2s\gamma,\ \gamma_* \le \gamma \le \gamma_+\}.$

Observing that $\omega_*(\cdot)$ is, by its origin, a convex function, we can solve the resulting problem efficiently by bisection in $\tau$. A step of this bisection requires solving a univariate convex feasibility problem with an efficiently computable constraint and thus is easy, at least for moderate values of $n$.

4 Range of feasibility of the condition $\mathbf{H}_{s,\infty}(\kappa)$

We address the crucial question of what can be said about the magnitude of the quantity $\omega_*(\cdot)$, see (28) and the risk bound (29). One way to answer it is just to compute the (efficiently computable!) quantity $\omega_*(\gamma)$ for a desired value of $\gamma$. Yet it is natural to know theoretical upper bounds on $\omega_*$ in some reference situations. Below, we provide three results of this type. At this point, it makes sense to express in the notation that $\omega_*(\gamma)$ depends, as on parameters, on the sensing matrix $A$ and the environment parameters $\epsilon$, $\sigma$, $U$, so that in this section we write $\omega_*(\gamma\,|\,A, \epsilon, \sigma, U)$ instead of $\omega_*(\gamma)$.
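In the no-nuisance case $U = \{0\}$ the norm $\nu_{\epsilon,\sigma,U}(h)$ is proportional to $\|h\|_2$, and each $(P^i_\gamma)$ becomes the second-order cone program $\min\{\|h\|_2 : \|A^T h - e_i\|_\infty \le \gamma\}$. A minimal sketch of the synthesis step, assuming cvxpy is available (function names ours):

```python
import cvxpy as cp
import numpy as np

def contrast_column(A, i, gamma):
    """Solve (P_gamma^i): min ||h||_2 s.t. ||A^T h - e_i||_inf <= gamma.
    Returns the optimal column, or None if the problem is infeasible."""
    m, n = A.shape
    e = np.zeros(n)
    e[i] = 1.0
    h = cp.Variable(m)
    prob = cp.Problem(cp.Minimize(cp.norm(h, 2)),
                      [cp.norm(A.T @ h - e, "inf") <= gamma])
    prob.solve()
    return h.value if prob.status == cp.OPTIMAL else None

def contrast_matrix(A, gamma):
    """Stack the optimal columns into H(gamma); omega_*(gamma) is the largest
    of the attained optimal values max_i ||h_i||_2 (times the sigma factor)."""
    cols = [contrast_column(A, i, gamma) for i in range(A.shape[1])]
    if any(c is None for c in cols):
        return None                      # some (P_gamma^i) infeasible: gamma too small
    return np.stack(cols, axis=1)
```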

4.1 Bounding $\omega_*(\cdot)$ via mutual incoherence

Recall that for an $m\times n$ sensing matrix $A = [A_1, \ldots, A_n]$ with no zero columns, its mutual incoherence is defined as

$\mu(A) = \max_{i\neq j} \dfrac{|A_i^T A_j|}{A_i^T A_i}.$

The Compressed Sensing literature contains numerous mutual-incoherence-related results (see, e.g., […] and references therein). To the best of our knowledge, all these results state that if $s$ is a positive integer and $A$ is a sensing matrix such that $\frac{2s\mu(A)}{\mu(A)+1} < 1$, then $\ell_1$-based sparse recovery is well suited for recovering $s$-sparse signals (e.g., recovers them exactly when there is no observation noise, admits explicit error bounds when there is noise and/or the signal is only nearly $s$-sparse, etc.). To the best of our knowledge, all these results, up to the values of absolute constant factors in the error bounds, are covered by the risk bounds (29) combined with the following immediate

Observation. Whenever $A = [A_1, \ldots, A_n]$ is an $m\times n$ matrix with no zero columns and $s$ is a positive integer, the matrix $H(A) = \frac{1}{\mu(A)+1}\big[A_1/A_1^TA_1,\ A_2/A_2^TA_2,\ \ldots,\ A_n/A_n^TA_n\big]$ satisfies the condition $\mathbf{H}_{s,\infty}\big(\frac{s\mu(A)}{\mu(A)+1}\big)$.

Verification is immediate: the diagonal entries in the matrix $Z = I - H^T(A)A$ are equal to $\gamma := \frac{\mu(A)}{\mu(A)+1}$, while the magnitudes of the off-diagonal entries in $Z$ do not exceed $\gamma$. Therefore

$\forall x \in \mathbb{R}^n:\quad |x_i| - |h_i^TAx| \le |x_i - h_i^TAx| = |(Zx)_i| \le \gamma\|x\|_1,$

i.e., $H(A)$ satisfies $\mathbf{H}(\gamma[1;\ldots;1])$ and hence $\mathbf{H}_{s,\infty}(s\gamma)$.

Observe that the Euclidean norms of the columns in $H(A)$ do not exceed $[\min_i \|A_i\|_2]^{-1}$, whence

$\nu(H(A)) \le \dfrac{r(U) + \sigma\sqrt{2\ln(n/\epsilon)}}{\min_i \|A_i\|_2},\quad\text{where } r(U) = \max_{u\in U}\|u\|_2.$

In the notation from Section 3, our observations can be summarized as follows:

Corollary 2. For every $m\times n$ matrix $A$ with no zero columns, one has

$\gamma_* \le \gamma(A) := \dfrac{\mu(A)}{\mu(A)+1}\quad\text{and}\quad \omega_*(\gamma(A)\,|\,A,\epsilon,\sigma,U) \le \dfrac{r(U) + \sigma\sqrt{2\ln(n/\epsilon)}}{\min_i \|A_i\|_2}.$

In particular, when $s \le \frac{\mu(A)+1}{3\mu(A)}$,

$\omega_*\Big(\dfrac{1}{3s}\,\Big|\,A,\epsilon,\sigma,U\Big) \le \dfrac{r(U) + \sigma\sqrt{2\ln(n/\epsilon)}}{\min_i \|A_i\|_2}.$

It should be added that as $m$, $n$ grow in such a way that $\ln(n) \le O(1)\ln(m)$, realizations $A$ of typical random $m\times n$ matrices (e.g., those with independent $N(0, 1/m)$ entries or with independent entries taking values $\pm 1/\sqrt{m}$) with overwhelming probability satisfy $\mu(A) \le O(1)\sqrt{\ln(n)/m}$ and $\|A_i\|_2 \ge 0.9$ for all $i$. By Corollary 2, it follows that for these $A$ the condition $\mathbf{H}_{s,\infty}(\kappa)$ with, say, $\kappa = 1/3$ can be satisfied for $s$ as large as $O(1)\sqrt{m/\ln(n)}$ merely by the choice $H = H(A)$, which ensures that $\nu(H) \le O(1)[r(U) + \sigma\sqrt{2\ln(n/\epsilon)}]$; in particular, in the indicated range of values of $s$ one has $\omega_*(\frac{1}{3s}) \le O(1)[r(U) + \sigma\sqrt{2\ln(n/\epsilon)}]$.

4.2 The case of A satisfying the Restricted Isometry Property

Proposition 5. Let $A$ satisfy RIP$(\delta, k)$ with some $\delta \in (0,1)$ and $k \ge 2$. Then there exists a matrix $H(A)$ which, for every positive integer $s$, satisfies the condition $\mathbf{H}_{s,\infty}(s\gamma(\delta,k))$ with

$\gamma(\delta, k) = \dfrac{2\delta}{(1-\delta)\sqrt{k}},$   (30)

and is such that $\nu(H(A)) \le \big[r(U) + \sigma\sqrt{2\ln(n/\epsilon)}\big]\sqrt{2/(1-\delta)}$. In particular, for $s \le \frac{(1-\delta)\sqrt{k}}{6\delta}$,

$\omega_*\Big(\dfrac{1}{3s}\,\Big|\,A,\epsilon,\sigma,U\Big) \le \sqrt{\dfrac{2}{1-\delta}}\,\big[r(U) + \sigma\sqrt{2\ln(n/\epsilon)}\big].$   (31)
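The contrast matrix of the Observation above is fully explicit; a numpy sketch (reusing mutual_incoherence from the earlier snippet; function name ours):

```python
import numpy as np

def incoherence_contrast(A):
    """H(A) = (mu(A)+1)^{-1} [A_1/(A_1^T A_1), ..., A_n/(A_n^T A_n)]
    from the Observation; satisfies H_{s,inf}(s*mu/(mu+1)) for every s."""
    d = np.sum(A * A, axis=0)          # A_i^T A_i, column by column
    mu = mutual_incoherence(A)         # defined in the Section 1 sketch
    return (A / d) / (mu + 1.0)        # scale each column, then the whole matrix
```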

The bounds on $\omega_*$ stated in Proposition 5 deteriorate as $r(U)$ grows. However, Proposition 5 allows us to state bounds on $\omega_*$ independent of the size of $U$, provided that $U$ belongs to a good linear subspace (which, without loss of generality, we may assume to have the form $M = AL$ for a linear subspace $L$ of $\mathbb{R}^n$):

Corollary 3. Let $A$ satisfy RIP$(\delta, k)$ with some $\delta \in (0,1)$ and $k \ge 2$, and let $L \subset \mathbb{R}^n$ be a linear subspace. Assume that the quantity

$\Theta_k[L] = \inf\big\{\|A(x - z)\|_2 / \|x\|_2 :\ x = x^k,\ z \in L\big\}$

is positive, and that the integer

$\hat{s} = \mathrm{Floor}\Big(\dfrac{(1-\hat\delta)\sqrt{k}}{6\hat\delta}\Big),\quad \hat\delta = \max[\delta,\ 1 - \Theta_k^2[L]],$

is positive. Then there exists a contrast matrix $H = [h_1, \ldots, h_n]$ satisfying the condition $\mathbf{H}_{\hat{s},\infty}(\frac{1}{3})$ and such that the Euclidean lengths of the columns of $H$ do not exceed $\sqrt{2/(1-\hat\delta)}$, and these columns are orthogonal to $M = AL$. In particular, whenever $U \subset M$, we have

$\nu_{\epsilon,\sigma,U}(H) \le \sqrt{\dfrac{2}{1-\hat\delta}}\,\sigma\sqrt{2\ln(n/\epsilon)}.$

Corollary 3 brings to our attention the quantity $\Theta_k[L]$; in our context, the larger this quantity, the better. We are about to present a simple result allowing us to bound $\Theta_k[L]$ away from 0 for subspaces $L$ comprised of dense signals and for Gaussian sensing matrices. Given a pair of positive integers $k \le n$ and a subspace $L$ in $\mathbb{R}^n$, let us set

$\Omega_k[L] = \max\big\{x^T f :\ x = x^k,\ \|x\|_2 = 1,\ f \in L,\ \|f\|_2 = 1\big\},$

so that $\sqrt{1 - \Omega_k^2[L]}$ is the minimal Euclidean deviation from $L$ of a unit (in the Euclidean norm) $k$-sparse signal. Thus, $\Omega_k[L]$ is small if and only if every unit $k$-sparse vector is far away, at Euclidean distance close to 1, from $L$. We are about to prove that for a randomly selected Gaussian $m\times n$ sensing matrix $A$ and a given linear subspace $L$ of $\mathbb{R}^n$ with large $\dim L$ (namely, up to $O(m/\ln(n/m))$), the quantity $\Theta_k[L]$ is, with overwhelming probability, bounded away from zero provided that $\Omega_k[L]$ is small.

Proposition 6. Let $m$, $n$ be positive integers such that $m \le n$, and let $A$ be a random $m\times n$ sensing matrix with independent $N(0, 1/m)$ entries. Then, with a properly chosen absolute constant $c > 0$, the following holds true with probability approaching 1 as $m$, $n$ grow: for every $k \le cm/\ln(n/m)$ and every linear subspace $L$ of $\mathbb{R}^n$ with $\dim L \le k$ and $\Omega_k[L] \le 0.1$, $A$ is RIP$(0.1, k)$ and $\Theta_k[L] \ge 0.86$.

In order to bound $\Omega_k[L]$, the following might be of use. Given a linear subspace $L \subset \mathbb{R}^n$, let us characterize the minimal density of vectors from $L$ by the quantity

$\Delta[L] = \max\Big\{\dfrac{\sqrt{n}\,\|x\|_\infty}{\|x\|_2} :\ x \in L,\ x \neq 0\Big\},$

so that $\Delta[L]$ is always $\ge 1$; $\Delta[L] = 1$ if and only if $L$ is a line spanned by a maximally dense (all entries of equal magnitude) vector, and $\Delta[L] \le C\sqrt{d}$ when $L$ admits an orthonormal basis comprised of $d$ vectors $f_l$ with $\|f_l\|_\infty \le C/\sqrt{n}$. A less trivial example is as follows. Let $d$ be a positive integer, and let $p(z) = \sum_{l=0}^{d} p_l z^l$ be a polynomial of degree $d$ with all roots on the unit circumference. The set of all solutions $x \in \mathbb{R}^n$ to the homogeneous finite difference equation

$\sum_{l=0}^{d} p_l x_{t+l} = 0,\quad 1 \le t \le n - d,$

is a linear subspace $L_{p(\cdot)} \subset \mathbb{R}^n$ of dimension $d$, and it is easily seen that, for $d \le \sqrt{n}/\ln(n)$,

$\Delta[L_{p(\cdot)}] \le O(1)\,d\sqrt{\ln(n)}.$   (32)

Now, given a $k$-sparse unit vector $x$ with support $I$ and a unit vector $f \in L$, we have

$x^T f \le \|f_I\|_2 \le \sqrt{k}\,\|f\|_\infty \le \Delta[L]\sqrt{k/n}\,\|f\|_2 = \Delta[L]\sqrt{k/n},$

whence

$\Omega_k[L] \le \Delta[L]\sqrt{k/n}.$   (33)

Example. Let $d$ be a positive integer and let $L \subset \mathbb{R}^n$ be the subspace comprised of the restrictions onto the grid $\{0, 1, \ldots, n-1\}$ of algebraic polynomials of degree $\le d$. With a properly chosen absolute constant $O(1)$, setting $k = k(m,n) = \mathrm{Floor}\big(O(1)\min\big[\frac{\sqrt{n}}{d\ln(n)}, \frac{m}{\ln(n/m)}\big]\big)$ and assuming that $d \le k$, with probability approaching 1 as $m$, $n$ grow, a Gaussian $m\times n$ sensing matrix $A$ is RIP$(0.1, k)$ and is such that $\Theta_k[L] \ge 0.86$ (Proposition 6 combined with (32) and (33)). Whenever this happens, Corollary 3 ensures the existence of a contrast matrix $H = [h_1, \ldots, h_n]$ which satisfies $\mathbf{H}_{s,\infty}(\frac{1}{3})$ whenever $s \le 0.37\sqrt{k}$, and which is such that the $h_i$ are orthogonal to $AL$ and satisfy $\|h_i\|_2 \le 1.2$. The associated with $H$ risk bound for regular/penalized recovery (in the former, $\rho_i = 1.2\,\sigma\sqrt{2\ln(n/\epsilon)}$; in the latter, $\theta = 1$) reads

$\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le (6s)^{1/p}\big[2s^{-1}\upsilon + 1.2\,\sigma\sqrt{2\ln(n/\epsilon)}\big],\quad 1 \le p \le \infty,$

whatever be the uncertainty set $U$ contained in $AL$. In other words, the risk bound in question is insensitive to perturbing a (nearly) $s$-sparse signal of interest by whatever algebraic polynomial of degree $\le d$. It should be stressed that while the assumptions on $A$, $k$, $d$, $s$ which led us to the latter conclusion are difficult to verify, this verification is in fact redundant. Indeed, given $L$ and $s$ and invoking Lemma 2, we can efficiently check whether the promised $H$ exists, and identify $H$ in the latter case, by solving the $n$ convex optimization programs

$h_i \in \mathop{\rm Argmin}_h\Big\{\|h\|_2 :\ \|A^T h - e_i\|_\infty \le \dfrac{1}{3s},\ h \perp AL\Big\},\quad 1 \le i \le n;$

the desired contrast matrix exists if and only if all these programs are solvable with optimal values $\le 1.2$, and in this case $H = [h_1, \ldots, h_n]$ is readily given by the optimal solutions to these programs.

4.3 Oracle inequality

Here we assume that $A \in \mathbb{R}^{m\times n}$ possesses the following property (where $S$ is a positive integer and $\omega > 0$):

$\mathbf{O}(S, \omega)$: For every $i \in \{1, \ldots, n\}$ and every $S$-element subset $I$ of $\{1, \ldots, n\}$ there exists a routine $R_{i,I}$ for recovering $x_i$ from a noisy observation

$y = Ax + u + \sigma\xi,\qquad [\xi \sim N(0, I_m),\ u \in U]$

of an unknown signal $x \in \mathbb{R}^n$ known to be supported on $I$, such that for every such signal and every $u \in U$ one has

$\mathrm{Prob}\{|R_{i,I}(Ax + u + \sigma\xi) - x_i| > \omega\} \le \epsilon.$

We intend to demonstrate that in this situation, for all $s$ in a certain range (which extends as $S$ grows and $\omega$ decreases), the uniform error of the regular and the penalized recoveries associated with a properly selected contrast matrix is, with probability $\ge 1 - \epsilon$, close to $\omega$. The precise statement is as follows:

Proposition 7. Given $A$ and the environment parameters $\epsilon < 1/6$, $\sigma$, $U$, assume that $A$ satisfies the condition $\mathbf{O}(S, \omega)$ with certain $S$, $\omega$. Then for every integer $s$ in the range

$1 \le s \le \dfrac{\sigma\sqrt{S\ln(1/\epsilon)}}{\omega\|A\|}$   (34)

(here $\|\cdot\|$ is the standard matrix norm, the largest singular value), there exists a contrast matrix $H$ satisfying the condition $\mathbf{H}_{s,\infty}(\frac{1}{4})$ and such that $\nu(H) \le \sqrt{1 + \ln(n)/\ln(1/\epsilon)}\,\omega$, so that in the outlined range of values of

$s$ one has $\omega_*(\frac{1}{4s}) \le \sqrt{1 + \ln(n)/\ln(1/\epsilon)}\,\omega$, and the associated with $H$ error bound (29) for regular/penalized $\ell_1$ recovery is

$\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le (6s)^{1/p}\Big[\omega\sqrt{1 + \dfrac{\ln n}{\ln(1/\epsilon)}} + \dfrac{2\upsilon}{s}\Big],\quad 1 \le p \le \infty.$   (35)

Proposition 7 justifies, to some extent, our approach; it says that if there exists a routine which recovers $S$-sparse signals with a priori known sparsity pattern within a certain accuracy (measured component-wise), then our recovering routines exhibit close performance without any knowledge of the sparsity pattern, albeit in a smaller range of values of the sparsity parameter.

4.4 Condition $\mathbf{H}_{s,\infty}(\kappa)$: limits of performance

Recall that when recovering $s$-sparse signals, the condition $\mathbf{H}_{s,\infty}(\kappa)$ helps only when $\kappa < 1/2$. Unfortunately, with these $\kappa$, the condition is feasible in a severely restricted range of values of $s$. Specifically, from [5] and Lemma 1 it immediately follows that

(*) If $A \in \mathbb{R}^{m\times n}$ is not nearly square, that is, if $n > 2(\sqrt{m}+1)^2$, then the condition $\mathbf{H}_{s,\infty}(\kappa)$ with $\kappa < 1/2$ cannot be satisfied when $s$ is large, namely, when $s > \sqrt{2m} + 1$.

Note that from the discussion at the end of Section 4.1 we know that the $O(\sqrt{m})$ limit of performance of the condition $\mathbf{H}_{s,\infty}(\cdot)$ stated in (*) is nearly sharp: when $s \le O(1)\sqrt{m/\ln n}$, the condition $\mathbf{H}_{s,\infty}(\frac{1}{3})$ associated with a typical randomly generated $m\times n$ sensing matrix $A$ is feasible and can be satisfied with a contrast matrix $H$ with quite moderate $\nu(H)$. (*) says that unless $A$ is nearly square, the condition $\mathbf{H}_{s,\infty}(\cdot)$ can validate $\ell_1$ sparse recovery only in the severely restricted range $s \le O(1)\sqrt{m}$ of values of the sparsity parameter. This is in sharp contrast with unverifiable sufficient conditions for goodness of $\ell_1$ recovery, like RIP: it is well known that when $m$, $n$ grow, realizations of typical random $m\times n$ matrices, like those mentioned at the end of Section 4.1, with overwhelming probability possess RIP$(0.1, s)$ with $s$ as large as $O(m/\ln(n/m))$. As a result, unverifiable sufficient conditions, like RIP, can justify the validity of $\ell_1$ recovery routines in a much wider (and in fact the widest possible) range of values of the sparsity parameter $s$ than the fully computationally tractable condition $\mathbf{H}_{s,\infty}(\cdot)$.

This being said, note that the comparison is not completely fair. Indeed, aside from its tractability, the condition $\mathbf{H}_{s,\infty}(\kappa)$ with $\kappa < 1/2$ ensures the error bounds (29) in the entire range $1 \le p \le \infty$ of values of $p$, which perhaps is not the case with conditions like RIP. Specifically, consider the no-nuisance case $U = \{0\}$, and let $A$ satisfy RIP$(0.1, S)$ for a certain $S$. It is well known (see, e.g., the next section) that in this case the Dantzig Selector recovery ensures, for every $s \le O(1)S$ and every $s$-sparse signal $x$, that

$\|\widehat{x}_{DS} - x\|_p \le O(1)\sigma\sqrt{\ln(n/\epsilon)}\,s^{1/p},\quad 1 \le p \le 2,$

with probability $\ge 1 - \epsilon$. However, we are not aware of similar bounds (under whatever conditions) for large $s$ and $p > 2$. For comparison: in the case in question, for small $s$, namely $s \le O(1)\sqrt{S}$, we have $\omega_*(\frac{1}{3s}) \le O(1)\sigma\sqrt{\ln(n/\epsilon)}$ (by Proposition 5), whence for the regular and penalized $\ell_1$ recoveries with an appropriately chosen contrast matrix (which can be built efficiently!) one has, for all $s$-sparse $x$,

$\|\widehat{x} - x\|_p \le O(1)\sigma\sqrt{\ln(n/\epsilon)}\,s^{1/p},\quad 1 \le p \le \infty,$

with probability $\ge 1 - \epsilon$ (see (29)). We wonder whether a similar (perhaps, with extra logarithmic factors) bound can be obtained for large $s$ (e.g., $s \ge m^{1/2+\delta}$) for whatever $\ell_1$ recovery routine and whatever essentially nonsquare (say, $m \le n/2$) $m\times n$ sensing matrix $A$ with columns of Euclidean length $\le O(1)$.
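For the comparisons above and below it is worth noting that the Dantzig Selector is formally the regular recovery (7) with contrast matrix $H = A$ and a common level $\rho$; a one-line sketch reusing regular_recovery from the Section 2 snippet:

```python
import numpy as np

def dantzig_selector(A, y, rho):
    """Dantzig Selector: min ||v||_1 s.t. ||A^T(Av - y)||_inf <= rho,
    i.e. the regular recovery (7) with contrast matrix H = A."""
    n = A.shape[1]
    return regular_recovery(A, y, A, rho * np.ones(n))
```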

5 Extensions

We are about to demonstrate that the pivot element of the preceding sections, the condition $\mathbf{H}_{s,\infty}(\kappa)$, is the strongest (and seemingly the only verifiable one) in a natural parametric series of conditions on a contrast matrix $H$; every one of these conditions validates the regular and the penalized $\ell_1$ recoveries associated with $H$ in a certain restricted range of values of $p$ in the error bounds (29).

5.1 Conditions $\mathbf{H}_{s,q}(\kappa)$

Let us fix an $m\times n$ sensing matrix $A$. Given a positive integer $s$, a $q \in [1, \infty]$ and a real $\kappa > 0$, let us say that an $m\times n$ contrast matrix $H$ satisfies the condition $\mathbf{H}_{s,q}(\kappa)$ if

$\forall x \in \mathbb{R}^n : \|x\|_{s,q} \le s^{1/q}\|H^T Ax\|_\infty + \kappa s^{1/q - 1}\|x\|_1,$   (36)

where $\|x\|_{s,q} = \|x^s\|_q$ and $x^s$, as always, is the vector obtained from $x$ by zeroing all but the $s$ largest in magnitude entries. Observe that:

The condition of Section 2 is exactly the member $q = \infty$ of this family, which justifies the notation $\mathbf{H}_{s,\infty}(\kappa)$ used there;

If $H$ satisfies $\mathbf{H}_{s,q}(\kappa)$, then $H$ satisfies $\mathbf{H}_{s,q'}(\kappa)$ for all $q' \in [1, q]$ (since for the $s$-sparse vector $x^s$ we have $\|x\|_{s,q'} \le s^{1/q'-1/q}\|x\|_{s,q}$).

Less immediate observations are as follows. Let $A$ be an $m\times n$ matrix and let $s \le n$ be a positive integer. We say that $A$ is $s$-good if for all $s$-sparse $x \in \mathbb{R}^n$ the $\ell_1$-recovery

$\widehat{x} \in \mathop{\rm Argmin}_v\{\|v\|_1 : Av = y\}$

is exact in the case of noiseless observation $y = Ax$. It turns out that feasibility of $\mathbf{H}_{s,1}(\kappa)$ with $\kappa < 1/2$ is intimately related to $s$-goodness of $A$:

Lemma 3. $A$ is $s$-good if and only if there exist $\kappa < 1/2$ and $H \in \mathbb{R}^{m\times n}$ satisfying $\mathbf{H}_{s,1}(\kappa)$.

The Restricted Isometry Property implies feasibility of $\mathbf{H}_{s,2}(\kappa)$ with small $\kappa$:

Lemma 4. Let $A$ satisfy RIP$(\delta, 2s)$ with $\delta < 1/3$. Then the matrix $H = (1-\delta)^{-1}A$ satisfies the condition $\mathbf{H}_{s,2}(\kappa)$ with $\kappa = \frac{\delta}{1-\delta} < \frac{1}{2}$.

5.2 Regular and penalized $\ell_1$ recoveries with contrast matrices satisfying $\mathbf{H}_{s,q}(\kappa)$

Our immediate goal is to obtain the following extension of the main results of Section 2, specifically, Propositions 2 and 4:

Proposition 8. Assume we are given an $m\times n$ sensing matrix $A = [a_1, \ldots, a_n]$, a positive integer $s$, a $\kappa < 1/2$, a contrast matrix $H = [h_1, \ldots, h_n] \in \mathbb{R}^{m\times n}$, and $q \in [1,\infty]$ such that $H$ satisfies the condition $\mathbf{H}_{s,q}(\kappa)$. Denote $\nu_i = \nu_{\epsilon,\sigma,U}(h_i)$, where the norm $\nu_{\epsilon,\sigma,U}(\cdot)$ is defined in (4), and $\nu(H) = \max_i \nu_i$. Let also the noise intensity $\sigma$, the uncertainty set $U$ and the tolerance $\epsilon \in (0,1)$ be given.

(i) Consider the regular recovery (7) with the contrast matrix $H$ and the parameters $\rho_i$ satisfying the relations $\rho_i \ge \nu_i$, $1 \le i \le n$, and let $\bar\rho = \max_i \rho_i$. Then

$\mathrm{Risk}_p(\widehat{x}_{\rm reg}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le (3s)^{1/p}\Big[\dfrac{2(\bar\rho + \nu(H))}{1 - 2\kappa} + \dfrac{2\upsilon}{s}\Big],\quad 1 \le p \le q.$   (37)

(ii) Consider the penalized recovery (17) with the contrast matrix $H$ and $\theta = 1$. Then

$\mathrm{Risk}_p(\widehat{x}_{\rm pen}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le (3s)^{1/p}\Big[\dfrac{4\nu(H)}{1 - 2\kappa} + \dfrac{2\upsilon}{s}\Big],\quad 1 \le p \le q.$   (38)

5.3 Error bounds for Lasso and Dantzig Selector under condition $\mathbf{H}_{s,q}(\kappa)$

We are about to demonstrate that feasibility of the condition $\mathbf{H}_{s,q}(\kappa)$ with $\kappa < 1/2$ implies some consequences for the performance of Lasso and Dantzig Selector when recovering $s$-sparse signals in $\ell_p$ norms, $1 \le p \le q$. This might look strange at first glance, since neither Lasso nor Dantzig Selector uses contrast matrices. The surprise, however, is eliminated by the following observation:

(!) Let $H$ satisfy $\mathbf{H}_{s,q}(\kappa)$ and let $\lambda$ be the maximum of the Euclidean norms of the columns in $H$. Then

$\forall x \in \mathbb{R}^n : \|x\|_{s,q} \le \lambda s^{1/q}\|Ax\|_2 + \kappa s^{1/q - 1}\|x\|_1.$   (39)

The fact that a condition like (39) with $\kappa < 1/2$ plays a crucial role in the performance analysis of Lasso and Dantzig Selector is neither surprising nor too novel. For example, the standard error bounds for the latter algorithms under the RIP assumption are in fact based on the validity of (39) with $\lambda = O(1)$ for $q = 2$ (see Lemma 4). Another example is given by the Restricted Eigenvalue […] and the Compatibility [3, 9] conditions. Specifically, the Restricted Eigenvalue condition $\mathrm{RE}(s, \rho, \hat\kappa)$ ($s$ a positive integer, $\rho > 0$, $\hat\kappa > 0$) states that

$\|x^s\|_2 \le \hat\kappa^{-1}\|Ax\|_2$ whenever $\|x - x^s\|_1 \le \rho\|x^s\|_1$,

whence $\|x^s\|_1 \le \sqrt{s}\,\hat\kappa^{-1}\|Ax\|_2$ whenever $(\rho + 1)\|x^s\|_1 \ge \|x\|_1$, so that

$\forall x \in \mathbb{R}^n : \|x\|_{s,1} \le \dfrac{\sqrt{s}}{\hat\kappa}\|Ax\|_2 + \dfrac{1}{1+\rho}\|x\|_1.$   (40)

Further, the Compatibility condition of [9] is nothing but (40) with $\rho = 3$. We see that both the Restricted Eigenvalue and the Compatibility conditions imply (39) with $q = 1$, $\lambda = (\hat\kappa\sqrt{s})^{-1}$ and certain $\kappa < 1/2$.

We are about to present a simple result on the performance of the Lasso and Dantzig Selector algorithms in the case when $A$ satisfies condition (39). The result is as follows:

Proposition 9. Let the $m\times n$ matrix $A = [a_1, \ldots, a_n]$ satisfy (39) with $\kappa < 1/2$ and some $q \in [1,\infty]$, and let $\beta = \max_i \|a_i\|_2$. Let also the environment parameters $\sigma > 0$, $\epsilon \in (0,1)$ be given, and let there be no nuisance: $U = \{0\}$.

(i) Consider the Dantzig Selector recovery

$\widehat{x}_{DS}(y) \in \mathop{\rm Argmin}_v\{\|v\|_1 : \|A^T(Av - y)\|_\infty \le \rho\},$

where

$\rho \ge \varrho := \sigma\beta\sqrt{2\ln(n/\epsilon)}.$   (41)

Then

$\mathrm{Risk}_p(\widehat{x}_{DS}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le (3s)^{1/p}\Big[\dfrac{2\sqrt{s}\,\lambda^2(\rho + \varrho)}{1 - 2\kappa} + \dfrac{2\upsilon}{s}\Big],\quad 1 \le p \le q.$   (42)

(ii) Consider the Lasso recovery

$\widehat{x}_{lasso}(y) \in \mathop{\rm Argmin}_v\{\|v\|_1 + \bar\kappa\|Av - y\|_2^2\},$

and let $\bar\kappa$ satisfy the relation

$2\kappa + 2\varrho\bar\kappa < 1,$

where $\varrho$ is given by (41). Then

$\mathrm{Risk}_p(\widehat{x}_{lasso}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le (3s)^{1/p}\Big[\dfrac{\sqrt{s}\,\lambda^2}{\bar\kappa(1 - 2\kappa - 2\varrho\bar\kappa)} + \dfrac{2\upsilon}{s}\Big],\quad 1 \le p \le q.$   (43)

In particular, with

$\bar\kappa = \dfrac{1 - 2\kappa}{4\varrho},$   (44)

one has

$\mathrm{Risk}_p(\widehat{x}_{lasso}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le (8s)^{1/p}\Big[\dfrac{8\sqrt{s}\,\varrho\lambda^2}{(1 - 2\kappa)^2} + \dfrac{2\upsilon}{s}\Big],\quad 1 \le p \le q.$   (45)

Discussion. Let us compare the error bounds given by Propositions 8 and 9. Assume that there is no nuisance ($U = \{0\}$) and that $A$ is such that the condition $\mathbf{H}_{s,q}(\frac{1}{4})$ is satisfied by a certain matrix $H$, the maximum of the Euclidean norms of the columns of $H$ being $\lambda$. Assuming that the penalized recovery uses $\theta = 1$, and the regular recovery uses $\rho_i = \nu(H) = \lambda\sigma\sqrt{2\ln(n/\epsilon)}$, the associated risk bounds as given by Proposition 8 become

$\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le O(1)s^{1/p}\big[\lambda\sigma\sqrt{2\ln(n/\epsilon)} + s^{-1}\upsilon\big],\quad 1 \le p \le q.$   (46)

Note that these bounds admit a transparent interpretation: in the range $1 \le p \le q$, an $s$-sparse signal is recovered as if we were identifying its support correctly and estimating the entries with uniform error $O(1)\lambda\sigma\sqrt{2\ln(n/\epsilon)}$. Now, as we have already explained, the existence of a matrix $H$ satisfying $\mathbf{H}_{s,q}(\frac{1}{4})$ with columns of Euclidean length $\le \lambda$ implies the validity of (39) with $\kappa = \frac{1}{4}$. Assuming that the Dantzig Selector uses $\rho = \varrho$, and that $\bar\kappa$ in Lasso is chosen according to (44), the error bounds for the Dantzig Selector and Lasso as given by Proposition 9 become

$\mathrm{Risk}_p(\widehat{x}(\cdot)\,|\,\epsilon,\sigma,s,\upsilon) \le O(1)s^{1/p}\big[[\beta\lambda]\sqrt{s}\,\lambda\sigma\sqrt{2\ln(n/\epsilon)} + s^{-1}\upsilon\big],\quad 1 \le p \le q.$   (47)

Observe that $\beta\lambda \ge O(1)$ (look at what happens with (39) when $x$ is the $i$-th basic orth). We see that the bounds (47) are worse than the bounds (46), primarily due to the presence of the factor $\sqrt{s}$ in the first bracketed term in (47). At this point it is unclear whether this drawback is an artifact caused by poor analysis of the Dantzig Selector and Lasso algorithms or whether it indeed reflects reality. Some related numerical results presented in Section 6.1 suggest that the latter option could be the actual one. Moreover, consider an example of a recovery problem with a matrix $A$ with unit columns and singular values $1$ and $\bar\epsilon \ll 1$. It can easily be seen that if $x$ is aligned with the second right singular vector of $A$ (corresponding to the singular value $\bar\epsilon$), the error of the Dantzig Selector may be as large as $O(\bar\epsilon^{-2}\sigma)$, while the error of the $H$-conscious recovery will be $O(\bar\epsilon^{-1}\sigma)$ up to a logarithmic factor in $\epsilon$ (indeed, choosing $H = A$ results in $\lambda = \bar\epsilon^{-1}$). This toy example suggests that the extra factor in the bound (47), at least for the Dantzig Selector, is not only due to our clumsy analysis.

This being said, it should be stressed that the comparison of regular/penalized $\ell_1$ recoveries with Dantzig Selector and Lasso based solely on the above error bounds is somehow biased against Dantzig Selector and Lasso. Indeed, in order for the regular/penalized $\ell_1$ recoveries to enjoy their good error bounds, we should specify the required contrast matrix, which is not the case for Lasso and Dantzig Selector: the bounds (47) require only the existence of such a matrix.⁵ Besides this, there is at least one case where the error bounds for the Dantzig Selector are as good as (46), specifically, the case when $A$ possesses, say, RIP$(0.1, 2s)$. Indeed, in this case, by Lemma 4, the matrix $H = O(1)A$ satisfies $\mathbf{H}_{s,2}(\frac{1}{4})$, meaning that the Dantzig Selector

⁵ And even less than that, since feasibility of $\mathbf{H}_{s,q}(\kappa)$ is just a sufficient condition for the validity of (39), the condition which indeed underlies Proposition 9.

with properly chosen $\rho$ is nothing but the regular recovery with contrast matrix $H$, and as such obeys the bounds (46) with $q = 2$.

It is time to point out that the above discussion is somewhat scholastic: when $q < \infty$ and $s$ is nontrivial, we do not know how to verify efficiently the fact that the condition $\mathbf{H}_{s,q}(\kappa)$ is satisfied by a given $H$, not to speak of efficient synthesis of $H$ satisfying this condition. One should not think that these tractability issues concern only our algorithms, which need a good contrast matrix. In fact, all conditions which allow one to validate Dantzig Selector and Lasso beyond the scope of the fully tractable condition $\mathbf{H}_{s,\infty}(\kappa)$ are, to the best of our knowledge, unverifiable: they cannot be checked efficiently, and thus we can never be sure that Lasso and Dantzig Selector (or any other known computationally efficient technique for sparse recovery) indeed work well for a given sensing matrix. As we have seen in Section 3, the situation improves dramatically when passing from the unverifiable conditions $\mathbf{H}_{s,q}(\kappa)$, $q < \infty$, to the efficiently verifiable condition $\mathbf{H}_{s,\infty}(\kappa)$, although in a severely restricted range of values of $s$.

6 Numerical examples

We present here a small simulation study.

6.1 Regular/penalized recovery vs. Lasso: no-nuisance case

To illustrate the discussion in Section 5.3, we compare the numerical performance of Lasso and penalized recovery in the observation model (1) without nuisance:

$y = Ax + \sigma\xi,\quad \xi \sim N(0, I_m),$

where $\sigma > 0$ is known. The sensing matrix $A$ is specified by selecting at random $m = 120$ rows of the $128\times 128$ Hadamard matrix⁶ and suppressing the first of the selected rows by multiplying it by $10^{-3}$. The resulting $120\times 128$ sensing matrix has orthogonal rows; 119 of its singular values are equal to $8\sqrt{2}$, and the remaining singular value is $8\sqrt{2}\cdot 10^{-3}$. We have processed $A$ as explained in Section 3 (the reader is referred to that section for the description of the entities involved).⁷ We started with computing $\gamma_*$, which turned out to be 0.0287, meaning that the level of $s$-goodness of $A$ is at least 17. In our experiment, we aimed at recovering signals with at most a prescribed small number $s$ of nonzero entries and with no nuisance ($U = \{0\}$). The synthesis of the corresponding optimal contrast matrix $H = H(\tilde\gamma)$ as outlined in Section 3 results in $\tilde\gamma = 0.029$, $\omega_*(\tilde\gamma) = 0.899\,\sigma\sqrt{2\ln(n/\epsilon)}$. Note that we are in the case of $U = \{0\}$, and in this case the optimal $H$ is independent of the values of $\sigma$ and $\epsilon$.

We compare the penalized $\ell_1$-recovery with the contrast matrix $H$ and $\theta = 1$ with the Lasso recovery on randomly generated signals $x$ with $s$ nonzero entries. We consider two choices of the penalty $\bar\kappa$ in Lasso: the theoretically optimal choice (44), and the ideal choice, where we scanned the fine grid $(1.5)^k$, $k = 0, \pm 1, \pm 2, \ldots$ of values of $\bar\kappa$ and selected the value for which the Lasso recovery was at the smallest $\ell_2$-distance from the true signal. The confidence parameter $\epsilon$ in (44) was set to 0.01. The results of a typical experiment are presented in Table 1. We see that, as compared to the penalized $\ell_1$ recovery, the accuracy of Lasso with the theoretically optimal choice of the penalty is nearly 10 times worse. With the ideal (unrealistic!) choice of the penalty, Lasso is never better than the penalized $\ell_1$ recovery, and for the smallest value of $\sigma$ it is nearly 4 times worse than the latter routine.

⁶ The $2^k$-th Hadamard matrix $H_{2^k}$ is given by the recurrence $H_1 = 1$, $H_{2^{p+1}} = [H_{2^p}, H_{2^p}; H_{2^p}, -H_{2^p}]$. It is a $2^k \times 2^k$ matrix with orthogonal rows and all entries equal to $\pm 1$.
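A sketch of the sensing-matrix construction used in this experiment, via scipy.linalg.hadamard; the row count 120 and the $10^{-3}$ damping follow the reconstruction above:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(0)
n, m = 128, 120
H128 = hadamard(n)                            # 128 x 128, entries +-1, orthogonal rows
rows = rng.choice(n, size=m, replace=False)   # select m = 120 rows at random
A = H128[rows].astype(float)
A[0] *= 1e-3                                  # suppress the first selected row
```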
⁷ It is worth mentioning that when $A$ is comprised of (perhaps scaled) rows of a Hadamard matrix (and, in fact, of scaled rows of any other Fourier transform matrix associated with a finite Abelian group), the synthesis described in Section 3 simplifies dramatically, due to the fact that all the problems $(P^i_\gamma)$ turn out to be equivalent to each other, and their optimal solutions are obtained from each other by simple linear transformations. As a result, we can work with a single problem $(P^i_\gamma)$ instead of working with $n$ of them.
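The Lasso baseline of the comparison, together with the ideal penalty scan described above, can be sketched as follows (cvxpy assumed available; the 1.5-spaced grid follows the text, the $\ell_2$ selection rule is our reading of the damaged passage, and the grid range is a hypothetical truncation of $k = 0, \pm 1, \pm 2, \ldots$):

```python
import cvxpy as cp
import numpy as np

def lasso(A, y, kappa):
    """Lasso recovery: min ||v||_1 + kappa * ||Av - y||_2^2."""
    v = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm(v, 1) + kappa * cp.sum_squares(A @ v - y))).solve()
    return v.value

def ideal_lasso(A, y, x_true, ks=range(-20, 21)):
    """'Ideal' (oracle) choice of kappa: scan the grid 1.5**k and keep the
    recovery closest to the true signal; unrealistic in applications."""
    cands = [lasso(A, y, 1.5 ** k) for k in ks]
    return min(cands, key=lambda v: np.linalg.norm(v - x_true))
```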

Table 1 reports the errors $\|\widehat{x} - x\|_p$, $p = 1, 2, \infty$ (several entries were damaged in extraction and are reproduced as they survive):

    sigma    Recovery               kappa-bar    p = 1      p = 2      p = inf
    .e-      Penalized              N/A          .e-        6.5e-5     3.8e-5
             Lasso (ideal)          3.7e-3       .e-        5.e-       3.9e-5
             Lasso (theoretical)    .e-          .6e-3      5.e-       .e-
    .e-5     Penalized              N/A          .e-5       6.e-6      .7e-6
             Lasso (ideal)          .78e-        3.e-5      8.e-6      3.e-6
             Lasso (theoretical)    .e-3         .8e-       5.8e-5     .e-5
    .e-6     Penalized              N/A          .e-6       6.e-7      .5e-7
             Lasso (ideal)          .e-          8.8e-6     .6e-6      5.9e-7
             Lasso (theoretical)    .e-          .8e-5      5.e-6      .9e-6

Table 1: Lasso vs. penalized $\ell_1$ recovery. Choice of $\bar\kappa$: ideal choice vs. theoretical choice.

6.2 The nuisance case

In the second experiment we study the behavior of the recovery procedures in the situation when an input nuisance is present:

$y = A(x + v) + \sigma\xi,$

where $x \in \mathbb{R}^n$ is an unknown sparse signal, $v \in V$ with known $V \subset \mathbb{R}^n$, $\sigma$ is known, and $\xi \in \mathbb{R}^m$ is standard normal, $\xi \sim N(0, I_m)$; in terms of (3), $u = Av$ and $U = AV$. We compare the performance of the regular and penalized recoveries to that of the Lasso and Dantzig Selector algorithms. To handle the nuisance, the latter methods were modified as follows: instead of the standard Lasso estimator we use the estimator

$\widehat{x}_{lasso}(y) \in \mathop{\rm Argmin}_{x\in\mathbb{R}^n}\min_{v\in V}\{\|x\|_1 + \bar\kappa\|A(x + v) - y\|_2^2\},$

where the penalization coefficient $\bar\kappa$ is chosen according to […]; in turn, the Dantzig Selector is substituted by

$\widehat{x}_{DS}(y) \in \mathop{\rm Argmin}_{x\in\mathbb{R}^n}\Big\{\|x\|_1 : \min_{v\in V}|[A^T(A(x + v) - y)]_i| \le \varrho_i,\ i = 1, \ldots, n\Big\}$

with $\varrho_i = \sigma\sqrt{2\ln(n/\epsilon)}\,\|A_i\|_2$, where the $A_i$ are the columns of $A$ and $\epsilon$ is given (in what follows, $\epsilon = 0.01$). We present below the simulation results for two setups with $n = 256$:

1. Gaussian setup: a $64\times 256$ sensing matrix $A_{\rm Gauss}$ with independent $N(0,1)$ entries is generated, and then its columns are normalized. The nuisance set $V = V(L) \subset \mathbb{R}^{256}$ is as follows:

$V(L) = \{v \in \mathbb{R}^{256} : |v_{i-1} - 2v_i + v_{i+1}| \le L\ \text{for}\ i = 2, \ldots, 255,\ v_1 = v_{256} = 0\},$

where $L$ is a known parameter; in other words, we observe the sum of a sparse signal and a smooth background.

2. Convolution setup: a $240\times 256$ sensing matrix $A_{\rm conv}$ is constructed as follows. Consider a signal $x$ living on $\mathbb{Z}^2$ and supported on the $16\times 16$ grid $\Gamma = \{(i,j) \in \mathbb{Z}^2 : 0 \le i, j \le 15\}$. We subject such a signal to discrete-time convolution with a kernel supported on the set $\{(i,j) \in \mathbb{Z}^2 : |i| \le 7,\ |j| \le 7\}$, and then restrict the result onto the $16\times 15$ grid $\Gamma^+ = \{(i,j) \in \Gamma : j \le 14\}$. This way we obtain a linear mapping $x \mapsto A_{\rm conv}x : \mathbb{R}^{256} \to \mathbb{R}^{240}$. The nuisance set $V = V(L) \subset \mathbb{R}^{256}$ is composed of zero-mean signals $u$ on $\Gamma$ which satisfy $|[D^2u]_{ij}| \le L$, where $D$ is the discrete (periodic) homogeneous Laplace operator:

$[Du]_{ij} = \tfrac{1}{4}\big(u_{i-1,j} + u_{i,j-1} + u_{i,j+1} + u_{i+1,j}\big) - u_{ij},\quad 0 \le i, j \le 15,$

with $i \pm 1$ and $j \pm 1$ taken mod 16.
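A numpy sketch of the discrete periodic Laplacian defining the nuisance set in the convolution setup (the $16\times 16$ grid follows the text; the nuisance constraint then applies this operator twice):

```python
import numpy as np

def periodic_laplacian(u):
    """[Du]_{ij} = (u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1})/4 - u_{ij}
    with periodic (mod 16) indexing; u is a 16 x 16 array."""
    return (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
            + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1)) / 4.0 - u

# nuisance constraint of the convolution setup: |[D^2 u]_{ij}| <= L
def in_nuisance_set(u, L):
    return np.all(np.abs(periodic_laplacian(periodic_laplacian(u))) <= L)
```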

In the simulations we acted as follows: given the sensing matrix $A$, the nuisance set $U = AV$ and the values of $s$ and $\sigma$, we compute the contrast matrix $H$ by choosing a reasonable value $\tilde\gamma > \gamma_*$ of $\gamma$ and specifying $H$ as the matrix satisfying $\mathbf{H}_{s,\infty}(s\tilde\gamma)$ and such that $\nu(H) = \omega_*(\tilde\gamma)$, see Section 3. Then $N$ samples of a random signal $x$, a random nuisance $v \in V$ and a random perturbation $\xi$ were generated, and the corresponding observations were processed by every one of the algorithms we are comparing.⁸ The plots below present the average, over these $N$ experiments, $\ell_1$ and $\ell_2$ recovery errors. All recovery procedures used the Mosek optimization software […].

We start with the Gaussian setup, in which the signal $x$ has $s$ nonzero components, randomly drawn, of prescribed norm. For the penalized and the regular recovery algorithms the contrast matrix $H$ was computed using a fixed value of $\tilde\gamma$. On Figure 1 we plot the average recovery error as a function of the value of the parameter $L$ of the nuisance set $V$, for fixed $\sigma$, and on Figure 2 as a function of $\sigma$ for fixed $L$.

[Figure 1: Mean recovery error ($\ell_1$-error, left panel; $\ell_2$-error, right panel) as a function of the nuisance magnitude $L$; Gaussian setup. Curves: Lasso, Dantzig Selector, Penalized Recovery, Regular Recovery.]

In the next experiment we fix the environment parameters $\sigma$, $L$ and vary the number $s$ of nonzero entries in the signal $x$ (of norm $5s$). On Figure 3 we present the recovery error as a function of $s$.

We run the same simulations in the convolution setup. The contrast matrix $H$ for the penalized and the regular recoveries is computed using a fixed value of $\tilde\gamma$. On Figure 4 we plot the average recovery error as a function of the size $L$ of the nuisance set $V$ for fixed $\sigma$, on Figure 5 as a function of $\sigma$ for fixed $L$, and on Figure 6 as a function of $s$.

We observe quite different behavior of the recovery procedures in our two setups. In the Gaussian setup the nuisance signal $v \in V$ does not mask the true signal $x$, and the performance of Lasso and Dantzig Selector is quite good in this case. The situation changes dramatically in the convolution setup, where the performance of Lasso and Dantzig Selector degrades rapidly when the parameter $L$ of the nuisance set increases.⁹ The conclusion suggested by the outlined numerical results is that the penalized $\ell_1$ recovery, while sometimes losing slightly to Lasso, in some of the experiments significantly outperforms all other algorithms we are comparing.

⁸ Randomness of the sparse signal $x$ is important. Using the techniques of […], one can verify that in the convolution setup there are signals with only 3 non-vanishing components which cannot be recovered by $\ell_1$ minimization even in the noiseless case $V = \{0\}$, $\sigma = 0$. In other words, the $s$-goodness characteristic of the corresponding matrix $A$ is equal to 2.

⁹ The error plots for these estimators on Figure 4 flatten for higher values of $L$ simply because they always underestimate the signal, and the error of recovery is always less than the corresponding norm of the signal.


More information

Understanding Annuities. Some Algebraic Terminology.

Understanding Annuities. Some Algebraic Terminology. Understandng Annutes Ma 162 Sprng 2010 Ma 162 Sprng 2010 March 22, 2010 Some Algebrac Termnology We recall some terms and calculatons from elementary algebra A fnte sequence of numbers s a functon of natural

More information

Multifactor Term Structure Models

Multifactor Term Structure Models 1 Multfactor Term Structure Models A. Lmtatons of One-Factor Models 1. Returns on bonds of all maturtes are perfectly correlated. 2. Term structure (and prces of every other dervatves) are unquely determned

More information

TCOM501 Networking: Theory & Fundamentals Final Examination Professor Yannis A. Korilis April 26, 2002

TCOM501 Networking: Theory & Fundamentals Final Examination Professor Yannis A. Korilis April 26, 2002 TO5 Networng: Theory & undamentals nal xamnaton Professor Yanns. orls prl, Problem [ ponts]: onsder a rng networ wth nodes,,,. In ths networ, a customer that completes servce at node exts the networ wth

More information

3/3/2014. CDS M Phil Econometrics. Vijayamohanan Pillai N. Truncated standard normal distribution for a = 0.5, 0, and 0.5. CDS Mphil Econometrics

3/3/2014. CDS M Phil Econometrics. Vijayamohanan Pillai N. Truncated standard normal distribution for a = 0.5, 0, and 0.5. CDS Mphil Econometrics Lmted Dependent Varable Models: Tobt an Plla N 1 CDS Mphl Econometrcs Introducton Lmted Dependent Varable Models: Truncaton and Censorng Maddala, G. 1983. Lmted Dependent and Qualtatve Varables n Econometrcs.

More information

A Constant-Factor Approximation Algorithm for Network Revenue Management

A Constant-Factor Approximation Algorithm for Network Revenue Management A Constant-Factor Approxmaton Algorthm for Networ Revenue Management Yuhang Ma 1, Paat Rusmevchentong 2, Ma Sumda 1, Huseyn Topaloglu 1 1 School of Operatons Research and Informaton Engneerng, Cornell

More information

A MODEL OF COMPETITION AMONG TELECOMMUNICATION SERVICE PROVIDERS BASED ON REPEATED GAME

A MODEL OF COMPETITION AMONG TELECOMMUNICATION SERVICE PROVIDERS BASED ON REPEATED GAME A MODEL OF COMPETITION AMONG TELECOMMUNICATION SERVICE PROVIDERS BASED ON REPEATED GAME Vesna Radonć Đogatovć, Valentna Radočć Unversty of Belgrade Faculty of Transport and Traffc Engneerng Belgrade, Serba

More information

Understanding price volatility in electricity markets

Understanding price volatility in electricity markets Proceedngs of the 33rd Hawa Internatonal Conference on System Scences - 2 Understandng prce volatlty n electrcty markets Fernando L. Alvarado, The Unversty of Wsconsn Rajesh Rajaraman, Chrstensen Assocates

More information

arxiv:cond-mat/ v1 [cond-mat.other] 28 Nov 2004

arxiv:cond-mat/ v1 [cond-mat.other] 28 Nov 2004 arxv:cond-mat/0411699v1 [cond-mat.other] 28 Nov 2004 Estmatng Probabltes of Default for Low Default Portfolos Katja Pluto and Drk Tasche November 23, 2004 Abstract For credt rsk management purposes n general,

More information

Cracking VAR with kernels

Cracking VAR with kernels CUTTIG EDGE. PORTFOLIO RISK AALYSIS Crackng VAR wth kernels Value-at-rsk analyss has become a key measure of portfolo rsk n recent years, but how can we calculate the contrbuton of some portfolo component?

More information

Chapter 5 Student Lecture Notes 5-1

Chapter 5 Student Lecture Notes 5-1 Chapter 5 Student Lecture Notes 5-1 Basc Busness Statstcs (9 th Edton) Chapter 5 Some Important Dscrete Probablty Dstrbutons 004 Prentce-Hall, Inc. Chap 5-1 Chapter Topcs The Probablty Dstrbuton of a Dscrete

More information

Lecture 7. We now use Brouwer s fixed point theorem to prove Nash s theorem.

Lecture 7. We now use Brouwer s fixed point theorem to prove Nash s theorem. Topcs on the Border of Economcs and Computaton December 11, 2005 Lecturer: Noam Nsan Lecture 7 Scrbe: Yoram Bachrach 1 Nash s Theorem We begn by provng Nash s Theorem about the exstance of a mxed strategy

More information

Analysis of Variance and Design of Experiments-II

Analysis of Variance and Design of Experiments-II Analyss of Varance and Desgn of Experments-II MODULE VI LECTURE - 4 SPLIT-PLOT AND STRIP-PLOT DESIGNS Dr. Shalabh Department of Mathematcs & Statstcs Indan Insttute of Technology Kanpur An example to motvate

More information

Linear Combinations of Random Variables and Sampling (100 points)

Linear Combinations of Random Variables and Sampling (100 points) Economcs 30330: Statstcs for Economcs Problem Set 6 Unversty of Notre Dame Instructor: Julo Garín Sprng 2012 Lnear Combnatons of Random Varables and Samplng 100 ponts 1. Four-part problem. Go get some

More information

Finance 402: Problem Set 1 Solutions

Finance 402: Problem Set 1 Solutions Fnance 402: Problem Set 1 Solutons Note: Where approprate, the fnal answer for each problem s gven n bold talcs for those not nterested n the dscusson of the soluton. 1. The annual coupon rate s 6%. A

More information

4. Greek Letters, Value-at-Risk

4. Greek Letters, Value-at-Risk 4 Greek Letters, Value-at-Rsk 4 Value-at-Rsk (Hull s, Chapter 8) Math443 W08, HM Zhu Outlne (Hull, Chap 8) What s Value at Rsk (VaR)? Hstorcal smulatons Monte Carlo smulatons Model based approach Varance-covarance

More information

Introduction to PGMs: Discrete Variables. Sargur Srihari

Introduction to PGMs: Discrete Variables. Sargur Srihari Introducton to : Dscrete Varables Sargur srhar@cedar.buffalo.edu Topcs. What are graphcal models (or ) 2. Use of Engneerng and AI 3. Drectonalty n graphs 4. Bayesan Networks 5. Generatve Models and Samplng

More information

Capability Analysis. Chapter 255. Introduction. Capability Analysis

Capability Analysis. Chapter 255. Introduction. Capability Analysis Chapter 55 Introducton Ths procedure summarzes the performance of a process based on user-specfed specfcaton lmts. The observed performance as well as the performance relatve to the Normal dstrbuton are

More information

The convolution computation for Perfectly Matched Boundary Layer algorithm in finite differences

The convolution computation for Perfectly Matched Boundary Layer algorithm in finite differences The convoluton computaton for Perfectly Matched Boundary Layer algorthm n fnte dfferences Herman Jaramllo May 10, 2016 1 Introducton Ths s an exercse to help on the understandng on some mportant ssues

More information

Notes on experimental uncertainties and their propagation

Notes on experimental uncertainties and their propagation Ed Eyler 003 otes on epermental uncertantes and ther propagaton These notes are not ntended as a complete set of lecture notes, but nstead as an enumeraton of some of the key statstcal deas needed to obtan

More information

Equilibrium in Prediction Markets with Buyers and Sellers

Equilibrium in Prediction Markets with Buyers and Sellers Equlbrum n Predcton Markets wth Buyers and Sellers Shpra Agrawal Nmrod Megddo Benamn Armbruster Abstract Predcton markets wth buyers and sellers of contracts on multple outcomes are shown to have unque

More information

Cyclic Scheduling in a Job shop with Multiple Assembly Firms

Cyclic Scheduling in a Job shop with Multiple Assembly Firms Proceedngs of the 0 Internatonal Conference on Industral Engneerng and Operatons Management Kuala Lumpur, Malaysa, January 4, 0 Cyclc Schedulng n a Job shop wth Multple Assembly Frms Tetsuya Kana and Koch

More information

Supplementary material for Non-conjugate Variational Message Passing for Multinomial and Binary Regression

Supplementary material for Non-conjugate Variational Message Passing for Multinomial and Binary Regression Supplementary materal for Non-conjugate Varatonal Message Passng for Multnomal and Bnary Regresson October 9, 011 1 Alternatve dervaton We wll focus on a partcular factor f a and varable x, wth the am

More information

Robust Stochastic Lot-Sizing by Means of Histograms

Robust Stochastic Lot-Sizing by Means of Histograms Robust Stochastc Lot-Szng by Means of Hstograms Abstract Tradtonal approaches n nventory control frst estmate the demand dstrbuton among a predefned famly of dstrbutons based on data fttng of hstorcal

More information

Comparison of Singular Spectrum Analysis and ARIMA

Comparison of Singular Spectrum Analysis and ARIMA Int. Statstcal Inst.: Proc. 58th World Statstcal Congress, 0, Dubln (Sesson CPS009) p.99 Comparson of Sngular Spectrum Analss and ARIMA Models Zokae, Mohammad Shahd Behesht Unverst, Department of Statstcs

More information

4.4 Doob s inequalities

4.4 Doob s inequalities 34 CHAPTER 4. MARTINGALES 4.4 Doob s nequaltes The frst nterestng consequences of the optonal stoppng theorems are Doob s nequaltes. If M n s a martngale, denote M n =max applen M. Theorem 4.8 If M n s

More information

OCR Statistics 1 Working with data. Section 2: Measures of location

OCR Statistics 1 Working with data. Section 2: Measures of location OCR Statstcs 1 Workng wth data Secton 2: Measures of locaton Notes and Examples These notes have sub-sectons on: The medan Estmatng the medan from grouped data The mean Estmatng the mean from grouped data

More information

Risk and Return: The Security Markets Line

Risk and Return: The Security Markets Line FIN 614 Rsk and Return 3: Markets Professor Robert B.H. Hauswald Kogod School of Busness, AU 1/25/2011 Rsk and Return: Markets Robert B.H. Hauswald 1 Rsk and Return: The Securty Markets Lne From securtes

More information

Problems to be discussed at the 5 th seminar Suggested solutions

Problems to be discussed at the 5 th seminar Suggested solutions ECON4260 Behavoral Economcs Problems to be dscussed at the 5 th semnar Suggested solutons Problem 1 a) Consder an ultmatum game n whch the proposer gets, ntally, 100 NOK. Assume that both the proposer

More information

Parallel Prefix addition

Parallel Prefix addition Marcelo Kryger Sudent ID 015629850 Parallel Prefx addton The parallel prefx adder presented next, performs the addton of two bnary numbers n tme of complexty O(log n) and lnear cost O(n). Lets notce the

More information

A Set of new Stochastic Trend Models

A Set of new Stochastic Trend Models A Set of new Stochastc Trend Models Johannes Schupp Longevty 13, Tape, 21 th -22 th September 2017 www.fa-ulm.de Introducton Uncertanty about the evoluton of mortalty Measure longevty rsk n penson or annuty

More information

Note on Cubic Spline Valuation Methodology

Note on Cubic Spline Valuation Methodology Note on Cubc Splne Valuaton Methodology Regd. Offce: The Internatonal, 2 nd Floor THE CUBIC SPLINE METHODOLOGY A model for yeld curve takes traded yelds for avalable tenors as nput and generates the curve

More information

A Single-Product Inventory Model for Multiple Demand Classes 1

A Single-Product Inventory Model for Multiple Demand Classes 1 A Sngle-Product Inventory Model for Multple Demand Classes Hasan Arslan, 2 Stephen C. Graves, 3 and Thomas Roemer 4 March 5, 2005 Abstract We consder a sngle-product nventory system that serves multple

More information

Still Simpler Way of Introducing Interior-Point method for Linear Programming

Still Simpler Way of Introducing Interior-Point method for Linear Programming Stll Smpler Way of Introducng Interor-Pont method for Lnear Programmng Sanjeev Saxena Dept. of Computer Scence and Engneerng, Indan Insttute of Technology, Kanpur, INDIA-08 06 October 9, 05 Abstract Lnear

More information

Likelihood Fits. Craig Blocker Brandeis August 23, 2004

Likelihood Fits. Craig Blocker Brandeis August 23, 2004 Lkelhood Fts Crag Blocker Brandes August 23, 2004 Outlne I. What s the queston? II. Lkelhood Bascs III. Mathematcal Propertes IV. Uncertantes on Parameters V. Mscellaneous VI. Goodness of Ft VII. Comparson

More information

ECONOMETRICS - FINAL EXAM, 3rd YEAR (GECO & GADE)

ECONOMETRICS - FINAL EXAM, 3rd YEAR (GECO & GADE) ECONOMETRICS - FINAL EXAM, 3rd YEAR (GECO & GADE) May 17, 2016 15:30 Frst famly name: Name: DNI/ID: Moble: Second famly Name: GECO/GADE: Instructor: E-mal: Queston 1 A B C Blank Queston 2 A B C Blank Queston

More information

Final Exam. 7. (10 points) Please state whether each of the following statements is true or false. No explanation needed.

Final Exam. 7. (10 points) Please state whether each of the following statements is true or false. No explanation needed. Fnal Exam Fall 4 Econ 8-67 Closed Book. Formula Sheet Provded. Calculators OK. Tme Allowed: hours Please wrte your answers on the page below each queston. (5 ponts) Assume that the rsk-free nterest rate

More information

EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY

EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY EXAMINATIONS OF THE HONG KONG STATISTICAL SOCIETY HIGHER CERTIFICATE IN STATISTICS, 2013 MODULE 7 : Tme seres and ndex numbers Tme allowed: One and a half hours Canddates should answer THREE questons.

More information

Quadratic Games. First version: February 24, 2017 This version: December 12, Abstract

Quadratic Games. First version: February 24, 2017 This version: December 12, Abstract Quadratc Games Ncolas S. Lambert Gorgo Martn Mchael Ostrovsky Frst verson: February 24, 2017 Ths verson: December 12, 2017 Abstract We study general quadratc games wth mult-dmensonal actons, stochastc

More information

Elements of Economic Analysis II Lecture VI: Industry Supply

Elements of Economic Analysis II Lecture VI: Industry Supply Elements of Economc Analyss II Lecture VI: Industry Supply Ka Hao Yang 10/12/2017 In the prevous lecture, we analyzed the frm s supply decson usng a set of smple graphcal analyses. In fact, the dscusson

More information

Comparative analysis of CDO pricing models

Comparative analysis of CDO pricing models Comparatve analyss of CDO prcng models ICBI Rsk Management 2005 Geneva 8 December 2005 Jean-Paul Laurent ISFA, Unversty of Lyon, Scentfc Consultant BNP Parbas laurent.jeanpaul@free.fr, http://laurent.jeanpaul.free.fr

More information

Discounted Cash Flow (DCF) Analysis: What s Wrong With It And How To Fix It

Discounted Cash Flow (DCF) Analysis: What s Wrong With It And How To Fix It Dscounted Cash Flow (DCF Analyss: What s Wrong Wth It And How To Fx It Arturo Cfuentes (* CREM Facultad de Economa y Negocos Unversdad de Chle June 2014 (* Jont effort wth Francsco Hawas; Depto. de Ingenera

More information

Global sensitivity analysis of credit risk portfolios

Global sensitivity analysis of credit risk portfolios Global senstvty analyss of credt rsk portfolos D. Baur, J. Carbon & F. Campolongo European Commsson, Jont Research Centre, Italy Abstract Ths paper proposes the use of global senstvty analyss to evaluate

More information

CHAPTER 9 FUNCTIONAL FORMS OF REGRESSION MODELS

CHAPTER 9 FUNCTIONAL FORMS OF REGRESSION MODELS CHAPTER 9 FUNCTIONAL FORMS OF REGRESSION MODELS QUESTIONS 9.1. (a) In a log-log model the dependent and all explanatory varables are n the logarthmc form. (b) In the log-ln model the dependent varable

More information

Introduction to game theory

Introduction to game theory Introducton to game theory Lectures n game theory ECON5210, Sprng 2009, Part 1 17.12.2008 G.B. Ashem, ECON5210-1 1 Overvew over lectures 1. Introducton to game theory 2. Modelng nteractve knowledge; equlbrum

More information

Economics 1410 Fall Section 7 Notes 1. Define the tax in a flexible way using T (z), where z is the income reported by the agent.

Economics 1410 Fall Section 7 Notes 1. Define the tax in a flexible way using T (z), where z is the income reported by the agent. Economcs 1410 Fall 2017 Harvard Unversty Yaan Al-Karableh Secton 7 Notes 1 I. The ncome taxaton problem Defne the tax n a flexble way usng T (), where s the ncome reported by the agent. Retenton functon:

More information

Data Mining Linear and Logistic Regression

Data Mining Linear and Logistic Regression 07/02/207 Data Mnng Lnear and Logstc Regresson Mchael L of 26 Regresson In statstcal modellng, regresson analyss s a statstcal process for estmatng the relatonshps among varables. Regresson models are

More information

Quadratic Games. First version: February 24, 2017 This version: August 3, Abstract

Quadratic Games. First version: February 24, 2017 This version: August 3, Abstract Quadratc Games Ncolas S. Lambert Gorgo Martn Mchael Ostrovsky Frst verson: February 24, 2017 Ths verson: August 3, 2018 Abstract We study general quadratc games wth multdmensonal actons, stochastc payoff

More information

Wages as Anti-Corruption Strategy: A Note

Wages as Anti-Corruption Strategy: A Note DISCUSSION PAPER November 200 No. 46 Wages as Ant-Corrupton Strategy: A Note by dek SAO Faculty of Economcs, Kyushu-Sangyo Unversty Wages as ant-corrupton strategy: A Note dek Sato Kyushu-Sangyo Unversty

More information

Solution of periodic review inventory model with general constrains

Solution of periodic review inventory model with general constrains Soluton of perodc revew nventory model wth general constrans Soluton of perodc revew nventory model wth general constrans Prof Dr J Benkő SZIU Gödöllő Summary Reasons for presence of nventory (stock of

More information

Scribe: Chris Berlind Date: Feb 1, 2010

Scribe: Chris Berlind Date: Feb 1, 2010 CS/CNS/EE 253: Advanced Topcs n Machne Learnng Topc: Dealng wth Partal Feedback #2 Lecturer: Danel Golovn Scrbe: Chrs Berlnd Date: Feb 1, 2010 8.1 Revew In the prevous lecture we began lookng at algorthms

More information

CHAPTER 3: BAYESIAN DECISION THEORY

CHAPTER 3: BAYESIAN DECISION THEORY CHATER 3: BAYESIAN DECISION THEORY Decson makng under uncertanty 3 rogrammng computers to make nference from data requres nterdscplnary knowledge from statstcs and computer scence Knowledge of statstcs

More information

Online Appendix for Merger Review for Markets with Buyer Power

Online Appendix for Merger Review for Markets with Buyer Power Onlne Appendx for Merger Revew for Markets wth Buyer Power Smon Loertscher Lesle M. Marx July 23, 2018 Introducton In ths appendx we extend the framework of Loertscher and Marx (forthcomng) to allow two

More information

Interval Estimation for a Linear Function of. Variances of Nonnormal Distributions. that Utilize the Kurtosis

Interval Estimation for a Linear Function of. Variances of Nonnormal Distributions. that Utilize the Kurtosis Appled Mathematcal Scences, Vol. 7, 013, no. 99, 4909-4918 HIKARI Ltd, www.m-hkar.com http://dx.do.org/10.1988/ams.013.37366 Interval Estmaton for a Lnear Functon of Varances of Nonnormal Dstrbutons that

More information

PREFERENCE DOMAINS AND THE MONOTONICITY OF CONDORCET EXTENSIONS

PREFERENCE DOMAINS AND THE MONOTONICITY OF CONDORCET EXTENSIONS PREFERECE DOMAIS AD THE MOOTOICITY OF CODORCET EXTESIOS PAUL J. HEALY AD MICHAEL PERESS ABSTRACT. An alternatve s a Condorcet wnner f t beats all other alternatves n a parwse majorty vote. A socal choce

More information

Lecture Note 2 Time Value of Money

Lecture Note 2 Time Value of Money Seg250 Management Prncples for Engneerng Managers Lecture ote 2 Tme Value of Money Department of Systems Engneerng and Engneerng Management The Chnese Unversty of Hong Kong Interest: The Cost of Money

More information

Foundations of Machine Learning II TP1: Entropy

Foundations of Machine Learning II TP1: Entropy Foundatons of Machne Learnng II TP1: Entropy Gullaume Charpat (Teacher) & Gaétan Marceau Caron (Scrbe) Problem 1 (Gbbs nequalty). Let p and q two probablty measures over a fnte alphabet X. Prove that KL(p

More information

iii) pay F P 0,T = S 0 e δt when stock has dividend yield δ.

iii) pay F P 0,T = S 0 e δt when stock has dividend yield δ. Fnal s Wed May 7, 12:50-2:50 You are allowed 15 sheets of notes and a calculator The fnal s cumulatve, so you should know everythng on the frst 4 revews Ths materal not on those revews 184) Suppose S t

More information

A Comparison of Statistical Methods in Interrupted Time Series Analysis to Estimate an Intervention Effect

A Comparison of Statistical Methods in Interrupted Time Series Analysis to Estimate an Intervention Effect Transport and Road Safety (TARS) Research Joanna Wang A Comparson of Statstcal Methods n Interrupted Tme Seres Analyss to Estmate an Interventon Effect Research Fellow at Transport & Road Safety (TARS)

More information

Skewness and kurtosis unbiased by Gaussian uncertainties

Skewness and kurtosis unbiased by Gaussian uncertainties Skewness and kurtoss unbased by Gaussan uncertantes Lorenzo Rmoldn Observatore astronomque de l Unversté de Genève, chemn des Mallettes 5, CH-9 Versox, Swtzerland ISDC Data Centre for Astrophyscs, Unversté

More information

Jean-Paul Murara, Västeras, 26-April Mälardalen University, Sweden. Pricing EO under 2-dim. B S PDE by. using the Crank-Nicolson Method

Jean-Paul Murara, Västeras, 26-April Mälardalen University, Sweden. Pricing EO under 2-dim. B S PDE by. using the Crank-Nicolson Method Prcng EO under Mälardalen Unversty, Sweden Västeras, 26-Aprl-2017 1 / 15 Outlne 1 2 3 2 / 15 Optons - contracts that gve to the holder the rght but not the oblgaton to buy/sell an asset sometmes n the

More information

- contrast so-called first-best outcome of Lindahl equilibrium with case of private provision through voluntary contributions of households

- contrast so-called first-best outcome of Lindahl equilibrium with case of private provision through voluntary contributions of households Prvate Provson - contrast so-called frst-best outcome of Lndahl equlbrum wth case of prvate provson through voluntary contrbutons of households - need to make an assumpton about how each household expects

More information

A Case Study for Optimal Dynamic Simulation Allocation in Ordinal Optimization 1

A Case Study for Optimal Dynamic Simulation Allocation in Ordinal Optimization 1 A Case Study for Optmal Dynamc Smulaton Allocaton n Ordnal Optmzaton Chun-Hung Chen, Dongha He, and Mchael Fu 4 Abstract Ordnal Optmzaton has emerged as an effcent technque for smulaton and optmzaton.

More information

Option pricing and numéraires

Option pricing and numéraires Opton prcng and numérares Daro Trevsan Unverstà degl Stud d Psa San Mnato - 15 September 2016 Overvew 1 What s a numerare? 2 Arrow-Debreu model Change of numerare change of measure 3 Contnuous tme Self-fnancng

More information

Evaluating Performance

Evaluating Performance 5 Chapter Evaluatng Performance In Ths Chapter Dollar-Weghted Rate of Return Tme-Weghted Rate of Return Income Rate of Return Prncpal Rate of Return Daly Returns MPT Statstcs 5- Measurng Rates of Return

More information

S yi a bx i cx yi a bx i cx 2 i =0. yi a bx i cx 2 i xi =0. yi a bx i cx 2 i x

S yi a bx i cx yi a bx i cx 2 i =0. yi a bx i cx 2 i xi =0. yi a bx i cx 2 i x LEAST-SQUARES FIT (Chapter 8) Ft the best straght lne (parabola, etc.) to a gven set of ponts. Ths wll be done by mnmzng the sum of squares of the vertcal dstances (called resduals) from the ponts to the

More information

Elton, Gruber, Brown and Goetzmann. Modern Portfolio Theory and Investment Analysis, 7th Edition. Solutions to Text Problems: Chapter 4

Elton, Gruber, Brown and Goetzmann. Modern Portfolio Theory and Investment Analysis, 7th Edition. Solutions to Text Problems: Chapter 4 Elton, Gruber, Brown and Goetzmann Modern ortfolo Theory and Investment Analyss, 7th Edton Solutons to Text roblems: Chapter 4 Chapter 4: roblem 1 A. Expected return s the sum of each outcome tmes ts assocated

More information

2) In the medium-run/long-run, a decrease in the budget deficit will produce:

2) In the medium-run/long-run, a decrease in the budget deficit will produce: 4.02 Quz 2 Solutons Fall 2004 Multple-Choce Questons ) Consder the wage-settng and prce-settng equatons we studed n class. Suppose the markup, µ, equals 0.25, and F(u,z) = -u. What s the natural rate of

More information

A New Uniform-based Resource Constrained Total Project Float Measure (U-RCTPF) Roni Levi. Research & Engineering, Haifa, Israel

A New Uniform-based Resource Constrained Total Project Float Measure (U-RCTPF) Roni Levi. Research & Engineering, Haifa, Israel Management Studes, August 2014, Vol. 2, No. 8, 533-540 do: 10.17265/2328-2185/2014.08.005 D DAVID PUBLISHING A New Unform-based Resource Constraned Total Project Float Measure (U-RCTPF) Ron Lev Research

More information

Sequential equilibria of asymmetric ascending auctions: the case of log-normal distributions 3

Sequential equilibria of asymmetric ascending auctions: the case of log-normal distributions 3 Sequental equlbra of asymmetrc ascendng auctons: the case of log-normal dstrbutons 3 Robert Wlson Busness School, Stanford Unversty, Stanford, CA 94305-505, USA Receved: ; revsed verson. Summary: The sequental

More information

A HEURISTIC SOLUTION OF MULTI-ITEM SINGLE LEVEL CAPACITATED DYNAMIC LOT-SIZING PROBLEM

A HEURISTIC SOLUTION OF MULTI-ITEM SINGLE LEVEL CAPACITATED DYNAMIC LOT-SIZING PROBLEM A eurstc Soluton of Mult-Item Sngle Level Capactated Dynamc Lot-Szng Problem A EUISTIC SOLUTIO OF MULTI-ITEM SIGLE LEVEL CAPACITATED DYAMIC LOT-SIZIG POBLEM Sultana Parveen Department of Industral and

More information

Alternatives to Shewhart Charts

Alternatives to Shewhart Charts Alternatves to Shewhart Charts CUSUM & EWMA S Wongsa Overvew Revstng Shewhart Control Charts Cumulatve Sum (CUSUM) Control Chart Eponentally Weghted Movng Average (EWMA) Control Chart 2 Revstng Shewhart

More information

Taxation and Externalities. - Much recent discussion of policy towards externalities, e.g., global warming debate/kyoto

Taxation and Externalities. - Much recent discussion of policy towards externalities, e.g., global warming debate/kyoto Taxaton and Externaltes - Much recent dscusson of polcy towards externaltes, e.g., global warmng debate/kyoto - Increasng share of tax revenue from envronmental taxaton 6 percent n OECD - Envronmental

More information