Performant Value at Risk bounds using the Extended Rearrangement Algorithm


FACULTY OF ENGINEERING

Performant Value at Risk bounds using the Extended Rearrangement Algorithm

Kristof Verbeken

Promoter: Prof. Dr. Steven Vanduffel
Co-promoter: Prof. Dr. Bart Jansen

Thesis submitted in partial fulfilment of the requirements for the Master of Science in Applied Sciences and Engineering: Applied Computer Science

Academic year

Performant Value at Risk bounds using the Extended Rearrangement Algorithm

Kristof Verbeken, Master of Science in Applied Sciences and Engineering: Applied Computer Science

Keywords: VaR bounds, Rearrangement Algorithm, Extended Rearrangement Algorithm.

Value at Risk (VaR) bounds for portfolios of dependent risks that appeared in the recent literature are hard to compute for large portfolios (n > 50) and do not take the information on dependence into account. The objective of this thesis is twofold: to improve the performance of existing algorithms so that VaR bounds can be calculated for larger portfolios, and to tighten the VaR bounds by introducing a variance constraint. We calculate the variance-constrained bounds by implementing the Extended Rearrangement Algorithm in Java. We show that there is a tremendous performance improvement which allows us to deal with much larger portfolios (n > 2500). The variance-constrained bounds typically improve upon the unconstrained ones and make it possible for regulators to have a more realistic view on the worst-case VaR.

Foreword

After obtaining a bachelor's degree in Business Engineering, I was intrigued by quantitative and computational finance. Consequently, I decided to pursue a master's degree in Applied Sciences and Engineering: Applied Computer Science, which has been extremely gratifying. This choice led me to pursue a three-month internship at a bank, a certificate in quantitative finance and to co-found a student association about finance. None of these things would have been possible without professor Steven Vanduffel, to whom I am forever indebted. Thanks also go to professor Bart Jansen for his advice on optimizing the performance and to professor Kris Boudt for the extensive feedback. Finally, I would like to express my utmost gratitude towards my girlfriend Katrien Wera and my parents Monique Van Eyck and Herman Verbeken for their love and support over this hectic year.

Table of Contents

Foreword 3
Introduction 6
1 Value at Risk
  1.1 Defining Value at Risk
  1.2 Criticism on Value at Risk
    1.2.1 Fitting the unthinkable into a statistical model
    1.2.2 VaR has no tail information
    1.2.3 VaR is not a coherent risk measure
  1.3 Expected Shortfall as the new and improved VaR
2 VaR aggregation with unknown dependence
  2.1 VaR bounds without dependence information
    2.1.1 Comonotonicity as a worst-case dependence structure
    2.1.2 Unconstrained VaR bounds
    2.1.3 The concept of complete mixability
    2.1.4 Sharpness of unconstrained VaR bounds
  2.2 Variance constrained VaR bounds on dependent risks
3 Rearrangement Algorithm (RA)
  3.1 Computing numerically sharp bounds
  3.2 Implementing the Rearrangement Algorithm in Java
    3.2.1 Matrix optimized for RA
    3.2.2 Implementing the algorithm
    3.2.3 Performance analysis and optimization
4 Defining a heterogeneous portfolio
  4.1 The included probability distributions
    4.1.1 Normal distribution
    4.1.2 Pareto type II distribution
    4.1.3 Lognormal distribution
    4.1.4 Gamma distribution
    4.1.5 Exponential distribution
    4.1.6 Weibull distribution
    4.1.7 Student's t-distribution
  4.2 Constructing the mixed portfolio
5 Extended Rearrangement Algorithm (ERA)
  5.1 Introducing the algorithm
  5.2 Implementing the algorithm in Java
  5.3 Optimizing the implementation for performance
6 Results
  6.1 Bounds for homogeneous portfolios
    6.1.1 Impact of a variance constraint
    6.1.2 Impact of portfolio size n
    6.1.3 Impact of discretization points d
    6.1.4 Impact of confidence level q
    6.1.5 Impact of correlation ρ
  6.2 Bounds for heterogeneous portfolios
    6.2.1 Performance analysis
    6.2.2 Impact of a variance constraint
    6.2.3 Impact of confidence level q
Conclusion 65
List of figures 66
List of tables 67
List of code listings 68
References 70

Introduction

The financial sector has known better times. Given the recent crisis, financial regulators are tightening the rules by which banks have to play in order to ensure financial stability. Financial risk management uses risk measures to regulate the financial sector. Value at Risk (VaR) is the most widely used risk measure; it serves as an indicator of a bank's risk profile and can be benchmarked against industry averages or historical data.

Each risk has a marginal distribution which describes the stand-alone risk. In this thesis, we assume that the marginal distributions and the variance of the portfolio sum are known. Risks can be dependent upon each other, and the variance of the portfolio sum provides information on the dependence structure between the risks. When no information about the dependence between risks is assumed, multiple VaRs are possible. In other words, the VaR cannot be determined precisely due to the lack of dependence information. Banks and insurers are particularly interested in the worst-case VaR, also known as the upper VaR bound, because this is the basis for setting capital requirements. VaR bounds are the best- and worst-case values of the VaR. This information is of significant interest in credit risk portfolio models, risk aggregation and solvency calculations.

Recent literature has dealt with the problem of finding bounds on the VaR of portfolios with known marginal distributions. The Rearrangement Algorithm (RA) by Puccetti and Rüschendorf [1] and Embrechts, Puccetti and Rüschendorf [2] is of significant importance to this thesis because it provides a numerical estimation for unconstrained bounds and is reused in the Extended Rearrangement Algorithm (ERA). As an extension, the ERA calculates variance-constrained bounds. The first objective of this thesis is to analyse whether variance-constrained bounds are actually an improvement over unconstrained bounds. A Java implementation of the Extended Rearrangement Algorithm is used to calculate variance-constrained

bounds. We show that the introduction of variance-constrained bounds offers a more realistic view on the worst-case VaR to both the industry and regulators. The second objective is to increase the performance of the Rearrangement Algorithm in order to allow for simulations of larger portfolios. A Java implementation of the algorithms is developed from scratch with performance in mind. As a result, we observe a tremendous improvement in performance over implementations of the algorithm in the literature, so that larger portfolios can be estimated.

The first chapter introduces Value at Risk, one of the most popular risk measures in financial risk management. Value at Risk is explained by a simple example; its mathematical definition is given and illustrated graphically. Some fundamental flaws of Value at Risk are reviewed and Expected Shortfall is introduced as its main contender, ready to replace Value at Risk.

In the next chapter, we consider the concept of VaR bounds: the upper and lower limits for the Value at Risk of a portfolio of dependent risks. Some fundamental concepts like comonotonicity and complete mixability are defined so that they can be reused later. The unconstrained VaR bounds A and B are defined. A variance constraint on the sum of the risks is then added to improve the bounds, which are shown to be an improvement over the unconstrained bounds.

In the third chapter, the Rearrangement Algorithm is used to compute numerically sharp bounds. The algorithm itself is given and explained by example. The actual implementation of the Rearrangement Algorithm in Java is discussed in detail, along with the performance optimizations that are made. These performance optimizations cause the Java implementation to outperform any existing implementation of the Rearrangement Algorithm by orders of magnitude.

As this thesis attempts to estimate realistic VaR bounds, the concept of heterogeneous portfolios is established in chapter four. A heterogeneous portfolio consists of risks with different marginal distributions. Three heterogeneous portfolios are constructed out of multiple probability distributions. These include the more traditional

distributions like Normal and Pareto, but also more exotic distributions like Weibull and Gamma. Bounds are calculated for all portfolios and their results are discussed in the final chapter.

The Extended Rearrangement Algorithm by Bernard et al. [3] is introduced in the fifth chapter. It calculates variance-constrained Value at Risk bounds by applying the normal Rearrangement Algorithm on two parts of a matrix of risks. The algorithm is implemented in Java and optimized for performance by making use of performance-optimized libraries and by slightly altering the algorithm. These optimizations result in a tremendous performance increase.

In the final chapter, the results are discussed. The bounds are compared for different values of the critical parameters. The impact of the portfolio size, the number of discretization points, the confidence level and the correlation is analyzed for both homogeneous and heterogeneous portfolios. Upon comparison, it is shown that the variance-constrained bounds are significantly tighter than the unconstrained bounds. This results in a more realistic worst-case VaR that can be used by regulators and the industry.

1 Value at Risk

How much is there to lose on an investment? That is the question financial risk management is trying to answer. Financial risk management is the practice of measuring the exposure to risk using risk measures. These measures can be benchmarked against industry averages or historical data. The global economy relies heavily on risk measures to regulate the financial sector.

A risk measure ρ is a mapping from the set of risks to the real numbers. In other words, it summarizes the information inherent to a risk X into a number ρ(X). The Value at Risk (VaR) is one example of a risk measure, as is the Expected Shortfall (see section 1.3). Value at Risk is one of the most used risk measures in financial risk management. Its popularity and simplicity make it the current industry standard [4]. Partly due to its simplicity, VaR has some fundamental flaws which will be discussed. Expected Shortfall (ES) will be introduced as an alternative risk measure which deals with some of the issues Value at Risk suffers from.

1.1 Defining Value at Risk

Value at Risk has gained a lot of traction over time. In 1980, the Securities and Exchange Commission linked capital requirements to the losses that would be incurred at a 95% confidence level over an entire month. By 1990, most banks were using a rudimentary implementation of VaR internally to monitor capital risk. J.P. Morgan released public data about the variances and covariances it used internally to calculate the risks. This allowed for the development of independent risk measures by the RiskMetrics group. Since then, the usage of VaR has skyrocketed. Its use is now enforced in the Basel Accords and the Solvency Directives, which are regulations developed to ensure financial stability. The Basel Accords require a

very high confidence level (q = 99.5%); the consequences of this requirement will be criticized later [5].

Value at Risk measures the potential exposure of a risky asset X over a period of time for a confidence level q. A 95% one-day VaR of €100 means that there is a 5% chance that the asset will lose more than €100 in a day. Typical values for the confidence level q are 95% or even 99.5%, corresponding to 95% and 99.5% of the area under the density curve, respectively.

Definition 1.1 (Value at Risk) Given a confidence level q ∈ (0, 1) and a risk X with distribution F_X, VaR_q(X) is the smallest loss such that the probability that X is larger than VaR_q(X) is at most (1 − q). Here X is the loss of the asset and VaR_q(X) is the q-quantile of the marginal distribution F_X [6]:

VaR_q(X) = inf{x ∈ ℝ : F_X(x) ≥ q}

In order to calculate the Value at Risk analytically, the marginal distribution F_X of each risk is required. This thesis assumes the marginal distribution is known; this assumption is discussed in more detail in section 1.2. The marginal risks can typically be estimated through risk factors for expected loss. In the context of credit risk, the marginal distribution of a given risk is determined by the following factors:

- Loss Given Default (LGD): percentage of exposure that will be lost in case of default.
- Probability of Default (PD): chance of the borrower defaulting.
- Exposure At Default (EAD): amount the bank is exposed to when the borrower defaults.

While VaR can be used for single risks, it is especially useful to indicate the potential loss for a portfolio comprising multiple risks. Calculating the worst possible VaR for a portfolio of dependent risks is discussed in detail in section 2.
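Definition 1.1 can be made concrete with a small numerical sketch. The class and method names below are illustrative and not part of the thesis implementation; the method returns the smallest sample loss at which the empirical CDF reaches q, mirroring VaR_q(X) = inf{x ∈ ℝ : F_X(x) ≥ q}.

```java
import java.util.Arrays;

// Illustrative sketch (not the thesis code): VaR_q as the empirical
// q-quantile of a discretized loss sample.
public class VaRDemo {
    // Smallest sample loss x such that the empirical CDF F(x) >= q.
    static double valueAtRisk(double[] losses, double q) {
        double[] sorted = losses.clone();
        Arrays.sort(sorted);
        int index = (int) Math.ceil(q * sorted.length) - 1; // 0-based q-quantile position
        return sorted[Math.max(index, 0)];
    }

    public static void main(String[] args) {
        double[] losses = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
        // 95% of the sample lies at or below the reported loss.
        System.out.println("95% VaR = " + valueAtRisk(losses, 0.95));
    }
}
```

With ten equally likely losses, the 95% VaR is the largest sample point, since the empirical CDF only reaches 0.95 at the final observation.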

Figure 1: Graphical representation of VaR. The area left of the red line represents 5% of the total probability mass; this chart therefore represents the VaR at the 95% level.

1.2 Criticism on Value at Risk

Heavy is the head that wears the crown. Value at Risk is the most used risk measure in financial risk management and therefore also the most criticized. No risk measure is perfect, and no risk measure ever will be. There are three main reasons VaR is criticized: it relies on a distribution function for unknown risks, it provides no information about the tail (see section 1.2.2) and it is not a coherent risk measure (see section 1.2.3). Each of these reasons will be discussed in detail in this section.

1.2.1 Fitting the unthinkable into a statistical model

"VaR is charlatanism because it tries to estimate something that is not scientifically possible to estimate, namely the risks of rare events." — Nassim Taleb [7]

According to its critics, VaR is easy to understand but dangerous when misunderstood. Nassim Taleb, bestselling author of The Black Swan, claims that Value at Risk gives false confidence because it estimates risks of rare events, which is impossible. The definition of VaR (definition 1.1) is based on a given marginal distribution (F_X). This

assumption is heavily contested by VaR critics, who claim that some events will have undefined losses which cannot follow a distribution function. Modeling rare events is very hard, if at all possible. As there is no real alternative, it is best to make the models as good as possible by taking this into account and choosing the distribution functions accordingly (see section 1.2.2).

1.2.2 VaR has no tail information

"VaR is an airbag that works all the time, except when you have a car accident." — David Einhorn [8]

The recent financial crisis has increased the awareness of unlikely events with a big impact. In an attempt to include these events in risk measures, academics use probability distributions with heavy tails. Heavy tails give extra weight to very unlikely events, which reside in the tail of a distribution. Heavy-tailed distributions like Pareto or Lognormal are favored over the traditional normal distribution. The tails of heavy-tailed distributions are not exponentially bounded [9]. This is illustrated in figure 2, where the Pareto distribution is not exponentially bounded, while the normal distribution, which is not heavy-tailed, is. Note that this shift is not enough to silence the critics: it is still parametric VaR and still assumes unlikely events can be modeled. This thesis assumes that each risk can be approximated by a known marginal distribution. In order to reduce the problem even further, a mixed portfolio consisting of multiple heavy-tailed marginal risks is introduced.

Figure 1 shows that VaR only captures the q-quantile point; no information about the tail beyond that quantile point is used. For this reason it is recommended to use other risk measures alongside VaR, or to inspect the distributions themselves. Increasing the confidence level to nearly 100%, like regulators do, is not the solution either, due to the asymptotic nature of VaR. The impact of a high confidence level

is discussed in the results chapter.

Figure 2: Heavy-tailed distributions like Pareto are not exponentially bounded. On the contrary, the normal distribution is exponentially bounded.

1.2.3 VaR is not a coherent risk measure

A coherent risk measure has the following properties [4]:

- Homogeneity: changing the size of a portfolio by a factor, while keeping the relative amounts of different items in the portfolio the same, should result in the risk measure being multiplied by that factor.
- Monotonicity: if a portfolio has lower returns than another portfolio for every state of the world, its risk measure should be greater.
- Subadditivity: the risk measure of the sum of two risks cannot be greater than the sum of both individual risk measures.
- Translation invariance: if we add an amount of cash to a portfolio, its risk measure should go down by that amount.

VaR conforms to all of the above properties, except for subadditivity. Mathematically, subadditivity can be expressed as follows [10].

Definition 1.2 (Subadditivity) The risk measure of the sum of two risks is smaller than or equal to the sum of the risk measures of the individual risks, i.e., for a risk

measure ρ one has that

ρ(X_1 + X_2) ≤ ρ(X_1) + ρ(X_2)

Intuitively, the lack of subadditivity can be explained using the following example. The average bank office is robbed about once every ten years, so a single-office bank has roughly a 0.03% chance of being robbed on a specific day; the risk of robbery would not figure into its one-day VaR. It would not even be within an order of magnitude of the VaR, so it is in the range where the institution should not worry about it. As institutions get more offices, the risk of a robbery on a specific day rises to within an order of magnitude of VaR. For a very large banking institution, robberies are a daily routine: losses are part of the daily VaR calculation and tracked statistically rather than case-by-case [11].

1.3 Expected Shortfall as the new and improved VaR

Expected Shortfall (ES) is the expected return of the portfolio in the worst cases. Contrary to VaR, it is sensitive to the shape of the distribution in the tail. This definition can be mathematically formulated as follows (see Harmantzis et al. [12]).

Definition 1.3 (Expected Shortfall) Given q ∈ (0, 1) and S = Σ_{i=1}^n X_i with X_i ∼ F_i, then

ES_q(S) = (1 / (1 − q)) ∫_q^1 VaR_α(S) dα

ES improves upon VaR by being a coherent risk measure and by providing more information about the tail. On top of that, ES is a more conservative risk measure than VaR: for any q ∈ (0, 1), it can be shown that

ES_q(X) ≥ VaR_q(X). (1)
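Inequality (1) can be verified on a discretized sample. The sketch below uses illustrative names and an illustrative convention (the empirical tail is the worst (1 − q) fraction of an equally weighted sorted sample); it shows that the tail average dominates the quantile.

```java
import java.util.Arrays;

// Illustrative sketch: empirical VaR_q and ES_q from an equally weighted
// sample, demonstrating ES_q(X) >= VaR_q(X) (inequality (1)).
public class EsVsVaR {
    static double varQ(double[] losses, double q) {
        double[] s = losses.clone();
        Arrays.sort(s);
        int idx = (int) Math.ceil(q * s.length) - 1; // q-quantile position
        return s[Math.max(idx, 0)];
    }

    // Average of the worst (1 - q) fraction of the sample (the tail average).
    static double esQ(double[] losses, double q) {
        double[] s = losses.clone();
        Arrays.sort(s);
        int from = (int) Math.round(q * s.length);
        double sum = 0;
        for (int i = from; i < s.length; i++) sum += s[i];
        return sum / (s.length - from);
    }

    public static void main(String[] args) {
        double[] losses = {1, 2, 3, 4, 5, 6, 7, 8, 9, 100}; // one extreme tail loss
        System.out.println("VaR_0.9 = " + varQ(losses, 0.9)); // ignores the extreme point
        System.out.println("ES_0.9  = " + esQ(losses, 0.9));  // averages over the tail
    }
}
```

On this sample the 90% VaR is blind to the extreme loss of 100, while ES picks it up: exactly the tail-sensitivity argument made above.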

There is no lack of subadditivity since all information of the tail is included in the average. In the previous example, every robbery would be included in the average of the tail, whereas the VaR only captures the exposure at the point where the tail starts [13]. Like any other risk measure, ES has its own deficiencies. Finite expectations are necessary to compute the Expected Shortfall, which is not always the case for operational risk. Despite its deficiencies, Expected Shortfall seems to be the replacement for Value at Risk: it is more conservative, adds tail information and is a coherent risk measure.

To conclude this chapter, it can be said that Value at Risk is a very popular and simple risk measure. This simplicity comes at a cost: no tail information is given and Value at Risk is not a coherent risk measure. It is therefore essential to treat VaR as an indicator alongside other risk measures. Expected Shortfall exists as an alternative; it is similar to Value at Risk but provides more information about the tail. In the next chapter, Value at Risk aggregation will be discussed when the risks are dependent.
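The coherence discussion of section 1.2.3 can also be made concrete in code. The numbers below are illustrative (two independent loss distributions, each losing 100 with probability 0.04): each risk alone has a 95% VaR of zero, yet their independent sum does not, so VaR violates subadditivity.

```java
// Illustrative counterexample (hypothetical numbers): VaR is not subadditive.
public class SubadditivityDemo {
    // VaR_q of a discrete loss: smallest value v with P(X <= v) >= q.
    // Values must be sorted ascending, probs are the matching probabilities.
    static double varQ(double[] values, double[] probs, double q) {
        double cum = 0;
        for (int i = 0; i < values.length; i++) {
            cum += probs[i];
            if (cum >= q) return values[i];
        }
        return values[values.length - 1];
    }

    public static void main(String[] args) {
        double q = 0.95;
        // Marginal: loss 0 w.p. 0.96, loss 100 w.p. 0.04.
        double varSingle = varQ(new double[]{0, 100}, new double[]{0.96, 0.04}, q);
        // Independent sum: 0 w.p. 0.96^2, 100 w.p. 2*0.96*0.04, 200 w.p. 0.04^2.
        double varSum = varQ(new double[]{0, 100, 200},
                             new double[]{0.9216, 0.0768, 0.0016}, q);
        System.out.println("VaR(X1) + VaR(X2) = " + 2 * varSingle); // 0.0
        System.out.println("VaR(X1 + X2)      = " + varSum);        // 100.0
    }
}
```

Individually, P(X_i = 0) = 0.96 ≥ 0.95, so each VaR is 0; for the sum, P(S = 0) = 0.9216 < 0.95, so the VaR jumps to 100 and exceeds the sum of the individual VaRs.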

2 VaR aggregation with unknown dependence

What's the Value at Risk of the entire portfolio? A common question in risk management without an easy answer. When the model is not specified precisely, it is impossible to calculate the exact VaR for a portfolio of risks that can be dependent upon each other. In this chapter, upper and lower bounds for the VaR of such portfolios will be introduced. The concept of VaR bounds is introduced together with comonotonicity and complete mixability. Bounds are calculated for the unconstrained case and their sharpness is analyzed. Next, the impact of a variance constraint on the total portfolio sum is discussed.

2.1 VaR bounds without dependence information

This section deals with the VaR of a portfolio containing dependent risks, without having information about their dependence. It is impossible to calculate the Value at Risk without any dependence information; VaR bounds give upper and lower limits to the values that the VaR of the portfolio can attain. Consider a portfolio of n risks X_i, i = 1, ..., n, with known marginal distributions F_i, i.e., X_i ∼ F_i. No assumptions are made on the dependence structure of the portfolio. The sum of all risks is defined as S = Σ_{i=1}^n X_i. The lower and upper limits are respectively called the best-case and worst-case VaR. In search of these bounds, the concepts of comonotonicity and complete mixability are introduced. Using these concepts, upper and lower bounds are defined and their attainability is discussed.

2.1.1 Comonotonicity as a worst-case dependence structure

It is often assumed that the worst-case VaR occurs when the risks are comonotonic. Comonotonicity means that the risks have maximal correlation, which occurs when all risks are increasing in each other. In this section it will be shown that sums of comonotonic risks do not exhibit maximum VaR; see also [14] and [15]. By definition of Expected Shortfall (definition 1.3), we have inequality (1). As discussed in section 1.3, Expected Shortfall is known to be a subadditive risk measure (definition 1.2), which means that it is maximal in case of comonotonic risks:

ES_q(S) ≤ ES_q(S^c) (2)

In the next section, the worst-case VaR will be deduced from the previous two equations. A best-case VaR will be discussed as well.

2.1.2 Unconstrained VaR bounds

It is of particular interest for risk management to find a dependence structure for the worst possible Value at Risk of the portfolio sum S. In the previous section (2.1.1), it was shown that the worst possible VaR can be greater than the comonotonic VaR. In this section extreme VaR bounds will be introduced. The upper bound for VaR can be deduced from equations (1) and (2). The upper VaR bound B in the unconstrained case is defined as the Expected Shortfall of the sum of comonotonic risks S^c, given a confidence level q ∈ (0, 1):

B = Σ_{i=1}^n ES_q(X_i) = ES_q(S^c) (3)

By the same analogy, it is possible to define the lower VaR bound as the average of the VaRs on the interval (0, q). This average is called the Left Tail VaR:

A = LTVaR_q(S^c) = Σ_{i=1}^n LTVaR_q(X_i) = (1/q) ∫_0^q VaR_u(S^c) du. (4)
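Equations (3) and (4) admit a direct discretized sketch: if each marginal is represented by d equally likely points, ES_q is the average of the points above the q-level and LTVaR_q the average below it, and the bounds sum these over the marginals. Names and the discretization scheme below are illustrative assumptions, not the thesis implementation.

```java
// Sketch of the unconstrained bounds (3) and (4): B sums the upper-tail
// averages (ES_q) of the marginals, A the lower-tail averages (LTVaR_q).
// Each marginal is given as d equally likely, sorted discretization points.
public class UnconstrainedBounds {
    // Average of the quantiles above (ES) or below (LTVaR) level q.
    static double tailAverage(double[] sortedQuantiles, double q, boolean upper) {
        int d = sortedQuantiles.length;
        int cut = (int) Math.round(q * d);        // index separating the two tails
        int from = upper ? cut : 0, to = upper ? d : cut;
        double sum = 0;
        for (int i = from; i < to; i++) sum += sortedQuantiles[i];
        return sum / (to - from);
    }

    public static void main(String[] args) {
        double q = 0.8;
        // Two identical marginals, each discretized in 10 points (illustrative).
        double[] f = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        double B = 2 * tailAverage(f, q, true);   // upper bound, eq. (3)
        double A = 2 * tailAverage(f, q, false);  // lower bound, eq. (4)
        System.out.println("A = " + A + ", B = " + B);
    }
}
```

Both bounds only need the marginal quantiles, which is exactly why theorem 2.1 below can state that A and B are computable directly from the marginal distributions.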

Equations (3) and (4) can be synthesized into one statement in theorem 2.1.

Theorem 2.1 (Unconstrained bounds) Let q ∈ (0, 1), X_i ∼ F_i (i = 1, 2, ..., n), and S = Σ_{i=1}^n X_i. Then,

A ≤ VaR_q(S) ≤ B.

The bounds A and B are given in explicit form in equations (3) and (4). They can be computed directly from the marginal distributions. The bounds are also valid when the individual risks do not have the same distribution, as is the case with heterogeneous portfolios. This contrasts with earlier results in the literature (see e.g. Puccetti and Rüschendorf [1], [16] and Embrechts et al. [2]), in which the bounds are typically more difficult to compute and are not always available for heterogeneous portfolios. In the next section, the concept of complete mixability will be introduced. Verifying the attainability or sharpness of the bounds A and B relies heavily on this concept.

2.1.3 The concept of complete mixability

The concept of complete mixability is highly useful for the calculation of VaR bounds and in some other areas of financial risk management [17] [18]. Mathematically, complete mixability can be defined as follows.

Definition 2.2 (Complete mixability) A distribution function F on ℝ is n-completely mixable (n-CM) if there exist n random variables X_1, ..., X_n identically distributed as F such that

P(X_1 + ... + X_n = c) = 1

for some constant c ∈ ℝ. Any vector (X_1, ..., X_n) with X_i ∼ F, 1 ≤ i ≤ n, is called an n-complete mix.

The above definition is demonstrated in the following example. Each row represents a portfolio with five risks. The top row is the sum of the other rows. The initial

matrix has comonotonic risks, which has the maximal variance of the row sums. The RA is applied on the portfolio of comonotonic risks to flatten the distribution of the sums and minimize the variance.

Figure 3: Initial matrix with a portfolio of comonotonic risks.

This matrix can be transformed into a 3-complete mix where the variance of the row sums is constant. Note that not all univariate distributions are n-CM.

Figure 4: Complete mix of the initial matrix. The variance of the sum is now 0.

Complete mixability will return in the next section, where the sharpness of bounds A and B will be discussed. The Rearrangement Algorithm from section 3 makes use of this concept as well.

2.1.4 Sharpness of unconstrained VaR bounds

Value at Risk bounds are sharp when they can be attained. Because they can possibly occur, sharp bounds on the VaR cannot be improved. Therefore, sharpness can be used as an indicator of the quality of the bound. In the following theorem (theorem 2.3 in [3]), the sharpness criteria of the lower and upper bound are specified.

Theorem 2.3 (Sharpness of the unconstrained bounds) Let X_i ∼ F_i with X_i = f_i(U) for a standard uniform random variable U, 1 ≤ i ≤ n, and let S = Σ_{i=1}^n X_i be the portfolio sum. Then:

The upper bound B is attained by S if and only if the two following conditions are satisfied:

- The f_i are rearrangements of F_i^{-1} on [q, 1], 1 ≤ i ≤ n.
- X_1, X_2, ..., X_n are mixing on {U ≥ q}, i.e., Σ_{i=1}^n f_i(u) = c almost surely on [q, 1] for some c ∈ ℝ.

The lower bound A is attained by S if and only if the two following conditions are satisfied:

- The f_i are rearrangements of F_i^{-1} on [0, q[, 1 ≤ i ≤ n.
- X_1, X_2, ..., X_n are mixing on {U < q}, i.e., Σ_{i=1}^n f_i(u) = c almost surely on [0, q[ for some c ∈ ℝ.

The upper bound B is sharp and attained if and only if the quantile function of S takes the constant value B on [q, 1]. Hence, the worst-case dependence structure for the VaR depends on the probability level q and involves some negative dependence in the upper part of the portfolio (to render the quantile function of S constant). The same reasoning applies to the lower bound A as well. By contrast, there is no unique worst-case dependence structure for the Tail VaR or Left Tail VaR. One possible TVaR worst-case dependence structure appears when the risks are comonotonic, but there are many others. The next section adds a variance constraint to the unconstrained VaR bounds in order to tighten them.

2.2 Variance constrained VaR bounds on dependent risks

Adding a variance constraint as a source of dependence information is highly relevant, as in many practical situations historical data on observed portfolio losses can

be used to estimate the variance of the portfolio sum. We first give an example that provides some intuition as to how we can deal with the constrained problem. The lower bound m for the VaR given a variance constraint on the sum is defined as follows:

m(s^2) = inf VaR_q(X) (5)
subject to X_j ∼ F_j, var(X) ≤ s^2. (6)

The upper bound M for the VaR, with a variance constraint s^2, is denoted as follows:

M(s^2) = sup VaR⁺_q(X) (7)
subject to X_j ∼ F_j, var(X) ≤ s^2. (8)

Bounds with an infinite variance, m(∞) and M(∞), are written as m and M for simplicity. New bounds are proposed based on the unconstrained bounds A and B to take the variance constraint on the sum S into account [3]. X* is defined as a random variable that takes two possible values, corresponding to the bounds A and B that we derived in theorem 2.1:

X* = A with probability q, B with probability (1 − q).

In the presence of an additional variance constraint on the portfolio sum, A and B (as in theorem 2.1) are still bounds for VaR_q(S), and they may still be attained, in which case they are the best possible. For example, assume that the lowest value that S takes is A, with probability q. In this case, S has minimum variance if S equals X* in distribution, and thus A may be attained, depending on the value of s^2. Similarly, the upper bound for the Value at Risk (with confidence q) is reached when B is the largest value that S can take (with probability 1 − q). In this case, X* already satisfies the variance constraint. Consequently, no improvement can be realized by adding a variance constraint.
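The attainability check just described can be sketched numerically: compute the mean µ and variance of the two-point variable X* and compare the variance with s^2. All input values below are illustrative, not taken from the thesis.

```java
// Sketch: does the two-point variable X* (value A w.p. q, B w.p. 1-q)
// already satisfy the variance constraint var(X*) <= s^2?
// A, B, q and s2 are illustrative inputs.
public class TwoPointCheck {
    static double mean(double A, double B, double q) {
        return q * A + (1 - q) * B;
    }

    static double variance(double A, double B, double q) {
        double mu = mean(A, B, q);
        return q * (A - mu) * (A - mu) + (1 - q) * (B - mu) * (B - mu);
    }

    public static void main(String[] args) {
        double A = 9.0, B = 19.0, q = 0.8, s2 = 4.0;
        double v = variance(A, B, q);
        System.out.println("var(X*) = " + v);
        // If var(X*) > s^2, tighter bounds a and b are possible in general.
        System.out.println("constraint satisfied: " + (v <= s2));
    }
}
```

With these illustrative numbers var(X*) = 16 > s^2 = 4, so the constraint bites and the tightened bounds of the next paragraph apply.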

However, if the constraint is not satisfied (var(X*) > s^2), tighter bounds are possible in general. It is then intuitive that better bounds can be found by constructing a variable Y as follows:

Y = a with probability q, b with probability (1 − q).

Bounds a and b are tighter than A and B; they are chosen so that the variance constraint is satisfied. Note that a two-point distribution taking value µ − s√((1 − q)/q) with probability q and value µ + s√(q/(1 − q)) with probability (1 − q) is the only two-point distribution with mean µ and variance s^2. Bounds a and b are thus defined as follows:

a = max(µ − s√((1 − q)/q), A), (9)
b = min(µ + s√(q/(1 − q)), B). (10)

The following theorem shows that the construction as outlined above gives bounds on the Value at Risk in the presence of a variance constraint, as shown by Bernard et al. [3].

Theorem 2.4 (Constrained bounds) Let q ∈ (0, 1), X_i ∼ F_i (i = 1, 2, ..., n), and let S = Σ_{i=1}^n X_i satisfy var(S) ≤ s^2. Then we have

a ≤ m ≤ VaR_q(S) ≤ M ≤ b.

In particular, if s^2 ≥ q(A − µ)^2 + (1 − q)(B − µ)^2, then a = A and b = B (and the unconstrained bounds are not improved by the presence of the constraint on the variance).

If s^2 is not too large (i.e., when s^2 < q(A − µ)^2 + (1 − q)(B − µ)^2), the constrained bounds strictly outperform the unconstrained bounds. For example, when the risks (e.g., in a life insurance context) are approximately independent, bounds a and b will strictly improve upon A and B for moderate portfolio sizes. When all risks are

distributed identically, the bounds A and B grow linearly with the size of the portfolio (as illustrated in section 6.1.2). However, as the standard deviation of a portfolio sum is subadditive, the condition s² ≥ q(A − µ)² + (1 − q)(B − µ)² becomes more difficult to satisfy, meaning that it becomes more likely that bounds a and b are better than A and B. This will be discussed in more detail in section 6, where the bounds are analyzed for different correlation levels.

When the model is not precisely specified, which is inherently more realistic, calculating the exact Value at Risk of a portfolio is impossible. VaR bounds give upper and lower limits on the VaR, and adding a variance constraint can tighten these bounds significantly, giving the industry and regulators a more realistic view on the worst-case VaR of a portfolio.

The next section discusses the Rearrangement Algorithm of Puccetti and Rüschendorf [1] and Embrechts et al. [2]. This algorithm is used to flatten the distribution of the sums and makes up an important part of the Extended Rearrangement Algorithm (see section 5).
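The constrained bounds of equations (9) and (10) are cheap to evaluate once the unconstrained bounds A and B, the mean µ and the standard deviation budget s are known. A minimal sketch, with an illustrative class name and sample values that are not taken from the thesis:

```java
public class ConstrainedBounds {

    // a = max(mu - s * sqrt((1 - q) / q), A), equation (9)
    static double lower(double A, double mu, double s, double q) {
        return Math.max(mu - s * Math.sqrt((1 - q) / q), A);
    }

    // b = min(mu + s * sqrt(q / (1 - q)), B), equation (10)
    static double upper(double B, double mu, double s, double q) {
        return Math.min(mu + s * Math.sqrt(q / (1 - q)), B);
    }

    public static void main(String[] args) {
        // Illustrative values, not from the thesis.
        double A = -10, B = 50, mu = 0, s = 2, q = 0.95;
        System.out.println("a = " + lower(A, mu, s, q));
        System.out.println("b = " + upper(B, mu, s, q));
    }
}
```

When s is small, the two-point candidates µ ± s·√(…) bind and the bounds tighten; when s is large, the max/min fall back to the unconstrained A and B.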

3 Rearrangement Algorithm (RA)

Now that the concept of VaR bounds has been introduced and their mathematical properties discussed, the next step is to actually compute the bounds. In this chapter a Java implementation of the Rearrangement Algorithm of Puccetti and Rüschendorf [1] and Embrechts et al. [2] will be developed. Relying heavily on the concepts of complete mixability and comonotonicity (as discussed in section 2.1.1), the RA provides a numerical way to calculate sharp bounds. First the goal of the algorithm is introduced, followed by the algorithm itself, which is explained using an example. The actual implementation in Java, together with the performance optimizations, is discussed next. Finally, some remarks and conclusions are drawn about the algorithm.

3.1 Computing numerically sharp bounds

The Rearrangement Algorithm is designed to minimize the dependence between the risks X_j with j = 1, ..., n. In other words, it makes the distribution of the sum of all risks, S = Σⁿ_{i=1} X_i, as flat as possible. Note that the RA requires discretely distributed variables. The original algorithm uses columns for risks and rows for discretizations. Because of the way matrices are implemented in Java, the algorithm was slightly altered to work with a tilted matrix. This, and other performance optimizations, will be discussed later in this chapter. The algorithm works on a matrix of n risks that are discretized in d points. As a result, the matrix X has dimension n × d. Initially, the matrix is filled as follows, with i = 1, ..., n and j = 1, ..., d:

x_ij = F_i⁻¹(j / (d + 1))    (11)

The resulting matrix has every row sorted in increasing order, so the risks are maximally dependent upon each other, i.e. comonotonic. Because the goal is to minimize the dependence between the risks, any rearrangement within the rows can only decrease this dependence. The first step of the algorithm is therefore to randomize each row. The reason for the randomisation is the Central Limit Theorem: given a sufficiently large number of risks, every column sum will approximately follow the same normal distribution, so the column sums will already be roughly equal, which is exactly what the algorithm aims for [19]. The vast majority of the work is done in this first step, especially when X is large. The Rearrangement Algorithm consists of the following steps:

1. Randomly permute the elements in each row of X.
2. Iteratively rearrange the j-th row of the matrix X so that it becomes oppositely ordered to the sum of the other rows, for 1 ≤ j ≤ n.
3. Repeat step 2 until no further changes occur, that is, until a matrix X is found with each row oppositely ordered to the sum of the others.

Now that the Rearrangement Algorithm is introduced, it will be illustrated with a simple example. The initial matrix X contains three comonotonically dependent risks with five discretization points; the column sums are tracked in the first row of the matrix. With each risk discretized as the values 1, ..., 5, the initial matrix and its column sums are:

Risk 1:  1  2  3  4  5
Risk 2:  1  2  3  4  5
Risk 3:  1  2  3  4  5
Sums:    3  6  9 12 15

The first step is to shuffle each row.
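The flatness that the algorithm targets can be measured as the variance of the column sums. The sketch below computes it for the comonotonic starting matrix of this example, assuming each risk is discretized as the values 1, ..., 5 (an assumption consistent with the initial variance of 18 reported in the example):

```java
public class SumVariance {

    // Population variance of the column sums of a risk matrix
    // (rows = risks, columns = discretization points).
    static double sumVariance(double[][] x) {
        int d = x[0].length;
        double[] sums = new double[d];
        for (double[] row : x)
            for (int j = 0; j < d; j++)
                sums[j] += row[j];
        double mean = 0;
        for (double s : sums) mean += s;
        mean /= d;
        double var = 0;
        for (double s : sums) var += (s - mean) * (s - mean);
        return var / d;
    }

    public static void main(String[] args) {
        // Assumed reconstruction of the example matrix: three comonotonic
        // risks, each discretized as 1..5; column sums are 3, 6, 9, 12, 15.
        double[][] x = {
            {1, 2, 3, 4, 5},
            {1, 2, 3, 4, 5},
            {1, 2, 3, 4, 5}
        };
        System.out.println(sumVariance(x)); // prints 18.0
    }
}
```

A perfectly flat sum, as in a completely mixable matrix, yields a variance of zero.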

After shuffling, the sum of the columns is already a lot flatter than it initially was: the variance decreased from 18 to 2.4. The initial matrix had the worst possible dependence structure, and the randomisation alone flattens out the sums because of the Central Limit Theorem (see section 3.1). Next, the first row is rearranged so that it is oppositely ordered to the sum of the other rows. Notice that the sum of the columns is flatter still, even after applying the RA to only one row: the variance of the sum drops to 0.4. The same procedure is then applied to the second row, excluding the sum row.

Figure 5: Applying the Rearrangement Algorithm on a comonotonic portfolio effectively flattens the distribution of the sum. (The figure compares the column sums before and after the RA.)
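A single rearrangement step, making one row oppositely ordered to the sum of the other rows, can be sketched on a plain double[][] as follows. This is a simplified stand-alone version, not the SumMatrix-based implementation of section 3.2:

```java
import java.util.Arrays;
import java.util.Comparator;

public class OppositeOrder {

    // Rearrange row j of x so that it becomes oppositely ordered to the
    // column-wise sum of the other rows (one step of the RA).
    static void oppositeOrder(double[][] x, int j) {
        int d = x[0].length;
        double[] others = new double[d];
        for (int i = 0; i < x.length; i++)
            if (i != j)
                for (int c = 0; c < d; c++)
                    others[c] += x[i][c];
        // Column indices sorted by the other-row sums, descending.
        Integer[] order = new Integer[d];
        for (int c = 0; c < d; c++) order[c] = c;
        Arrays.sort(order, Comparator.comparingDouble(c -> -others[c]));
        // Pair the smallest row values with the largest sums.
        double[] sorted = x[j].clone();
        Arrays.sort(sorted);
        for (int k = 0; k < d; k++)
            x[j][order[k]] = sorted[k];
    }

    public static void main(String[] args) {
        double[][] x = {{1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}};
        oppositeOrder(x, 0);
        System.out.println(Arrays.toString(x[0])); // prints [5.0, 4.0, 3.0, 2.0, 1.0]
    }
}
```

Applied to the first row of the comonotonic example matrix, the step reverses the row, pairing its largest values with the columns whose remaining sums are smallest.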

The sum of the columns is now constant and the variance of the sum is zero. There is no more need to run the algorithm: a solution has been found. Figure 5 illustrates that the Rearrangement Algorithm makes the distribution of the sum as flat as possible.

3.2 Implementing the Rearrangement Algorithm in Java

In this section, the actual implementation of the RA in Java is discussed. A dedicated matrix class was developed to deal with the specific needs of the RA. Next, the actual algorithm is implemented, and finally the performance is analyzed and optimized.

3.2.1 Matrix optimized for RA

As shown in the example above, the RA constantly needs the column sums (tracked in the sum row) in order to perform the sorting. The RA also relies heavily on swapping elements within a column, meaning it needs access to columns very often. For these two reasons, it was worth the effort to define a new matrix class in Java which keeps track of the sums of the elements. The matrix is also tilted, a feature implemented purely for performance reasons; the performance will be discussed later in this chapter. Note that while this section assumes prior knowledge of Java, the discussion should be accessible to non-programmers as well. For a quick overview of the optimized matrix, listing 1 lists all the methods in the class. Most are self-explanatory; others will be discussed in more detail later in this chapter. Keep in mind that the matrix is tilted for performance: the first row is reserved for the sums, and the rows under it contain the actual data. A few methods are worth discussing in more detail, because they are either optimized for performance or extensively used in the algorithm.

Listing 1. Blueprint of the SumMatrix class

SumMatrix
+ get(int, int)
+ getAverageSum()
+ getCols()
+ getData()
+ getMatrix()
+ getRow(int)
+ getRows()
+ getSubMatrix(int, int)
+ getSumForCol(int)
+ getSumRow()
+ getSumsFromCol(int)
+ getSumVariance()
+ getVariance(double)
+ set(int, int, double)
+ setRow(int, double[])
+ setSumForColumnWithoutRow(int, int)
+ setSums()
+ setSumsWithoutRow(int)
+ sort()
+ swapCols(int)

The first method conforming to both criteria is the sort method. It sorts the entire matrix on the basis of the sums (the first row), see listing 2. When one sum is larger than another, the entire columns are swapped, not just the sums themselves.

Listing 2. Sort the matrix on its sums

public void sort() {
    Comparator<double[]> compare = new Comparator<double[]>() {
        public int compare(double[] o1, double[] o2) {
            return Double.compare(o2[0], o1[0]);
        }
    };
    double[][] data = matrix.transpose().getData();
    Arrays.sort(data, compare);
    matrix = MatrixUtils.createRealMatrix(data).transpose();
}

Note that Java sorts such an array using the merge sort algorithm, as introduced by von

Neumann [20]. Merge sort is a divide-and-conquer algorithm that sorts the array in O(n log n). Listing 3 gives another method of great importance in the RA: setting the sums for all columns while ignoring the current row. In the algorithm, this method is used to sort the matrix oppositely to the current row.

Listing 3. Set the sums of the columns while ignoring a given row

public void setSumsWithoutRow(int rowToIgnore) {
    for (int col = 0; col < d; col++) {
        double sum = 0;
        for (int row = 1; row <= n; row++)
            if (row != rowToIgnore + 1)
                sum += matrix.getEntry(row, col);
        matrix.setEntry(0, col, sum);
    }
}

Now that the data structure SumMatrix is introduced, it is time to implement the actual Rearrangement Algorithm.

3.2.2 Implementing the algorithm

The actual implementation of the Rearrangement Algorithm consists of the steps described in the algorithm itself. The first step is to randomly permute the elements in each row of X. In the Java implementation, the shuffling is done by applying the Fisher–Yates shuffle to each row, see listing 4. The Fisher–Yates shuffle is an in-place shuffle algorithm that runs in linear time. The reader is referred to a visualization by Bostock [21].
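Collections.shuffle, as used in listing 4, implements exactly such a Fisher–Yates pass, but it requires a boxed Double[]. An explicit version on a primitive double[] (a sketch, not part of the thesis implementation) avoids the boxing:

```java
import java.util.Random;

public class Shuffle {

    // In-place Fisher-Yates shuffle: runs in O(n) and makes every
    // permutation of the array equally likely.
    static void fisherYates(double[] a, Random rnd) {
        for (int i = a.length - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1); // uniform index in [0, i]
            double tmp = a[i];
            a[i] = a[j];
            a[j] = tmp;
        }
    }

    public static void main(String[] args) {
        double[] row = {1, 2, 3, 4, 5};
        fisherYates(row, new Random());
        System.out.println(java.util.Arrays.toString(row));
    }
}
```

Since the shuffle only moves elements around, the multiset of values in the row, and therefore its marginal distribution, is unchanged.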

Listing 4. Shuffle method in the RA class

private void shuffleMatrix() {
    for (int row = 0; row < matrix.getRows(); row++) {
        Double[] currentRow = ArrayUtils.toObject(matrix.getRow(row));
        /* Fisher-Yates shuffle */
        Collections.shuffle(Arrays.asList(currentRow));
        /* Convert the shuffled array back to a primitive array */
        double[] shuffled = ArrayUtils.toPrimitive(currentRow);
        matrix.setRow(row, shuffled);
    }
}

Now that the matrix is shuffled, it is time to perform the actual rearrangements in order to make the sum as flat as possible.

Listing 5. Rearrange method in the RA class

private void rearrange() {
    for (int row = 0; row < matrix.getRows(); row++) {
        /* Save the current row in a double array */
        double[] currentRow = matrix.getRow(row);
        /* Set the sums of the other rows */
        matrix.setSumsWithoutRow(row);
        /* Sort the matrix on those sums, descending */
        matrix.sort();
        /* Sort the current row ascending */
        Arrays.sort(currentRow);
        /* Plug the current row back in */
        matrix.setRow(row, currentRow);
    }
    /* Recalculate the sums */
    matrix.setSums();
}

In the rearrange method shown in listing 5, a copy of the current row is temporarily stored. The sums are set over all other rows, after which the matrix is sorted on those sums. Meanwhile, the temporary copy of the current row is sorted in the opposite direction. Finally, this sorted copy is plugged back into the matrix and the same procedure is performed for the next row. Note that, as it stands, the rearrange method only runs once over all rows, without any guarantee that a local optimum is found. For this reason, the rearrange method is wrapped in a loop with stopping criteria. These stopping criteria will be discussed in the next section, together with the overall performance of the implementation.

3.2.3 Performance analysis and optimization

This section covers the performance optimizations of the implementation of the Rearrangement Algorithm in Java. As performance optimization is inherently technical, non-technical readers may prefer to skip this section. The most significant optimization is to slightly alter the algorithm by tilting the matrix. In realistic scenarios, there are far more discretization points than risks. By default, this would result in a very narrow but tall matrix. However, the operation to get a column from a matrix is O(n), while getting a row is performed in O(1). Therefore, it is optimal in the case of the RA to have long rows and short columns.

Listing 6. Comparator to sort the matrix based on its first row

Comparator<double[]> compareSum = new Comparator<double[]>() {
    public int compare(double[] o1, double[] o2) {
        return Double.compare(o2[0], o1[0]);
    }
};

A side effect of this tilt is that there is no default comparator available in Java to sort the matrix on the first row. By default, the matrix can only be sorted based on

a column. To circumvent this issue, a new comparator can be written that sorts the matrix based on the values of the first row.

Another optimization, as discussed earlier, is to wrap the rearrange method in a loop with stopping criteria. The loop runs while the variance is improving, i.e. while the new variance is lower than the old one. Allowing a small margin of error in this comparison made the RA run two iterations fewer on average; this optimization mainly prevents rounding errors from triggering extra iterations.

Listing 7. Start the Rearrangement Algorithm

private void startRA() {
    /* Shuffle each row first to increase performance */
    shuffleMatrix();
    /* Set the sum for each column */
    matrix.setSums();
    while (varianceIsImproved()) {
        oldVariance = matrix.getSumVariance();
        rearrange();
        newVariance = matrix.getSumVariance();
    }
}

The Rearrangement Algorithm makes the distribution of the sum as flat as possible. In listing 7, the rearrange method is placed in a loop that rearranges the matrix until almost no further improvement is made. The Java implementation is optimized so that it can be used on large portfolios, as will be demonstrated in section 6. The next section introduces the concept of heterogeneous portfolios, which contain risks with different marginal distributions.

4 Defining a heterogeneous portfolio

In a realistic scenario, risks follow different probability distributions. Intuitively, this can be the case for different financial instruments such as stocks and life insurance contracts. In section 6, VaR bounds are calculated for heterogeneous portfolios and compared with the bounds of homogeneous portfolios. First, all distributions that are included in the mixed portfolio are discussed in detail. Next, three heterogeneous portfolios are constructed. Each portfolio gives different weights to the distributions and has different characteristics.

4.1 The included probability distributions

Seven different probability distributions are included in the mixed portfolios. Each distribution is discussed in detail by defining its mean, variance and probability density function.

4.1.1 Normal distribution

Figure 6: Normal distribution with µ = 0 and σ = (0.5, 1, 2).

The normal or Gaussian distribution is the most widely used probability distribution in almost any branch of science. Like all distributions discussed in this thesis, it is a continuous distribution. The popularity of the normal distribution can partly be explained by the Central Limit Theorem (CLT). Briefly, the CLT states that the sum of many independently drawn random variables will approximately follow a normal distribution [19]. The normal distribution has the following probability density function with mean µ and standard deviation σ [22]:

Probability density function (pdf):  f(x; µ, σ) = 1 / (σ√(2π)) · exp(−(x − µ)² / (2σ²))
Mean:  E[X] = µ
Variance:  Var[X] = σ²

A special case of the normal distribution is the standard normal distribution, with mean µ = 0 and variance σ² = 1. For the mixed portfolio, a standard normal distribution will be used. In the next section, the Pareto distribution is discussed.

4.1.2 Pareto type II distribution

The Pareto distribution is a popular power-law probability distribution. A power law states that one quantity varies as a power of another; the best-known example is the 80–20 rule. There are multiple types of Pareto distribution. For the mixed portfolio, a type II Pareto distribution is chosen because of its heavy tail. This type of Pareto distribution is also referred to as a Lomax distribution.
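The density above is straightforward to evaluate; at its mode x = µ it equals 1/(σ√(2π)), roughly 0.3989 for the standard normal. A small sketch (the class name is illustrative):

```java
public class NormalPdf {

    // f(x; mu, sigma) = exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))
    static double pdf(double x, double mu, double sigma) {
        double z = (x - mu) / sigma;
        return Math.exp(-0.5 * z * z) / (sigma * Math.sqrt(2 * Math.PI));
    }

    public static void main(String[] args) {
        // Standard normal, as used in the mixed portfolio.
        System.out.println(pdf(0, 0, 1)); // ~0.3989
    }
}
```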

Figure 7: Pareto distribution with λ = 1 and α = (2, 3, 4, 5).

Given the scale parameter λ and the shape parameter α, the Pareto type II distribution is defined as follows [23]:

Probability density function (pdf):  f(x; λ, α) = (α/λ) · (1 + x/λ)^−(α+1)
Mean (for α > 1):  E[X] = λ / (α − 1)
Variance (for α > 2):  Var[X] = λ²α / ((α − 1)²(α − 2))

For the mixed portfolio, the smallest integer shape parameter α for which both the mean and the variance are finite is chosen. The Lomax distribution with α = 3 and λ = 1 is included in the mixed portfolio. Next, the lognormal probability distribution is introduced and its parameters are chosen for the mixed portfolio.

4.1.3 Lognormal distribution

If the logarithm of a random variable X is normally distributed, then X is lognormally distributed. The variable X could represent the compound return

from a sequence of many trades (each expressed as its return + 1); or a long-term discount factor can be derived from the product of short-term discount factors.

Figure 8: Lognormal distribution with µ = 0 and σ = (0.25, 0.5, 1).

The density, mean and variance of the distribution are defined as follows [24]:

Probability density function (pdf):  f(x; µ, σ) = 1 / (xσ√(2π)) · exp(−(log x − µ)² / (2σ²))
Mean:  E[X] = exp(µ + σ²/2)
Variance:  Var[X] = (exp(σ²) − 1) · exp(2µ + σ²)

As for the normal distribution, the parameters chosen for the mixed portfolio are µ = 0 and σ = 1. In the next section, the Gamma distribution is introduced, its properties are discussed and its parameters are chosen.

4.1.4 Gamma distribution

The Gamma distribution is frequently used in economics to model waiting times; in the case of VaR, this might be for life insurance contracts. The shape parameter k and scale


More information

Chapter 5 Univariate time-series analysis. () Chapter 5 Univariate time-series analysis 1 / 29

Chapter 5 Univariate time-series analysis. () Chapter 5 Univariate time-series analysis 1 / 29 Chapter 5 Univariate time-series analysis () Chapter 5 Univariate time-series analysis 1 / 29 Time-Series Time-series is a sequence fx 1, x 2,..., x T g or fx t g, t = 1,..., T, where t is an index denoting

More information

ME3620. Theory of Engineering Experimentation. Spring Chapter III. Random Variables and Probability Distributions.

ME3620. Theory of Engineering Experimentation. Spring Chapter III. Random Variables and Probability Distributions. ME3620 Theory of Engineering Experimentation Chapter III. Random Variables and Probability Distributions Chapter III 1 3.2 Random Variables In an experiment, a measurement is usually denoted by a variable

More information

**BEGINNING OF EXAMINATION** A random sample of five observations from a population is:

**BEGINNING OF EXAMINATION** A random sample of five observations from a population is: **BEGINNING OF EXAMINATION** 1. You are given: (i) A random sample of five observations from a population is: 0.2 0.7 0.9 1.1 1.3 (ii) You use the Kolmogorov-Smirnov test for testing the null hypothesis,

More information

Market Risk Analysis Volume IV. Value-at-Risk Models

Market Risk Analysis Volume IV. Value-at-Risk Models Market Risk Analysis Volume IV Value-at-Risk Models Carol Alexander John Wiley & Sons, Ltd List of Figures List of Tables List of Examples Foreword Preface to Volume IV xiii xvi xxi xxv xxix IV.l Value

More information

An Improved Skewness Measure

An Improved Skewness Measure An Improved Skewness Measure Richard A. Groeneveld Professor Emeritus, Department of Statistics Iowa State University ragroeneveld@valley.net Glen Meeden School of Statistics University of Minnesota Minneapolis,

More information

Pricing and risk of financial products

Pricing and risk of financial products and risk of financial products Prof. Dr. Christian Weiß Riga, 27.02.2018 Observations AAA bonds are typically regarded as risk-free investment. Only examples: Government bonds of Australia, Canada, Denmark,

More information

Slides for Risk Management

Slides for Risk Management Slides for Risk Management Introduction to the modeling of assets Groll Seminar für Finanzökonometrie Prof. Mittnik, PhD Groll (Seminar für Finanzökonometrie) Slides for Risk Management Prof. Mittnik,

More information

Yao s Minimax Principle

Yao s Minimax Principle Complexity of algorithms The complexity of an algorithm is usually measured with respect to the size of the input, where size may for example refer to the length of a binary word describing the input,

More information

Probability Weighted Moments. Andrew Smith

Probability Weighted Moments. Andrew Smith Probability Weighted Moments Andrew Smith andrewdsmith8@deloitte.co.uk 28 November 2014 Introduction If I asked you to summarise a data set, or fit a distribution You d probably calculate the mean and

More information

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2012, Mr. Ruey S. Tsay. Solutions to Final Exam

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2012, Mr. Ruey S. Tsay. Solutions to Final Exam The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2012, Mr. Ruey S. Tsay Solutions to Final Exam Problem A: (40 points) Answer briefly the following questions. 1. Consider

More information

The risk/return trade-off has been a

The risk/return trade-off has been a Efficient Risk/Return Frontiers for Credit Risk HELMUT MAUSSER AND DAN ROSEN HELMUT MAUSSER is a mathematician at Algorithmics Inc. in Toronto, Canada. DAN ROSEN is the director of research at Algorithmics

More information

Risk, Coherency and Cooperative Game

Risk, Coherency and Cooperative Game Risk, Coherency and Cooperative Game Haijun Li lih@math.wsu.edu Department of Mathematics Washington State University Tokyo, June 2015 Haijun Li Risk, Coherency and Cooperative Game Tokyo, June 2015 1

More information

Stress testing of credit portfolios in light- and heavy-tailed models

Stress testing of credit portfolios in light- and heavy-tailed models Stress testing of credit portfolios in light- and heavy-tailed models M. Kalkbrener and N. Packham July 10, 2014 Abstract As, in light of the recent financial crises, stress tests have become an integral

More information

Optimizing S-shaped utility and risk management

Optimizing S-shaped utility and risk management Optimizing S-shaped utility and risk management Ineffectiveness of VaR and ES constraints John Armstrong (KCL), Damiano Brigo (Imperial) Quant Summit March 2018 Are ES constraints effective against rogue

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning Monte Carlo Methods Mark Schmidt University of British Columbia Winter 2018 Last Time: Markov Chains We can use Markov chains for density estimation, p(x) = p(x 1 ) }{{} d p(x

More information

Chapter 4: Commonly Used Distributions. Statistics for Engineers and Scientists Fourth Edition William Navidi

Chapter 4: Commonly Used Distributions. Statistics for Engineers and Scientists Fourth Edition William Navidi Chapter 4: Commonly Used Distributions Statistics for Engineers and Scientists Fourth Edition William Navidi 2014 by Education. This is proprietary material solely for authorized instructor use. Not authorized

More information

Random Variables and Probability Distributions

Random Variables and Probability Distributions Chapter 3 Random Variables and Probability Distributions Chapter Three Random Variables and Probability Distributions 3. Introduction An event is defined as the possible outcome of an experiment. In engineering

More information

The Optimization Process: An example of portfolio optimization

The Optimization Process: An example of portfolio optimization ISyE 6669: Deterministic Optimization The Optimization Process: An example of portfolio optimization Shabbir Ahmed Fall 2002 1 Introduction Optimization can be roughly defined as a quantitative approach

More information

Value at Risk Risk Management in Practice. Nikolett Gyori (Morgan Stanley, Internal Audit) September 26, 2017

Value at Risk Risk Management in Practice. Nikolett Gyori (Morgan Stanley, Internal Audit) September 26, 2017 Value at Risk Risk Management in Practice Nikolett Gyori (Morgan Stanley, Internal Audit) September 26, 2017 Overview Value at Risk: the Wake of the Beast Stop-loss Limits Value at Risk: What is VaR? Value

More information

Chapter 5. Continuous Random Variables and Probability Distributions. 5.1 Continuous Random Variables

Chapter 5. Continuous Random Variables and Probability Distributions. 5.1 Continuous Random Variables Chapter 5 Continuous Random Variables and Probability Distributions 5.1 Continuous Random Variables 1 2CHAPTER 5. CONTINUOUS RANDOM VARIABLES AND PROBABILITY DISTRIBUTIONS Probability Distributions Probability

More information

Value at Risk. january used when assessing capital and solvency requirements and pricing risk transfer opportunities.

Value at Risk. january used when assessing capital and solvency requirements and pricing risk transfer opportunities. january 2014 AIRCURRENTS: Modeling Fundamentals: Evaluating Edited by Sara Gambrill Editor s Note: Senior Vice President David Lalonde and Risk Consultant Alissa Legenza describe various risk measures

More information

Financial Mathematics III Theory summary

Financial Mathematics III Theory summary Financial Mathematics III Theory summary Table of Contents Lecture 1... 7 1. State the objective of modern portfolio theory... 7 2. Define the return of an asset... 7 3. How is expected return defined?...

More information

Risk measures: Yet another search of a holy grail

Risk measures: Yet another search of a holy grail Risk measures: Yet another search of a holy grail Dirk Tasche Financial Services Authority 1 dirk.tasche@gmx.net Mathematics of Financial Risk Management Isaac Newton Institute for Mathematical Sciences

More information

Lecture IV Portfolio management: Efficient portfolios. Introduction to Finance Mathematics Fall Financial mathematics

Lecture IV Portfolio management: Efficient portfolios. Introduction to Finance Mathematics Fall Financial mathematics Lecture IV Portfolio management: Efficient portfolios. Introduction to Finance Mathematics Fall 2014 Reduce the risk, one asset Let us warm up by doing an exercise. We consider an investment with σ 1 =

More information

Asymptotic results discrete time martingales and stochastic algorithms

Asymptotic results discrete time martingales and stochastic algorithms Asymptotic results discrete time martingales and stochastic algorithms Bernard Bercu Bordeaux University, France IFCAM Summer School Bangalore, India, July 2015 Bernard Bercu Asymptotic results for discrete

More information

Financial Risk Measurement/Management

Financial Risk Measurement/Management 550.446 Financial Risk Measurement/Management Week of September 23, 2013 Interest Rate Risk & Value at Risk (VaR) 3.1 Where we are Last week: Introduction continued; Insurance company and Investment company

More information

MODELLING OPTIMAL HEDGE RATIO IN THE PRESENCE OF FUNDING RISK

MODELLING OPTIMAL HEDGE RATIO IN THE PRESENCE OF FUNDING RISK MODELLING OPTIMAL HEDGE RATIO IN THE PRESENCE O UNDING RISK Barbara Dömötör Department of inance Corvinus University of Budapest 193, Budapest, Hungary E-mail: barbara.domotor@uni-corvinus.hu KEYWORDS

More information

Dependence Modeling and Credit Risk

Dependence Modeling and Credit Risk Dependence Modeling and Credit Risk Paola Mosconi Banca IMI Bocconi University, 20/04/2015 Paola Mosconi Lecture 6 1 / 53 Disclaimer The opinion expressed here are solely those of the author and do not

More information

Lecture 7: Bayesian approach to MAB - Gittins index

Lecture 7: Bayesian approach to MAB - Gittins index Advanced Topics in Machine Learning and Algorithmic Game Theory Lecture 7: Bayesian approach to MAB - Gittins index Lecturer: Yishay Mansour Scribe: Mariano Schain 7.1 Introduction In the Bayesian approach

More information

CPSC 540: Machine Learning

CPSC 540: Machine Learning CPSC 540: Machine Learning Monte Carlo Methods Mark Schmidt University of British Columbia Winter 2019 Last Time: Markov Chains We can use Markov chains for density estimation, d p(x) = p(x 1 ) p(x }{{}

More information

Alternative VaR Models

Alternative VaR Models Alternative VaR Models Neil Roeth, Senior Risk Developer, TFG Financial Systems. 15 th July 2015 Abstract We describe a variety of VaR models in terms of their key attributes and differences, e.g., parametric

More information

ADVANCED OPERATIONAL RISK MODELLING IN BANKS AND INSURANCE COMPANIES

ADVANCED OPERATIONAL RISK MODELLING IN BANKS AND INSURANCE COMPANIES Small business banking and financing: a global perspective Cagliari, 25-26 May 2007 ADVANCED OPERATIONAL RISK MODELLING IN BANKS AND INSURANCE COMPANIES C. Angela, R. Bisignani, G. Masala, M. Micocci 1

More information

VaR vs CVaR in Risk Management and Optimization

VaR vs CVaR in Risk Management and Optimization VaR vs CVaR in Risk Management and Optimization Stan Uryasev Joint presentation with Sergey Sarykalin, Gaia Serraino and Konstantin Kalinchenko Risk Management and Financial Engineering Lab, University

More information

Point Estimation. Stat 4570/5570 Material from Devore s book (Ed 8), and Cengage

Point Estimation. Stat 4570/5570 Material from Devore s book (Ed 8), and Cengage 6 Point Estimation Stat 4570/5570 Material from Devore s book (Ed 8), and Cengage Point Estimation Statistical inference: directed toward conclusions about one or more parameters. We will use the generic

More information

Rebalancing the Simon Fraser University s Academic Pension Plan s Balanced Fund: A Case Study

Rebalancing the Simon Fraser University s Academic Pension Plan s Balanced Fund: A Case Study Rebalancing the Simon Fraser University s Academic Pension Plan s Balanced Fund: A Case Study by Yingshuo Wang Bachelor of Science, Beijing Jiaotong University, 2011 Jing Ren Bachelor of Science, Shandong

More information

Chapter 8: Sampling distributions of estimators Sections

Chapter 8: Sampling distributions of estimators Sections Chapter 8 continued Chapter 8: Sampling distributions of estimators Sections 8.1 Sampling distribution of a statistic 8.2 The Chi-square distributions 8.3 Joint Distribution of the sample mean and sample

More information

INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY. Lecture -5 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc.

INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY. Lecture -5 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc. INDIAN INSTITUTE OF SCIENCE STOCHASTIC HYDROLOGY Lecture -5 Course Instructor : Prof. P. P. MUJUMDAR Department of Civil Engg., IISc. Summary of the previous lecture Moments of a distribubon Measures of

More information

Financial Risk Forecasting Chapter 9 Extreme Value Theory

Financial Risk Forecasting Chapter 9 Extreme Value Theory Financial Risk Forecasting Chapter 9 Extreme Value Theory Jon Danielsson 2017 London School of Economics To accompany Financial Risk Forecasting www.financialriskforecasting.com Published by Wiley 2011

More information

Optimal retention for a stop-loss reinsurance with incomplete information

Optimal retention for a stop-loss reinsurance with incomplete information Optimal retention for a stop-loss reinsurance with incomplete information Xiang Hu 1 Hailiang Yang 2 Lianzeng Zhang 3 1,3 Department of Risk Management and Insurance, Nankai University Weijin Road, Tianjin,

More information

Asymmetric Information: Walrasian Equilibria, and Rational Expectations Equilibria

Asymmetric Information: Walrasian Equilibria, and Rational Expectations Equilibria Asymmetric Information: Walrasian Equilibria and Rational Expectations Equilibria 1 Basic Setup Two periods: 0 and 1 One riskless asset with interest rate r One risky asset which pays a normally distributed

More information

Asymptotic methods in risk management. Advances in Financial Mathematics

Asymptotic methods in risk management. Advances in Financial Mathematics Asymptotic methods in risk management Peter Tankov Based on joint work with A. Gulisashvili Advances in Financial Mathematics Paris, January 7 10, 2014 Peter Tankov (Université Paris Diderot) Asymptotic

More information

Distortion operator of uncertainty claim pricing using weibull distortion operator

Distortion operator of uncertainty claim pricing using weibull distortion operator ISSN: 2455-216X Impact Factor: RJIF 5.12 www.allnationaljournal.com Volume 4; Issue 3; September 2018; Page No. 25-30 Distortion operator of uncertainty claim pricing using weibull distortion operator

More information

Lecture 10: Performance measures

Lecture 10: Performance measures Lecture 10: Performance measures Prof. Dr. Svetlozar Rachev Institute for Statistics and Mathematical Economics University of Karlsruhe Portfolio and Asset Liability Management Summer Semester 2008 Prof.

More information

How to Consider Risk Demystifying Monte Carlo Risk Analysis

How to Consider Risk Demystifying Monte Carlo Risk Analysis How to Consider Risk Demystifying Monte Carlo Risk Analysis James W. Richardson Regents Professor Senior Faculty Fellow Co-Director, Agricultural and Food Policy Center Department of Agricultural Economics

More information

Master s in Financial Engineering Foundations of Buy-Side Finance: Quantitative Risk and Portfolio Management. > Teaching > Courses

Master s in Financial Engineering Foundations of Buy-Side Finance: Quantitative Risk and Portfolio Management.  > Teaching > Courses Master s in Financial Engineering Foundations of Buy-Side Finance: Quantitative Risk and Portfolio Management www.symmys.com > Teaching > Courses Spring 2008, Monday 7:10 pm 9:30 pm, Room 303 Attilio Meucci

More information

Multiple Objective Asset Allocation for Retirees Using Simulation

Multiple Objective Asset Allocation for Retirees Using Simulation Multiple Objective Asset Allocation for Retirees Using Simulation Kailan Shang and Lingyan Jiang The asset portfolios of retirees serve many purposes. Retirees may need them to provide stable cash flow

More information

Statistics 431 Spring 2007 P. Shaman. Preliminaries

Statistics 431 Spring 2007 P. Shaman. Preliminaries Statistics 4 Spring 007 P. Shaman The Binomial Distribution Preliminaries A binomial experiment is defined by the following conditions: A sequence of n trials is conducted, with each trial having two possible

More information

The Statistical Mechanics of Financial Markets

The Statistical Mechanics of Financial Markets The Statistical Mechanics of Financial Markets Johannes Voit 2011 johannes.voit (at) ekit.com Overview 1. Why statistical physicists care about financial markets 2. The standard model - its achievements

More information

ECON Micro Foundations

ECON Micro Foundations ECON 302 - Micro Foundations Michael Bar September 13, 2016 Contents 1 Consumer s Choice 2 1.1 Preferences.................................... 2 1.2 Budget Constraint................................ 3

More information

Lecture 2: Fundamentals of meanvariance

Lecture 2: Fundamentals of meanvariance Lecture 2: Fundamentals of meanvariance analysis Prof. Massimo Guidolin Portfolio Management Second Term 2018 Outline and objectives Mean-variance and efficient frontiers: logical meaning o Guidolin-Pedio,

More information

1 Residual life for gamma and Weibull distributions

1 Residual life for gamma and Weibull distributions Supplement to Tail Estimation for Window Censored Processes Residual life for gamma and Weibull distributions. Gamma distribution Let Γ(k, x = x yk e y dy be the upper incomplete gamma function, and let

More information