Likelihood-based Optimization of Threat Operation Timeline Estimation


12th International Conference on Information Fusion
Seattle, WA, USA, July 6-9, 2009

Gregory A. Godfrey
Advanced Mathematics Applications Division
Metron, Inc.
Reston, VA, U.S.A.

Thomas L. Mifflin
Advanced Mathematics Applications Division
Metron, Inc.
Reston, VA, U.S.A.

Abstract - TerrAlert is a system that Metron, Inc. has developed to track the progress of suspected terrorist operations and to optimize courses of action to delay or disrupt these operations. The underlying algorithms use Monte Carlo sampling and Bayesian, nonlinear filtering to estimate the state (schedule) of a terrorist operation defined by a project management model (such as a Program Evaluation and Review Technique (PERT) or Gantt chart) with uncertain task durations. However, in order to generate schedules via sampling, it is not sufficient to specify only the model and the estimated task duration distributions. The analyst must also provide a distribution of start dates for the operation, which we have observed is relatively difficult for analysts to do accurately. In this paper, we describe a likelihood-based approach for estimating the most likely start date given the available evidence, and perform a series of experiments to validate this approach.

Keywords: Bayesian tracking, particle filtering, project management.

1 Introduction

In prior work, the authors developed a set of algorithms around a rigorous methodology for estimating the progress of a terrorist operation, modeled in a project management format such as a PERT or Gantt chart [1]. This model contains a set of tasks, some of which can be performed in parallel and some of which must be performed in a particular sequence (the precedence relations). Each task is assumed to have a fixed duration that is unknown to the analyst. This methodology has been implemented in a software product called TerrAlert, which has undergone recent testing and evaluation with intelligence analysts. TerrAlert assumes that the analyst will be able to provide four primary types of data:

1. the set of tasks and the precedence relations between the tasks;
2. an estimated probability distribution for the duration of each task;
3. an estimated probability distribution for the start date of the operation; and
4. the set of available evidence regarding the state of different tasks on different dates, together with an estimate of the credibility of each report and the reliability of its source.

The first type of data (the operational model) may be difficult to know in practice, and is the topic of current and future research in which TerrAlert automatically considers alternative model configurations (tasks in different orders or satisfying different precedence relations). The second type of data (task durations) has been made more reasonable to elicit by training analysts in the use of triangular distributions, which define a distribution from estimates of the minimum, most likely and maximum task durations. The third type of data (the operational start date) is addressed directly in this paper. The fourth type of data (likelihood functions) can be derived from independent estimates of the report credibility and the source reliability, each on a six-point scale.
TerrAlert converts these credibility and reliability assessments into likelihood values that describe the probability of observing a particular reported operational state given that the operation is in a particular state (we elaborate on what this means in the next section).

We observed in the TerrAlert training and evaluation that it was difficult for analysts to estimate the start date of a terrorist operation. For example, even now, how many analysts would know when the initial planning tasks for the 9/11 attacks started? If analysts hedge their bets by specifying a wide distribution of possible start dates, then this wide range of uncertainty propagates directly into a wide range of uncertainty on the attack task at the end of the operation. After the TerrAlert training and evaluation, we concluded that asking analysts to provide task duration estimates is reasonable, but requesting operational start date estimates is not.

The technical basis by which TerrAlert estimates the start of the operation is to find the start date that best fits the available evidence to the operational model. In this paper, we describe an approach for doing so, where the fit is judged using a maximum likelihood calculation. In section 2, we define the notation and formulate the problem to be solved. In section 3, we derive the start date likelihood equation to be optimized. In section 4, we apply a one-dimensional Golden Section search algorithm to find the maximum of the start date likelihood equation.

In section 5, we design and perform a set of experiments to test the effectiveness of the optimization algorithm. Finally, in section 6, we summarize the conclusions and define directions for future research.

2 Notation and Problem Formulation

In this section, we summarize the TerrAlert methodology (additional details are available in [1]) and define the notation used in this paper. Consider a hypothetical plan (in Figure 1, a nerve agent attack) that consists of a set of $J$ tasks $z_1, z_2, \ldots, z_J$ with precedence relations describing the order in which tasks must be performed, with some tasks in serial and some in parallel. What makes this model stochastic are the task durations, denoted by the random variable $\tau_j$ for task $j$, which follows a specified probability distribution.

Figure 1 Representative model with the average task duration $\mu$ listed above each task: Assemble Team ($\mu_1$ = 6 months), Develop Delivery Method ($\mu_2$ = 6 months), Produce Nerve Agent ($\mu_3$ = 12 months), Prepare Equipment ($\mu_4$ = 3 months), Select Target ($\mu_5$ = 2 months), Attack Target ($\mu_6$ = 1 month)

In order to approximate the induced distribution of start and end dates for each of the tasks, we use a nonlinear particle filtering approach. To form the approximation, we generate $N$ schedules (generally in the thousands) for the model via Monte Carlo simulation of the task durations. Initially, all sample schedules are assumed to be equally likely in terms of representing the actual schedule for the operation (weight $p_i = 1/N$ for all sample schedules $i = 1, \ldots, N$). Each schedule contains the entire past, present and future for one particular sample path for executing the activity.

2.1 Activity State Space

To track the progress of the operation, we define a compact state space for the operation. Given a particular date, we can assess the state of each task in a sample schedule as being either Not Started (NS), Ongoing (OG) or Finished (F). The state of a sample schedule, then, is the state of each of its tasks on a particular date. The aggregate state of the entire set of schedules can be summarized by taking the weighted sum of each task state across all schedules. For example, the probability that Task 2 is OG at a given time is the sum of the probabilities on those schedules that have Task 2 in the OG state at that time. If we assign each of the task states a color, then the aggregate task probabilities correspond to the color slices for each task in Figure 2. We call the resulting picture a rainbow chart.

Figure 2 Rainbow chart used to track the progress of a hypothetical operation (color indicates the state of each task at time t: Not Started, Ongoing, Finished)
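To make the schedule sampling and state bookkeeping concrete, the following is a minimal sketch in Python. The two-task model, the task names and the (min, mode, max) durations are hypothetical illustrations, not TerrAlert's actual model or implementation.

```python
import random

# Illustrative two-task fragment of a PERT-style model (durations in days).
DURATIONS = {"assemble_team": (90, 180, 360),
             "produce_agent": (180, 360, 540)}   # (min, mode, max)
PREDECESSORS = {"assemble_team": [], "produce_agent": ["assemble_team"]}

def topo_order(preds):
    """Order tasks so every task appears after all of its predecessors."""
    order, done = [], set()
    while len(order) < len(preds):
        for task, ps in preds.items():
            if task not in done and all(p in done for p in ps):
                order.append(task)
                done.add(task)
    return order

def sample_schedule(start_date):
    """Draw one sample schedule: sample each task's duration, then begin the
    task as soon as all of its predecessors have finished."""
    begin, finish = {}, {}
    for task in topo_order(PREDECESSORS):
        begin[task] = max((finish[p] for p in PREDECESSORS[task]),
                          default=start_date)
        lo, mode, hi = DURATIONS[task]
        finish[task] = begin[task] + random.triangular(lo, hi, mode)
    return begin, finish

def task_state(begin, finish, task, t):
    """State of a task on date t: Not Started, Ongoing, or Finished."""
    if t < begin[task]:
        return "NS"
    return "OG" if t < finish[task] else "F"

# N equally weighted schedules; the weighted share of schedules with a task
# in a given state on a date is one color slice of the rainbow chart.
schedules = [sample_schedule(start_date=0.0) for _ in range(10_000)]
weights = [1.0 / len(schedules)] * len(schedules)
p_ongoing = sum(w for (b, f), w in zip(schedules, weights)
                if task_state(b, f, "produce_agent", 200.0) == "OG")
```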
2.2 Measurement State Space

We use a Bayesian, nonlinear tracking approach to update each sample schedule weight based on evidence that supports or refutes the schedule. For example, evidence that a particular task is ongoing as of a given date means increasing the probability weight on schedules that have that task ongoing on that date and decreasing it on schedules that do not. The Bayesian approach specifies the amount of change on each schedule weight, taking into account the uncertainty of the evidence. Let $Y_j(t) = y$ be an evidence report regarding the state of task $j$ at time $t$, and let $X_j(t)$ be a random variable representing the (unknown) ground truth state of task $j$ at time $t$. The likelihood function $L(y \mid \cdot)$ converts an observation $y$ into a function on the task state value $x$ according to

$$ L(y \mid x) = \Pr\{ Y_j(t) = y \mid X_j(t) = x \}, \qquad x \in \{\mathrm{NS}, \mathrm{OG}, \mathrm{F}\}. \qquad (1) $$

That is, the likelihood function provides the probability of observing $y$ given that the task is in state NS, OG or F. The observation (evidence) $y$ is known, but the ground truth task state $x$ is not. Often, likelihood functions can be defined from a confusion matrix, such as the one illustrated in Figure 3. In this example, the confusion matrix takes a ground truth state and applies noise to produce the reported state. If the ground truth state is OG, then the reported state is NS, OG or F with probability 0.05, 0.80 or 0.15, respectively. The likelihood function associated with a particular reported state is the relevant column of the confusion matrix. If the reported state is OG, then $L(y \mid \cdot) = \{0.2, 0.8, 0.2\}$.

Figure 3 Example of a confusion matrix used to define the evidence likelihood function $L(y \mid \cdot)$ (rows index the ground truth state and columns the reported state, each over NS, OG, F)
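A small sketch of the column-lookup idea follows. Only the OG row (0.05, 0.80, 0.15) and the OG column (0.2, 0.8, 0.2) are quoted in the text; the remaining entries below are illustrative values consistent with those, not the paper's actual matrix.

```python
# CONFUSION[truth][reported] = Pr(reported state | ground truth state).
# OG row matches the paper's example; NS and F rows are assumed values.
CONFUSION = {
    "NS": {"NS": 0.75, "OG": 0.20, "F": 0.05},
    "OG": {"NS": 0.05, "OG": 0.80, "F": 0.15},
    "F":  {"NS": 0.05, "OG": 0.20, "F": 0.75},
}

def likelihood(reported):
    """Equation (1): the likelihood of a reported state, as a function of the
    ground truth state x, is the 'reported' column of the confusion matrix."""
    return {truth: row[reported] for truth, row in CONFUSION.items()}

print(likelihood("OG"))  # {'NS': 0.2, 'OG': 0.8, 'F': 0.2}
```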

Given an observed report $y_j(t)$ for task $j$ at time $t$, we update the probability weight $p_i(t)$ on schedule $i$ based on the (deterministic) state $x_{ji}(t)$ of task $j$ under schedule $i$ at time $t$:

$$ p_i(t) \leftarrow p_i(t) \, L\big( y_j(t) \mid x_{ji}(t) \big). \qquad (2) $$

In other words, multiply the schedule weight by the likelihood of observing the evidence given the state of that particular schedule. Using the example from Figure 3, if the reported state is OG, then multiply the weight of every schedule with task state OG on that date by 0.8 and the weight of every other schedule by 0.2. After completing this Bayesian update, renormalize the probabilities to sum to one across all schedules.

In addition to tracking the progress of an activity, this same approach can be used to forecast the progress in the future. Since each sample schedule contains the past, present and future for that Monte Carlo sample path of the activity execution, making future predictions about the progress of the activity in the absence of additional evidence is as easy as advancing the clock.

3 Derivation of Start Date Likelihood Calculation

Let us assume that a model can be specified completely by knowing the set of tasks, precedence relations, task duration distributions and operational start distribution for that model. We summarize the model by the start date random variable $\tau_0$ and the task duration random variables $\tau_j$ for tasks $j = 1, \ldots, J$. We assume there is a set of $K$ evidence reports available, with the $k$th report, $y_{j_k}(t_k)$, describing an observation of the state of task $j_k$ at time $t_k$. To simplify the notation, we drop the explicit reference to the time of the $k$th report, $t_k$, and use the presence of the $k$ subscript or sub-subscript to clarify the time association. We want to use these reports to infer the maximum likelihood start date, $\tau_0$. To do so, we would compute the cumulative likelihood of observing the set of available evidence under this model, which we write as

$$ L\big( y_{j_1}, \ldots, y_{j_K} \mid \tau_0, \tau_1, \ldots, \tau_J \big). \qquad (3) $$

Since we need to find the optimal value of $\tau_0$, we integrate over the task durations $\tau_1$ to $\tau_J$ to get an expected likelihood conditioned only on the start date,

$$ L\big( y_{j_1}, \ldots, y_{j_K} \mid \tau_0 \big) = \int L\big( y_{j_1}, \ldots, y_{j_K} \mid \tau_0, \tau_1, \ldots, \tau_J \big) \, p(\tau_1, \ldots, \tau_J) \, d\tau_1 \cdots d\tau_J. \qquad (4) $$

We can use the independence assumption of the evidence reports to break the likelihood calculation (3) into the product of individual evidence likelihoods across all reports:

$$ L\big( y_{j_1}, \ldots, y_{j_K} \mid \tau_0 \big) = \int \prod_{k=1}^{K} L\big( y_{j_k} \mid \tau_0, \tau_1, \ldots, \tau_J \big) \, p(\tau_1, \ldots, \tau_J) \, d\tau_1 \cdots d\tau_J. \qquad (5) $$

Although the expression in equation (5) is explicit, in general the integration is intractable due to the need to compute the likelihood of each report for every start date and set of task durations. Instead, we use the Monte Carlo sampling approach to generate individual schedules conditioned on a particular start date $\tau_0$ and a set of sampled durations. We approximate the integral over all possible task durations as a sum over the set of sampled task durations. Since all schedules generated via the Monte Carlo sampling are equally likely, we can replace the probability in the integral with $1/N$ in the sum. Under schedule $i$, task $j_k$ (associated with evidence report $k$) has a particular state $x_{i j_k}$ at time $t_k$. Given this information, we can compute the likelihood of the report conditioned on schedule $i$ as

$$ L\big( y_{j_k} \mid \tau_0, \tau_{i1}, \ldots, \tau_{iJ} \big) = L\big( y_{j_k} \mid x_{i j_k} \big). \qquad (6) $$

That is, the likelihood of the report depends only on the state of the corresponding task under schedule $i$.
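A sketch of the weight update of equation (2), where each schedule's factor is the state-dependent report likelihood of equation (6); it reuses the illustrative `likelihood` and `task_state` helpers defined above and is not TerrAlert's implementation.

```python
def bayes_update(schedules, weights, task, t, reported):
    """Equation (2): multiply each schedule's weight by the likelihood of the
    reported state given that schedule's (deterministic) task state, then
    renormalize so the weights again sum to one."""
    col = likelihood(reported)  # L(y | x) for x in {NS, OG, F}
    new = [w * col[task_state(begin, finish, task, t)]
           for (begin, finish), w in zip(schedules, weights)]
    total = sum(new)
    return [w / total for w in new]
```

For example, a report that "produce_agent" is Ongoing on day 200 multiplies the weights of schedules in state OG on that date by 0.8 and all others by 0.2 before renormalizing, exactly as in the Figure 3 discussion.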
Substituting equation (6) in equation (5) and changing the continuous integral to a discrete sum, we get

$$ L\big( y_{j_1}, \ldots, y_{j_K} \mid \tau_0 \big) \approx \frac{1}{N} \sum_{i=1}^{N} \prod_{k=1}^{K} L\big( y_{j_k} \mid x_{i j_k} \big). \qquad (7) $$

Choosing the number of schedules $N$ to use in the cumulative likelihood calculation involves trading off fidelity in the calculation against the computational expense. The purpose of defining the cumulative likelihood is to assess the quality of fit between the evidence and different operational start dates. Figure 4 illustrates the cumulative likelihood over a wide range of operational start dates using either 10,000 Monte Carlo schedules or the single average schedule derived from using the average duration for each task.

Figure 4 Cumulative likelihood comparison given average and Monte Carlo schedules (log10 model likelihood versus operational start date; series: 10,000 Monte Carlo schedules, single schedule using average durations)
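Equation (7) can be evaluated as a plain Monte Carlo average, as in the sketch below, which reuses the hypothetical helpers from the earlier snippets.

```python
def start_date_likelihood(tau0, reports, n_schedules=10_000):
    """Equation (7): average, over schedules sampled with start date tau0, of
    the product of per-report likelihoods. `reports` holds (task, t, reported)
    triples."""
    total = 0.0
    for _ in range(n_schedules):
        begin, finish = sample_schedule(tau0)
        lik = 1.0
        for task, t, reported in reports:
            lik *= likelihood(reported)[task_state(begin, finish, task, t)]
        total += lik
    return total / n_schedules

reports = [("assemble_team", 120.0, "F"), ("produce_agent", 200.0, "OG")]
print(start_date_likelihood(0.0, reports, n_schedules=1000))
```

In practice one would accumulate log-likelihoods to avoid underflow when $K$ is large, which also matches the log10 axis used in Figure 4.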

The likelihood based on the average schedule is piecewise-constant as a function of the operational start date. If the start date is set early enough, then all evidence reports that observe a task in the Finished state will agree with the average schedule, and all other reports will disagree with the schedule. If the start date is set late enough, there will be agreement with all reports that observe a task in the Not Started state, and disagreement with all other reports. The model likelihood jumps in value when there is a change in the agreement between the evidence and the average schedule given that start date. For 10,000 Monte Carlo schedules, the function is also piecewise-constant, but with smaller steps, because the likelihood changes when there is a change in agreement between the evidence and any one of the schedules. Note also that the Monte Carlo and average curves have the same asymptotic likelihoods in either time direction. The Monte Carlo curve also tends to have a wider peak because the variance in task start and end dates is higher over multiple schedules. In the next section, we describe an algorithm for finding the peak of either of these functions efficiently.

4 Golden Section Search Algorithm

Regardless of whether the model likelihood is computed using the average schedule or a set of Monte Carlo schedules, we need an efficient algorithm for determining the maximum likelihood start date of the operation (the date corresponding to the peak of the model likelihood function). In this section, we describe a one-dimensional Golden Section search algorithm that draws heavily from sections 10.1 and 10.2 in [2]. There are two main parts: (1) determine the initial bracket for the one-dimensional search, and (2) select the next start date to consider as a possible peak. Figure 5 serves as a visual guide to the sequence of points generated by the algorithm.

Figure 5 Illustration of the sequence of points for the Golden Section search (log10 model likelihood versus start date)

4.1 Determine the initial bracket

The first phase consists of finding three start dates (or points) $(a, b, c)$ whose cumulative model likelihoods (or values) $(f(a), f(b), f(c))$ satisfy the following conditions:

$$ a < b < c \quad \text{and} \quad f(b) > f(a) \quad \text{and} \quad f(b) > f(c). $$

Let $S$ be the length of the critical path of the average schedule (that is, the schedule that uses the expected values for the task durations). If $t_1 \le t_2 \le \cdots \le t_K$ are the dates for the $K$ evidence reports, then an extremely conservative initial bracket of start dates would be

$$ a = t_1 - S \quad \text{and} \quad c = t_K. \qquad (8) $$

We need to select an interior point $b$ in this interval that has a strictly greater likelihood than either endpoint. The initial guess of this interior point is based on the Golden Section value $W$:

$$ W = \frac{3 - \sqrt{5}}{2} \approx 0.382, \qquad b = a + W(c - a) = Wc + (1 - W)a. $$

If $f(b) > f(c)$ and $f(b) > f(a)$, then continue to the next part of the algorithm. Otherwise, we sample other points until the interior point $b$ has a greater likelihood than either endpoint. In Figure 5, this initial triplet of points is indicated by the points labeled 1, 2 and 3.

4.2 Select the next point (Golden Section search)

The process for finding the peak involves cutting off one end of the interval $[a, c]$ at each iteration, given the triplet $(a, b, c)$. To do so, we determine the next point to be sampled, $q$, which lies in the interior of the larger of the two intervals $[a, b]$ and $[b, c]$. We consider these two cases separately.

If $b - a > c - b$, then interval $[a, b]$ is larger, and we have the set of points $(a, q, b, c)$, where

$$ q = b - W(b - a) = Wa + (1 - W)b. $$

As part of this update, the portion to be cut will be either $[a, q]$ or $[b, c]$. Evaluate the likelihood $f(q)$ and update the triplet $(a, b, c)$ as follows: if $f(q) > f(b)$, then let $(a, b, c) \leftarrow (a, q, b)$; otherwise, let $(a, b, c) \leftarrow (q, b, c)$.

For the other case, in which $b - a \le c - b$, the interval $[b, c]$ is larger, and we have the set of points $(a, b, q, c)$, where

$$ q = b + W(c - b) = Wc + (1 - W)b. $$

The portion to be cut will be either $[a, b]$ or $[q, c]$. Evaluate the likelihood $f(q)$ and update the triplet $(a, b, c)$ as follows: if $f(q) > f(b)$, then let $(a, b, c) \leftarrow (b, q, c)$; otherwise, let $(a, b, c) \leftarrow (a, b, q)$.

In either case, for the new triplet $(a, b, c)$, $f(b)$ is guaranteed to be larger than both $f(a)$ and $f(c)$. Repeat this step and compute a new point $q$ as before. Figure 5 shows a sequence of new points, and for each new point, we show which interval is removed in that step. For example, after adding point 6, we remove the interval between points 4 and 5. At each step, a fraction $W$ (38%) of the interval is removed, regardless of whether it is chopped from the left or the right, so the bracket decreases in size very quickly. In fact, after 14 iterations the interval will have decreased by a factor of about 1,000 from the original length.
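A compact sketch of the cut loop under the update rules above; `f` is any start-date likelihood function (for example, the `start_date_likelihood` sketch from section 3), and the initial interior point is assumed to bracket a peak rather than re-sampled as in section 4.1.

```python
import math

W = (3 - math.sqrt(5)) / 2  # Golden Section fraction, ~0.382

def golden_section_max(f, a, c, iterations=17):
    """Maximize f on [a, c] by repeatedly cutting a fraction W off the
    bracket, per the section 4.2 update rules. Re-evaluating f(b) each pass
    keeps the sketch short; a production version would cache it."""
    b = a + W * (c - a)  # initial interior point (assumed to bracket a peak)
    for _ in range(iterations):
        if b - a > c - b:            # [a, b] is larger: sample q inside it
            q = b - W * (b - a)
            if f(q) > f(b):
                b, c = q, b          # (a, b, c) <- (a, q, b)
            else:
                a = q                # (a, b, c) <- (q, b, c)
        else:                        # [b, c] is larger: sample q inside it
            q = b + W * (c - b)
            if f(q) > f(b):
                a, b = b, q          # (a, b, c) <- (b, q, c)
            else:
                c = q                # (a, b, c) <- (a, b, q)
    return b  # interior point of the final bracket
```

With 17 iterations, an initial bracket of roughly 1800 days shrinks by a factor of $(1 - W)^{17} \approx 1/3600$ to about half a day, matching the experimental setup in section 5.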

5 Experimental Results

We performed a series of experiments to evaluate the performance of the start date optimization algorithm. There are 100 different models, each constructed to have exactly twelve tasks. Each task has a duration modeled as a triangular distribution. For a particular task, let $Z_1$, $Z_2$ and $Z_3$ be independent random samples drawn from an Exponential distribution with parameter $\lambda$. Then the triangular distribution for that task has minimum duration $Z_1$, most likely duration $Z_1 + Z_2$, and maximum duration $Z_1 + Z_2 + Z_3$. The precedence relations are chosen at random by partitioning the tasks into subgroups. Each subgroup is performed in sequence, and the tasks within a subgroup are performed in parallel, as illustrated in Figure 6. A code sketch of this setup follows the next paragraph.

Figure 6 Example of precedence relations from the random task partition approach

For each model, we construct a ground truth schedule using a fixed operational start date and the most likely value for each task duration. We generate 31 evidence reports for each model, spaced equally in time across the operation. For each evidence date, either all tasks are Not Started, all tasks are Finished, or at least one task is Ongoing under the ground truth schedule. If at least one task is Ongoing, then we choose one of those tasks uniformly at random. Otherwise, we choose one of the Not Started or Finished tasks uniformly at random. We apply a symmetric confusion matrix to convert the ground truth state to the reported state. The confusion matrix, which we assume is known, has values of 0.9 on the diagonals and 0.05 on the off-diagonals.
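The sketch below reproduces the model-generation recipe: triangular durations built from three exponential draws, a random serial-parallel partition, and confusion-matrix noise on the reports. The rate value is an assumption, since the paper does not state its $\lambda$, and the partition code is one plausible reading of the subgroup construction.

```python
import random

LAMBDA = 1 / 30.0  # Exponential rate; assumed (a 30-day mean), not from the paper

def random_triangular_params():
    """Triangular duration with min Z1, mode Z1+Z2, max Z1+Z2+Z3, where the
    Z_i are independent Exponential(LAMBDA) samples."""
    z1, z2, z3 = (random.expovariate(LAMBDA) for _ in range(3))
    return z1, z1 + z2, z1 + z2 + z3  # (min, mode, max)

def random_precedence(n_tasks=12):
    """Partition tasks into serial subgroups of parallel tasks: each task's
    predecessors are all of the tasks in the previous subgroup."""
    tasks = list(range(n_tasks))
    random.shuffle(tasks)
    groups, i = [], 0
    while i < len(tasks):
        size = random.randint(1, len(tasks) - i)
        groups.append(tasks[i:i + size])
        i += size
    return {t: list(prev)
            for prev, grp in zip([[]] + groups, groups) for t in grp}

def noisy_report(truth_state):
    """Symmetric confusion matrix: 0.9 on the diagonal, 0.05 off-diagonal."""
    states = ["NS", "OG", "F"]
    probs = [0.9 if s == truth_state else 0.05 for s in states]
    return random.choices(states, weights=probs)[0]
```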

For each model, we attempt to find the maximum likelihood operational start date using either the average schedule, Monte Carlo with 100 generated schedules, or Monte Carlo with 10,000 generated schedules. In each case, the initial bounding interval for a particular model and set of evidence is the same, and is defined by equation (8). The range of this initial interval is approximately five years, or roughly 1800 days. Given an initial triplet of points $(a, b, c)$, we apply the Golden Section algorithm for 17 iterations. This cuts the size of the bounding interval to about half a day, and we choose as the peak the interior point in this final interval.

Figure 7 shows the results of the experiments. For each model, we compute the cumulative likelihood assuming that the ground truth start date is known, where the likelihood is computed using either the average schedule (AVG), Monte Carlo with 100 schedules (MC100), or Monte Carlo with 10,000 schedules (MC10K). These ground truth likelihood values are plotted on the x-axis in the chart. As a comparison with the ground truth values, we use the Golden Section algorithm to find the start date with the maximum likelihood for each model using either AVG, MC100 or MC10K. This maximum likelihood is plotted on the y-axis in the chart.

Figure 7 Comparison of cumulative likelihoods given the ground truth start date versus the Golden Section-optimized start date (log10 likelihood on both axes; series: average schedule, 100 MC schedules, 10,000 MC schedules)

In addition to the individual points in the chart, we include a diagonal line that shows where the two likelihood values would be the same. If our Golden Section search algorithm found the ground truth start date every time, then all points would fall on this line. Points above this line suggest that the search algorithm finds a start date that is a better fit to the evidence than the ground truth start date, and points below this line suggest that the optimized start date is a worse fit.

For the AVG results (pink boxes), there is a large spread of likelihood values, but nearly all of the optimized likelihood values are greater than ground truth. There are two primary reasons for this behavior. First, the average schedule provides a fast, crude approximation to the true likelihood curve, as shown in Figure 4, so the start date peak using the average schedule may not agree with the ground truth start date. Second, the noise in the evidence reports provides an opportunity to find start dates that agree with the evidence better than the ground truth start date does. The Monte Carlo results are similar, with less spread in the likelihood values as the number of schedules increases, because the likelihood curve approximation becomes more refined as more schedules are added. Although there are a few models for which the optimized likelihood is less than the ground truth likelihood, the gap is relatively small (generally less than an order of magnitude).

6 Conclusions

In this paper, we have described an approach for optimizing the start date of an operation by maximizing a start date likelihood calculation. In experimental testing, the algorithm performs well, and the number of schedules used in the calculation can be tuned to the available computational budget. We have implemented these algorithms in the operational version of TerrAlert, where we believe they will improve the analyses and reduce the amount and precision of information that an analyst must specify manually, especially when available evidence is relatively plentiful.

In addition, this start date optimization opens up research opportunities that we believe will lead to significant new capabilities for real-world analysts. First, there are other extensions to the start date optimization that we plan to consider. For example, in this paper we find the optimal start date by setting all Monte Carlo schedules to start on that date and computing the cumulative likelihood for that date. One alternative to this approach, which is virtually guaranteed to increase the maximum cumulative likelihood, is to optimize the start date for each Monte Carlo schedule independently. The maximum cumulative likelihood is then the weighted average of the individually optimized schedule likelihoods. This will increase the computational effort, but we suspect by at most a factor of two or so.

Second, we would like to extend the start date optimization to incorporate resampling, which is a feature of particle filters described in detail in [1]. Given a set of generated schedules, TerrAlert updates the probabilities on each schedule based on available evidence. Resampling is an approach for pruning low probability schedules and splitting high probability schedules in two such that both schedules share the same past but have different Monte Carlo futures. Since this is an important feature within TerrAlert, we would like the start date optimization to incorporate this approach as well.

Finally, the start date optimization is the foundation for a new capability by which TerrAlert could automatically generate alternative configurations of a model (different task orders and precedence relations) to find the maximum likelihood configuration. We will be investigating approaches that generate alternative configurations, optimize the start date for each configuration and compute the cumulative likelihood of the evidence given that configuration. We believe this capability will provide a great leap in the ability of analysts to consider multiple hypotheses regarding a terrorist operation automatically.
REFERENCES

[1] G. Godfrey, J. Cunningham and T. Tran, "A Bayesian, Nonlinear Particle Filtering Approach for Tracking the State of Terrorist Operations," Intelligence and Security Informatics, IEEE, May 2007.

[2] W. Press, B. Flannery, S. Teukolsky and W. Vetterling, Numerical Recipes: The Art of Scientific Computing, 9th printing, Cambridge University Press, Cambridge.

[3] L. Stone, C. Barlow and T. Corwin, Bayesian Multiple Target Tracking, Artech House, Boston.
