Meta-metrics for the Accuracy of Software Project Estimation
T. L. Woodings
Department of Information Technology, Murdoch University and Comast Consulting Pty Ltd
PO Box 88, Nedlands, Western Australia 6009
Phone: (618)

Abstract

Software project estimation for such items as Size, Effort, Cost, Delivery time, Reliability and Risk is a fundamental skill for software engineers. In order to improve estimation, there is a need for measurement of the measurements, that is, a meta-metric for the process. This paper compares three existing metrics and provides an example of conversions between them. It then extends one (DeMarco's Estimating Quality Factor) to recognise and reward rapid convergence to an accurate figure during a project. A second metric is defined to provide a lower bound on error for multiple initial estimates. The paper concludes with a worked example as an illustration of the metrics' properties.

1. Background

One of the major problems of Software Engineering is the lack of confidence in estimates of project factors such as Size, Effort, Cost, Delivery time, Reliability and Risk, even though these are of fundamental interest to the client. Poor estimates lead to poor plans, and inadequate planning is a basic cause of failure with software projects. Despite substantial developments in recent years in the field of Software Metrics, industry's ability to predict basic project parameters is low, and the proportion of systems delivered significantly overdue remains unacceptably high. A previous paper (Woodings, 1995) considered a taxonomy of Software Metrics with particular reference to giving greater visibility to measures of process improvement. This paper considers the issue of meta-metrics for the accuracy of measurement of project parameters in general (and development effort in particular) and the need for specialised metrics which may be monitored to provide evidence of organisational improvement.
The need for meta-metrics was emphasised in some recent research (Lederer and Prasad, 1998), which indicated little improvement in estimation practices due to new models and techniques. It asserted: "Only one managerial practice - the use of the estimate in performance evaluations of software managers and professionals - presages greater accuracy. By implication, the research suggests somewhat ironically that the most effective approach to improve estimating accuracy may be to make estimators, developers and managers more accountable for the estimate even though it may be impossible to direct them explicitly on how to produce a more accurate one." However, the more managers are held to their promises, the more they will be tempted to adjust other project factors in order to look good. Predictions become self-fulfilling prophecies. Thus, there is a need to have in place a framework of meta-metrics to monitor, guide and provide feedback to organisations on their project management.
2. Existing Metrics for Project Estimation Accuracy

The accuracy of an estimate is given by the relative error R:

R = |E - A| / A

where E is the estimate and A is the actual result at the conclusion of the project. There are three metrics in general use for the measurement of accuracy of project parameter estimation (Fenton and Pfleeger, 1997). All three may be used for any parameter and are independent of the units of measurement.

(i) In a study of three major estimation techniques, Kemerer (1987) uses the mean percentage magnitude of relative error (MPMRE) over n projects:

MPMRE = 100 Σ R / n

As an example, Kemerer's data gives the MPMRE for estimating effort over 15 projects using Albrecht's Function Points as , with for the standard deviation.

(ii) A non-parametric alternative is a binary (success or failure) measure of whether an estimate is within a certain percentage of the actual measurement. Over a set of projects, this may be widened to a profile of the proportion of estimates achieving success at a given level q. Conte et al. (1986) define the prediction quality P as:

P(q) = m / n

where m of the n projects have R < q. Again using Kemerer's Function Point data, this may be tabulated or graphed (Figure 1):

P(0.25) = 0.33    P(0.5) = 0.47    P(1.0) = 0.60    P(3.0) = 0.86

Figure 1. Cumulative proportion of projects with R < q

(iii) DeMarco (1982) employs the relative error to depict the improvement in estimates over the scope of the project. In order to give a positive correlation with the MPMRE, the metric described below is the inverse of DeMarco's original description. Let t_0, t_1, ..., t_n be a sequence of times of project milestones after the start of the project, with t_0 = 0 and t_n the final delivery time. If E_i is the revised estimate made at time t_i, then

r_i = |E_i - A| / A
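The first two meta-metrics are straightforward to compute. A minimal sketch in Python (the relative errors below are invented for illustration, not Kemerer's data):

```python
def mpmre(rel_errors):
    """Mean percentage magnitude of relative error over n projects."""
    return 100.0 * sum(abs(r) for r in rel_errors) / len(rel_errors)

def prediction_quality(rel_errors, q):
    """Conte et al.'s P(q): the proportion of projects with R < q."""
    m = sum(1 for r in rel_errors if abs(r) < q)
    return m / len(rel_errors)

# Hypothetical relative errors R for six projects
errors = [0.10, 0.30, 0.45, 0.80, 1.20, 2.50]
print(round(mpmre(errors), 1))            # 89.2
print(prediction_quality(errors, 0.25))   # 1/6: one project within 25%
```

Sweeping q over a range of values produces the cumulative profile shown in Figure 1.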
The Estimating Quality Factor (EQF) may then be defined as:

EQF = Σ_{i=0}^{n-1} r_i (t_{i+1} - t_i) / t_n

This metric can be visualised as a ratio: the sum of each error times the interval for which that estimate was in effect, divided by the product of the actual value and the total period (see Figure 2). Perfect estimation would have an EQF of zero.

Figure 2. EQF as the ratio of the dark shaded area to (A x t_3)

As an illustration, the fifteen items of Kemerer's data could be considered as the relative errors r_i of fifteen equally spaced estimates of the same project. This gives EQF = Σ r_i (t_{i+1} - t_i) / t_n = Σ r_i (1/15) / 1, the mean relative error - the same as the MPMRE (expressed as a proportion rather than a percentage).

3. Desirable Properties for Software Metrics

Before defining the two new metrics for the meta-measurement of the estimation process, it is appropriate to consider desirable properties for their design. The thirteen qualities for metrics used in this research are an expanded list from those given by Watts (1987) and Kitchenham (1996) and are summarised in Table One.

4. Measuring convergence

Several researchers (for example, Selby et al., 1991 and Kitchenham, 1996) point out that any estimation model should take advantage of a staged approach, whereby new estimates are made as soon as a better predictor (or more information on the scope or design of the project) becomes available. The most important aspect of a staged process is to incorporate feedback. Selby et al. (1991) assert: "A fundamental principle is to make measurement active by integrating measurement and process, which contrasts with the primarily passive use of measurement in the past." DeMarco (1982) suggests an alternative EQF using a weighting of the various estimates made during a project, but interestingly reports that some managers "feel obliged to stress more heavily the estimates made at the beginning, and don't care terribly much about the last ten to twenty percent convergence at the end". Accordingly, his Time-Weighted EQF is biased towards the initial estimates.
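A sketch of the EQF calculation as defined above (the milestone times and estimates here are hypothetical):

```python
def eqf(milestones, estimates, actual):
    """Estimating Quality Factor (inverted form used here): the mean
    relative error, weighted by how long each estimate was in effect.
    milestones: t_0, ..., t_n with t_0 = 0 and t_n the delivery time.
    estimates:  E_0, ..., E_{n-1}; E_i is in force over [t_i, t_{i+1}].
    Perfect estimation gives 0."""
    t_n = milestones[-1]
    total = 0.0
    for i, e in enumerate(estimates):
        r = abs(e - actual) / actual                   # r_i = |E_i - A| / A
        total += r * (milestones[i + 1] - milestones[i])
    return total / t_n

# Hypothetical project: estimates converging towards an actual effort of 100
print(eqf([0, 3, 6, 12], [150, 120, 105], 100))        # ~0.2
```

With equally spaced milestones the EQF reduces to the mean of the r_i, which is the observation used with Kemerer's data above.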
1. Objective - is independent of anyone's opinion
2. Reproducible - can be consistently repeated
3. Standardised - uses a mathematically appropriate scale
4. Valid - is clearly related to the feature being measured; it monotonically increases as the feature rises
5. Precise - is sensitive to changes in the feature measured
6. Robust - is not easily manipulated or sensitive to extraneous factors
7. Comparable - is highly correlated with other metrics measuring the same feature
8. Timely - can be obtained in time for action to be taken on its message
9. Sustainable - is likely to be valid in the future, so that trend forecasts based on the metric will be effective
10. Universal - can be translated into sub-metrics for lower parts of the product or process
11. Economical - does not consume significant resources (preferably a by-product of other activities)
12. Cost-Effective - provides a return on investment
13. Useful/Relevant - supports the goals of the organisation

Table One - Desirable properties for a software metric

However, elsewhere in his book, DeMarco gives the rule: "Success for the estimator must be defined as a function of convergence of the estimate to the actual, and of nothing else." He goes on to state: "With such incentives, the estimator has no inclination to dodge opportunities for re-estimation. Any change in the right direction will improve the final judgement of estimate quality, and the earlier the change is made the better." A linear weighting based on the time at the centre of each dark shaded area in Figure 2 would appear to be a reasonable candidate for a new metric that rewards rapid convergence of E_i towards A. That is, the weight is

w_i = (t_{i+1} + t_i) / 2 - t_0 = (t_{i+1} + t_i) / 2

since t_0 = 0 as stated earlier.
Applying this weighting to the r_i in the EQF gives the Convergence of Estimate metric:

CE = [ Σ_{i=0}^{n-1} r_i (t_{i+1} - t_i)(t_{i+1} + t_i)/2 ] / [ t_n Σ_{i=0}^{n-1} (t_{i+1} + t_i)/2 ]
   = Σ_{i=0}^{n-1} r_i (t_{i+1}^2 - t_i^2) / [ t_n (t_n + 2 Σ_{i=1}^{n-1} t_i) ]
   = Σ_{i=0}^{n-1} |E_i - A| (t_{i+1}^2 - t_i^2) / [ A t_n (t_n + 2 Σ_{i=1}^{n-1} t_i) ]

Based on DeMarco's comments, a CE of 0.25 should be within the capability of the average software supplier.

5. Assessing a set of estimates

The use of metrics such as those above looks at the accuracy of the estimates after the project is complete and the actual value is known. However, there is also a need to focus attention on the accuracy of the initial estimate at the start of the project. Conventionally, this is done by requesting the variance or a confidence interval for the estimate, generally obtained directly from a set of expert opinions or by means of the Beta-PERT approach (Malcolm et al., 1959). Figure 3 shows the distribution of many actual project completion dates standardised about a nominal estimated target date for a typical software organisation.

Figure 3. The distribution of actual completion dates

It is possible to invert this approach. Instead of a hypothetical distribution of actual completion dates around a fixed estimate, there is a distribution of known independent estimates about a hypothetical actual (that is, unknown at the start of the project) delivery date. For a given set of estimates, the minimum, over a variable 'actual', of the proportion of estimates with errors above a certain limit gives a Lower Bound on Estimate Accuracy (LBEA) for the group effort. Although this gives little more information than having the standard deviation of the estimates, it produces a quite different appreciation when labelled "at least x% of the estimators are in error by at least y%". For example, it may be employed to impress upon software engineering students the difficulty of the estimation task. It also gives an indication of convergence for experts using Delphi methods (Boehm, 1981).
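Returning to the Convergence of Estimate metric, its computation can be sketched directly from the weighted form of the EQF; this assumes the CE is the EQF with each interval weighted by its midpoint (t_{i+1} + t_i)/2 and normalised by the sum of the weights, as described above:

```python
def convergence_of_estimate(milestones, estimates, actual):
    """CE: like the EQF, but each interval's error is weighted by the
    interval midpoint (t_{i+1} + t_i)/2, so errors that persist late in
    the project cost more.  Perfect estimation gives 0."""
    t_n = milestones[-1]
    num = 0.0
    weight_sum = 0.0
    for i, e in enumerate(estimates):
        w = (milestones[i + 1] + milestones[i]) / 2    # midpoint weight
        dt = milestones[i + 1] - milestones[i]
        num += (abs(e - actual) / actual) * dt * w
        weight_sum += w
    return num / (t_n * weight_sum)

# Same pair of errors, opposite orders: converging early scores better
print(convergence_of_estimate([0, 1, 2], [150, 110], 100))  # ~0.1
print(convergence_of_estimate([0, 1, 2], [110, 150], 100))  # ~0.2
```

Note how the second call, where the error grows late in the project, is penalised twice as heavily as the first, in which the same two errors occur in converging order.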
Suppose, as in section 2 (ii), q is set at 0.25 as a reasonable limit on the relative error R. The LBEA is based on the proportion of estimates which have R > q. That is, A is varied so that the proportion of estimates below (1-q)A plus those above (1+q)A is a minimum (see Figure 4). This is equivalent to maximising the number of estimates within the interval [(1-q)A, (1+q)A].
Figure 4. LBEA as a minimum shaded area

Let f(t) be the frequency distribution of the estimates over time. Then the proportion of estimates outside the interval is given by:

g(A) = 1 - ∫_{(1-q)A}^{(1+q)A} f(t) dt

The LBEA is the minimum value of g, found where dg/dA = 0.

A non-parametric equivalent of the LBEA is simple to compute. Given n estimates, rank them in ascending order and select each in turn as the interval lower limit L. The upper limit is U = (1+q)L / (1-q); for q = 0.25, U = 1.67L. For each L, count the number of estimates m in the interval [L, 1.67L]. Then the LBEA is the value of g = 1 - m/n for which m is greatest. This may be set up without difficulty on a spreadsheet.

As an example of the use of the LBEA metric, 20 undergraduates starting a software engineering course were asked to estimate the Spelling Checker given in Fenton (1997). The estimates of effort, in person-days, were:

{0.8, 2, 3, 5, 6, 10, 10, 13, 20, 20, 20, 20, 60, 80, 100, 100, 100, 160, 160, 180}

The mean is 53.5, the standard deviation 60.0, the coefficient of variation 1.12 and the ratio of the largest to the smallest 225:1. For a relative error of 0.25, m is computed for each of the twenty estimates:

{1, 2, 2, 2, 3, 3, 3, 5, 4, 4, 4, 4, 5, 5, 5, 5, 5, 3, 3, 1}

Note that in this case the list is bimodal and thus the function g has two minima. The LBEA = 1 - 5/20 = 0.75, which is not surprising given the large range of estimates. That is to say, a minimum of 75% of the estimates will be in error by more than 25%.

6. Conclusion

Two meta-metrics have been proposed to assess software development parameter estimation and thus focus attention on areas of potential improvement. Both achieve the desirable requirements for metrics. The first, Convergence of Estimate (CE), recognises and rewards the re-estimation of parameters during a project. The second, the Lower Bound on Estimate Accuracy (LBEA), focuses attention on the acceptability of the spread of initial estimates.
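The non-parametric LBEA computation fits in a few lines; a sketch using the student data above (the interval is taken as closed at both ends):

```python
def lbea(estimates, q=0.25):
    """Lower Bound on Estimate Accuracy: slide an interval
    [L, (1+q)L/(1-q)] with each estimate in turn as the lower limit L,
    take the largest count m of estimates falling inside, and report
    1 - m/n: the proportion that must err by more than q whatever the
    (unknown) actual turns out to be."""
    xs = sorted(estimates)
    ratio = (1 + q) / (1 - q)              # 1.67 for q = 0.25
    best_m = max(sum(1 for x in xs if lo <= x <= ratio * lo) for lo in xs)
    return 1 - best_m / len(xs)

students = [0.8, 2, 3, 5, 6, 10, 10, 13, 20, 20, 20, 20,
            60, 80, 100, 100, 100, 160, 160, 180]
print(lbea(students))   # 0.75
```

The maximum count of m = 5 reproduces the conclusion above: at least 75% of the student estimates are in error by more than 25%.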
Research is continuing on the effectiveness of these approaches in improving process estimation in teaching and industry.
References

Boehm, B W (1981) "Software Engineering Economics", Prentice Hall.
Conte, S D, Dunsmore, H E and Shen, V Y (1986) "Software Engineering Metrics and Models", Benjamin-Cummings.
DeMarco, T (1982) "Controlling Software Projects", Yourdon Press.
Fenton, N E and Pfleeger, S L (1997) "Software Metrics - A Rigorous and Practical Approach" (2nd edition), International Thomson Publishing.
Kemerer, C (1987) "An Empirical Validation of Software Cost Estimation Models", Communications of the ACM, 30:5.
Kitchenham, B A (1996) "Software Metrics - Measurement for Software Process Improvement", NCC-Blackwell.
Lederer, A L and Prasad, J (1998) "A Causal Model for Software Cost Estimating Error", IEEE Transactions on Software Engineering, 24:2.
Malcolm, D G, Roseboom, J H and Clark, C E (1959) "Application of a Technique for Research and Development Program Evaluation", Operations Research, 7.
Selby, R W, Porter, A A, Schmidt, D C and Berney, J (1991) "Metric-Driven Analysis and Feedback Systems for Enabling Empirically Guided Software Development", Proceedings of the Thirteenth International Conference on Software Engineering, Austin.
Watts, R A (1987) "Measuring Software Quality", NCC.
Woodings, T L (1995) "A Taxonomy of Software Metrics", Software Process Improvement Network (SPIN), available from Comast Consulting, Perth.
More informationImproving Returns-Based Style Analysis
Improving Returns-Based Style Analysis Autumn, 2007 Daniel Mostovoy Northfield Information Services Daniel@northinfo.com Main Points For Today Over the past 15 years, Returns-Based Style Analysis become
More information1.1 Calculate VaR using a historical simulation approach. Historical simulation approach ( )
1.1 Calculate VaR using a historical simulation approach. Historical simulation approach ( ) (1) The simplest way to estimate VaR is by means of historical simulation (HS). The HS approach estimates VaR
More informationDATA HANDLING Five-Number Summary
DATA HANDLING Five-Number Summary The five-number summary consists of the minimum and maximum values, the median, and the upper and lower quartiles. The minimum and the maximum are the smallest and greatest
More informationSocial Studies 201 January 28, Percentiles 2
1 Social Studies 201 January 28, 2005 Positional Measures Percentiles. See text, section 5.6, pp. 208-213. Note: The examples in these notes may be different than used in class on January 28. However,
More informationThe following content is provided under a Creative Commons license. Your support
MITOCW Recitation 6 The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make
More informationELEMENTS OF MONTE CARLO SIMULATION
APPENDIX B ELEMENTS OF MONTE CARLO SIMULATION B. GENERAL CONCEPT The basic idea of Monte Carlo simulation is to create a series of experimental samples using a random number sequence. According to the
More informationSummarising Data. Summarising Data. Examples of Types of Data. Types of Data
Summarising Data Summarising Data Mark Lunt Arthritis Research UK Epidemiology Unit University of Manchester Today we will consider Different types of data Appropriate ways to summarise these data 17/10/2017
More informationImproving Stock Price Prediction with SVM by Simple Transformation: The Sample of Stock Exchange of Thailand (SET)
Thai Journal of Mathematics Volume 14 (2016) Number 3 : 553 563 http://thaijmath.in.cmu.ac.th ISSN 1686-0209 Improving Stock Price Prediction with SVM by Simple Transformation: The Sample of Stock Exchange
More informationCSC Advanced Scientific Programming, Spring Descriptive Statistics
CSC 223 - Advanced Scientific Programming, Spring 2018 Descriptive Statistics Overview Statistics is the science of collecting, organizing, analyzing, and interpreting data in order to make decisions.
More informationChapter 8 Statistical Intervals for a Single Sample
Chapter 8 Statistical Intervals for a Single Sample Part 1: Confidence intervals (CI) for population mean µ Section 8-1: CI for µ when σ 2 known & drawing from normal distribution Section 8-1.2: Sample
More informationDescriptive Statistics
Chapter 3 Descriptive Statistics Chapter 2 presented graphical techniques for organizing and displaying data. Even though such graphical techniques allow the researcher to make some general observations
More informationsubmission To the QCA 9 March 2015 QRC Working together for a shared future ABN Level Mary St Brisbane Queensland 4000
Working together for a shared future To the QCA 9 March 2015 ABN 59 050 486 952 Level 13 133 Mary St Brisbane Queensland 4000 T 07 3295 9560 F 07 3295 9570 E info@qrc.org.au www.qrc.org.au Page 2 response
More informationSIMULATION OF ELECTRICITY MARKETS
SIMULATION OF ELECTRICITY MARKETS MONTE CARLO METHODS Lectures 15-18 in EG2050 System Planning Mikael Amelin 1 COURSE OBJECTIVES To pass the course, the students should show that they are able to - apply
More informationChapter 3 Discrete Random Variables and Probability Distributions
Chapter 3 Discrete Random Variables and Probability Distributions Part 4: Special Discrete Random Variable Distributions Sections 3.7 & 3.8 Geometric, Negative Binomial, Hypergeometric NOTE: The discrete
More informationA comparison of two methods for imputing missing income from household travel survey data
A comparison of two methods for imputing missing income from household travel survey data A comparison of two methods for imputing missing income from household travel survey data Min Xu, Michael Taylor
More informationUNIT 4 MATHEMATICAL METHODS
UNIT 4 MATHEMATICAL METHODS PROBABILITY Section 1: Introductory Probability Basic Probability Facts Probabilities of Simple Events Overview of Set Language Venn Diagrams Probabilities of Compound Events
More information(a) (i) Year 0 Year 1 Year 2 Year 3 $ $ $ $ Lease Lease payment (55,000) (55,000) (55,000) Borrow and buy Initial cost (160,000) Residual value 40,000
Answers Applied Skills, FM Financial Management (FM) September/December 2018 Sample Answers Section C 31 Melanie Co (a) (i) Year 0 Year 1 Year 2 Year 3 $ $ $ $ Lease Lease payment (55,000) (55,000) (55,000)
More informationModelling catastrophic risk in international equity markets: An extreme value approach. JOHN COTTER University College Dublin
Modelling catastrophic risk in international equity markets: An extreme value approach JOHN COTTER University College Dublin Abstract: This letter uses the Block Maxima Extreme Value approach to quantify
More informationConfidence Intervals for Pearson s Correlation
Chapter 801 Confidence Intervals for Pearson s Correlation Introduction This routine calculates the sample size needed to obtain a specified width of a Pearson product-moment correlation coefficient confidence
More informationRisk Video #1. Video 1 Recap
Risk Video #1 Video 1 Recap 1 Risk Video #2 Video 2 Recap 2 Risk Video #3 Risk Risk Management Process Uncertain or chance events that planning can not overcome or control. Risk Management A proactive
More informationINSTITUTE OF ACTUARIES OF INDIA EXAMINATIONS. 20 th May Subject CT3 Probability & Mathematical Statistics
INSTITUTE OF ACTUARIES OF INDIA EXAMINATIONS 20 th May 2013 Subject CT3 Probability & Mathematical Statistics Time allowed: Three Hours (10.00 13.00) Total Marks: 100 INSTRUCTIONS TO THE CANDIDATES 1.
More informationWeek 2 Quantitative Analysis of Financial Markets Hypothesis Testing and Confidence Intervals
Week 2 Quantitative Analysis of Financial Markets Hypothesis Testing and Confidence Intervals Christopher Ting http://www.mysmu.edu/faculty/christophert/ Christopher Ting : christopherting@smu.edu.sg :
More informationLecture 12. Some Useful Continuous Distributions. The most important continuous probability distribution in entire field of statistics.
ENM 207 Lecture 12 Some Useful Continuous Distributions Normal Distribution The most important continuous probability distribution in entire field of statistics. Its graph, called the normal curve, is
More informationPassing the repeal of the carbon tax back to wholesale electricity prices
University of Wollongong Research Online National Institute for Applied Statistics Research Australia Working Paper Series Faculty of Engineering and Information Sciences 2014 Passing the repeal of the
More informationInternational Financial Markets Prices and Policies. Second Edition Richard M. Levich. Overview. ❿ Measuring Economic Exposure to FX Risk
International Financial Markets Prices and Policies Second Edition 2001 Richard M. Levich 16C Measuring and Managing the Risk in International Financial Positions Chap 16C, p. 1 Overview ❿ Measuring Economic
More informationWeek 1 Variables: Exploration, Familiarisation and Description. Descriptive Statistics.
Week 1 Variables: Exploration, Familiarisation and Description. Descriptive Statistics. Convergent validity: the degree to which results/evidence from different tests/sources, converge on the same conclusion.
More informationThe effects of transaction costs on depth and spread*
The effects of transaction costs on depth and spread* Dominique Y Dupont Board of Governors of the Federal Reserve System E-mail: midyd99@frb.gov Abstract This paper develops a model of depth and spread
More informationPart V - Chance Variability
Part V - Chance Variability Dr. Joseph Brennan Math 148, BU Dr. Joseph Brennan (Math 148, BU) Part V - Chance Variability 1 / 78 Law of Averages In Chapter 13 we discussed the Kerrich coin-tossing experiment.
More informationChapter 5. Sampling Distributions
Lecture notes, Lang Wu, UBC 1 Chapter 5. Sampling Distributions 5.1. Introduction In statistical inference, we attempt to estimate an unknown population characteristic, such as the population mean, µ,
More informationConfidence Intervals for Paired Means with Tolerance Probability
Chapter 497 Confidence Intervals for Paired Means with Tolerance Probability Introduction This routine calculates the sample size necessary to achieve a specified distance from the paired sample mean difference
More information