Risk Management, Quality Control & Statistics, part 2

By Kaan Etem, August 2014

This is the second part of a two-part series.

Basic Statistical Terms

Statistical sampling in loan QC involves drawing a limited sample of units from a larger population of units, with the intention of making inferences about the population with more or less precision and confidence. A simple example illustrates: you grab a fistful of marbles (the sample) from a bag full of marbles (the population). Based on the characteristics of the marbles in your fist, you can make educated guesses (inferences) about the marbles in the population. And based on the number of marbles in your fist relative to the number of marbles in the bag, your guess can fall within a narrower or wider range (precision), and you can be more or less sure that your inference is repeatable (confidence).

These basic statistical terms should be understood both by the QC department and by the senior managers for whom it generates QC reports. But it is up to the QC department to ensure the statistical validity of its sampling and auditing: that statistical samples are drawn randomly from the appropriate populations, that no statistical bias is introduced, and that the proper inferences are made. To extend the marble illustration: be sure that the bag contains the right marbles, that you are not cherry-picking marbles from the bag, and that your assessment of the marbles is consistent.

Finally, recognize that there are also sources of non-statistical bias. Watch for incomplete reviews of sampled loans, non-response (missing files), confusion of gross vs. net defect rates, exclusion of adverse (targeted) selections, and inconsistency in file review standards.
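To make precision and confidence concrete, here is a toy simulation, not from the article; the bag size, true defect rate and fist size are arbitrary illustrative choices. It repeatedly draws random fistfuls and counts how often the sample estimate lands within a chosen distance of the true rate; that hit rate is what confidence measures at a given precision.

```python
# Toy illustration (assumed parameters, not from the article): repeatedly
# grab a "fistful" of marbles from a bag and see how often the estimate
# lands within a chosen precision of the true defect rate.
import random

random.seed(42)

TRUE_DEFECT_RATE = 0.10   # share of defective marbles in the bag
BAG_SIZE = 10_000         # the population
FIST_SIZE = 400           # the sample; a bigger fist gives tighter precision
PRECISION = 0.03          # how close we require the estimate to be
TRIALS = 1_000

defectives = int(BAG_SIZE * TRUE_DEFECT_RATE)
bag = [1] * defectives + [0] * (BAG_SIZE - defectives)

hits = 0
for _ in range(TRIALS):
    fist = random.sample(bag, FIST_SIZE)   # random, unbiased draw
    estimate = sum(fist) / FIST_SIZE       # inference about the whole bag
    if abs(estimate - TRUE_DEFECT_RATE) <= PRECISION:
        hits += 1

# The hit rate approximates confidence at the chosen precision.
# Shrink FIST_SIZE (or tighten PRECISION) and watch it fall.
print(f"Estimate within {PRECISION:.0%} of truth in {hits / TRIALS:.0%} of trials")
```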

Focused Reporting

Once you have consensus about what to measure and how metrics are defined, visualize the kinds of reports you wish to produce. This will drive your sampling.

There are still QC operations that generate reports as thick as a phone book about each finding encountered in a QC cycle. Commonly known as "data dumps," such reports arguably do more harm than good, since it is up to the audience to winnow out the information that matters and to judge how much it matters. No wonder QC reports have been so roundly ignored.

Your own reports should be succinct, easy to interpret, actionable, timely and accurate. Make liberal use of charts, graphs and illustrations, which can convey information quickly, concisely and in context. Include an executive summary at the front of the report package that highlights the most important quality results and trends. Support this with trend and findings reports so that significant results can be traced back to sources and root causes.

Figure 2. Quality Trend Report showing observed sample defect rates (blue bars) and inference to the population at a given precision and confidence (red lines on top of blue bars).

Statistical Sampling Strategy

The guiding principle for statistical sampling in loan QC is to minimize random sampling and emphasize risk-based (aka "targeted" or "discretionary") sampling. This offers the most efficient way to effectively monitor your quality. You can gain additional efficiencies if your audits of randomly sampled loans are similar to audits of your targeted samples, in which case you can credit the audits performed under random samples towards the required counts for targeted samples. This means certain loan audits count both towards a randomly sampled audit and a targeted audit, a great way to leverage your auditing capacity.

To take advantage of this leverage, begin with the highest-level, least granular layer of sampling (e.g., a statistically derived random sample drawn from a population of all loans originated in the month of March). This establishes an overall quality benchmark. It also meets many regulators' and/or investors' minimum requirements for random statistical samples. Then draw appropriately sized random samples from targeted sub-populations (e.g., only retail channel originations), crediting qualifying loans that were sampled in the first sample (retail loans) towards the count required for the second sample. As your samples drill deeper into more granular populations (e.g., new products, new loan officers, appraisers on a watch list, risky states, etc.), be sure to give yourself credit for earlier samples that qualify for later samples.

This approach is particularly useful if you have a mandate to regularly sample from every unit in a particular class. For example, some enterprises require that at least one loan be sampled every month from each broker sending loans to a lender. Almost invariably, this is an exercise in futility, because any samples drawn will both be too small and be drawn from too small a population to be meaningful. But organizations are filled with well-intentioned distortions like this. If you are forced to do this sort of review, leave the sampling for it to the very end. A large number of qualifying loans will already have been sampled in earlier sample layers, and the net number you need to sample will be reduced. In combination with this, consider sampling a larger number of loans, less frequently, and from a smaller subset of individual brokers: for example, sample a statistically valid number of loans quarterly instead of monthly, from one broker region per calendar quarter.
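The crediting logic described above is easy to mechanize. Here is a minimal sketch, with hypothetical loan records and required counts; in practice, the required counts would come from the sample size estimation described in the next section.

```python
# Sketch of layered, "credited" sampling (hypothetical data and counts).
import random

random.seed(7)

# Hypothetical population: (loan_id, channel), roughly 2/3 retail.
population = [(i, "retail" if i % 3 else "broker") for i in range(1, 10_001)]

# Layer 1: the overall random sample, the least granular layer.
# This establishes the overall quality benchmark.
overall_sample = random.sample(population, 300)
sampled_ids = {loan_id for loan_id, _ in overall_sample}

# Layer 2: a targeted sub-population (e.g., retail channel only).
retail_required = 250
retail_pop = [loan for loan in population if loan[1] == "retail"]

# Credit qualifying loans already audited in layer 1 toward layer 2...
credited = [loan for loan in overall_sample if loan[1] == "retail"]
shortfall = max(0, retail_required - len(credited))

# ...and draw only the net shortfall from the not-yet-sampled remainder.
remainder = [loan for loan in retail_pop if loan[0] not in sampled_ids]
top_up = random.sample(remainder, shortfall)

print(f"Layer 2 needed {retail_required}: credited {len(credited)}, drew {len(top_up)} new")
```

Deeper layers (new products, watch-list appraisers, and so on) repeat the same credit-then-top-up step against all loans sampled so far.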

Estimating the Required Statistical Sample Size

While you could calculate an estimated sample size manually, online calculators exist that ease the burden. Be sure to understand how each calculator derives its sample size; calculators may be intended for different audiences, use different assumptions, or use different inputs. The inputs for statistical sample size estimation are population size, precision, confidence, and expected quality (or defect rate). A higher expected defect rate means a larger statistical sample size, all else being equal. So by lowering defect rates, organizations not only reduce the costs of poor quality, they also reduce the number of audits required. A worthy goal.

A suitable statistical sample size calculator for loan QC (for example, at http://bit.ly/1ncofzm) should achieve a 2% statistical precision at a one-sided 95% confidence level on an annual basis. This has become the industry standard.
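For illustration, here is a minimal sketch of both directions of that calculation, using a normal approximation with a finite population correction. This is one common method, not necessarily the one any particular online calculator uses, which is exactly why it pays to understand each calculator's assumptions. The second function anticipates the post-review confirmation described under Drawing Conclusions below.

```python
# Sketch: sample size estimation and the post-review upper bound, using a
# normal approximation with finite population correction (one common
# approach; specific calculators may use other methods and assumptions).
import math

Z_ONE_SIDED_95 = 1.645  # z-score for a one-sided 95% confidence level

def required_sample_size(population: int, expected_rate: float,
                         precision: float, z: float = Z_ONE_SIDED_95) -> int:
    """Sample size needed to hit the target precision at confidence z."""
    n0 = z ** 2 * expected_rate * (1 - expected_rate) / precision ** 2
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

def upper_confidence_bound(population: int, sample: int, observed_rate: float,
                           z: float = Z_ONE_SIDED_95) -> float:
    """Post-review check: one-sided upper bound on the population rate."""
    fpc = (population - sample) / (population - 1)
    margin = z * math.sqrt(observed_rate * (1 - observed_rate) / sample * fpc)
    return observed_rate + margin

# Example: 5% expected defects, 2% precision, one-sided 95% confidence.
n = required_sample_size(population=10_000, expected_rate=0.05, precision=0.02)
print(f"Required sample size: {n}")   # roughly 300 loans under these assumptions
print(f"Upper bound at observed 5%: {upper_confidence_bound(10_000, n, 0.05):.1%}")
```

Note how raising expected_rate raises the required sample size, which is the flip side of the point above: lowering defect rates reduces the number of audits required.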

Drawing Conclusions

QC's objective is to make valid inferences about the various populations from which loans have been sampled. Whether these populations are entire servicing portfolios, originations from a geographic area, pre-funded loans in the pipeline, appraisers on a watch list, or newly introduced loan products, the idea is to gauge the quality of the particular population. Yet many lenders simply report the results of their sampled audits without making any inferences to the population at all. Without that extra step, audit reporting is far less meaningful and reliable.

Making inferences is one thing; making the right inferences is another. If your goal is to achieve 2% statistical precision at a one-sided 95% confidence interval on an annual basis, then you are looking to make a statement such as this: "Our random sample of 26 loans from this population of 10,000 loans yielded a defect rate of 5%. So if we were to randomly sample the same number of loans a total of 100 times, then 95 of those times [95% confidence] the population from which we drew will have a defect rate of 7% or less [2% precision at a one-sided confidence interval]." If this had been a two-sided 95% confidence interval, a larger number of loans would have been sampled and we would have been in a position to say that the defect rate was in a range between 3% and 7% (i.e., the 5% observed defect rate plus or minus the 2% precision level we set). However, in loan QC we are interested in the likely maximum defect rate, so we can benefit from the lower sample size required for a one-sided confidence interval.

Two additional points about drawing conclusions in this sort of statistical analysis. One is that in order to confirm statistical precision, it is necessary to calculate the confidence interval after reviews are complete. This involves solving the same statistical formula used in sample size estimation (where an expected incidence or defect rate was used), except solving for the confidence interval with the observed defect rate.

The other point regards confidence intervals (or "control limits"), which are used in statistical control charts to separate the signal from the noise. In auditing a sample of broker loans, you may find that several brokers have higher-than-average defect rates. But much of this variation may be the noise of randomness. It is the outliers whose defect rates are statistically significant, or beyond the upper control limit, that merit further examination. 95% and 99% confidence, equating to roughly two and three standard deviations respectively, are two accepted thresholds of confidence. At 95% confidence, we are saying that there is an unlikely 5% chance that a defect rate outside the upper control limit is attributable to statistical randomness. Instead, there is likely to be something worth investigating.

Figure 3. Statistical Control Chart showing several brokers with above-average defect rates but only one outlier (#4422) above the upper control limit, or out of statistical control.

These statistical techniques, used consistently and without bias, can efficiently provide great insight into quality, taking you some way towards managing enterprise risk. With the level of uncertainty and diversity of risk that is prevalent in the industry today, that is a step in the right direction.

Kaan Etem is Senior Vice President of Cogent QC Systems, a provider of risk management software solutions for loan quality and compliance. Mr. Etem can be reached at Kaan.Etem@cogentqc.com.