Acceptance Criteria: What Accuracy Will We Require for M&V2.0 Results, and How Will We Prove It?

Quality, accurate results

- Tool testing can tell us that 2.0 technologies are reliable: that they can model and predict energy use well over the time horizons used for EE.
- Once we have reliable tools, we still have to verify that each application generates a quality result. Many, but not all, buildings are predictable/model-able.
- Uncertainty analysis can quantify the error in M&V 2.0 results due to modeling error.
- Gross savings at the meter may not be gross savings due to the measure, i.e., non-routine adjustments may be needed.
- Transparent documentation, especially of non-routine adjustments, is needed.

Non-routine events and adjustments

- Gross metered savings may not reflect gross program/measure savings: occupancy may change, or loads may be added or removed.
- Most 2.0 tools do *not* capture non-routine events; comparison-group 2.0 tools may.
- It is possible that 2.0 analytics can flag cases where savings drop or increase unexpectedly, so that implementers can flag events and make adjustments; currently this is a manual process (see the sketch below).
- If whole-building M&V were used at large scale, would these events cancel out?
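One way such automated flagging might work, shown as a minimal sketch only: monitor the drift of baseline-model residuals. The residual input, window size, and threshold are illustrative assumptions, not any particular 2.0 tool's method.

```python
# Minimal sketch: flag candidate non-routine events from baseline-model
# residuals (actual - predicted energy). Window and threshold are
# illustrative assumptions, not any specific M&V 2.0 tool's logic.
import numpy as np

def flag_nonroutine(residuals, window=30, z_threshold=3.0):
    """Return indices where the rolling mean residual drifts beyond
    z_threshold standard errors of the early-period noise level."""
    residuals = np.asarray(residuals, dtype=float)
    sigma = residuals[:window].std(ddof=1)   # noise level from early data
    flags = []
    for i in range(window, len(residuals) + 1):
        z = residuals[i - window:i].mean() / (sigma / np.sqrt(window))
        if abs(z) > z_threshold:
            flags.append(i - 1)              # last day of the flagged window
    return flags

# Example: a load added on day 120 shifts residuals up by 8 kWh/day
rng = np.random.default_rng(0)
resid = rng.normal(0.0, 5.0, 240)
resid[120:] += 8.0
print(flag_nonroutine(resid)[:3])            # first flagged day indices
```

A flag like this would not make the adjustment itself; it would prompt the implementer to document the event for evaluation review.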

Common sources of error in savings estimation and evaluation

- Measurement error: is the instrument accurate? Often assumed negligible for revenue-grade utility meters.
- Modeling error: does the model fit the data and characterize the phenomenon? Often characterized with goodness-of-fit statistics.
- Sampling error: is a selection representative of the population? Often considered in evaluation; not applicable to single-site M&V.

Uncertainty analysis

- ASHRAE Guideline 14 provides a formulation to quantify savings uncertainty due to model error (no sampling). Its scope is individual buildings/projects, and it treats measurement uncertainty as negligible for revenue-grade utility meters.
- Savings uncertainty is a function of the number of baseline and post-period data points, the energy savings, the desired confidence level, and model accuracy (error); a sketch follows.
- Add up each building's savings to get a program-level result; use error propagation to get the aggregated savings uncertainty (not covered in ASHRAE).
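For concreteness, a sketch of the widely cited Guideline 14 fractional-savings-uncertainty approximation. This is the version without the autocorrelation adjustment used for correlated daily or hourly data, and the degrees-of-freedom choice is an illustrative assumption.

```python
# Hedged sketch of the ASHRAE Guideline 14 fractional savings
# uncertainty (FSU) approximation, without the autocorrelation
# adjustment needed for correlated daily/hourly data.
from math import sqrt
from scipy.stats import t as t_dist

def fractional_savings_uncertainty(cv_rmse, savings_frac,
                                   n_baseline, m_post, confidence=0.68):
    """FSU ~ t * 1.26 * CV(RMSE) * sqrt((n + 2) / (n * m)) / F."""
    # df choice is illustrative; G14 uses data points minus model terms
    t_stat = t_dist.ppf(1 - (1 - confidence) / 2, df=n_baseline - 1)
    return (t_stat * 1.26 * cv_rmse
            * sqrt((n_baseline + 2) / (n_baseline * m_post))
            / savings_frac)

# Example: 12 baseline and 12 post months, CV(RMSE) = 15%, 10% savings
print(f"{fractional_savings_uncertainty(0.15, 0.10, 12, 12):.0%}")
```

Note how quickly the FSU grows when the savings fraction is small relative to model error: in the example above, monthly data and 10% savings yield an uncertainty above the 50% bar discussed later.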

Usual interpretation of uncertainty

- Establish a range of values (uncertainty) and the likelihood (confidence) that savings lie in that range. Lower uncertainty means a smaller confidence interval, i.e., a smaller range.
- 95% confident that savings are between 4,000 and 12,000, i.e., 8,000 +/- 4,000: fractional savings uncertainty is 50%.
- 68% confident that savings are between 6,000 and 10,000, i.e., 8,000 +/- 2,000: fractional savings uncertainty is 25%.

[Figure: number line from 4,000 to 12,000 showing both intervals around the 8,000 estimate]
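A small worked check of those two statements, assuming the usual normal approximation behind such intervals (the numbers are the slide's):

```python
# Sketch: under a normal approximation, the slide's 95% and 68% bands
# are consistent, because the half-width scales with the z-value.
from scipy.stats import norm

savings = 8_000
half_95 = 4_000                          # 95% band: [4,000, 12,000]
sigma = half_95 / norm.ppf(0.975)        # implied standard error (~2,041)
half_68 = sigma * norm.ppf(0.84)         # ~2,000, matching the 68% band

print(f"68% band: {savings:,} +/- {half_68:,.0f}")
print(f"fractional savings uncertainty: 95% -> {half_95 / savings:.0%}, "
      f"68% -> {half_68 / savings:.0%}")
```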

Illustration: savings uncertainty at the building and aggregate level, due to model uncertainty (no sampling)

- For the aggregate of the 39 buildings, at the 95% confidence level: savings = 3.96% +/- 0.3%, i.e., within a confidence interval of [3.66%, 4.26%].
- The aggregate far exceeds ASHRAE guidance.

[Figure: savings uncertainty ranges for each of the 39 buildings, at the 95% confidence level]
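A sketch of the propagation step that produces this narrowing, assuming independent building-level model errors combined in quadrature. The 39 building values below are invented for illustration; they are not the slide's data.

```python
# Sketch: propagate independent building-level savings uncertainty
# (95% half-widths) to the aggregate via root-sum-of-squares.
# The 39 buildings below are simulated, not the slide's data.
import numpy as np

rng = np.random.default_rng(1)
savings = rng.uniform(5_000, 50_000, 39)           # kWh saved per building
half_widths = savings * rng.uniform(0.2, 0.8, 39)  # per-building 95% bands

total = savings.sum()
total_half_width = np.sqrt((half_widths ** 2).sum())

print(f"mean building-level fractional uncertainty: "
      f"{(half_widths / savings).mean():.0%}")
print(f"aggregate fractional uncertainty: {total_half_width / total:.0%}")
```

Quadrature addition is why the aggregate interval is far tighter than any single building's: uncorrelated errors partially cancel.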

Some cautions on certainty analyses

- If accuracy concerns are an issue for M&V 2.0, we should establish what bar for rigor must be met. For 2.0 tools, use the same standards of sufficiency applied to 1.0, and consider whether 2.0 can give equivalent or higher levels of certainty.
- This is not a suggestion to quantify every source of uncertainty in EE savings estimation; avoid a double standard for existing-conditions whole-building approaches versus deemed, custom, and simulation-based approaches.
- Currently, gross savings are often treated as point values, with no uncertainty; uncertainty is considered in program evaluation, often via sampling.

Existing confidence-uncertainty guidance

- ASHRAE puts the bar at 68/50 (68% confidence, 50% fractional savings uncertainty) for building-level gross M&V.
- Propagating gross uncertainty from the building level to the aggregate, multi-building level reduces the uncertainty of the total.
- Forward capacity markets have used 80/20 (80% confidence, +/- 20% precision) for portfolio-wide savings EM&V.
- These criteria arise from separate use cases. What will we require of M&V 2.0 tools applied to a program?

Certainty/uncertainty wrap-up

- Savings uncertainty may be a useful framework for considering the M&V 2.0 accuracy associated with our imperfect ability to model and predict consumption.
- Non-routine adjustments that attribute meter-level savings to measures are currently manual; they can become more automated and should be well documented for evaluation review.
- Collective questions: How do we set the uncertainty target for accepting 2.0 tool results? What do we require for non-routine event documentation?

Questions on Uncertainty

Program Evaluation Perspective
Sue Haselhorst, Vice President of Project Analytics

Scaling site results to a program evaluation

- The results of an impact evaluation are high-stakes: they drive shareholder incentives and are a large factor in cost-effectiveness.
- Best practice specifies impact evaluations that will yield unbiased results; a precise value in and of itself does not ensure an unbiased result.
- Considerations in scaling to a full impact evaluation follow.

Accurate but biased

In 1948, opinion polls based on telephone surveys projected that Dewey would beat Truman. The newspapers were so confident that they printed the result before all the returns were in. It turned out that Republicans owned telephones; Democrats, not so much.

A precisely biased result (hypothetical)

- In blue: evaluated results for all sites (including the red-marked sites) yield a 97% realization rate with a precision of +/- 5%.
- In red: a sub-sample that was not selected randomly yields a 60% realization rate with similar precision.
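A small simulation can reproduce this pattern, with invented numbers chosen to echo the hypothetical: a non-random sub-sample (here, the lowest-performing sites) produces a tight interval around the wrong answer.

```python
# Sketch: precise but biased. Site realization rates are simulated;
# the numbers are invented to echo the slide's hypothetical.
import numpy as np

rng = np.random.default_rng(2)
rr_sites = rng.normal(0.97, 0.20, 1_000)     # all sites (the "blue" result)

biased = np.sort(rr_sites)[:100]             # non-random: lowest 100 sites
se = biased.std(ddof=1) / np.sqrt(biased.size)

print(f"full-population RR: {rr_sites.mean():.0%}")
print(f"biased sub-sample RR: {biased.mean():.0%} +/- {1.96 * se:.1%}")
```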

Uncertainty and bias

On-site M&V: sampling error, unknown measurement error, but minimal bias
- Sites are selected for on-site M&V, which introduces sampling error.
- Sampling error is often prescribed to meet +/- 10% at the 90% confidence level: if the sample were redrawn ten times, roughly nine of the ten results would fall within the stated +/- 10% band (the sample-size arithmetic is sketched below).
- The results are unbiased, as long as the sample has been selected randomly.
- This error value does not account for measurement error, that is, the uncertainty associated with individual sites.

Billing analysis: no sampling error, some assessment of measurement error, unknown bias
- Billing analysis starts with a census of sites, so there is no sampling error.
- However, bias is potentially introduced by dropping sites that are not suitable for billing analysis (insufficient data, too many estimated reads, badly behaved).
- There is some measurement error (and attributing each technology's contribution to savings within this band carries its own uncertainty).
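For reference, the simple-random-sampling arithmetic behind the 90/10 criterion, sketched with an assumed coefficient of variation of 0.5:

```python
# Sketch of the standard 90/10 sample-size calculation (simple random
# sampling); cv = 0.5 is a common planning assumption, not a given.
from math import ceil
from scipy.stats import norm

def sample_size(cv=0.5, precision=0.10, confidence=0.90, population=None):
    z = norm.ppf(1 - (1 - confidence) / 2)    # 1.645 for 90% confidence
    n0 = (z * cv / precision) ** 2
    if population:                            # finite population correction
        n0 /= 1 + n0 / population
    return ceil(n0)

print(sample_size())                 # ~68 sites for cv = 0.5
print(sample_size(population=200))   # ~51 with a small population
```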

SMUD example: direct-install small business whole-building evaluation, billing-data attrition

- What is your confidence in savings estimated using a non-random sample of 45 sites?
- Source: Deep Savings for Small CDI, ACEEE Summer Study 2016

Systematic confounding factors

Billing analysis works well in the residential sector:
- There is a one-to-one correspondence between the measure and the meter serving the measure.
- Savings are often a large fraction of bills (e.g., weatherization savings on the order of 20%).
- Stable usages fall within a similar order of magnitude (500 kWh to 20,000 kWh).
- There are often tens of thousands of accounts in the analysis.

It is less successful in the non-residential sector:
- Multi-meter accounts occur frequently, and the correspondence between the measure and the meter serving the measure is uncertain.
- Savings fractions are small.
- Usage ranges over orders of magnitude (20,000 to 200,000,000 kWh).
- The number of participants is relatively small.
- NY and MA have recently tried and failed to conduct billing analyses, attributed in some part to meter-mismatch problems.

Potential Pilot Features

- Collect data throughout the pilot on account attrition: how many accounts were excluded from analysis, and why.
- Within the confines of the pilot, track the screening processes, the number of sites that pass or fail screening, and the reasons for attrition:
  - Did not meet initial screening
  - Insufficient pre- or post-period billing data
  - Missing or estimated meter reads
  - Poorly performing individual models
  - Model fails other tests
- At the conclusion of the test, estimate the potential impact of the excluded sites on the outcome (a minimal tracking sketch follows).
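A minimal sketch of such attrition tracking; the screen names, thresholds, and site fields are all illustrative assumptions:

```python
# Sketch: tally why each site drops out of the billing analysis so the
# excluded sites' potential impact can be assessed at the pilot's end.
# Screen order, thresholds, and fields are illustrative assumptions.
from collections import Counter

attrition = Counter()

def screen(site):
    """Return the first screen a site fails, or None if it survives."""
    checks = [
        ("insufficient pre/post billing data", site["pre_months"] < 12),
        ("missing or estimated meter reads", site["estimated_reads"] > 2),
        ("poorly performing model", site["cv_rmse"] > 0.25),
    ]
    for reason, failed in checks:
        if failed:
            attrition[reason] += 1
            return reason
    return None

sites = [
    {"pre_months": 14, "estimated_reads": 0, "cv_rmse": 0.12},
    {"pre_months": 8,  "estimated_reads": 1, "cv_rmse": 0.10},
    {"pre_months": 13, "estimated_reads": 5, "cv_rmse": 0.30},
]
survivors = [s for s in sites if screen(s) is None]
print(dict(attrition))
print(f"{len(survivors)}/{len(sites)} sites retained")
```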