Framework and Methods for Infrastructure Management. Samer Madanat UC Berkeley NAS Infrastructure Management Conference, September 2005


Outline
1. Background: Infrastructure Management
2. Flowchart for IM Systems
3. Issues in infrastructure deterioration models
4. Issues in M&R decision-making
5. Focus: Model uncertainty
6. Adaptive MDP formulations
7. Parametric analyses
8. An alternate approach: Robust optimization
9. Results

Infrastructure Management: concerned with the selection of cost-effective policies to monitor, maintain, and repair (M&R) deteriorating facilities in infrastructure systems. Examples of IM systems: Arizona's Pavement Management System; PONTIS, the FHWA bridge management system.

Deterioration and M&R actions: Facilities deteriorate under the influence of traffic and environmental factors, and user costs increase as condition worsens. To mitigate or reverse deterioration, agencies apply maintenance and repair (M&R) actions. A range of M&R policies is available, from frequent low-cost maintenance to infrequent high-cost rehabilitation, and resources must be allocated among the facilities in the network.

Flowchart for IM Systems: Inspection and Data Collection → Performance Modeling and Prediction → M&R and Inspection Policy Selection.

Infrastructure deterioration models. Dependent variable: future condition of a facility. Explanatory variables: usage, structure, environmental conditions, past deterioration, history of M&R actions. Forms: continuous or discrete condition states. Example: stochastic models (Markov processes, semi-Markov processes, etc.).
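To make the discrete-state Markov form concrete, here is a minimal sketch; the three condition states and all probabilities are hypothetical illustrations, not values from the talk.

```python
import numpy as np

# Hypothetical one-period transition matrix under "do nothing" for a
# facility with three condition states: 0 = good, 1 = fair, 2 = poor.
# Row i gives the probability distribution over next-period states.
P = np.array([
    [0.8, 0.2, 0.0],   # good: usually stays good, sometimes drops to fair
    [0.0, 0.7, 0.3],   # fair: may deteriorate to poor
    [0.0, 0.0, 1.0],   # poor: absorbing unless a repair action is taken
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row sums to one
```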

Issues in deterioration modeling:
1. Data source used: experimental data vs. field data. Experimental data may not represent the true process of deterioration in the field (e.g., accelerated pavement testing) and may suffer from censoring (unobserved failure times due to the limited duration of experiments). Field data suffer from large measurement errors, endogeneity of observed design variables (pavement sections are designed on the basis of predicted traffic), and selectivity bias (e.g., maintenance activity is selected based on observed deterioration).
2. Discrete indicators of performance: the interest is in the duration of some process (time to failure, time to condition transition). What is the appropriate probability model?

Issue 1. Deterioration modeling by combining experimental and field data: specifications based on a physical understanding of facility behavior, with structured statistical estimation methods for parameter calibration; joint estimation with experimental and field data sets. Examples: nonlinear models of pavement rutting progression (Archilla and Madanat 2000, 2001); nonlinear models of pavement roughness progression (Prozzi and Madanat 2003, 2004).

Prediction tests with nonlinear model (Prozzi and Madanat 2003). [Figure: roughness (m/km IRI) vs. axle repetitions (0 to 1,200,000), comparing observed data with the nonlinear model and the original AASHO model.]

Issue 2. Stochastic deterioration models of facility state transitions. Some facilities have a monotonic failure rate, for which parametric methods (e.g., Weibull) are appropriate; examples: models of state transition probabilities for bridge decks (Mishalani and Madanat 2002); models of pavement crack initiation (Shin and Madanat 2003). For others, the failure rate cannot be represented by known probability models, and semi-parametric methods are more appropriate; example: models of overlay crack initiation for in-service pavements (Nakat and Madanat 2005).
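For reference, the Weibull hazard makes the "monotonic failure rate" condition precise; this is a standard result, not from the slides:

```latex
h(t) = \frac{k}{\lambda}\left(\frac{t}{\lambda}\right)^{k-1},
\qquad
\begin{cases}
k > 1: & \text{hazard increases monotonically in } t,\\
k = 1: & \text{constant hazard (exponential case)},\\
k < 1: & \text{hazard decreases monotonically in } t.
\end{cases}
```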

Estimated transition probabilities (Mishalani and Madanat 2002). [Figure: transition probability (0.0 to 1.0) vs. time-in-state (0 to 50 years) for corrosion-induced bridge deck deterioration, condition state 8.]

Issues in M&R Decision-Making:
1. Accounting for stochastic facility deterioration in M&R decision-making
2. Accounting for budget constraints (system-level vs. facility-level problems)
3. Accounting for measurement errors in inspection
4. Accounting for model uncertainty: either successively reduce model uncertainty through parameter updating with the latest inspection data, or accept model uncertainty as a fact of life and avoid worst-case scenarios

Issue 1. Markov Decision Process (MDP). Markov assumption: facility deterioration is a function only of the current state and the current action. Deterioration model: Markovian transition probabilities P(x_{t+1} = j | x_t = i, a_t). Finite-horizon problems: solve by dynamic programming. Infinite-horizon problems: solve by successive approximation or policy iteration. [Figure: partial decision tree for the Markov decision process, showing state x_t, action a_t, and successor state x_{t+1}.]
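A minimal backward-induction (dynamic programming) sketch for the finite-horizon, facility-level MDP described above; the two actions, transition matrices, and costs are hypothetical, not values from the talk.

```python
import numpy as np

# Hypothetical transition matrices P[a] and one-period costs cost[a][i]
# for a three-state facility (0 = good, 1 = fair, 2 = poor).
P = {
    "do-nothing":   np.array([[0.6, 0.4, 0.0],
                              [0.0, 0.5, 0.5],
                              [0.0, 0.0, 1.0]]),
    "rehabilitate": np.array([[0.9, 0.1, 0.0],
                              [0.8, 0.2, 0.0],
                              [0.7, 0.3, 0.0]]),
}
cost = {"do-nothing":   np.array([0.0, 5.0, 20.0]),    # mostly user costs
        "rehabilitate": np.array([12.0, 14.0, 18.0])}  # agency + user costs

T, alpha, n_states = 20, 0.95, 3
V = np.zeros(n_states)                     # terminal cost-to-go
policy = np.empty((T, n_states), dtype=object)

for t in reversed(range(T)):               # backward induction over the horizon
    actions = list(P)
    Q = np.stack([cost[a] + alpha * P[a] @ V for a in actions])
    policy[t] = [actions[k] for k in Q.argmin(axis=0)]
    V = Q.min(axis=0)                      # minimum expected discounted cost

print("First-period policy by state:", policy[0])
```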

Issue 2. System-level MDP Use randomized policies: solve for optimal fractions of facilities in state i to which action a is applied Formulate as a linear program Infinite planning horizon problems: minimize expected cost per year Finite planning horizon problems: minimize expected discounted total cost for planning horizon

System-level MDP formulation (for finite horizon problem)
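The equations on this slide did not survive transcription. As a hedged reconstruction (notation mine, in the spirit of the randomized-policy linear programs used in the Arizona PMS literature), with w_{t,i,a} the fraction of facilities in state i at time t receiving action a:

```latex
\begin{aligned}
\min_{w \ge 0} \quad & \sum_{t=1}^{T} \sum_{i} \sum_{a} \alpha^{\,t-1}\, c(i,a)\, w_{t,i,a} \\
\text{s.t.} \quad & \sum_{a} w_{t+1,j,a} = \sum_{i} \sum_{a} p_{ij}(a)\, w_{t,i,a}
\quad \forall\, j,\; t = 1,\dots,T-1, \\
& \sum_{a} w_{1,i,a} = q_i \quad \forall\, i,
\end{aligned}
```

where q_i is the initial distribution of facilities over condition states and α is the discount factor; per-period budget constraints of the form Σ_{i,a} g(i,a) w_{t,i,a} ≤ B_t can be appended.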

Issue 3. The Latent MDP. Measurement uncertainty: the condition state is imperfectly observed, so the state of the system is given by the information state. The evolution of the information state is Markovian, with transition probabilities P(I_{t+1} = k | I_t, a_t). Apply dynamic programming to solve the finite-horizon problem. [Figure: partial decision tree for the latent Markov decision process, showing information state I_t, action a_t, and successor information state I_{t+1}.]
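The Markovian evolution of the information state follows the standard Bayesian filter; a sketch, assuming an observation model P(z | j) (the observation model is my notation, not from the slides):

```latex
I_{t+1}(j) \;=\;
\frac{P(z_{t+1} \mid j) \sum_{i} p_{ij}(a_t)\, I_t(i)}
     {\sum_{j'} P(z_{t+1} \mid j') \sum_{i} p_{ij'}(a_t)\, I_t(i)}
```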

Issue 4: Model Uncertainty. Model uncertainty is due to incomplete knowledge of facility deterioration processes. Reasons: partial information about facility structure or materials; uncertainty about construction quality; poorly understood material behavior; differences between laboratory and field deterioration. This is epistemic uncertainty, as opposed to statistical uncertainty (represented by a stochastic model or random error).

Model uncertainty vs. random error. [Figure: two plots of facility condition state vs. time, showing the predicted observation range around the expected deterioration process (E, with bounds E1 and E2) and the actual/observed deterioration process (A).]

Accounting for model uncertainty: the Adaptive MDP (Durango and Madanat 2002) characterizes more than one possible deterioration model, represents model uncertainty through decision-maker beliefs, uses Bayes' Law to update the beliefs, and uses the updated beliefs to determine M&R policies for subsequent time periods.
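A minimal sketch of the belief update over candidate deterioration models; the slow/medium/fast transition matrices and the two-state facility are hypothetical illustrations (the three-element prior mirrors the form used in the results slides below).

```python
import numpy as np

# Hypothetical transition matrices asserted by three candidate models
# for a two-state facility (0 = good, 1 = failed).
models = {
    "slow":   np.array([[0.9, 0.1], [0.0, 1.0]]),
    "medium": np.array([[0.7, 0.3], [0.0, 1.0]]),
    "fast":   np.array([[0.5, 0.5], [0.0, 1.0]]),
}
belief = {"slow": 0.05, "medium": 0.05, "fast": 0.90}  # prior beliefs

def update_belief(belief, i, j):
    """Bayes' Law: after observing a transition i -> j, reweight each
    model by the likelihood it assigns to that transition, renormalize."""
    posterior = {m: belief[m] * models[m][i, j] for m in belief}
    total = sum(posterior.values())
    return {m: p / total for m, p in posterior.items()}

belief = update_belief(belief, i=0, j=0)  # facility stayed in good condition
print(belief)  # probability mass shifts toward the slower models
```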

Bayesian updating of beliefs. [Figure: three plots of facility condition state vs. time, illustrating how beliefs over candidate deterioration models are revised as condition observations accumulate.]

Open-loop feedback vs. closed-loop control. [Figure: facility condition state vs. time under open-loop feedback control and under closed-loop control.]

Results: value of updating. [Figure: four panels of expected costs ($/yard) vs. pavement segment state (2 to 8): actual deterioration rate slow with prior beliefs (0.05, 0.05, 0.90); actual deterioration rate fast with prior beliefs (0.90, 0.05, 0.05); actual deterioration rate slow with prior beliefs (0.33, 0.34, 0.33); actual deterioration rate fast with prior beliefs (0.33, 0.34, 0.33).]

Results: CLC vs. OLFC. [Figure: P(Y(t) = Slow) vs. years (1 to 26) for actual deterioration rate slow, prior beliefs (0.05, 0.05, 0.90), initial state new.]

Problems with adaptive control methods: CLC methods are not practical for system-level decision-making, and OLFC methods may not converge to the true model. To guarantee convergence, OLFC methods require costly probing. Both CLC and OLFC require large amounts of data to reduce deterioration-model uncertainty, but condition survey data accumulate slowly.

Alternate approach: robust optimization. Work in progress (Kuhn and Madanat 2005): do not assume full knowledge of the model parameters; assume only that the parameters belong to defined uncertainty sets, and seek solutions that are not overly sensitive to any realization of uncertainty within the set. Range of possible criteria: MAXIMIN, MAXIMAX, Hurwicz.

System-level MAXIMIN MDP formulation
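This formulation slide also lost its equations in transcription. As a hedged sketch (not necessarily Kuhn and Madanat's exact statement): since the objective is a cost, MAXIMIN amounts to minimizing the worst-case cost over the uncertainty set U of transition probabilities,

```latex
\min_{w \in W} \;\max_{p \in U} \;
\sum_{t,i,a} \alpha^{\,t-1}\, c(i,a)\, w_{t,i,a},
```

with W the feasible set defined by the conservation and budget constraints, which must now hold for the realizations p ∈ U.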

System-level MDP: cost ranges

Alternatives to MAXIMIN. MAXIMAX: assume nature is benevolent. Hurwicz criterion: define an optimism level β in [0,1], let 1 − β be the pessimism level, and maximize the sum of the optimism level times the best possible outcome and the pessimism level times the worst possible outcome.
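In symbols, writing V̄(x) and V̲(x) for the best- and worst-case outcomes of policy x over the uncertainty set (my notation):

```latex
H_\beta(x) \;=\; \beta\, \overline{V}(x) \;+\; (1-\beta)\, \underline{V}(x),
\qquad \beta \in [0,1],
```

so that β = 1 recovers MAXIMAX and β = 0 recovers MAXIMIN; the decision-maker selects the policy maximizing H_β.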

System-level Hurwicz MDP formulation

System-level MDP: cost ranges

Conclusions: Model uncertainty has important cost implications if it is not accounted for in M&R decision-making. Adaptive optimization methods can reduce the impacts of model uncertainty but require large amounts of data or long time horizons. Robust optimization is a practical alternative to adaptive optimization methods: it saves more under worst-case conditions than it costs under expected or best-case conditions.