Exact shape-reconstruction by one-step linearization in EIT

Exact shape-reconstruction by one-step linearization in EIT
Bastian von Harrach, harrach@ma.tum.de
Department of Mathematics - M1, Technische Universität München, Germany
Joint work with Jin Keun Seo, Yonsei University, Seoul, Korea
Dept. of Computational Science and Engineering, Yonsei University, Seoul, Korea, August 5, 2010.

Mathematical Model

Forward operator of EIT: $\Lambda : \sigma \mapsto \Lambda(\sigma)$, conductivity $\mapsto$ measurements.
Conductivity: $\sigma \in L^\infty_+(\Omega)$.
Continuum model: $\Lambda(\sigma)$ is the Neumann-Dirichlet operator
\[
\Lambda(\sigma) : g \mapsto u|_{\partial\Omega}, \quad \text{applied current} \mapsto \text{measured voltage},
\]
where
\[
\nabla\cdot(\sigma\nabla u) = 0 \ \text{in } \Omega, \qquad \sigma\,\partial_\nu u|_{\partial\Omega} = g \ \text{on } \partial\Omega. \tag{1}
\]
Linear elliptic PDE theory: for every $g \in L^2_\diamond(\partial\Omega)$ there exists a unique $u \in H^1_\diamond(\Omega)$ solving (1).
$\Lambda(\sigma) : L^2_\diamond(\partial\Omega) \to L^2_\diamond(\partial\Omega)$ is linear, compact and self-adjoint.
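
As a concrete illustration (not part of the talk), here is a minimal resistor-network analogue of the forward map: the graph Laplacian weighted by $\sigma$ plays the role of $\nabla\cdot(\sigma\nabla\,\cdot)$, and the discrete Neumann-Dirichlet map sends injected boundary currents to (grounded) boundary voltages. All names, the grid size and the edge-weight rule are illustrative choices, not taken from the talk or the paper.

    import numpy as np

    def edge_laplacian(sigma, n):
        """Weighted graph Laplacian of an n x n grid (discrete -div(sigma grad))."""
        N = n * n
        L = np.zeros((N, N))
        idx = lambda i, j: i * n + j
        for i in range(n):
            for j in range(n):
                for di, dj in ((1, 0), (0, 1)):
                    ii, jj = i + di, j + dj
                    if ii < n and jj < n:
                        w = 0.5 * (sigma[i, j] + sigma[ii, jj])  # edge conductance
                        a, b = idx(i, j), idx(ii, jj)
                        L[a, a] += w; L[b, b] += w
                        L[a, b] -= w; L[b, a] -= w
        return L

    def forward_map(sigma, n, boundary, currents):
        """Discrete Neumann-Dirichlet map: boundary voltages for mean-free currents."""
        L = edge_laplacian(sigma, n)
        out = []
        for g in currents:
            rhs = np.zeros(n * n)
            rhs[boundary] = g                            # inject current at boundary nodes
            u = np.linalg.lstsq(L, rhs, rcond=None)[0]   # L is singular (constant vectors)
            out.append(u[boundary] - u[boundary].mean()) # ground the measured voltages
        return np.array(out)

    n = 8
    boundary = [i * n + j for i in range(n) for j in range(n)
                if i in (0, n - 1) or j in (0, n - 1)]
    g = np.zeros(len(boundary)); g[0], g[1] = 1.0, -1.0  # one dipole current pattern
    print(forward_map(np.ones((n, n)), n, boundary, [g]))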

Inverse problem

Non-linear forward operator of EIT: $\Lambda : \sigma \mapsto \Lambda(\sigma)$, $L^\infty_+(\Omega) \to \mathcal{L}(L^2_\diamond(\partial\Omega))$.
Inverse problem of EIT: $\Lambda(\sigma) \mapsto \sigma$?
Uniqueness ("Calderón problem"):
Measurements on the complete boundary: Calderón (1980), Druskin (1982+85), Kohn/Vogelius (1984+85), Sylvester/Uhlmann (1987), Nachman (1996), Astala/Päivärinta (2006).
Measurements on part of the boundary: Bukhgeim/Uhlmann ('02), Knudsen ('06), Isakov ('07), Kenig/Sjöstrand/Uhlmann ('07), H. ('08), Imanuvilov/Uhlmann/Yamamoto ('09).

Linearization

Generic approach: linearization,
\[
\Lambda(\sigma) \approx \Lambda(\sigma_0) + \Lambda'(\sigma_0)(\sigma - \sigma_0).
\]
$\sigma_0$: known reference conductivity / initial guess / ...
$\Lambda'(\sigma_0)$: Fréchet derivative / sensitivity matrix, $\Lambda'(\sigma_0) : L^\infty(\Omega) \to \mathcal{L}(L^2_\diamond(\partial\Omega))$.
Solve the linearized equation for the difference $\sigma - \sigma_0$.
Often: $\operatorname{supp}(\sigma - \sigma_0) \subset \Omega$ compact ("shape" / "inclusion").
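
In the discrete sketch above, the role of the sensitivity matrix $\Lambda'(\sigma_0)$ can be mimicked by a column-by-column finite-difference approximation; this perturbation loop is only an illustrative stand-in for the true Fréchet derivative (which has a well-known bilinear representation not used here), and it reuses forward_map from the previous sketch.

    def sensitivity(sigma0, n, boundary, currents, h=1e-6):
        """Finite-difference approximation of the discrete sensitivity matrix.

        Column k: derivative of all measurements w.r.t. the conductivity at node k.
        """
        base = forward_map(sigma0, n, boundary, currents).ravel()
        J = np.zeros((base.size, n * n))
        for k in range(n * n):
            pert = sigma0.ravel().copy()
            pert[k] += h
            J[:, k] = (forward_map(pert.reshape(n, n), n, boundary, currents).ravel() - base) / h
        return J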

Linearization

Linear reconstruction methods, e.g. NOSER (Cheney et al., 1990), GREIT (Adler et al., 2009):
Solve $\Lambda'(\sigma_0)\kappa \approx \Lambda(\sigma) - \Lambda(\sigma_0)$, then $\kappa \approx \sigma - \sigma_0$.
Multiple possibilities to measure the residual norm and to regularize.
No rigorous theory for a single linearization step.
Almost no theory for Newton iterations:
Dobson (1992): (local) convergence for a regularized EIT equation.
Lechleiter/Rieder (2008): (local) convergence in a discretized setting.
No (local) convergence theory for the non-discretized case!

Linearization

Linear reconstruction methods, e.g. NOSER (Cheney et al., 1990), GREIT (Adler et al., 2009):
Solve $\Lambda'(\sigma_0)\kappa \approx \Lambda(\sigma) - \Lambda(\sigma_0)$, then $\kappa \approx \sigma - \sigma_0$.
Seemingly, no rigorous results are possible for a single linearization step.
Seemingly, only justifiable for small $\sigma - \sigma_0$ (local results).
In this talk: a rigorous and global(!) result about the linearization error.
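
A toy realization of "solve the linearized equation, then regularize" is ordinary Tikhonov regularization of the discrete sensitivity system from the sketches above; this is just one of the "multiple possibilities" mentioned on the previous slide, and the regularization parameter, the pixel basis and the test inclusion are ad-hoc choices, not the talk's method.

    def one_step_reconstruction(J, data_diff, alpha=1e-3):
        """One linearization step: normal equations of
        min ||J kappa - data_diff||^2 + alpha ||kappa||^2."""
        A = J.T @ J + alpha * np.eye(J.shape[1])
        return np.linalg.solve(A, J.T @ data_diff)

    # Example: inclusion of raised conductivity in a homogeneous background.
    sigma0 = np.ones((n, n))
    sigma = sigma0.copy(); sigma[3:5, 3:5] = 2.0
    currents = [g]                                   # current pattern(s) from above
    J = sensitivity(sigma0, n, boundary, currents)
    d = (forward_map(sigma, n, boundary, currents)
         - forward_map(sigma0, n, boundary, currents)).ravel()
    kappa = one_step_reconstruction(J, d).reshape(n, n)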

Exact Linearization

Theorem (H./Seo, SIAM J. Math. Anal. 2010).
Let $\kappa$, $\sigma$, $\sigma_0$ be piecewise analytic and $\Lambda'(\sigma_0)\kappa = \Lambda(\sigma) - \Lambda(\sigma_0)$. Then
(a) $\operatorname{supp}_{\partial\Omega} \kappa = \operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$,
(b) $\frac{\sigma_0}{\sigma}(\sigma - \sigma_0) \le \kappa \le \sigma - \sigma_0$ on the boundary of $\operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$.

$\operatorname{supp}_{\partial\Omega}$: outer support ($= \operatorname{supp}$ if $\operatorname{supp}$ is compact and has connected complement).
An exact solution of the linearized equation yields the correct (outer) shape.
No smallness assumptions on $\sigma - \sigma_0$! The linearization error does not affect the shape reconstruction.
Proof: combination of monotonicity and localized potentials.

Monotonicity

Monotonicity (in the sense of quadratic forms):
\[
\Lambda'(\sigma_0)(\sigma - \sigma_0) \;\le\; \underbrace{\Lambda(\sigma) - \Lambda(\sigma_0)}_{=\,\Lambda'(\sigma_0)\kappa} \;\le\; \Lambda'(\sigma_0)\!\left(\tfrac{\sigma_0}{\sigma}(\sigma - \sigma_0)\right).
\]
Kang/Seo/Sheen (1997), Kirsch (2005), Ide/Isozaki/Nakata/Siltanen/Uhlmann (2007).

Quadratic forms / energy formulation:
\[
\int_{\partial\Omega} g\,\Lambda(\sigma_0)g \,ds = \int_\Omega \sigma_0 |\nabla u_0|^2 \,dx, \qquad
\int_{\partial\Omega} g\,\Lambda(\sigma)g \,ds = \int_\Omega \sigma |\nabla u|^2 \,dx,
\]
\[
\int_{\partial\Omega} g\,\bigl(\Lambda'(\sigma_0)\kappa\bigr)g \,ds = -\int_\Omega \kappa |\nabla u_0|^2 \,dx.
\]
$u_0$ (resp. $u$): solution corresponding to $\sigma_0$ (resp. $\sigma$) and boundary current $g$.
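
For the record (the slide only states them), the first two energy identities follow in one line from the weak formulation of (1) with the solution itself as test function, e.g.
\[
\int_{\partial\Omega} g\,\Lambda(\sigma_0)g \,ds
= \int_{\partial\Omega} \bigl(\sigma_0 \partial_\nu u_0\bigr)\, u_0 \,ds
= \int_\Omega \sigma_0 \nabla u_0 \cdot \nabla u_0 \,dx
= \int_\Omega \sigma_0 |\nabla u_0|^2 \,dx;
\]
the identity for $\Lambda'(\sigma_0)\kappa$ additionally uses the equation satisfied by the derivative $u' = \frac{d}{dt}\, u_{\sigma_0 + t\kappa}\big|_{t=0}$.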

Bounds on squares

Exact linearization $\Lambda'(\sigma_0)\kappa = \Lambda(\sigma) - \Lambda(\sigma_0)$ yields
\[
\int_\Omega (\sigma - \sigma_0)|\nabla u_0|^2 \,dx \;\ge\; \int_\Omega \kappa|\nabla u_0|^2 \,dx \;\ge\; \int_\Omega \frac{\sigma_0}{\sigma}(\sigma - \sigma_0)|\nabla u_0|^2 \,dx
\]
for all reference solutions $u_0$. Does this imply
\[
\sigma - \sigma_0 \;\ge\; \kappa \;\ge\; \frac{\sigma_0}{\sigma}(\sigma - \sigma_0)?
\]
Famous concept in inverse problems for PDEs: completeness of products (of solutions of a PDE).
Here: bounds on squares (of gradients of solutions of a PDE). Can we control the squares?

Bounds on squares

\[
\int_\Omega (\sigma - \sigma_0)|\nabla u_0|^2 \,dx \;\ge\; \int_\Omega \kappa|\nabla u_0|^2 \,dx \;\ge\; \int_\Omega \frac{\sigma_0}{\sigma}(\sigma - \sigma_0)|\nabla u_0|^2 \,dx.
\]
Localized potentials (H. 2008): make $|\nabla u_0|^2$ arbitrarily large in a region connected to the boundary, while keeping it small outside the connecting domain.
Since $\operatorname{supp}_{\partial\Omega} \frac{\sigma_0}{\sigma}(\sigma - \sigma_0) = \operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$, such potentials force $\operatorname{supp}_{\partial\Omega} \kappa = \operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$.
(Figure: sketch of a localized potential, with $|\nabla u_0|^2$ large in the probed region and small elsewhere.)

Consequences

Theorem. Let $\kappa$, $\sigma$, $\sigma_0$ be piecewise analytic and $\Lambda'(\sigma_0)\kappa = \Lambda(\sigma) - \Lambda(\sigma_0)$. Then
(a) $\operatorname{supp}_{\partial\Omega} \kappa = \operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$,
(b) $\frac{\sigma_0}{\sigma}(\sigma - \sigma_0) \le \kappa \le \sigma - \sigma_0$ on the boundary of $\operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$.

Same arguments applied to the Calderón problem ($\Lambda(\sigma) = \Lambda(\sigma_0)$, i.e. $\kappa = 0$):
The Calderón problem is uniquely solvable for piecewise analytic conductivities (already known: Kohn/Vogelius, 1984).
The linearized Calderón problem is uniquely solvable for piecewise analytic conductivities (already known for piecewise polynomials: Lechleiter/Rieder, 2008).

Non-exact Linearization?

Theorem. Let $\kappa$, $\sigma$, $\sigma_0$ be piecewise analytic and $\Lambda'(\sigma_0)\kappa = \Lambda(\sigma) - \Lambda(\sigma_0)$. Then
(a) $\operatorname{supp}_{\partial\Omega} \kappa = \operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$,
(b) $\frac{\sigma_0}{\sigma}(\sigma - \sigma_0) \le \kappa \le \sigma - \sigma_0$ on the boundary of $\operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$.

But the existence of an exact solution is unknown! In practice: finite-dimensional, noisy measurements.
The proof only requires
\[
\Lambda'(\sigma_0)(\sigma - \sigma_0) \;\le\; \Lambda'(\sigma_0)\kappa \;\le\; \Lambda'(\sigma_0)\!\left(\tfrac{\sigma_0}{\sigma}(\sigma - \sigma_0)\right). \tag{$*$}
\]
Idea: solve the linearized equation such that ($*$) is fulfilled.

Non-exact Linearization

Additional definiteness assumption: $\sigma \ge \sigma_0$.
Assume we are given:
noisy data $\Lambda_m(\sigma) - \Lambda_m(\sigma_0) \approx \Lambda(\sigma) - \Lambda(\sigma_0)$,
a noisy sensitivity $\Lambda'_m(\sigma_0) \approx \Lambda'(\sigma_0)$,
finite-dimensional subspaces $V_1 \subseteq V_2 \subseteq \ldots \subseteq L^2_\diamond(\partial\Omega)$ with dense union.
Equip $V_k$ with the norm $\|g\|_{(m)}^2 := \langle (\Lambda_m(\sigma_0) - \Lambda_m(\sigma))g, g\rangle$.
Minimize the (Galerkin approximation of the) linearization residual
\[
\Lambda(\sigma) - \Lambda(\sigma_0) - \Lambda'(\sigma_0)\kappa_m
\]
in the sense of quadratic forms on $V_k$.

Non-exact Linearization

Theorem (H./Seo, SIAM J. Math. Anal. 2010).
For appropriately chosen $\delta_1, \delta_2 > 0$, every $V_k$ and sufficiently large $m$ there exists $\kappa_m$ with
\[
\delta_1 \;\le\; \Lambda(\sigma) - \Lambda(\sigma_0) - \Lambda'(\sigma_0)\kappa_m \;\le\; \delta_2
\]
(in the sense of quadratic forms on $V_k$, $\kappa_m$ piecewise analytic).
Every piecewise analytic $L^\infty$-limit $\kappa$ of a converging subsequence fulfills
(a) $\operatorname{supp}_{\partial\Omega} \kappa = \operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$,
(b) $\left(\frac{\sigma_0}{\sigma} - \delta_1\right)(\sigma - \sigma_0) \le \kappa \le (\delta_2 + 1)(\sigma - \sigma_0)$ on the boundary of $\operatorname{supp}_{\partial\Omega}(\sigma - \sigma_0)$.
Convergence is guaranteed if $\sigma - \sigma_0$ belongs to the finite-dimensional ansatz space.
Globally convergent shape reconstruction by one-step linearization.

Summary

The linearization error in EIT does not affect the shape.
With an additional definiteness assumption, we derived a local one-step linearization algorithm with globally convergent shape reconstruction properties.
The additional definiteness property is typical for shape reconstruction.

Open questions
Numerical implementation?
Formulation as Tikhonov regularization with special norms?
Definiteness only enters through the $V_k$-norm. Can it be replaced by another oscillation-preventing regularization?