Applied Stochastic Processes and Control for Jump-Diffusions:
Modeling, Analysis, and Computation

Floyd B. Hanson
University of Illinois at Chicago
Chicago, Illinois

SIAM, Society for Industrial and Applied Mathematics
Philadelphia
Contents

List of Figures xv
List of Tables xxi
Preface xxiii

1 Stochastic Jump and Diffusion Processes: Introduction 1
  1.1 Poisson and Wiener Processes Basics 1
  1.2 Wiener Process Basic Properties 3
  1.3 More Wiener Process Moments 6
  1.4 Wiener Process Nondifferentiability 9
  1.5 Wiener Process Expectations Conditioned on the Past 9
  1.6 Poisson Process Basic Properties 11
  1.7 Poisson Process Moments 16
  1.8 Poisson Zero-One Jump Law 18
  1.9 Temporal, Nonstationary Poisson Process 20
  1.10 Poisson Process Expectations Conditioned on the Past 24
  1.11 Exercises 25

2 Stochastic Integration for Diffusions 31
  2.1 Ordinary or Riemann Integration 32
  2.2 Stochastic Integration in W(t): The Foundations 34
  2.3 Stratonovich and Other Stochastic Integration Rules 55
  2.4 Conclusion 57
  2.5 Exercises 57

3 Stochastic Integration for Jumps 63
  3.1 Stochastic Integration in P(t): The Foundations 63
  3.2 Stochastic Jump Integration Rules and Expectations 74
  3.3 Conclusion 77
  3.4 Exercises 77

4 Stochastic Calculus for Jump-Diffusions: Elementary SDEs 81
  4.1 Diffusion Process Calculus Rules 82
    4.1.1 Functions of Diffusions Alone, G(W(t)) 82
    4.1.2 Functions of Diffusions and Time, G(W(t), t) 86
    4.1.3 Itô Stochastic Natural Exponential Construction 88
    4.1.4 Transformations of Linear Diffusion SDEs 94
    4.1.5 Functions of General Diffusion States and Time, F(X(t), t) 98
  4.2 Poisson Jump Process Calculus Rules 99
    4.2.1 Jump Calculus Rule for h(dP(t)) 99
    4.2.2 Jump Calculus Rule for H(P(t), t) 100
    4.2.3 Jump Calculus Rule with General State Y(t) = F(X(t), t) 103
    4.2.4 Transformations of Linear Jump with Drift SDEs 104
  4.3 Jump-Diffusion Rules and SDEs 106
    4.3.1 Jump-Diffusion Conditional Infinitesimal Moments 106
    4.3.2 Stochastic Jump-Diffusion Chain Rule 107
    4.3.3 Linear Jump-Diffusion SDEs 109
    4.3.4 SDE Models Exactly Transformable to Purely Time-Varying Coefficients 119
  4.4 Poisson Noise Is White Noise Too! 120
  4.5 Exercises 122

5 Stochastic Calculus for General Markov SDEs: Space-Time Poisson, State-Dependent Noise, and Multidimensions 129
  5.1 Space-Time Poisson Process 130
  5.2 State-Dependent Generalization of Jump-Diffusion SDEs 139
    5.2.1 State-Dependent Generalization for Space-Time Poisson Processes 139
    5.2.2 State-Dependent Jump-Diffusion SDEs 141
    5.2.3 Linear State-Dependent SDEs 142
  5.3 Multidimensional Markov SDE 158
    5.3.1 Conditional Infinitesimal Moments in Multidimensions 160
    5.3.2 Stochastic Chain Rule in Multidimensions 162
  5.4 Distributed Jump SDE Models Exactly Transformable 163
    5.4.1 Distributed Jump SDE Models Exactly Transformable 164
    5.4.2 Vector Distributed Jump SDE Models Exactly Transformable 164
  5.5 Exercises 165

6 Stochastic Optimal Control: Stochastic Dynamic Programming 169
  6.1 Stochastic Optimal Control Problem 169
  6.2 Bellman's Principle of Optimality 172
  6.3 Hamilton-Jacobi-Bellman (HJB) Equation of Stochastic Dynamic Programming (SDP) 176
  6.4 Linear Quadratic Jump-Diffusion (LQJD) Problem 179
    6.4.1 LQJD in Control Only (LQJD/U) Problem 180
    6.4.2 LLJD/U or the Case C2 = 0 183
    6.4.3 Canonical LQJD Problem 184
  6.5 Exercises 188
7 Kolmogorov Forward and Backward Equations and Their Applications 193
  7.1 Dynkin's Formula and the Backward Operator 193
  7.2 Backward Kolmogorov Equations 196
  7.3 Forward Kolmogorov Equations 198
  7.4 Multidimensional Backward and Forward Equations 202
  7.5 Chapman-Kolmogorov Equation for Markov Processes in Continuous Time 205
  7.6 Jump-Diffusion Boundary Conditions 205
    7.6.1 Absorbing Boundary Conditions 205
    7.6.2 Reflecting Boundary Conditions 206
  7.7 Stopping Times: Expected Exit and First Passage Times 206
    7.7.1 Expected Stochastic Exit Time 208
  7.8 Diffusion Approximation Basis 213
  7.9 Exercises 215

8 Computational Stochastic Control Methods 219
  8.1 Finite Difference PDE Methods of SDP 220
    8.1.1 Linear Dynamics and Quadratic Control Costs 221
    8.1.2 Crank-Nicolson, Extrapolation-Predictor-Corrector Finite Difference Algorithm for SDP 222
    8.1.3 Upwinding Finite Differences If Not Diffusion-Dominated 228
    8.1.4 Multistate Systems and Bellman's Curse of Dimensionality 229
  8.2 Markov Chain Approximation for SDP 231
    8.2.1 MCA Formulation for Stochastic Diffusions 232
    8.2.2 MCA Local Diffusion Consistency Conditions 233
    8.2.3 MCA Numerical Finite Differences for State Derivatives and Construction of Transition Probabilities 233
    8.2.4 MCA Extensions to Include Jump Processes 236

9 Stochastic Simulations 241
  9.1 SDE Simulation Methods 241
    9.1.1 Convergence and Stability for Stochastic Problems and Simulations 242
    9.1.2 Stochastic Diffusion Euler Simulations 244
    9.1.3 Milstein's Higher Order Diffusion Simulations 248
    9.1.4 Convergence and Stability of Jump-Diffusion Euler Simulations 251
    9.1.5 Jump-Diffusion Euler Simulation Procedures 255
  9.2 Monte Carlo Methods 258
    9.2.1 Basic Monte Carlo Simulations 260
    9.2.2 Inverse Method for Generating Nonuniform Variates 268
    9.2.3 Acceptance and Rejection Method of von Neumann 270
    9.2.4 Importance Sampling 274
    9.2.5 Stratified Sampling 276
    9.2.6 Antithetic Variates 279
    9.2.7 Control Variates 281
10 Applications in Financial Engineering 287
  10.1 Classical Black-Scholes Option Pricing Model 288
  10.2 Merton's Three Asset Option Pricing Model Version of Black-Scholes 291
    10.2.1 PDE of Option Pricing 299
    10.2.2 Final and Boundary Conditions for Option Pricing PDE 301
    10.2.3 Transforming PDE to Standard Diffusion PDE 304
  10.3 Jump-Diffusion Option Pricing 309
    10.3.1 Jump-Diffusions with Normal Jump-Amplitudes 310
    10.3.2 Risk-Neutral Option Pricing for Jump-Diffusions 311
  10.4 Optimal Portfolio and Consumption Models 317
    10.4.1 Log-Uniform Amplitude Jump-Diffusion for Log-Returns 318
    10.4.2 Log-Uniform Jump-Amplitude Model 319
    10.4.3 Optimal Portfolio and Consumption Policies Application 321
    10.4.4 CRRA Utility and Canonical Solution Reduction 325
  10.5 Important Financial Events Model: The Greenspan Process 327
    10.5.1 Stochastic Scheduled and Unscheduled Events Model with Stochastic Parameter Processes 328
    10.5.2 Further Properties of Quasi-Deterministic or Scheduled Event Processes: K(q, A(t))dQ(t) 330
    10.5.3 Optimal Portfolio Utility, Stock Fraction, and Consumption 330
    10.5.4 Canonical CRRA Model Solution 333
  10.6 Exercises 335

11 Applications in Mathematical Biology and Medicine 339
  11.1 Stochastic Bioeconomics: Optimal Harvesting Applications 339
    11.1.1 Optimal Harvesting of Logistically Growing Population Undergoing Random Jumps 340
    11.1.2 Optimal Harvesting with Both Price and Population Random Dynamics 344
  11.2 Stochastic Biomedical Applications 347
    11.2.1 Diffusion Approximation of Tumor Growth and Tumor Doubling Time Application 347
    11.2.2 Optimal Drug Delivery to Brain PDE Model 353

12 Applied Guide to Abstract Theory of Stochastic Processes 361
  12.1 Very Basic Probability Measure Background 362
    12.1.1 Mathematical Measure Theory Basics 362
    12.1.2 Change of Measure: Radon-Nikodym Theorem and Derivative 367
    12.1.3 Probability Measure Basics 368
    12.1.4 Stochastic Processes in Continuous Time on Filtered Probability Spaces 371
    12.1.5 Martingales in Continuous Time 372
    12.1.6 Jump-Diffusion Martingale Representation 375
  12.2 Change in Probability Measure: Radon-Nikodym Derivatives and Girsanov's Theorem 376
    12.2.1 Radon-Nikodym Theorem and Derivative for Change of Probability Measure 376
    12.2.2 Change in Measure for Stochastic Processes: Girsanov's Theorem 382
  12.3 Itô, Lévy, and Jump-Diffusion Comparisons 389
    12.3.1 Itô Processes and Jump-Diffusion Processes 389
    12.3.2 Lévy Processes and Jump-Diffusion Processes 390
  12.4 Exercise 401

Bibliography 403

Index 423

A Online Appendix: Deterministic Optimal Control A1
  A.1 Hamilton's Equations: Hamiltonian and Lagrange Multiplier Formulation of Deterministic Optimal Control A2
    A.1.1 Deterministic Computation and Computational Complexity A11
  A.2 Optimum Principles: The Basic Principles Approach A12
  A.3 Linear Quadratic (LQ) Canonical Models A22
    A.3.1 Scalar, Linear Dynamics, Quadratic Costs (LQ) A22
    A.3.2 Matrix, Linear Dynamics, Quadratic Costs (LQ) A24
  A.4 Deterministic Dynamic Programming (DDP) A28
    A.4.1 Deterministic Principle of Optimality A29
    A.4.2 Hamilton-Jacobi-Bellman (HJB) Equation of Deterministic Dynamic Programming A30
    A.4.3 Computational Complexity for Deterministic Dynamic Programming A31
    A.4.4 Linear Quadratic (LQ) Problem by Deterministic Dynamic Programming A32
  A.5 Control of PDE Driven Dynamics: Distributed Parameter Systems (DPS) A34
    A.5.1 DPS Optimal Control Problem A34
    A.5.2 DPS Hamiltonian Extended Space Formulation A35
    A.5.3 DPS Optimal State, Costate, and Control PDEs A37
  A.6 Exercises A39

B Online Appendix: Preliminaries in Probability and Analysis B1
  B.1 Distributions for Continuous Random Variables B2
    B.1.1 Probability Distribution and Density Functions B2
    B.1.2 Expectations and Higher Moments B4
    B.1.3 Uniform Distribution B5
    B.1.4 Normal Distribution and Gaussian Processes B8
    B.1.5 Simple Gaussian Processes B9
    B.1.6 Lognormal Distribution B11
    B.1.7 Exponential Distribution B14
  B.2 Distributions of Discrete Random Variables B17
    B.2.1 Poisson Distribution and Poisson Process B18
  B.3 Joint and Conditional Distribution Definitions B20
    B.3.1 Conditional Distributions and Expectations B25
    B.3.2 Law of Total Probability B29
  B.4 Probability Distribution of a Sum: Convolutions B30
  B.5 Characteristic Functions B33
  B.6 Sample Mean and Variance: Sums of Independent, Identically Distributed (IID) Random Variables B36
  B.7 Law of Large Numbers B38
    B.7.1 Weak Law of Large Numbers (WLLN) B38
    B.7.2 Strong Law of Large Numbers (SLLN) B38
  B.8 Central Limit Theorem B39
  B.9 Matrix Algebra and Analysis B39
  B.10 Some Multivariate Distributions B45
    B.10.1 Multivariate Normal Distribution B45
    B.10.2 Multinomial Distribution B46
  B.11 Basic Asymptotic Notation and Results B49
  B.12 Generalized Functions: Combined Continuous and Discrete Processes B52
  B.13 Fundamental Properties of Stochastic and Markov Processes B59
    B.13.1 Basic Classification of Stochastic Processes B59
    B.13.2 Markov Processes and Markov Chains B59
    B.13.3 Stationary Markov Processes and Markov Chains B61
  B.14 Continuity, Jump Discontinuity, and Nonsmoothness Approximations B61
    B.14.1 Beyond Continuity Properties B61
    B.14.2 Taylor Approximations of Composite Functions B63
  B.15 Extremal Principles B67
  B.16 Exercises B69

C Online Appendix: MATLAB Programs C1
  C.1 Program: Uniform Distribution Simulation Histograms C1
  C.2 Program: Normal Distribution Simulation Histograms C2
  C.3 Program: Lognormal Distribution Simulation Histograms C3
  C.4 Program: Exponential Distribution Simulation Histograms C4
  C.5 Program: Poisson Distribution Versus Jump Counter k C5
  C.6 Program: Binomial Distribution Versus Binomial Frequency f C6
  C.7 Program: Simulated Diffusion W(t) Sample Paths C7
  C.8 Program: Simulated Diffusion W(t) Sample Paths Showing Variation with Time Step Size C8
  C.9 Program: Simulated Simple Poisson P(t) Sample Paths C9
  C.10 Program: Simulated Simple Incremental Poisson ΔP(t) Sample Paths C10
  C.11 Program: Simulated Diffusion Integrals ∫(dW)²(t) by Itô Partial Sums C12
  C.12 Program: Simulated Diffusion Integrals ∫g(W,t)dW: Direct Case by Itô Partial Sums C13
  C.13 Program: Simulated Diffusion Integrals ∫g(X,t)dW: Chain Rule C14
  C.14 Program: Simulated Linear Jump-Diffusion Sample Paths C16
  C.15 Program: Simulated Linear Mark-Jump-Diffusion Sample Paths C18
  C.16 Program: Curse of Dimensionality C21
  C.17 Program: Euler-Maruyama Simulations for Linear Diffusion SDE C23
  C.18 Program: Milstein Simulations for Linear Diffusion SDE C25
  C.19 Program: Monte Carlo Simulation Comparing Uniform and Normal Errors C27
  C.20 Program: Monte Carlo Simulation Testing Uniform Distribution C29
  C.21 Program: Monte Carlo Acceptance-Rejection Technique C30
  C.22 Program: Monte Carlo Multidimensional Integration C32
  C.23 Program: Regular and Bang Control Examples C34
  C.24 Program: Simple Optimal Control Example C37
  C.25 Program: Bang-Bang Control with Control Switching Example C38
  C.26 Program: Singular Control Examples C40