HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR 1D PARABOLIC EQUATIONS. Ahmet İzmirlioğlu. BS, University of Pittsburgh, 2004

Similar documents
Application of an Interval Backward Finite Difference Method for Solving the One-Dimensional Heat Conduction Problem

PDE Project Course 1. Adaptive finite element methods

Outline. 1 Introduction. 2 Algorithms. 3 Examples. Algorithm 1 General coordinate minimization framework. 1: Choose x 0 R n and set k 0.

Lecture 4. Finite difference and finite element methods

Chapter 5 Finite Difference Methods. Math6911 W07, HM Zhu

Introduction to Numerical PDEs

Financial Risk Management

King s College London

Numerical schemes for SDEs

Pricing Algorithms for financial derivatives on baskets modeled by Lévy copulas

A distributed Laplace transform algorithm for European options

Finite Element Method

A Study on Numerical Solution of Black-Scholes Model

EC316a: Advanced Scientific Computation, Fall Discrete time, continuous state dynamic models: solution methods

MAFS Computational Methods for Pricing Structured Products

A local RBF method based on a finite collocation approach

Using radial basis functions for option pricing

FINITE DIFFERENCE METHODS

Financial Market Models. Lecture 1. One-period model of financial markets & hedging problems. Imperial College Business School

25 Increasing and Decreasing Functions

King s College London

Infinite Reload Options: Pricing and Analysis

Sparse Wavelet Methods for Option Pricing under Lévy Stochastic Volatility models

d n U i dx n dx n δ n U i

Research Article Exponential Time Integration and Second-Order Difference Scheme for a Generalized Black-Scholes Equation

Project 1: Double Pendulum

Computational Finance Finite Difference Methods

Interpolation. 1 What is interpolation? 2 Why are we interested in this?

Short-time-to-expiry expansion for a digital European put option under the CEV model. November 1, 2017

F A S C I C U L I M A T H E M A T I C I

1 Explicit Euler Scheme (or Euler Forward Scheme )

Pricing Dynamic Solvency Insurance and Investment Fund Protection

32.4. Parabolic PDEs. Introduction. Prerequisites. Learning Outcomes

Exact shape-reconstruction by one-step linearization in EIT

Backpropagation. Deep Learning Theory and Applications. Kevin Moon Guy Wolf

EFFICIENT MONTE CARLO ALGORITHM FOR PRICING BARRIER OPTIONS

PDE Methods for the Maximum Drawdown

NUMERICAL METHODS OF PARTIAL INTEGRO-DIFFERENTIAL EQUATIONS FOR OPTION PRICE

A model reduction approach to numerical inversion for parabolic partial differential equations

Comparative Study between Linear and Graphical Methods in Solving Optimization Problems

Of Rocket Science, Finance, and Nuclear Data: REWIND (Ranking Experiments by Weighting for Improved Nuclear Data)

The Intermediate Value Theorem states that if a function g is continuous, then for any number M satisfying. g(x 1 ) M g(x 2 )

Pricing Implied Volatility

hp-version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes

Galerkin Least Square FEM for the European option price with CEV model

A THREE-FACTOR CONVERGENCE MODEL OF INTEREST RATES

AD in Monte Carlo for finance

Solution 2.1. We determine the accumulation function/factor and use it as follows.

Chapter 3: Black-Scholes Equation and Its Numerical Evaluation

Integrating Feynman Kac Eq

"Pricing Exotic Options using Strong Convergence Properties

Advanced Numerical Methods for Financial Problems

Numerical Evaluation of Multivariate Contingent Claims

Unit 2: Modeling in the Frequency Domain Part 2: The Laplace Transform

Pricing American Options Using a Space-time Adaptive Finite Difference Method

A High-order Front-tracking Finite Difference Method for Pricing American Options under Jump-Diffusion Models

THE OPTIMAL ASSET ALLOCATION PROBLEMFOR AN INVESTOR THROUGH UTILITY MAXIMIZATION

arxiv: v1 [q-fin.pm] 13 Mar 2014

A model reduction approach to numerical inversion for parabolic partial differential equations

Exact shape-reconstruction by one-step linearization in EIT

Evaluation of Asian option by using RBF approximation

IEOR E4703: Monte-Carlo Simulation

Random Variables and Probability Distributions

Valuation of performance-dependent options in a Black- Scholes framework

Multiname and Multiscale Default Modeling

Functional vs Banach space stochastic calculus & strong-viscosity solutions to semilinear parabolic path-dependent PDEs.

OPTIMAL PORTFOLIO CONTROL WITH TRADING STRATEGIES OF FINITE

arxiv: v1 [q-fin.cp] 1 Nov 2016

Optimal prepayment of Dutch mortgages*

The Pennsylvania State University. The Graduate School. Department of Industrial Engineering AMERICAN-ASIAN OPTION PRICING BASED ON MONTE CARLO

Chapter 7 One-Dimensional Search Methods

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Final Exam

Version A. Problem 1. Let X be the continuous random variable defined by the following pdf: 1 x/2 when 0 x 2, f(x) = 0 otherwise.

Trust Region Methods for Unconstrained Optimisation

Solution of the problem of the identified minimum for the tri-variate normal

SYLLABUS AND SAMPLE QUESTIONS FOR MSQE (Program Code: MQEK and MQED) Syllabus for PEA (Mathematics), 2013

European option pricing under parameter uncertainty

FX Smile Modelling. 9 September September 9, 2008

Exam M Fall 2005 PRELIMINARY ANSWER KEY

Fractional PDE Approach for Numerical Solution of Some Jump-Diffusion Models

American Options; an American delayed- Exercise model and the free boundary. Business Analytics Paper. Nadra Abdalla

1 Explicit Euler Scheme (or Euler Forward Scheme )

An IMEX-method for pricing options under Bates model using adaptive finite differences Rapport i Teknisk-vetenskapliga datorberäkningar

Tests for Two Means in a Multicenter Randomized Design

Log-Robust Portfolio Management

Cash Accumulation Strategy based on Optimal Replication of Random Claims with Ordinary Integrals

An option-theoretic valuation model for residential mortgages with stochastic conditions and discount factors

A Numerical Approach to the Estimation of Search Effort in a Search for a Moving Object

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

A No-Arbitrage Theorem for Uncertain Stock Model

An inverse finite element method for pricing American options under linear complementarity formulations

Quasi-Monte Carlo for Finance

Math 416/516: Stochastic Simulation

No ANALYTIC AMERICAN OPTION PRICING AND APPLICATIONS. By A. Sbuelz. July 2003 ISSN

Numerical Solution of a Linear Black-Scholes Models: A Comparative Overview

(RP13) Efficient numerical methods on high-performance computing platforms for the underlying financial models: Series Solution and Option Pricing

Risk-Return Optimization of the Bank Portfolio

BROWNIAN MOTION Antonella Basso, Martina Nardon

The Optimization Process: An example of portfolio optimization

Richardson Extrapolation Techniques for the Pricing of American-style Options

Transcription:

HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR D PARABOLIC EQUATIONS by Ahmet İzmirlioğlu BS, University of Pittsburgh, 24 Submitted to the Graduate Faculty of Art and Sciences in partial fulfillment of the requirements for the degree of Master of Science University of Pittsburgh 29

UNIVERSITY OF PITTSBURGH ARTS AND SCIENCES This thesis was presented By Ahmet İzmirlioğlu It was defended on May 8 th, 28 and approved by Beatrice Riviere, PhD, Associate Professor Anna Vainchtein, PhD, Associate Professor David Swigon, PhD, Associate Professor Thesis Director: Beatrice Riviere, PhD, Associate Professor ii

Copyright by Ahmet İzmirlioğlu 29 iii

HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR D PARABOLIC EQUATIONS Ahmet İzmirlioğlu, M.S. University of Pittsburgh, 29 iv

TABLE OF CONTENTS. Introduction... 2. Problem... 2 3. Backward Euler And Discontinuous Galerkin Scheme... 2 3. Local Basis Functions... 4 3.2 Linear System... 7 3.3 Convergence Of The Dg Method... 4. DG In Time And Space Scheme... 22 5. Conclusions... 34 References... 35

LIST OF TABLES Table. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 2.... 2 Table 2. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 3.... 4 Table 3. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 4.... 7 Table 4. Experiments with u 2 (x, t) = t 2 e x2 and polynomial degree 2.... 2 Table 5. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 2.... 25 Table 6. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 3.... 29 vi

. INTRODUCTION The goal of this work is to compare the computational efficiency of the Backward Euler (BE) in time and high order Discontinuous Galerkin (DG) in space method vs. the computational efficiency of the DG in time and space method (high order only in space), for a one dimensional (D) parabolic equation. The DG methods have recently become popular thanks to certain features which may make them attractive to researchers, such as: Local, element-wise mass conservation; Flexibility to use high-order polynomial and non-polynomial basis functions; Ability to easily increase the order of approximation on each mesh element independently; Ability to achieve an almost exponential convergence rate when smooth solutions are captured on appropriate meshes; Suitability for parallel computations due to (relatively)local data communications; Applicability to problems with discontinuous coefficients and/or solutions; The DG methods have been successfully applied to a wide variety of problems ranging from the solid mechanics to the fluid mechanics. There are other methods which are used to solve similar problems, such as the finite difference method. The major disadvantage of this method is that it is a low order method. Additionally, DG method is well suited for handling unstructured meshes, compared to the finite difference method. There are also many commonly used finite element methods. However, adaptively increasing the degree of polynomial in these methods is not as straight forward as in the DG method. After we establish the formulation of the problem and delineate the construction of the solution methods, we conduct a number of computational experiments to test the rates of convergence of the utilized methods against theoretical predictions. We note that the BE method requires very small time steps during calculations in order to maintain high order convergence rates in space with the DG method. Such restrictions are much more relaxed in the case of the DG in time and space method, as will be explained in this thesis. This is the one clear advantage of the DG in time and space method against the BE in time and DG in space method.

2. PROBLEM We consider the following parabolic problem: u t (x, t) 2 u (x, t) = f(x, t), x [a, b], t (, τ), () x2 u(, t) = g (t), u(, t) = g (t), (2) u(x, ) = u (x). Here, f belongs to CC (,). We can assume that the problem ()-(2) has a solution in (a, b) R. We say that u is a strong solution of the above system if u CC 2 (,) and u satisfies the system pointwise. 3. BACKWARD EULER AND DISCONTINUOUS GALERKIN SCHEME Let = xx <xx <.. < xx NN = be a subdivision of [,] and let I n = [x n, x n ]. Denote by PP k the space of piecewise discontinuous polynomials of degree k: PP kk = { v v In P k (I n ), n =,, N } where P k (I n ) is the space of polynomials of degree k on the interval I n. To solve ()-(2), we will first use a combined Backward Euler scheme, and a Discontinuous Galerkin (DG) scheme. In order to define the method, we introduce a linear form L and a bilinear form aa εε (see [2]): L(t, v) = f(x, t)v(x)dx + σ h v(x )g (t) εv (x )g (t) + σ h v(x N)g (t) + εv (x N )g (t) where h = /N, and 2

N x n + N N aa εε (ww, v) = w (x)v (x)dx {w (x n )}[v(x n )] + εε {v (x n )}[w(x n )] +JJ (w, v) n= x n n= n= where, w, v PP kk, and JJ is the penalty term for the jump in the functions v and w, defined as: N JJ (w, v) = σ h [w(x n)][v(x n )] n= Here, σ is a non-negative real number called penalty parameter. In order to define the jump [ ] and average { } terms, we first define x n + and x n as follows: x n + : = lim ε (x n + ε) and x n : = lim ε (x n ε). Then, we define the jump of a function w at a point x n, for i =,, N, as the difference of the values of w from the right of point x n and from the left of point x n, ie; [w(x n )] = v(x n + ) v(x n ) Clearly, there is no jump at the initial and end points of the interval (points x and x N ), and by convention we set [w(x )] = v(x + ), [w(x N )] = v(x N ). If the function w is continuous at the point x n, then the jump equals. If the function w is discontinuous at the point x n, then the jump is non-zero. Additionally, we define the average term for a function v at a point x n, for i =,, N, as the average of the values of w from the right of point x n and from the left of point x n. ie; {v(x n )} = 2 (v(x n + ) + v(x n )) If the function v is continuous at point x n, then {v(x n )} = v(x n ). Similarly by convention, {v(x )} = v(x + ), {v(x N )} = v(x N ). The reason for the inclusion of the penalty terms will be explained in more detail in the next section. Also, ε is a real number, but we restrict ourselves to the cases εε {,,}. This restriction will allow us to identify the error in our estimates in the cases of the bilinear form being symmetric and non-symmetric. These cases are identified as NIPG 3

(Non-symmetric Interior Penalty Galerkin) when εε =, IIPG (Incomplete Interior Penalty Galerkin) when εε =, and SIPG (Symmetric Interior Penalty Galerkin) when εε =. The bilinear form is non-symmetric in the cases εε = aaaaaa only. [] Let t > be the time step and let t i = i t. We want to find an approximation DG (x) u(x, t). P i+ First, we solve for the initial solution P DG : For v PP kk, P DG (x)v(x)dx Then, we solve the following equation for P DG i+ PP kk and i : For v PP kk, t P i+ DG (x)v(x)dx + aa εε (P DG i+, v) = LL(t, v) + t P DG i (x)v(x)dx = u (x)v(x)dx 3. LOCAL BASIS FUNCTIONS We now need to discuss some details of our scheme. We will choose basis functions from PP kk to be used in our scheme. We will consider the case k=4. On each interval I n, we choose 5 basis functions {φφ n, φφ n, φφ 2 n, φφ 3 n, φφ 4 n } such that φφ n is constant φφ n is linear φφ 2 n is quadratic φφ 3 n is cubic φφ 4 n is quartic We will extend these functions to equal zero on all other intervals, and keep their names. These extended functions are global basis functions. This construction will have the benefit of causing the global basis functions to have local support. This will be very useful in calculations of our solution. From this construction, we observe that the global basis functions are not well defined at the points x i, for i =,, N. This can easily be illustrated by an example. 4

Consider the case of the two intervals [x, x ] and [x, x 2 ] belonging to [,]. On the point x, φφ assumes two values based on the local basis functions of each sub-interval. Therefore, φφ is not well defined on the points x i, for i =,, N. But, how do we choose a local basis function φφ i j in the first place? Before answering this question, we should shift our attention to a seemingly minor point. Again, we consider the case of k = 4. To be practical, we would like to use the monomial basis functions {, x, x 2, x 3, x 4 } of PP 4 on each interval I n. However, these basis functions need to be translated to each interval I n from the interval (-, ). The reason for our choice of (-, ) is, of course, due to our use of Gaussian quadrature in calculating the integral in our DG scheme. The translation is accomplished as follows: ϕ n (x) = ϕ n (x) = 2 x x n+ /2 x n+ x n ϕ 2 n (x) = 4 (x x n+ /2) 2 (x n+ x n ) 2 ϕ 3 n (x) = 8 (x x n+ /2) 3 (x n+ x n ) 3 ϕ 4 n (x) = 6 (x x n+ /2) 4 (x n+ x n ) 4 where x n + /2 = 2 (x n + x n+ ) is the midpoint of the interval I n. Since all intervals are of the same length h, this simplifies the basis functions to the following form: ϕ n (x) = ϕ n (x) = 2 h (x n + h) 2 ϕ n 2 (x) = 4 h 2 (x n + h)2 2 5

ϕ n 3 (x) = 8 h 3 (x n + h)3 2 ϕ n 4 (x) = 6 h 4 (x n + h)4 2 These basis functions have the following derivatives: ϕ n (x) = ϕ n (x) = 2 h ϕ n 2 (x) = 8 h 2 (x n + h) 2 ϕ n 3 (x) = 24 h 3 (x n + h)2 2 ϕ n 4 (x) = 64 h 4 (x n + h)3 2 We also need to calculate the basis functions over points shared by adjacent intervals. First, ϕ n (x n + ) =, ϕ n (x n + ) = ϕ n (x n + ) =, ϕ n (x n + ) = 2 h ϕ 2 n (x n + ) =, ϕ 2 n (x n + ) = 4 h ϕ n 3 (x + n ) =, ϕ n 3 (x + n ) = 6 h ϕ n 4 (x + n ) =, ϕ n 4 (x + n ) = 8 Next, ϕ n (x n ) =, ϕ n (x n ) = h 6

ϕ n (x n ) =, ϕ n (x n ) = 2 h ϕ 2 n (x n ) =, ϕ 2 n (x n ) = 4 h ϕ 3 n (x n ) =, ϕ 3 n (x n ) = 6 h ϕ 4 n (x n ) =, ϕ 4 n (x n ) = 8 h 3.2 LINEAR SYSTEM Using the above basis functions, we can expand the DG solution as: for every x (,). N 4 P l DG (x) = α l,j m= j= m φφ j m (x) m Here, α l,j are unknown real numbers to be solved for. With this decomposition of P DG l our scheme becomes: N 4 xx nn + NN t α m l+,j φφ m j (x)φφ n m j i (x)dx + α l+,j aa εε (φφ m m= j= xx nn 4 mm = jj =, φφ i n ) = L (φφ i n ) where, i L ϕ n = LL t l+ i, φφ n + t P l DG (x) v(x) dx which holds for all i 4 and n N. m Thus, we obtain a linear system Aα = b, where α is the vector with the components α l+,j. A very important technical point to make is that the global matrix A can be obtained by computing and assembling local matrices. The reason we can do this is that, by their n construction, the global basis functions ϕ j have local support. The matrices A n and M n correspond to the volume integral in our scheme, ie; 7

DG (P l+ ) (x) v n (x) dx = A n α l+ I n n where α l+ n = (α l+, n, α l+, DG n (P l+ )(x)v(x) dx = M n α l+ I n n,, α l+,4 ) T, (A n ) ij = (ϕ n i ) (x) ϕ n j (x) dx, and I n (M n ) ij = ϕ n i (x)ϕ n j (x) dx. I n The matrix B n corresponds to the interactions of the local basis functions of the interval I n. Additionally, the matrix C n corresponds to the interactions of local basis functions on I n-. These matrices can be calculated by expanding the average and jump terms in our scheme as: Bn = 2 (PDG l+) (x + n )v(x + n ) ε 2 P l+ DG (x + n )v (x + n ) + σσ h P l+ DG (x + n )v(x + n ) Cn = 2 (PDG l+) (x n )v(x n ) + ε 2 P l+ DG (x n )v (x n ) + σσ h P l+ DG (x n )v(x n ) As alluded to earlier, there are also very limited, but important, interactions between basis functions of adjacent intervals. The matrices D n and E n represent these interactions between the intervals I n and I n-. These matrices can also be calculated by expanding the average and jump terms in our scheme as: Dn = 2 (PDG l+) (x + n )v(x n ) ε 2 P l+ DG (x + n )v (x n ) σσ h P l+ DG (x + n )v(x n ) En = 2 (PDG l+) (x n )v(x + n ) + ε 2 P l+ DG (x n )v (x + n ) σσ h P l+ DG (x n )v(x + n ) Finally, F and F N are the local matrices arising from the boundary nodes x and x N. F = (P DG l+ ) (x )v(x ) εp DG l+ (x )v (x ) + σσ h P DG l+(x )v(x ) F N = (P DG l+ ) (x N )v(x N ) + εp DG l+ (x N )v (x N ) + σσ h P l+ DG (x N )v(x N ) The local matrices for interval I n, based on quartic polynomials, are: 8

4 4 A n = 6 32 3 5 h 4 32 5 σ σ 2 + σ 3 σ 4 + σ B n = ε σ + εε + σ 2 εε σ 3 + εε + σ 4 εε σ h 2εε + σ 2εε σ 2 + 2εε + σ 3 2εε σ 4 + 2εε + σ 3εε σ + 3εε + σ 2 3εε σ 3 + 3εε + σ 4 3εε σ 4εε + σ 4εε σ 2 + 4εε + σ 3 4εε σ 4 + 4εε + σ σ + σ 2 + σ 3 + σ 4 + σ C n = ε + σ + εε + σ 2 + εε + σ 3 + εε + σ 4 + εε + σ h 2εε + σ + 2εε + σ 2 + 2εε + σ 3 + 2εε + σ 4 + 2εε + σ 3εε + σ + 3εε + σ 2 + 3εε + σ 3 + 3εε + σ 4 + 3εε + σ 4εε + σ + 4εε + σ 2 + 4εε + σ 3 + 4εε + σ 4 + 4εε + σ 36 5 64 7 σ + σ 2 σ 3 + σ 4 σ D n = ε σ + εε + σ 2 εε σ 3 + εε + σ 4 εε σ h 2εε σ + 2εε + σ 2 2εε σ 3 + 2εε + σ 4 2εε σ 3εε σ + 3εε + σ 2 3εε σ 3 + 3εε + σ 4 3εε σ 4εε σ + 4εε + σ 2 4εε σ 3 + 4εε + σ 4 4εε σ σ 2 σ 4 + σ 6 σ 8 + σ F = 2ε σ 2 + 2εε + σ 4 2εε σ 6 + 2εε + σ 8 2εε σ h 4εε + σ 2 4εε σ 4 + 4εε + σ 6 4εε σ 8 + 4εε + σ 6εε σ 2 + 6εε + σ 4 6εε σ 6 + 6εε + σ 8 6εε σ 8εε + σ 2 8εε σ 4 + 8εε + σ 6 8εε σ 8 + 8εε + σ σ 2 + σ 4 + σ 6 + σ 8 + σ F N = 2ε + σ 2 + 2εε + σ 4 + 2εε + σ 6 + 2εε + σ 8 + 2εε + σ h 4εε + σ 2 + 4εε + σ 4 + 4εε + σ 6 + 4εε + σ 8 + 4εε + σ 6εε + σ 2 + 6εε + σ 4 + 6εε + σ 6 + 6εε + σ 8 + 6εε + σ 8εε + σ 2 + 8εε + σ 4 + 8εε + σ 6 + 8εε + σ 8 + 8εε + σ Once all the local matrices are computed, we use them to assemble the global matrix. The assembly depends on the order of the unknowns α n l,j. So, assuming that the unknowns are listed as (α l+,, α l+,, α l+,2, α l+,3, α l+,4,, α N l+,, α N l+,, α N l+,2, α N l+,3, α N l+,4 ), 9

The global matrix has the following tri-diagonal form: Θ D E Θ D 2 E 2 Θ 2 D 3 E 3 D N 2 E N 2 Θ N D N E N Θ N where Θ n = A n + B n + C n+ + t M n, Θ = A + F + C + t M, and Θ N = A N- + F N + B N- + t M N-. 3.3 CONVERGENCE OF THE DG METHOD Now, I would like to discuss the error obtained during this process. Our results will show that as one decreases the mesh size h (ie; increases the number of intervals N), then the numerical error decreases correspondingly. Define the numerical error obtained at the point (x, t i ) by: e h (t i )(x) = u(x, t i ) P i DG (x). Then, the L 2 norm of the error is: e h (t i ) L 2 (,) = (e h t i ) 2 dx One can prove that, [,3,4] 2. e h l (L 2 ) = o(h k+ + t) for εε = (3) and [,3,4] e h l (L 2 ) = o(h k + t) for εε = oooo. (4)

The following tables contain experimental results obtained by our method. The data confirms the theoretical results predicted by (3) and (4). We test the method with two exact solutions: u (x, t) = sin(t) + e x2 aaaaaa u 2 (x, t) = t 2 e x2 We first describe experiments with u. In polynomial degree 2, we first investigate the rate convergence of the solution with εε =. We choose a very small time step, t = /5. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t h k+ We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6, we see good accuracy with maximum error in the neighborhood of 5. However, the proper error ratios (in this case 2 k+ = 8) are not achieved until σ =. With mesh size /32 a good accuracy is only achieved at σ =, and convergence is sub-optimal until σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 5. As σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. By (4), optimal convergence requires error ratios to be equal to 4. The error ratios start out around 2 with σ =.,., and for all mesh sizes. With σ = the error ratio equals 5.87 between mesh sizes /8 and /6 (better than optimal convergence), and equals 3.32 between mesh sizes /6 and /32 (sub-optimal convergence). Finally, better than optimal convergence with a ratio of around 8 is obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 2, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error in the neighborhood of 5. Again, as σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. Optimal convergence requires error ratio to equal 4. The error ratios start

out around 3.5 with σ =.,., and for mesh sizes /8 and /6, and around 6 for mesh size /32. With σ = the error ratio equals 7.44 between mesh sizes /8 and /6, and equals 7.63 between mesh sizes /6 and /32. Finally, a ratio of around 8 is obtained for all mesh sizes with σ = and. Table : Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 2. h Dt max error (L2) ratio (fixed Ntm) With poly. Deg=2 for sin(t)+e^(-x^2) with ε =-, σ =. /8 /5 7.649E-5 /32 /5 8.559E+5. for sin(t)+e^(-x^2) with ε =-, σ =. /8 /5 7.455E-5 /6 /5 8.5649E-5.87 /32 /5 3.4465E+5. for sin(t)+e^(-x^2) with ε =-, σ = /8 /5 7.6755E-5 /6 /5.72E-4.72 /32 /5 9.3482E+5. for sin(t)+e^(-x^2) with ε =-, σ = /8 /5 2.763E-5 /6 /5 4.883E-5.68 /32 /5 9.722E-. for sin(t)+e^(-x^2) with ε =-, σ = /8 /5 3.58E-5 /6 /5 4.82E-6 7.73 /32 /5.E-5.4 for sin(t)+e^(-x^2) with ε =-, σ = /8 /5 3.256E-5 /6 /5 4.865E-6 7.96 /32 /5 5.2E-7 7.99 2

In polynomial degree 3, we first investigate the rate convergence of the solution with εε =. We choose t = /5 as our time step, since in order to test our results against those predicted by theory we need the following inequality to hold in our experiments: t h k+ Again, we begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh size /8, we see good accuracy with maximum error in the neighborhood of 4. However, the proper error ratios (in this case 2 k+ = 6) are not achieved until σ =. With mesh size /32 a good accuracy and optimal convergence is only achieved at σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error at most in the neighborhood of 6. As σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 7 for mesh size /6 around 8, and for mesh size /32 around 9. By (4), optimal convergence requires error ratios to be equal to 8. The error ratios start out around 2.65 with σ =., and. for all mesh sizes. With σ = the error ratio equals 7.39 between mesh sizes /8 and /6 (sub- optimal convergence), and equals 6.57 between mesh sizes /6 and /32 (sub-optimal convergence). Finally, better than optimal convergence with a ratio of at least.52 is obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 3, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error at most in the neighborhood of 7. Again, as σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 7 for mesh size /6 around 8, and for mesh size /32 around 9. Optimal convergence requires error ratio to equal 8. The error ratios start out around.94 with σ =.,., and for mesh sizes /8 and /6, and around 6.75 for mesh size /32. With σ = the error ratio equals 2.5 between mesh sizes /8 and /6, and equals 5.95 between mesh sizes /6 and /32. Finally, a ratio of around 4.57 is obtained between mesh sizes /8 and /6, and 5.4 between mesh sizes /6 and /32 with σ = and. 3

Table 2: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 3. h dt max error (L2) ratio (fixed Ntm) With poly. Deg=3 for sin(t)+e^(-x^2) with ε=-, σ =. /8 /5.337E-4 /6 /5 5.2523E+3. for sin(t)+e^(-x^2) with ε=-, σ =. /8 /5.87E-4 /6 /5 3.89E+3. /32 /5 7.489E+8. for sin(t)+e^(-x^2) with ε=-, σ = /8 /5.398E-4 /6 /5 5.4499E+3. /32 /5 4.487E+43. for sin(t)+e^(-x^2) with ε=-, σ = /8 /5 2.22E-5 /6 /5 2.8786E-. /32 /5.456E+22. for sin(t)+e^(-x^2) with ε=-, σ = /8 /5 6.496E-7 /6 /5 5.8685E-6. /32 /5.56E+. for sin(t)+e^(-x^2) with ε=-, σ = /8 /5 5.2347E-7 /6 /5 3.5529E-8 4.73 /32 /5 2.2267E-9 5.96 for sin(t)+e^(-x^2) with ε=-, σ = /8 /5 5.333E-7 /6 /5 3.62E-8 4.72 /32 /5 2.2634E-9 5.95 for sin(t)+e^(-x^2) with ε=, σ =. /8 /5 2.27E-6 /6 /5 8.585E-7 2.65 for sin(t)+e^(-x^2) with ε=, σ =. /8 /5 2.2374E-6 /6 /5 7.892E-7 2.84 4

h dt max error (L2) ratio (fixed Ntm) /32 /5 3.633E-7 2.49 for sin(t)+e^(-x^2) with ε=, σ = /8 /5 2.2739E-6 /6 /5 8.6798E-7 2.62 /32 /5 4.629E-7.88 for sin(t)+e^(-x^2) with ε=, σ = /8 /5.425E-6 /6 /5 3.249E-7 4.39 /32 /5 4.9428E-8 6.57 for sin(t)+e^(-x^2) with ε=, σ = /8 /5 5.463E-7 /6 /5 4.893E-8.52 /32 /5 4.8562E-9.7 for sin(t)+e^(-x^2) with ε=, σ = /8 /5 5.2377E-7 /6 /5 3.685E-8 4.5 /32 /5 2.33E-9 5.68 for sin(t)+e^(-x^2) with ε=, σ =. /8 /5 8.4525E-7 /6 /5 7.777E-8.94 /32 /5 4.2258E-9 6.75 for sin(t)+e^(-x^2) with ε=, σ =. /8 /5 8.4333E-7 /6 /5 7.57E-8.96 /32 /5 4.259E-9 6.73 for sin(t)+e^(-x^2) with ε=, σ = /8 /5 8.4547E-7 /6 /5 7.87E-8.94 /32 /5 4.227E-9 6.75 for sin(t)+e^(-x^2) with ε=, σ = /8 /5 7.579E-7 /6 /5 6.88E-8 2.5 /32 /5 3.8749E-9 5.95 for sin(t)+e^(-x^2) with ε=, σ = /8 /5 5.557E-7 /6 /5 4.77E-8.78 5

h dt max error (L2) ratio (fixed Ntm) /32 /5 3.2673E-9 4.44 for sin(t)+e^(-x^2) with ε=, σ = /8 /5 5.25E-7 /6 /5 3.6786E-8 4.27 /32 /5 2.3872E-9 5.4 Next, in polynomial degree 4, we first investigate the rate convergence of the solution with εε =. We choose t = /34 as our time step, since in order to test our results against those predicted by theory we need the following inequality to hold in our experiments: t h k+ Again, we begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6, we see good accuracy with maximum error in the neighborhood of 9 and, respectively. However, the proper error ratios (in this case 2 k+ = 32) are not achieved for these mesh sizes until σ =. With mesh size /32 a good accuracy is achieved with σ =, and optimal convergence is only achieved at σ =3. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy and better than optimal convergence is achieved immediately for mesh sizes /8 and /6, with maximum error at most in the neighborhood of 9 and error ratio equal to 6.58. As σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 9 for mesh size /6 around, and for mesh size /32 around 2. By (4), optimal convergence requires error ratios to be equal to 6. The error ratios start out sub-optimally around 7 for mesh size /32 with σ =., and.. With σ = the error ratio remains beyond optimal at around 32 between mesh sizes /8 and /6, and improves to 22.84 between mesh sizes /6 and /32 (better than optimal convergence). Finally, better than optimal convergence with a ratio of at least 3.9 is obtained for all mesh size with σ =. The last experiment we conduct with solution u with basis functions of polynomial degree 4, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error at most 6

in the neighborhood of 9. Again, as σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 9 for mesh size /6 around, and for mesh size /32 around 2. Optimal convergence requires error ratio to equal 6. The error ratios start out around 22. with σ =.,., and for mesh sizes /8 and /6, and around.85 for mesh size /32. With σ = the error ratio equals 27.55 between mesh sizes /8 and /6, and equals 9.84 between mesh sizes /6 and /32. Finally, a ratio of over 3 is obtained between mesh sizes /8 and /6, and at least over 3 between mesh sizes /6 and /32 with σ = and. Table 3: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 4. h dt max error (L2) ratio (fixed Ntm) With poly. Deg=4 for sin(t)+e^(-x^2) with ε=-, σ =. /8 /34 8.2E-9 /6 /34 8.767E- 9.92 for sin(t)+e^(-x^2) with ε=-, σ =. /8 /34 8.93E-9 /6 /34 8.724E-.3 for sin(t)+e^(-x^2) with ε=-, σ = /8 /34 8.4E-9 /6 /34 8.884E- 9.9 /32 /34 5.3858E-.52 for sin(t)+e^(-x^2) with ε=-, σ = /8 /34 6.882E-9 /6 /34 2.469E- 27.97 /32 /34 4.62E- 5.33 for sin(t)+e^(-x^2) with ε=-, σ = /8 /34 9.639E-9 /6 /34 2.96E- 3.8 /32 /34 2.88E- 4.44 for sin(t)+e^(-x^2) with ε=-, σ =3 /8 /34 9.4987E-9 /6 /34 3.3E- 3.52 /32 /34 9.9839E-2 3.8 7

h dt max error (L2) ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε=-, σ =5 /8 /34 9.5686E-9 /6 /34 3.332E- 3.55 /32 /34 9.7239E-2 3.9 for sin(t)+e^(-x^2) with ε=-, σ = /8 /34 9.6287E-9 /6 /34 3.49E- 3.58 /32 /34 9.76E-2 3.24 for sin(t)+e^(-x^2) with ε=, σ =. /8 /34 7.3586E-9 /6 /34 4.4369E- 6.58 /32 /34 6.2365E- 7. for sin(t)+e^(-x^2) with ε=, σ =. /8 /34 7.3473E-9 /6 /34 4.3865E- 6.75 /32 /34 6.2E- 7.29 for sin(t)+e^(-x^2) with ε=, σ = /8 /34 7.3598E-9 /6 /34 4.4425E- 6.57 /32 /34 6.262E- 7. for sin(t)+e^(-x^2) with ε=, σ = /8 /34 6.7859E-9 /6 /34 2.237E- 3.33 /32 /34 9.7953E-2 22.84 for sin(t)+e^(-x^2) with ε=, σ = /8 /34 9.26E-9 /6 /34 2.924E- 3.3 /32 /34 9.522E-2 3.7 for sin(t)+e^(-x^2) with ε=, σ = /8 /34 9.639E-9 /6 /34 3.499E- 3.58 /32 /34 9.7788E-2 3.9 for sin(t)+e^(-x^2) with ε=, σ =. /8 /34 7.39E-9 /6 /34 3.8E- 22. /32 /34 2.6854E-.85 for sin(t)+e^(-x^2) with ε=, σ =. /8 /34 7.255E-9 8

h dt max error (L2) ratio (fixed Ntm) /6 /34 3.6E- 22.23 /32 /34 2.6482E-.94 for sin(t)+e^(-x^2) with ε=, σ = /8 /34 7.35E-9 /6 /34 3.832E- 22.9 /32 /34 2.69E-.83 for sin(t)+e^(-x^2) with ε=, σ = /8 /34 6.99E-9 /6 /34 2.546E- 27.55 /32 /34.2623E- 9.84 for sin(t)+e^(-x^2) with ε=, σ = /8 /34 9.444E-9 /6 /34 2.946E- 3.9 /32 /34 9.744E-2 3.9 for sin(t)+e^(-x^2) with ε=, σ = /8 /34 9.6329E-9 /6 /34t 3.5E- 3.57 /32 /34 9.7969E-2 3.4 Now, I will describe some experiments with u 2. Experiments with all polynomial degrees yielded similar results, so only polynomial degree 2 will be described. In polynomial degree 2, we first investigate the rate convergence of the solution with εε =. As with u, we choose a very small time step, t = /5. We begin our experiments with a small penalty parameter, σ =., and attempt to increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. However, we immediately see an excellent approximation to the actual solution with mesh sizes /8 and /6, where we have accuracy with maximum error in the neighborhood of with σ =.. For mesh size /32, we achieve accuracy with maximum error in the neighborhood of with σ =. However, the proper error ratios (in this case 2 k+ = 8) are not achieved for any mesh size and any σ, since the approximation is so accurate. We achieve an error ratio of. between all mesh sizes for σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of for all mesh sizes. Again, since the approximation is very 9

accurate, proper error ratios are not achieved (in this case (in this case 2 k = 4). We achieve an error ratio of. between all mesh sizes for σ =. The last experiment we conduct with solution u 2 with basis functions of polynomial degree 2, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error in the neighborhood of 5. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. Optimal convergence requires error ratio to equal 4. The error ratios start out around 3.5 with σ =.,., and for mesh sizes /8 and /6, and around 6 for mesh size /32. With σ = the error ratio equals 7.44 between mesh sizes /8 and /6, and equals 6.7 between mesh sizes /6 and /32. Finally, a ratio of around 8 is obtained for all mesh sizes with σ = and. Table 4: Experiments with u 2 (x, t) = t 2 e x2 and polynomial degree 2. h dt max error (L2) ratio (fixed Ntm) for ((t)^2)*e^(-x^2) with ε=-, σ =. /8 /5 2.955E- /6 /5 7.377E-9.4 /32 /5 3.754E+2. for ((t)^2)*e^(-x^2) with ε=-, σ =. /8 /5 2.9532E- /6 /5 6.538E-9.5 /32 /5.5835E+2. for ((t)^2)*e^(-x^2) with ε=-, σ = /8 /5 2.9553E- /6 /5 7.4834E-9.4 /32 /5 4.97E+2. for ((t)^2)*e^(-x^2) with ε=-, σ = /8 /5 2.878E- /6 /5 4.695E-.6 /32 /5 2.79E-5. for ((t)^2)*e^(-x^2) with ε=-, σ = /8 /5 2.84E- /6 /5 2.87E-. /32 /5 3.4298E-.82 2

h dt max error (L2) ratio (fixed Ntm) for ((t)^2)*e^(-x^2) with ε=-, σ = /8 /5 2.798E- /6 /5 2.88E-. /32 /5 2.88E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 /5 2.8667E- /6 /5 2.8666E-. /32 /5 2.8666E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 /5 2.866E- /6 /5 2.8663E-. /32 /5 2.866E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 /5 2.8667E- /6 /5 2.8666E-. /32 /5 2.8667E-. for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 2.857E- /6 /5 2.8455E-. /32 /5 2.837E-. for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 2.827E- /6 /5 2.86E-. /32 /5 2.853E-. for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 2.7996E- /6 /5 2.828E-. /32 /5 2.822E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 /5 3.79E-5 /6 /5.74E-5 3.53 /32 /5.7688E-6 6.7 for ((t)^2)*e^(-x^2) with ε=, σ =. /8 /5 3.6834E-5 /6 /5.38E-5 3.67 /32 /5.5982E-6 6.28 for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 3.82E-5 2

h dt max error (L2) ratio (fixed Ntm) /6 /5.823E-5 3.5 /32 /5.7896E-6 6.5 for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 2.3397E-5 /6 /5 3.432E-6 7.44 /32 /5 4.24E-7 7.63 for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 3.36E-5 /6 /5 3.997E-6 7.86 /32 /5 5.968E-7 7.83 for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 3.2526E-5 /6 /5 4.884E-6 7.96 /32 /5 5.9E-7 7.99 4. DG IN TIME AND SPACE SCHEME As for the BE method, we subdivide the time interval [, T]: where t n = n t for some time step t >. N T [, T] = [t n, t n+ ] On each subinterval (t n, t n+ ), the solution is derived by integrating and adding jump terms to n= u (x, t)v(x, t)dx + aa t εε (u, v) = L(t, v) Note that aa εε and L(v) are already discretized in space with the DG method in space. Thus, we have: 22

t n + u (x, t)v(x, t)dxdt + aa t n t εε (u, v)dt + u(x, t + n ) v(x, t + n )dx t n t n + t n + = L(t, v)dt + u(x, t n ) v(x, t + n )dx (5) t n We denote by P (n) (x, t) the approximation of u(x, t) on the interval (t n, t n+ ). We solve the following equation for P (n) (x, t): t n + P(n) t n + (x, t)v(x, t)dxdt + aa t n t εε (P (n), v)dt + P (n) (x, t + n ) v(x, t + n )dx t n t n + And by convention, P ( ) (x, t ) = u (x). = L(t, v)dt + P (n ) (x, t n ) v(x, t + n )dx (6) t n r In the above formula, we have v(x, t) = i= t i v i (x), with v i (x) usual polynomial of degree k in space and r is the degree of polynomials over time. Our choice of basis functions for r=4 as an example are: Now, with r=, we write:, t t n t, (t t n) 2 t 2, (t t n) 3 t 3, (t t n) 4 t 4 P (n) (x, t n ) = P (n) (x) + t t n t Therefore, (6) becomes P 2 (n) (x), for P (n), P 2 (n) in PP kk P (n) t = t P (n) 2 (x) t n t n + t P 2 (n) (x) v(x, t)dxdt t n + + aa εε P (n) (x) + t t n P (n) t 2 (x), v(x, t) dt + P (n) (x, t + n ) v(x, t + n )dx t n t n + We evaluate P (n) (x, t n + ): = L(t, v)dt + P (n ) (x, t n ) v(x, t + n )dx (7) t n 23

P (n) (x, t + n ) = P (x) + t n + t n P t 2 (x) = P (x), ie; the calculations only involve the space basis functions. First, we consider v(x, t) = v (x) for any v (x) in PP kk. Thus, (7) becomes Next, with v = t t n t 2 P 2 P (n) 2 (x) v (x)dx + aa εε (P, v ) t + aa εε (P 2, v ) t + P(n) (x, t + 2 n )v (x)dx (n) (x) t n + = L(t, v)dt + P (n ) (x, t n ) v (x)dx t n v (x), (7) becomes v (x)dx + t 2 aa εε P (n) (x), v + t 3 aa εε P (n) 2 (x), v + P (n) = t (t t n)l(t, v)dt + P (n ) (x, t n ) v (x)dx t n t n + Concerning the error in this scheme, as before, one can prove that e h l (L 2 ) = o(h k+ + t 2 ) for εε = and e h l (L 2 ) = o(h k + t 2 ) for εε = oooo. (x, t n + )v (x)dx The following tables contain experimental results obtained by our method. We test the method with the two exact solutions as before: u (x, t) = sin(t) + e x2 aaaaaa u 2 (x, t) = t 2 e x2 We first describe experiments with u. In polynomial degree 2, we first investigate the rate of convergence of the solution with εε =. We choose a time step much larger than we did in the Backward Euler scheme, t = /24. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t 2 h k+ 24

We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6, we see good accuracy with maximum error in the neighborhood of 5 and 4, respectively. However, the proper error ratios (in this case 2 k+ = 8) are not achieved until σ =. With mesh size /32 a good accuracy is only achieved at σ =, and convergence is sub-optimal until σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 5 for mesh sizes /8 and /6, and 6 for mesh size /32. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. By (4), optimal convergence requires error ratios to be equal to 4. The error ratios start out between.85 and 3.65 with σ =.,., and for all mesh sizes. With σ = the error ratio equals 4.98 between mesh sizes /8 and /6 (better than optimal convergence), and equals 4.9 between mesh sizes /6 and /32 (better than optimal convergence). Finally, better than optimal convergence with a ratios between 6.86 and 8.3 are obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 2, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error between 5 and 6. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. Optimal convergence requires error ratio to equal 4. The error ratios start out around 4 with σ =. for mesh sizes /8 and /6, and around 5 for mesh size /32. With σ = the error ratio equals 6.94 between mesh sizes /8 and /6, and equals 7.67 between mesh sizes /6 and /32. Finally, a ratio of around 8 is obtained for all mesh sizes with σ = and. Table 5: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 2. h dt Max Error (L2) Error Ratio (fixed Ntm) With poly. Deg=2 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /24 8.57E-5 25

h dt Max Error (L2) Error Ratio (fixed Ntm) /6 /24 2.33E-4.25 /32 /24 9.3E+5. for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /24 6.573E-5 /6 /24 8.24E-5.8 /32 /24 2.43E+5. for sin(t)+e^(-x^2) with ε =-, σσ = /8 /24 7.874E-5 /6 /24 2.3E-4.39 /32 /24 7.25E+5. for sin(t)+e^(-x^2) with ε =-, σσ = /8 /24.273E-5 /6 /24 2.23E-5.63 /32 /24 8.29E-. for sin(t)+e^(-x^2) with ε =-, σσ = /8 /24 3.224E-5 /6 /24 3.978E-6 8.5 /32 /24 2.42E-5.2 for sin(t)+e^(-x^2) with ε =-, σσ = /8 /24 4.482E-5 /6 /24 5.72E-6 8.8 /32 /24 6.3E-7 8.27 for sin(t)+e^(-x^2) with ε =, σσ =. /8 /24 4.276E-5 /6 /24.32E-5 3.65 /32 /24 6.735E-6.69 for sin(t)+e^(-x^2) with ε =, σσ =. /8 /24 5.73E-5 /6 /24 2.34E-5 2.38 /32 /24 9.3974E-6 2.27 for sin(t)+e^(-x^2) with ε =, σσ = /8 /24 4.532E-5 /6 /24.732E-5 2.62 /32 /24 9.359E-6.85 26

h dt Max Error (L2) Error Ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε =, σσ = /8 /24.992E-5 /6 /24 4.2E-6 4.98 /32 /24 9.782E-7 4.9 for sin(t)+e^(-x^2) with ε =, σσ = /8 /24 3.23E-5 /6 /24 3.84E-6 7.9 /32 /24 4.899E-7 7.8 for sin(t)+e^(-x^2) with ε =, σσ = /8 /24 2.278E-5 /6 /24 3.7E-6 6.86 /32 /24 3.843E-7 8.3 for sin(t)+e^(-x^2) with ε =, σσ =. /8 /24 4.2E-5 /6 /24.23E-5 3.99 /32 /24 2.E-6 5. for sin(t)+e^(-x^2) with ε =, σσ =. /8 /24 3.6834E-5 /6 /24.38E-5 3.67 /32 /24.5982E-6 6.28 for sin(t)+e^(-x^2) with ε =, σσ = /8 /24 3.97E-5 /6 /24 9.872E-6 3.6 /32 /24.23E-6 9.75 for sin(t)+e^(-x^2) with ε =, σσ = /8 /24 2.357E-5 /6 /24 3.784E-6 6.94 /32 /24 4.3E-7 7.67 for sin(t)+e^(-x^2) with ε =, σσ = /8 /24 3.23E-5 /6 /24 4.2E-6 7.53 /32 /24 5.9E-7 7.99 for sin(t)+e^(-x^2) with ε =, σσ = /8 /24 2.99E-5 /6 /24 3.732E-6 8. /32 /24 5.2E-7 7.46 27

In polynomial degree 3, we again first investigate the rate of convergence of the solution with εε =. We again choose a time step much larger than we did in the Backward Euler scheme, t = /24. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t 2 h k+ We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh size /8 we immediately see good accuracy with maximum error in the neighborhood of 4. However, the proper error ratios (in this case 2 k+ = 6) are not achieved until σ =. With mesh size /32 a good accuracy and convergence are only achieved at σ =. Since convergence was established only with a high σ, we also tested for convergence with the additional value of σ =. This did not significantly increase accuracy or error ratios. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 6 and 7 for mesh sizes /8 and /6, and 7 for mesh size /32 with σ =.. With σ = and, the maximum error for mesh size /8 is around 7, for mesh size /6 around 8, and for mesh size /32 around 9. By (4), optimal convergence requires error ratios to be equal to 8. The error ratios start out between 2.2 and 3.64 with σ =.,., and for all mesh sizes. With σ = the error ratio equals 4.58 between mesh sizes /8 and /6 (sub optimal convergence), and equals 6.83 between mesh sizes /6 and /32 (sub optimal convergence). Finally, better than optimal convergence with ratios between.7 and 6.39 are obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 3, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error between 7 and 9. With σ = and, the maximum error for mesh size /8 is around 7 for mesh size /6 around 8, and for mesh size /32 around 9. Optimal convergence requires error ratios to equal 4. The error ratios start out around with σ =. for mesh sizes /8 and /6, and around 5.5 for mesh size /32. With σ = the error ratio equals 2.35 between mesh sizes /8 and /6, and equals 6.4 between mesh sizes /6 and /32. Finally, a ratio of between 4 and 5.5 is obtained for all mesh sizes with σ = and. 28

Table 6: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 3. h dt Max Error (L2) Error Ratio (fixed Ntm) With poly. Deg=3 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /24 2.3E-4 /6 /24 7.23E+5. for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /24.E-4 /6 /24 2.973E+5. /32 /24 9.78E+5. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24.57E-4 /6 /24 2.37E+5. /32 /24.78E+3. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24 2.3E- /6 /24 2.56E-.79 /32 /24 4.23E+9. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24 4.23E-7 /6 /24 6.2E-6.7 /32 /24 2.E+. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24 4.98E-7 /6 /24 2.993E-8 3.44 /32 /24.99E-9 5.3 for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24 5.2E-7 /6 /24 2.859E-8 7.53 /32 /24.74E-9 6.43 29

h dt Max Error (L2) Error Ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε=, σσ =. /8 /24 2.7E-6 /6 /24 8.234E-7 2.5 for sin(t)+e^(-x^2) with ε=, σσ =. /8 /24 2.9E-6 /6 /24 7.99E-7 3.64 /32 /24 3.948E-7 2.58 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24.993E-6 /6 /24 9.2E-7 2.2 /32 /24 3.72E-7 2.42 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24.47E-6 /6 /24 3.2E-7 4.58 /32 /24 4.72E-8 6.83 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24 5.57E-7 /6 /24 4.625E-8.96 /32 /24 4.32E-9.7 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24 5.3E-7 /6 /24 3.57E-8 6.39 /32 /24 2.2E-9 3.83 for sin(t)+e^(-x^2) with ε=, σσ =. /8 /24 7.97E-7 /6 /24 6.93E-8 2.77 /32 /24 3.998E-9 5.49 for sin(t)+e^(-x^2) with ε=, σσ =. /8 /24 8.57E-7 /6 /24 6.4868E-8 3. /32 /24 4.9E-9 6.4 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24 8.72E-7 /6 /24 7.32E-8 2.25 /32 /24 4.99E-9 6.92 3

h dt Max Error (L2) Error Ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε=, σσ = /8 /24 7.47E-7 /6 /24 6.2E-8 2.35 /32 /24 3.79E-9 6.4 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24 5.457E-7 /6 /24 4.38E-8 2.64 /32 /24 3.23E-9 3.43 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24 5.25E-7 /6 /24 3.6786E-8 4.27 /32 /24 2.3872E-9 5.4 In polynomial degree 4, we again first investigate the rate of convergence of the solution with εε =. We again choose a time step much larger than we did in the Backward Euler scheme, t = /6. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t 2 h k+ We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6 we immediately see good accuracy with maximum error in the neighborhood of 9. However, the proper error ratios (in this case 2 k+ = 32) are not achieved until σ =. With mesh size /32 a good accuracy is achieved with σ = and convergence is only achieved at σ =3. Since convergence was established only with a high σ, we also tested for convergence with the additional values of σ = 5 and. This did not significantly increase accuracy or error ratios. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 9 and for mesh sizes /8 and /6, and for mesh size /32 with σ =.. With σ = and, the maximum error for mesh size /8 is again around 9, for mesh size /6 around, and for mesh size /32 around 2. By (4), optimal convergence requires error ratios to be equal to 6. The error ratios vacillate between around 7 and 9.5 with σ =.,., and for mesh sizes /8 and /6. For the 3

same σ values, the error ratios for mesh size /32 are between 6.7 and 7.3. With σ = the error ratio equals 3.8 between mesh sizes /8 and /6 (better than optimal convergence), and equals 22.93 between mesh sizes /6 and /32 (better than optimal convergence). Finally, better than optimal convergence with ratios 32 are obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 4, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error between 9 and. With σ = and, the maximum error for mesh size /8 is around 9 for mesh size /6 around, and for mesh size /32 around 2. Optimal convergence requires error ratios to equal 6. The error ratios start out around 23 with σ =. for mesh sizes /8 and /6, and around.84 for mesh size /32. With σ = the error ratio equals 28.25 between mesh sizes /8 and /6, and equals.95 between mesh sizes /6 and /32. Finally, a ratio of between 3 and 32 is obtained for all mesh sizes with σ = and. Figure 7: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 4. h dt Max Error (L2) Error Ratio (fixed Ntm) With poly. Deg=4 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /6 8.23E-9 /6 /6 8.3E-.2 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /6 8.7E-9 /6 /6 8.E-. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 8.2E-9 /6 /6 8.27E-.2 /32 /6 7.532E-.4 for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 7.37E-9 /6 /6 3.23E- 23.28 /32 /6 5.54E- 6. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 9.4E-9 32

h dt Max Error (L2) Error Ratio (fixed Ntm) /6 /6 2.897E- 3.5 /32 /6 2.4E- 4.44 for sin(t)+e^(-x^2) with ε=-, σσ =3 /8 /6 9.387E-9 /6 /6 3.2E- 3.9 /32 /6 9.872E-2 3.6 for sin(t)+e^(-x^2) with ε=-, σσ =5 /8 /6 9.5686E-9 /6 /6 3.332E- 3.55 /32 /6 9.7239E-2 3.9 for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 9.87E-9 /6 /6 2.995E- 33. /32 /6 9.87E-2 3.3 for sin(t)+e^(-x^2) with ε=, σσ =. /8 /6 8.2E-9 /6 /6 4.92E- 9.58 /32 /6 6.57E- 6.7 for sin(t)+e^(-x^2) with ε=, σσ =. /8 /6 7.3473E-9 /6 /6 4.3865E- 6.75 /32 /6 6.2E- 7.29 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 7.3598E-9 /6 /6 4.27E- 7.86 /32 /6 6.75E- 6.78 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 6.87E-9 /6 /6 2.25E- 3.8 /32 /6 9.647E-2 22.93 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.92E-9 /6 /6 2.9E- 3.24 /32 /6 9.487E-2 3.69 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.79E-9 /6 /6 3.43E- 3.22 33

h dt Max Error (L2) Error Ratio (fixed Ntm) /32 /6 9.467E-2 33.2 for sin(t)+e^(-x^2) with ε=, σσ =. /8 /6 7.45E-9 /6 /6 3.25E- 23.39 /32 /6 2.78E-.84 for sin(t)+e^(-x^2) with ε=, σσ =. /8 /6 7.255E-9 /6 /6 4.27E- 7.5 /32 /6 2.57E- 5.98 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 6.997E-9 /6 /6 3.7E- 22.77 /32 /6 2.57E-.95 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 6.87E-9 /6 /6 2.47E- 28.25 /32 /6.37E- 8.53 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.27E-9 /6 /6 3.2E- 3.55 /32 /6 9.872E-2 3.52 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.38E-9 /6 /6 2.97E- 32. /32 /6 9.78E-2 29.97 5. CONCLUSIONS We have implemented high order DG methods in space, up to fourth order polynomial approximations. The two methods used for approximating solutions are the BE in time with DG in space, and DG in time and space methods. In the BE scheme, the time discretization was accomplished by a finite difference approximation of the time derivative. This is a first order, implicit scheme, which means 34

that there are no restrictions on the time step needed for the scheme to be stable. A restriction was imposed on the time step, however, in order to maintain the high order convergence for the space portion of the scheme. Similarly, DG in time and space is a second order, implicit in time scheme, with similar but more relaxed restrictions on time step to maintain high order convergence for the space portion of the scheme. Mainly because of these more relaxed requirements, the time step used in the DG in time and space scheme is much larger than in the BE in time method. This makes the DG in time and space method much more computationally efficient from this perspective. However, the fact that more calculations need to be performed to implement the DG in time and space scheme reduces the computational effectiveness of this scheme, at least in the D case. Our recommendation in the D case is for the implementation of the DG in time and space scheme for the advantage of the larger time steps in the scheme, and the advantages the scheme would yield in higher dimensional problems. The numerical rates obtained, confirmed the theoretical convergence rates. REFERENCES [] B. Riviere; M.F. Wheeler and V. Girault. "Improved energy estimates for interior penalty, constrained and discontinuous Galerkin methods for elliptic problems. Part I". Computational Geosciences, volume 8, p. 337-36, April 999. [2] B. Riviere. Discontinuous Galerkin Methods for Solving Elliptic and Parabolic Equations. Book to be published by SIAM. [3] M.F. Wheeler. An elliptic collocation-finite element method with interior penalties. SIAM Journal on Numerical Analysis, volume 5, p. 52-6, February 978. [4] Clint Dawson, Shuyu Sun, Mary F. Wheeler. Compatible algorithms for coupled flow and transport. Computer Methods in Applied Mechanics and Engineering, volume 93, p. 2565-258, June 24. 35