HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR 1D PARABOLIC EQUATIONS. Ahmet İzmirlioğlu. BS, University of Pittsburgh, 2004

Size: px
Start display at page:

Download "HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR 1D PARABOLIC EQUATIONS. Ahmet İzmirlioğlu. BS, University of Pittsburgh, 2004"

Transcription

1 HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR D PARABOLIC EQUATIONS by Ahmet İzmirlioğlu BS, University of Pittsburgh, 24 Submitted to the Graduate Faculty of Art and Sciences in partial fulfillment of the requirements for the degree of Master of Science University of Pittsburgh 29

2 UNIVERSITY OF PITTSBURGH ARTS AND SCIENCES This thesis was presented By Ahmet İzmirlioğlu It was defended on May 8 th, 28 and approved by Beatrice Riviere, PhD, Associate Professor Anna Vainchtein, PhD, Associate Professor David Swigon, PhD, Associate Professor Thesis Director: Beatrice Riviere, PhD, Associate Professor ii

3 Copyright by Ahmet İzmirlioğlu 29 iii

4 HIGH ORDER DISCONTINUOUS GALERKIN METHODS FOR D PARABOLIC EQUATIONS Ahmet İzmirlioğlu, M.S. University of Pittsburgh, 29 iv

5 TABLE OF CONTENTS. Introduction Problem Backward Euler And Discontinuous Galerkin Scheme Local Basis Functions Linear System Convergence Of The Dg Method DG In Time And Space Scheme Conclusions References... 35

6 LIST OF TABLES Table. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree Table 2. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree Table 3. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree Table 4. Experiments with u 2 (x, t) = t 2 e x2 and polynomial degree Table 5. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree Table 6. Experiments with u (x, t) = sin(t) + e x2 and polynomial degree vi

7 . INTRODUCTION The goal of this work is to compare the computational efficiency of the Backward Euler (BE) in time and high order Discontinuous Galerkin (DG) in space method vs. the computational efficiency of the DG in time and space method (high order only in space), for a one dimensional (D) parabolic equation. The DG methods have recently become popular thanks to certain features which may make them attractive to researchers, such as: Local, element-wise mass conservation; Flexibility to use high-order polynomial and non-polynomial basis functions; Ability to easily increase the order of approximation on each mesh element independently; Ability to achieve an almost exponential convergence rate when smooth solutions are captured on appropriate meshes; Suitability for parallel computations due to (relatively)local data communications; Applicability to problems with discontinuous coefficients and/or solutions; The DG methods have been successfully applied to a wide variety of problems ranging from the solid mechanics to the fluid mechanics. There are other methods which are used to solve similar problems, such as the finite difference method. The major disadvantage of this method is that it is a low order method. Additionally, DG method is well suited for handling unstructured meshes, compared to the finite difference method. There are also many commonly used finite element methods. However, adaptively increasing the degree of polynomial in these methods is not as straight forward as in the DG method. After we establish the formulation of the problem and delineate the construction of the solution methods, we conduct a number of computational experiments to test the rates of convergence of the utilized methods against theoretical predictions. We note that the BE method requires very small time steps during calculations in order to maintain high order convergence rates in space with the DG method. Such restrictions are much more relaxed in the case of the DG in time and space method, as will be explained in this thesis. This is the one clear advantage of the DG in time and space method against the BE in time and DG in space method.

8 2. PROBLEM We consider the following parabolic problem: u t (x, t) 2 u (x, t) = f(x, t), x [a, b], t (, τ), () x2 u(, t) = g (t), u(, t) = g (t), (2) u(x, ) = u (x). Here, f belongs to CC (,). We can assume that the problem ()-(2) has a solution in (a, b) R. We say that u is a strong solution of the above system if u CC 2 (,) and u satisfies the system pointwise. 3. BACKWARD EULER AND DISCONTINUOUS GALERKIN SCHEME Let = xx <xx <.. < xx NN = be a subdivision of [,] and let I n = [x n, x n ]. Denote by PP k the space of piecewise discontinuous polynomials of degree k: PP kk = { v v In P k (I n ), n =,, N } where P k (I n ) is the space of polynomials of degree k on the interval I n. To solve ()-(2), we will first use a combined Backward Euler scheme, and a Discontinuous Galerkin (DG) scheme. In order to define the method, we introduce a linear form L and a bilinear form aa εε (see [2]): L(t, v) = f(x, t)v(x)dx + σ h v(x )g (t) εv (x )g (t) + σ h v(x N)g (t) + εv (x N )g (t) where h = /N, and 2

9 N x n + N N aa εε (ww, v) = w (x)v (x)dx {w (x n )}[v(x n )] + εε {v (x n )}[w(x n )] +JJ (w, v) n= x n n= n= where, w, v PP kk, and JJ is the penalty term for the jump in the functions v and w, defined as: N JJ (w, v) = σ h [w(x n)][v(x n )] n= Here, σ is a non-negative real number called penalty parameter. In order to define the jump [ ] and average { } terms, we first define x n + and x n as follows: x n + : = lim ε (x n + ε) and x n : = lim ε (x n ε). Then, we define the jump of a function w at a point x n, for i =,, N, as the difference of the values of w from the right of point x n and from the left of point x n, ie; [w(x n )] = v(x n + ) v(x n ) Clearly, there is no jump at the initial and end points of the interval (points x and x N ), and by convention we set [w(x )] = v(x + ), [w(x N )] = v(x N ). If the function w is continuous at the point x n, then the jump equals. If the function w is discontinuous at the point x n, then the jump is non-zero. Additionally, we define the average term for a function v at a point x n, for i =,, N, as the average of the values of w from the right of point x n and from the left of point x n. ie; {v(x n )} = 2 (v(x n + ) + v(x n )) If the function v is continuous at point x n, then {v(x n )} = v(x n ). Similarly by convention, {v(x )} = v(x + ), {v(x N )} = v(x N ). The reason for the inclusion of the penalty terms will be explained in more detail in the next section. Also, ε is a real number, but we restrict ourselves to the cases εε {,,}. This restriction will allow us to identify the error in our estimates in the cases of the bilinear form being symmetric and non-symmetric. These cases are identified as NIPG 3

10 (Non-symmetric Interior Penalty Galerkin) when εε =, IIPG (Incomplete Interior Penalty Galerkin) when εε =, and SIPG (Symmetric Interior Penalty Galerkin) when εε =. The bilinear form is non-symmetric in the cases εε = aaaaaa only. [] Let t > be the time step and let t i = i t. We want to find an approximation DG (x) u(x, t). P i+ First, we solve for the initial solution P DG : For v PP kk, P DG (x)v(x)dx Then, we solve the following equation for P DG i+ PP kk and i : For v PP kk, t P i+ DG (x)v(x)dx + aa εε (P DG i+, v) = LL(t, v) + t P DG i (x)v(x)dx = u (x)v(x)dx 3. LOCAL BASIS FUNCTIONS We now need to discuss some details of our scheme. We will choose basis functions from PP kk to be used in our scheme. We will consider the case k=4. On each interval I n, we choose 5 basis functions {φφ n, φφ n, φφ 2 n, φφ 3 n, φφ 4 n } such that φφ n is constant φφ n is linear φφ 2 n is quadratic φφ 3 n is cubic φφ 4 n is quartic We will extend these functions to equal zero on all other intervals, and keep their names. These extended functions are global basis functions. This construction will have the benefit of causing the global basis functions to have local support. This will be very useful in calculations of our solution. From this construction, we observe that the global basis functions are not well defined at the points x i, for i =,, N. This can easily be illustrated by an example. 4

11 Consider the case of the two intervals [x, x ] and [x, x 2 ] belonging to [,]. On the point x, φφ assumes two values based on the local basis functions of each sub-interval. Therefore, φφ is not well defined on the points x i, for i =,, N. But, how do we choose a local basis function φφ i j in the first place? Before answering this question, we should shift our attention to a seemingly minor point. Again, we consider the case of k = 4. To be practical, we would like to use the monomial basis functions {, x, x 2, x 3, x 4 } of PP 4 on each interval I n. However, these basis functions need to be translated to each interval I n from the interval (-, ). The reason for our choice of (-, ) is, of course, due to our use of Gaussian quadrature in calculating the integral in our DG scheme. The translation is accomplished as follows: ϕ n (x) = ϕ n (x) = 2 x x n+ /2 x n+ x n ϕ 2 n (x) = 4 (x x n+ /2) 2 (x n+ x n ) 2 ϕ 3 n (x) = 8 (x x n+ /2) 3 (x n+ x n ) 3 ϕ 4 n (x) = 6 (x x n+ /2) 4 (x n+ x n ) 4 where x n + /2 = 2 (x n + x n+ ) is the midpoint of the interval I n. Since all intervals are of the same length h, this simplifies the basis functions to the following form: ϕ n (x) = ϕ n (x) = 2 h (x n + h) 2 ϕ n 2 (x) = 4 h 2 (x n + h)2 2 5

12 ϕ n 3 (x) = 8 h 3 (x n + h)3 2 ϕ n 4 (x) = 6 h 4 (x n + h)4 2 These basis functions have the following derivatives: ϕ n (x) = ϕ n (x) = 2 h ϕ n 2 (x) = 8 h 2 (x n + h) 2 ϕ n 3 (x) = 24 h 3 (x n + h)2 2 ϕ n 4 (x) = 64 h 4 (x n + h)3 2 We also need to calculate the basis functions over points shared by adjacent intervals. First, ϕ n (x n + ) =, ϕ n (x n + ) = ϕ n (x n + ) =, ϕ n (x n + ) = 2 h ϕ 2 n (x n + ) =, ϕ 2 n (x n + ) = 4 h ϕ n 3 (x + n ) =, ϕ n 3 (x + n ) = 6 h ϕ n 4 (x + n ) =, ϕ n 4 (x + n ) = 8 Next, ϕ n (x n ) =, ϕ n (x n ) = h 6

13 ϕ n (x n ) =, ϕ n (x n ) = 2 h ϕ 2 n (x n ) =, ϕ 2 n (x n ) = 4 h ϕ 3 n (x n ) =, ϕ 3 n (x n ) = 6 h ϕ 4 n (x n ) =, ϕ 4 n (x n ) = 8 h 3.2 LINEAR SYSTEM Using the above basis functions, we can expand the DG solution as: for every x (,). N 4 P l DG (x) = α l,j m= j= m φφ j m (x) m Here, α l,j are unknown real numbers to be solved for. With this decomposition of P DG l our scheme becomes: N 4 xx nn + NN t α m l+,j φφ m j (x)φφ n m j i (x)dx + α l+,j aa εε (φφ m m= j= xx nn 4 mm = jj =, φφ i n ) = L (φφ i n ) where, i L ϕ n = LL t l+ i, φφ n + t P l DG (x) v(x) dx which holds for all i 4 and n N. m Thus, we obtain a linear system Aα = b, where α is the vector with the components α l+,j. A very important technical point to make is that the global matrix A can be obtained by computing and assembling local matrices. The reason we can do this is that, by their n construction, the global basis functions ϕ j have local support. The matrices A n and M n correspond to the volume integral in our scheme, ie; 7

14 DG (P l+ ) (x) v n (x) dx = A n α l+ I n n where α l+ n = (α l+, n, α l+, DG n (P l+ )(x)v(x) dx = M n α l+ I n n,, α l+,4 ) T, (A n ) ij = (ϕ n i ) (x) ϕ n j (x) dx, and I n (M n ) ij = ϕ n i (x)ϕ n j (x) dx. I n The matrix B n corresponds to the interactions of the local basis functions of the interval I n. Additionally, the matrix C n corresponds to the interactions of local basis functions on I n-. These matrices can be calculated by expanding the average and jump terms in our scheme as: Bn = 2 (PDG l+) (x + n )v(x + n ) ε 2 P l+ DG (x + n )v (x + n ) + σσ h P l+ DG (x + n )v(x + n ) Cn = 2 (PDG l+) (x n )v(x n ) + ε 2 P l+ DG (x n )v (x n ) + σσ h P l+ DG (x n )v(x n ) As alluded to earlier, there are also very limited, but important, interactions between basis functions of adjacent intervals. The matrices D n and E n represent these interactions between the intervals I n and I n-. These matrices can also be calculated by expanding the average and jump terms in our scheme as: Dn = 2 (PDG l+) (x + n )v(x n ) ε 2 P l+ DG (x + n )v (x n ) σσ h P l+ DG (x + n )v(x n ) En = 2 (PDG l+) (x n )v(x + n ) + ε 2 P l+ DG (x n )v (x + n ) σσ h P l+ DG (x n )v(x + n ) Finally, F and F N are the local matrices arising from the boundary nodes x and x N. F = (P DG l+ ) (x )v(x ) εp DG l+ (x )v (x ) + σσ h P DG l+(x )v(x ) F N = (P DG l+ ) (x N )v(x N ) + εp DG l+ (x N )v (x N ) + σσ h P l+ DG (x N )v(x N ) The local matrices for interval I n, based on quartic polynomials, are: 8

15 4 4 A n = h σ σ 2 + σ 3 σ 4 + σ B n = ε σ + εε + σ 2 εε σ 3 + εε + σ 4 εε σ h 2εε + σ 2εε σ 2 + 2εε + σ 3 2εε σ 4 + 2εε + σ 3εε σ + 3εε + σ 2 3εε σ 3 + 3εε + σ 4 3εε σ 4εε + σ 4εε σ 2 + 4εε + σ 3 4εε σ 4 + 4εε + σ σ + σ 2 + σ 3 + σ 4 + σ C n = ε + σ + εε + σ 2 + εε + σ 3 + εε + σ 4 + εε + σ h 2εε + σ + 2εε + σ 2 + 2εε + σ 3 + 2εε + σ 4 + 2εε + σ 3εε + σ + 3εε + σ 2 + 3εε + σ 3 + 3εε + σ 4 + 3εε + σ 4εε + σ + 4εε + σ 2 + 4εε + σ 3 + 4εε + σ 4 + 4εε + σ σ + σ 2 σ 3 + σ 4 σ D n = ε σ + εε + σ 2 εε σ 3 + εε + σ 4 εε σ h 2εε σ + 2εε + σ 2 2εε σ 3 + 2εε + σ 4 2εε σ 3εε σ + 3εε + σ 2 3εε σ 3 + 3εε + σ 4 3εε σ 4εε σ + 4εε + σ 2 4εε σ 3 + 4εε + σ 4 4εε σ σ 2 σ 4 + σ 6 σ 8 + σ F = 2ε σ 2 + 2εε + σ 4 2εε σ 6 + 2εε + σ 8 2εε σ h 4εε + σ 2 4εε σ 4 + 4εε + σ 6 4εε σ 8 + 4εε + σ 6εε σ 2 + 6εε + σ 4 6εε σ 6 + 6εε + σ 8 6εε σ 8εε + σ 2 8εε σ 4 + 8εε + σ 6 8εε σ 8 + 8εε + σ σ 2 + σ 4 + σ 6 + σ 8 + σ F N = 2ε + σ 2 + 2εε + σ 4 + 2εε + σ 6 + 2εε + σ 8 + 2εε + σ h 4εε + σ 2 + 4εε + σ 4 + 4εε + σ 6 + 4εε + σ 8 + 4εε + σ 6εε + σ 2 + 6εε + σ 4 + 6εε + σ 6 + 6εε + σ 8 + 6εε + σ 8εε + σ 2 + 8εε + σ 4 + 8εε + σ 6 + 8εε + σ 8 + 8εε + σ Once all the local matrices are computed, we use them to assemble the global matrix. The assembly depends on the order of the unknowns α n l,j. So, assuming that the unknowns are listed as (α l+,, α l+,, α l+,2, α l+,3, α l+,4,, α N l+,, α N l+,, α N l+,2, α N l+,3, α N l+,4 ), 9

16 The global matrix has the following tri-diagonal form: Θ D E Θ D 2 E 2 Θ 2 D 3 E 3 D N 2 E N 2 Θ N D N E N Θ N where Θ n = A n + B n + C n+ + t M n, Θ = A + F + C + t M, and Θ N = A N- + F N + B N- + t M N CONVERGENCE OF THE DG METHOD Now, I would like to discuss the error obtained during this process. Our results will show that as one decreases the mesh size h (ie; increases the number of intervals N), then the numerical error decreases correspondingly. Define the numerical error obtained at the point (x, t i ) by: e h (t i )(x) = u(x, t i ) P i DG (x). Then, the L 2 norm of the error is: e h (t i ) L 2 (,) = (e h t i ) 2 dx One can prove that, [,3,4] 2. e h l (L 2 ) = o(h k+ + t) for εε = (3) and [,3,4] e h l (L 2 ) = o(h k + t) for εε = oooo. (4)

17 The following tables contain experimental results obtained by our method. The data confirms the theoretical results predicted by (3) and (4). We test the method with two exact solutions: u (x, t) = sin(t) + e x2 aaaaaa u 2 (x, t) = t 2 e x2 We first describe experiments with u. In polynomial degree 2, we first investigate the rate convergence of the solution with εε =. We choose a very small time step, t = /5. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t h k+ We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6, we see good accuracy with maximum error in the neighborhood of 5. However, the proper error ratios (in this case 2 k+ = 8) are not achieved until σ =. With mesh size /32 a good accuracy is only achieved at σ =, and convergence is sub-optimal until σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 5. As σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. By (4), optimal convergence requires error ratios to be equal to 4. The error ratios start out around 2 with σ =.,., and for all mesh sizes. With σ = the error ratio equals 5.87 between mesh sizes /8 and /6 (better than optimal convergence), and equals 3.32 between mesh sizes /6 and /32 (sub-optimal convergence). Finally, better than optimal convergence with a ratio of around 8 is obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 2, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error in the neighborhood of 5. Again, as σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. Optimal convergence requires error ratio to equal 4. The error ratios start

18 out around 3.5 with σ =.,., and for mesh sizes /8 and /6, and around 6 for mesh size /32. With σ = the error ratio equals 7.44 between mesh sizes /8 and /6, and equals 7.63 between mesh sizes /6 and /32. Finally, a ratio of around 8 is obtained for all mesh sizes with σ = and. Table : Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 2. h Dt max error (L2) ratio (fixed Ntm) With poly. Deg=2 for sin(t)+e^(-x^2) with ε =-, σ =. /8 / E-5 /32 / E+5. for sin(t)+e^(-x^2) with ε =-, σ =. /8 / E-5 /6 / E-5.87 /32 / E+5. for sin(t)+e^(-x^2) with ε =-, σ = /8 / E-5 /6 /5.72E-4.72 /32 / E+5. for sin(t)+e^(-x^2) with ε =-, σ = /8 / E-5 /6 / E-5.68 /32 / E-. for sin(t)+e^(-x^2) with ε =-, σ = /8 /5 3.58E-5 /6 /5 4.82E /32 /5.E-5.4 for sin(t)+e^(-x^2) with ε =-, σ = /8 / E-5 /6 / E /32 /5 5.2E

19 In polynomial degree 3, we first investigate the rate convergence of the solution with εε =. We choose t = /5 as our time step, since in order to test our results against those predicted by theory we need the following inequality to hold in our experiments: t h k+ Again, we begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh size /8, we see good accuracy with maximum error in the neighborhood of 4. However, the proper error ratios (in this case 2 k+ = 6) are not achieved until σ =. With mesh size /32 a good accuracy and optimal convergence is only achieved at σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error at most in the neighborhood of 6. As σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 7 for mesh size /6 around 8, and for mesh size /32 around 9. By (4), optimal convergence requires error ratios to be equal to 8. The error ratios start out around 2.65 with σ =., and. for all mesh sizes. With σ = the error ratio equals 7.39 between mesh sizes /8 and /6 (sub- optimal convergence), and equals 6.57 between mesh sizes /6 and /32 (sub-optimal convergence). Finally, better than optimal convergence with a ratio of at least.52 is obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 3, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error at most in the neighborhood of 7. Again, as σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 7 for mesh size /6 around 8, and for mesh size /32 around 9. Optimal convergence requires error ratio to equal 8. The error ratios start out around.94 with σ =.,., and for mesh sizes /8 and /6, and around 6.75 for mesh size /32. With σ = the error ratio equals 2.5 between mesh sizes /8 and /6, and equals 5.95 between mesh sizes /6 and /32. Finally, a ratio of around 4.57 is obtained between mesh sizes /8 and /6, and 5.4 between mesh sizes /6 and /32 with σ = and. 3

20 Table 2: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 3. h dt max error (L2) ratio (fixed Ntm) With poly. Deg=3 for sin(t)+e^(-x^2) with ε=-, σ =. /8 /5.337E-4 /6 / E+3. for sin(t)+e^(-x^2) with ε=-, σ =. /8 /5.87E-4 /6 /5 3.89E+3. /32 / E+8. for sin(t)+e^(-x^2) with ε=-, σ = /8 /5.398E-4 /6 / E+3. /32 / E+43. for sin(t)+e^(-x^2) with ε=-, σ = /8 /5 2.22E-5 /6 / E-. /32 /5.456E+22. for sin(t)+e^(-x^2) with ε=-, σ = /8 / E-7 /6 / E-6. /32 /5.56E+. for sin(t)+e^(-x^2) with ε=-, σ = /8 / E-7 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=-, σ = /8 / E-7 /6 /5 3.62E /32 / E for sin(t)+e^(-x^2) with ε=, σ =. /8 /5 2.27E-6 /6 / E for sin(t)+e^(-x^2) with ε=, σ =. /8 / E-6 /6 / E

21 h dt max error (L2) ratio (fixed Ntm) /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-6 /6 / E /32 / E-7.88 for sin(t)+e^(-x^2) with ε=, σ = /8 /5.425E-6 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-7 /6 / E-8.52 /32 / E-9.7 for sin(t)+e^(-x^2) with ε=, σ = /8 / E-7 /6 / E /32 /5 2.33E for sin(t)+e^(-x^2) with ε=, σ =. /8 / E-7 /6 / E-8.94 /32 / E for sin(t)+e^(-x^2) with ε=, σ =. /8 / E-7 /6 /5 7.57E-8.96 /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-7 /6 /5 7.87E-8.94 /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-7 /6 /5 6.88E /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-7 /6 /5 4.77E

22 h dt max error (L2) ratio (fixed Ntm) /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 /5 5.25E-7 /6 / E /32 / E Next, in polynomial degree 4, we first investigate the rate convergence of the solution with εε =. We choose t = /34 as our time step, since in order to test our results against those predicted by theory we need the following inequality to hold in our experiments: t h k+ Again, we begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6, we see good accuracy with maximum error in the neighborhood of 9 and, respectively. However, the proper error ratios (in this case 2 k+ = 32) are not achieved for these mesh sizes until σ =. With mesh size /32 a good accuracy is achieved with σ =, and optimal convergence is only achieved at σ =3. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy and better than optimal convergence is achieved immediately for mesh sizes /8 and /6, with maximum error at most in the neighborhood of 9 and error ratio equal to As σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 9 for mesh size /6 around, and for mesh size /32 around 2. By (4), optimal convergence requires error ratios to be equal to 6. The error ratios start out sub-optimally around 7 for mesh size /32 with σ =., and.. With σ = the error ratio remains beyond optimal at around 32 between mesh sizes /8 and /6, and improves to between mesh sizes /6 and /32 (better than optimal convergence). Finally, better than optimal convergence with a ratio of at least 3.9 is obtained for all mesh size with σ =. The last experiment we conduct with solution u with basis functions of polynomial degree 4, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error at most 6

23 in the neighborhood of 9. Again, as σ increases, we see that the maximum error becomes smaller as the mesh size h becomes smaller. With σ = and, the maximum error for mesh size /8 is around 9 for mesh size /6 around, and for mesh size /32 around 2. Optimal convergence requires error ratio to equal 6. The error ratios start out around 22. with σ =.,., and for mesh sizes /8 and /6, and around.85 for mesh size /32. With σ = the error ratio equals between mesh sizes /8 and /6, and equals 9.84 between mesh sizes /6 and /32. Finally, a ratio of over 3 is obtained between mesh sizes /8 and /6, and at least over 3 between mesh sizes /6 and /32 with σ = and. Table 3: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 4. h dt max error (L2) ratio (fixed Ntm) With poly. Deg=4 for sin(t)+e^(-x^2) with ε=-, σ =. /8 /34 8.2E-9 /6 / E for sin(t)+e^(-x^2) with ε=-, σ =. /8 / E-9 /6 / E-.3 for sin(t)+e^(-x^2) with ε=-, σ = /8 /34 8.4E-9 /6 / E- 9.9 /32 / E-.52 for sin(t)+e^(-x^2) with ε=-, σ = /8 / E-9 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=-, σ = /8 / E-9 /6 / E- 3.8 /32 / E for sin(t)+e^(-x^2) with ε=-, σ =3 /8 / E-9 /6 /34 3.3E /32 / E

24 h dt max error (L2) ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε=-, σ =5 /8 / E-9 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=-, σ = /8 / E-9 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σ =. /8 / E-9 /6 / E /32 / E- 7. for sin(t)+e^(-x^2) with ε=, σ =. /8 / E-9 /6 / E /32 /34 6.2E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 / E /32 / E- 7. for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 / E- 3.3 /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σ =. /8 / E-9 /6 /34 3.8E- 22. /32 / E-.85 for sin(t)+e^(-x^2) with ε=, σ =. /8 / E-9 8

25 h dt max error (L2) ratio (fixed Ntm) /6 /34 3.6E /32 / E-.94 for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 / E /32 / E-.83 for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 / E- 3.9 /32 / E for sin(t)+e^(-x^2) with ε=, σ = /8 / E-9 /6 /34t 3.5E /32 / E Now, I will describe some experiments with u 2. Experiments with all polynomial degrees yielded similar results, so only polynomial degree 2 will be described. In polynomial degree 2, we first investigate the rate convergence of the solution with εε =. As with u, we choose a very small time step, t = /5. We begin our experiments with a small penalty parameter, σ =., and attempt to increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. However, we immediately see an excellent approximation to the actual solution with mesh sizes /8 and /6, where we have accuracy with maximum error in the neighborhood of with σ =.. For mesh size /32, we achieve accuracy with maximum error in the neighborhood of with σ =. However, the proper error ratios (in this case 2 k+ = 8) are not achieved for any mesh size and any σ, since the approximation is so accurate. We achieve an error ratio of. between all mesh sizes for σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of for all mesh sizes. Again, since the approximation is very 9

26 accurate, proper error ratios are not achieved (in this case (in this case 2 k = 4). We achieve an error ratio of. between all mesh sizes for σ =. The last experiment we conduct with solution u 2 with basis functions of polynomial degree 2, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error in the neighborhood of 5. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. Optimal convergence requires error ratio to equal 4. The error ratios start out around 3.5 with σ =.,., and for mesh sizes /8 and /6, and around 6 for mesh size /32. With σ = the error ratio equals 7.44 between mesh sizes /8 and /6, and equals 6.7 between mesh sizes /6 and /32. Finally, a ratio of around 8 is obtained for all mesh sizes with σ = and. Table 4: Experiments with u 2 (x, t) = t 2 e x2 and polynomial degree 2. h dt max error (L2) ratio (fixed Ntm) for ((t)^2)*e^(-x^2) with ε=-, σ =. /8 / E- /6 / E-9.4 /32 / E+2. for ((t)^2)*e^(-x^2) with ε=-, σ =. /8 / E- /6 / E-9.5 /32 /5.5835E+2. for ((t)^2)*e^(-x^2) with ε=-, σ = /8 / E- /6 / E-9.4 /32 /5 4.97E+2. for ((t)^2)*e^(-x^2) with ε=-, σ = /8 / E- /6 / E-.6 /32 /5 2.79E-5. for ((t)^2)*e^(-x^2) with ε=-, σ = /8 /5 2.84E- /6 /5 2.87E-. /32 / E-.82 2

27 h dt max error (L2) ratio (fixed Ntm) for ((t)^2)*e^(-x^2) with ε=-, σ = /8 / E- /6 /5 2.88E-. /32 /5 2.88E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 / E- /6 / E-. /32 / E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 / E- /6 / E-. /32 / E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 / E- /6 / E-. /32 / E-. for ((t)^2)*e^(-x^2) with ε=, σ = /8 / E- /6 / E-. /32 / E-. for ((t)^2)*e^(-x^2) with ε=, σ = /8 / E- /6 /5 2.86E-. /32 / E-. for ((t)^2)*e^(-x^2) with ε=, σ = /8 / E- /6 / E-. /32 / E-. for ((t)^2)*e^(-x^2) with ε=, σ =. /8 /5 3.79E-5 /6 /5.74E /32 /5.7688E for ((t)^2)*e^(-x^2) with ε=, σ =. /8 / E-5 /6 /5.38E /32 /5.5982E for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 3.82E-5 2

28 h dt max error (L2) ratio (fixed Ntm) /6 /5.823E /32 /5.7896E for ((t)^2)*e^(-x^2) with ε=, σ = /8 / E-5 /6 / E /32 /5 4.24E for ((t)^2)*e^(-x^2) with ε=, σ = /8 /5 3.36E-5 /6 / E /32 / E for ((t)^2)*e^(-x^2) with ε=, σ = /8 / E-5 /6 / E /32 /5 5.9E DG IN TIME AND SPACE SCHEME As for the BE method, we subdivide the time interval [, T]: where t n = n t for some time step t >. N T [, T] = [t n, t n+ ] On each subinterval (t n, t n+ ), the solution is derived by integrating and adding jump terms to n= u (x, t)v(x, t)dx + aa t εε (u, v) = L(t, v) Note that aa εε and L(v) are already discretized in space with the DG method in space. Thus, we have: 22

29 t n + u (x, t)v(x, t)dxdt + aa t n t εε (u, v)dt + u(x, t + n ) v(x, t + n )dx t n t n + t n + = L(t, v)dt + u(x, t n ) v(x, t + n )dx (5) t n We denote by P (n) (x, t) the approximation of u(x, t) on the interval (t n, t n+ ). We solve the following equation for P (n) (x, t): t n + P(n) t n + (x, t)v(x, t)dxdt + aa t n t εε (P (n), v)dt + P (n) (x, t + n ) v(x, t + n )dx t n t n + And by convention, P ( ) (x, t ) = u (x). = L(t, v)dt + P (n ) (x, t n ) v(x, t + n )dx (6) t n r In the above formula, we have v(x, t) = i= t i v i (x), with v i (x) usual polynomial of degree k in space and r is the degree of polynomials over time. Our choice of basis functions for r=4 as an example are: Now, with r=, we write:, t t n t, (t t n) 2 t 2, (t t n) 3 t 3, (t t n) 4 t 4 P (n) (x, t n ) = P (n) (x) + t t n t Therefore, (6) becomes P 2 (n) (x), for P (n), P 2 (n) in PP kk P (n) t = t P (n) 2 (x) t n t n + t P 2 (n) (x) v(x, t)dxdt t n + + aa εε P (n) (x) + t t n P (n) t 2 (x), v(x, t) dt + P (n) (x, t + n ) v(x, t + n )dx t n t n + We evaluate P (n) (x, t n + ): = L(t, v)dt + P (n ) (x, t n ) v(x, t + n )dx (7) t n 23

30 P (n) (x, t + n ) = P (x) + t n + t n P t 2 (x) = P (x), ie; the calculations only involve the space basis functions. First, we consider v(x, t) = v (x) for any v (x) in PP kk. Thus, (7) becomes Next, with v = t t n t 2 P 2 P (n) 2 (x) v (x)dx + aa εε (P, v ) t + aa εε (P 2, v ) t + P(n) (x, t + 2 n )v (x)dx (n) (x) t n + = L(t, v)dt + P (n ) (x, t n ) v (x)dx t n v (x), (7) becomes v (x)dx + t 2 aa εε P (n) (x), v + t 3 aa εε P (n) 2 (x), v + P (n) = t (t t n)l(t, v)dt + P (n ) (x, t n ) v (x)dx t n t n + Concerning the error in this scheme, as before, one can prove that e h l (L 2 ) = o(h k+ + t 2 ) for εε = and e h l (L 2 ) = o(h k + t 2 ) for εε = oooo. (x, t n + )v (x)dx The following tables contain experimental results obtained by our method. We test the method with the two exact solutions as before: u (x, t) = sin(t) + e x2 aaaaaa u 2 (x, t) = t 2 e x2 We first describe experiments with u. In polynomial degree 2, we first investigate the rate of convergence of the solution with εε =. We choose a time step much larger than we did in the Backward Euler scheme, t = /24. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t 2 h k+ 24

31 We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6, we see good accuracy with maximum error in the neighborhood of 5 and 4, respectively. However, the proper error ratios (in this case 2 k+ = 8) are not achieved until σ =. With mesh size /32 a good accuracy is only achieved at σ =, and convergence is sub-optimal until σ =. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 5 for mesh sizes /8 and /6, and 6 for mesh size /32. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. By (4), optimal convergence requires error ratios to be equal to 4. The error ratios start out between.85 and 3.65 with σ =.,., and for all mesh sizes. With σ = the error ratio equals 4.98 between mesh sizes /8 and /6 (better than optimal convergence), and equals 4.9 between mesh sizes /6 and /32 (better than optimal convergence). Finally, better than optimal convergence with a ratios between 6.86 and 8.3 are obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 2, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error between 5 and 6. With σ = and, the maximum error for mesh size /8 is around 5 for mesh size /6 around 6, and for mesh size /32 around 7. Optimal convergence requires error ratio to equal 4. The error ratios start out around 4 with σ =. for mesh sizes /8 and /6, and around 5 for mesh size /32. With σ = the error ratio equals 6.94 between mesh sizes /8 and /6, and equals 7.67 between mesh sizes /6 and /32. Finally, a ratio of around 8 is obtained for all mesh sizes with σ = and. Table 5: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 2. h dt Max Error (L2) Error Ratio (fixed Ntm) With poly. Deg=2 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 / E-5 25

32 h dt Max Error (L2) Error Ratio (fixed Ntm) /6 / E-4.25 /32 /24 9.3E+5. for sin(t)+e^(-x^2) with ε =-, σσ =. /8 / E-5 /6 / E-5.8 /32 / E+5. for sin(t)+e^(-x^2) with ε =-, σσ = /8 / E-5 /6 /24 2.3E-4.39 /32 / E+5. for sin(t)+e^(-x^2) with ε =-, σσ = /8 /24.273E-5 /6 / E-5.63 /32 / E-. for sin(t)+e^(-x^2) with ε =-, σσ = /8 / E-5 /6 / E /32 / E-5.2 for sin(t)+e^(-x^2) with ε =-, σσ = /8 / E-5 /6 / E /32 /24 6.3E for sin(t)+e^(-x^2) with ε =, σσ =. /8 / E-5 /6 /24.32E /32 / E-6.69 for sin(t)+e^(-x^2) with ε =, σσ =. /8 / E-5 /6 / E /32 / E for sin(t)+e^(-x^2) with ε =, σσ = /8 / E-5 /6 /24.732E /32 / E

33 h dt Max Error (L2) Error Ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε =, σσ = /8 /24.992E-5 /6 /24 4.2E /32 / E for sin(t)+e^(-x^2) with ε =, σσ = /8 / E-5 /6 / E /32 / E for sin(t)+e^(-x^2) with ε =, σσ = /8 / E-5 /6 /24 3.7E /32 / E for sin(t)+e^(-x^2) with ε =, σσ =. /8 /24 4.2E-5 /6 /24.23E /32 /24 2.E-6 5. for sin(t)+e^(-x^2) with ε =, σσ =. /8 / E-5 /6 /24.38E /32 / E for sin(t)+e^(-x^2) with ε =, σσ = /8 / E-5 /6 / E /32 /24.23E for sin(t)+e^(-x^2) with ε =, σσ = /8 / E-5 /6 / E /32 /24 4.3E for sin(t)+e^(-x^2) with ε =, σσ = /8 / E-5 /6 /24 4.2E /32 /24 5.9E for sin(t)+e^(-x^2) with ε =, σσ = /8 / E-5 /6 / E-6 8. /32 /24 5.2E

34 In polynomial degree 3, we again first investigate the rate of convergence of the solution with εε =. We again choose a time step much larger than we did in the Backward Euler scheme, t = /24. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t 2 h k+ We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh size /8 we immediately see good accuracy with maximum error in the neighborhood of 4. However, the proper error ratios (in this case 2 k+ = 6) are not achieved until σ =. With mesh size /32 a good accuracy and convergence are only achieved at σ =. Since convergence was established only with a high σ, we also tested for convergence with the additional value of σ =. This did not significantly increase accuracy or error ratios. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 6 and 7 for mesh sizes /8 and /6, and 7 for mesh size /32 with σ =.. With σ = and, the maximum error for mesh size /8 is around 7, for mesh size /6 around 8, and for mesh size /32 around 9. By (4), optimal convergence requires error ratios to be equal to 8. The error ratios start out between 2.2 and 3.64 with σ =.,., and for all mesh sizes. With σ = the error ratio equals 4.58 between mesh sizes /8 and /6 (sub optimal convergence), and equals 6.83 between mesh sizes /6 and /32 (sub optimal convergence). Finally, better than optimal convergence with ratios between.7 and 6.39 are obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 3, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error between 7 and 9. With σ = and, the maximum error for mesh size /8 is around 7 for mesh size /6 around 8, and for mesh size /32 around 9. Optimal convergence requires error ratios to equal 4. The error ratios start out around with σ =. for mesh sizes /8 and /6, and around 5.5 for mesh size /32. With σ = the error ratio equals 2.35 between mesh sizes /8 and /6, and equals 6.4 between mesh sizes /6 and /32. Finally, a ratio of between 4 and 5.5 is obtained for all mesh sizes with σ = and. 28

35 Table 6: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 3. h dt Max Error (L2) Error Ratio (fixed Ntm) With poly. Deg=3 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /24 2.3E-4 /6 / E+5. for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /24.E-4 /6 / E+5. /32 / E+5. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24.57E-4 /6 / E+5. /32 /24.78E+3. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24 2.3E- /6 / E-.79 /32 / E+9. for sin(t)+e^(-x^2) with ε=-, σσ = /8 / E-7 /6 /24 6.2E-6.7 /32 /24 2.E+. for sin(t)+e^(-x^2) with ε=-, σσ = /8 / E-7 /6 / E /32 /24.99E for sin(t)+e^(-x^2) with ε=-, σσ = /8 /24 5.2E-7 /6 / E /32 /24.74E

36 h dt Max Error (L2) Error Ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε=, σσ =. /8 /24 2.7E-6 /6 / E for sin(t)+e^(-x^2) with ε=, σσ =. /8 /24 2.9E-6 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 /24.993E-6 /6 /24 9.2E /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 /24.47E-6 /6 /24 3.2E /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 / E-7 /6 / E-8.96 /32 / E-9.7 for sin(t)+e^(-x^2) with ε=, σσ = /8 /24 5.3E-7 /6 / E /32 /24 2.2E for sin(t)+e^(-x^2) with ε=, σσ =. /8 / E-7 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σσ =. /8 / E-7 /6 / E-8 3. /32 /24 4.9E for sin(t)+e^(-x^2) with ε=, σσ = /8 / E-7 /6 / E /32 / E

37 h dt Max Error (L2) Error Ratio (fixed Ntm) for sin(t)+e^(-x^2) with ε=, σσ = /8 / E-7 /6 /24 6.2E /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 / E-7 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 / E-7 /6 / E /32 / E In polynomial degree 4, we again first investigate the rate of convergence of the solution with εε =. We again choose a time step much larger than we did in the Backward Euler scheme, t = /6. In order to test our results against those predicted by theory, we need the following inequality to hold in our experiments: t 2 h k+ We begin our experiments with a small penalty parameter, σ =., and increase it until we achieve the error ratios predicted by theory. We increase σ by an order of magnitude with each experiment, testing for σ =.,.,,,, and. With mesh sizes /8 and /6 we immediately see good accuracy with maximum error in the neighborhood of 9. However, the proper error ratios (in this case 2 k+ = 32) are not achieved until σ =. With mesh size /32 a good accuracy is achieved with σ = and convergence is only achieved at σ =3. Since convergence was established only with a high σ, we also tested for convergence with the additional values of σ = 5 and. This did not significantly increase accuracy or error ratios. Next, we test the rates of convergence for εε =, and as before we test for σ =.,.,,,, and. We see that good accuracy is achieved immediately, with maximum error in the neighborhood of 9 and for mesh sizes /8 and /6, and for mesh size /32 with σ =.. With σ = and, the maximum error for mesh size /8 is again around 9, for mesh size /6 around, and for mesh size /32 around 2. By (4), optimal convergence requires error ratios to be equal to 6. The error ratios vacillate between around 7 and 9.5 with σ =.,., and for mesh sizes /8 and /6. For the 3

38 same σ values, the error ratios for mesh size /32 are between 6.7 and 7.3. With σ = the error ratio equals 3.8 between mesh sizes /8 and /6 (better than optimal convergence), and equals between mesh sizes /6 and /32 (better than optimal convergence). Finally, better than optimal convergence with ratios 32 are obtained for all mesh sizes with σ = and. The last experiment we conduct with solution u with basis functions of polynomial degree 4, is for εε =. Good accuracy for all mesh sizes is immediate, with maximum error between 9 and. With σ = and, the maximum error for mesh size /8 is around 9 for mesh size /6 around, and for mesh size /32 around 2. Optimal convergence requires error ratios to equal 6. The error ratios start out around 23 with σ =. for mesh sizes /8 and /6, and around.84 for mesh size /32. With σ = the error ratio equals between mesh sizes /8 and /6, and equals.95 between mesh sizes /6 and /32. Finally, a ratio of between 3 and 32 is obtained for all mesh sizes with σ = and. Figure 7: Experiments with u (x, t) = sin(t) + e x2 and polynomial degree 4. h dt Max Error (L2) Error Ratio (fixed Ntm) With poly. Deg=4 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /6 8.23E-9 /6 /6 8.3E-.2 for sin(t)+e^(-x^2) with ε =-, σσ =. /8 /6 8.7E-9 /6 /6 8.E-. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 8.2E-9 /6 /6 8.27E-.2 /32 / E-.4 for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 7.37E-9 /6 /6 3.23E /32 /6 5.54E- 6. for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 9.4E-9 32

39 h dt Max Error (L2) Error Ratio (fixed Ntm) /6 / E- 3.5 /32 /6 2.4E for sin(t)+e^(-x^2) with ε=-, σσ =3 /8 / E-9 /6 /6 3.2E- 3.9 /32 / E for sin(t)+e^(-x^2) with ε=-, σσ =5 /8 / E-9 /6 / E /32 / E for sin(t)+e^(-x^2) with ε=-, σσ = /8 /6 9.87E-9 /6 / E- 33. /32 /6 9.87E for sin(t)+e^(-x^2) with ε=, σσ =. /8 /6 8.2E-9 /6 /6 4.92E /32 /6 6.57E- 6.7 for sin(t)+e^(-x^2) with ε=, σσ =. /8 / E-9 /6 / E /32 /6 6.2E for sin(t)+e^(-x^2) with ε=, σσ = /8 / E-9 /6 /6 4.27E /32 /6 6.75E for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 6.87E-9 /6 /6 2.25E- 3.8 /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.92E-9 /6 /6 2.9E /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.79E-9 /6 /6 3.43E

40 h dt Max Error (L2) Error Ratio (fixed Ntm) /32 / E for sin(t)+e^(-x^2) with ε=, σσ =. /8 /6 7.45E-9 /6 /6 3.25E /32 /6 2.78E-.84 for sin(t)+e^(-x^2) with ε=, σσ =. /8 / E-9 /6 /6 4.27E- 7.5 /32 /6 2.57E for sin(t)+e^(-x^2) with ε=, σσ = /8 / E-9 /6 /6 3.7E /32 /6 2.57E-.95 for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 6.87E-9 /6 /6 2.47E /32 /6.37E for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.27E-9 /6 /6 3.2E /32 / E for sin(t)+e^(-x^2) with ε=, σσ = /8 /6 9.38E-9 /6 /6 2.97E- 32. /32 /6 9.78E CONCLUSIONS We have implemented high order DG methods in space, up to fourth order polynomial approximations. The two methods used for approximating solutions are the BE in time with DG in space, and DG in time and space methods. In the BE scheme, the time discretization was accomplished by a finite difference approximation of the time derivative. This is a first order, implicit scheme, which means 34

41 that there are no restrictions on the time step needed for the scheme to be stable. A restriction was imposed on the time step, however, in order to maintain the high order convergence for the space portion of the scheme. Similarly, DG in time and space is a second order, implicit in time scheme, with similar but more relaxed restrictions on time step to maintain high order convergence for the space portion of the scheme. Mainly because of these more relaxed requirements, the time step used in the DG in time and space scheme is much larger than in the BE in time method. This makes the DG in time and space method much more computationally efficient from this perspective. However, the fact that more calculations need to be performed to implement the DG in time and space scheme reduces the computational effectiveness of this scheme, at least in the D case. Our recommendation in the D case is for the implementation of the DG in time and space scheme for the advantage of the larger time steps in the scheme, and the advantages the scheme would yield in higher dimensional problems. The numerical rates obtained, confirmed the theoretical convergence rates. REFERENCES [] B. Riviere; M.F. Wheeler and V. Girault. "Improved energy estimates for interior penalty, constrained and discontinuous Galerkin methods for elliptic problems. Part I". Computational Geosciences, volume 8, p , April 999. [2] B. Riviere. Discontinuous Galerkin Methods for Solving Elliptic and Parabolic Equations. Book to be published by SIAM. [3] M.F. Wheeler. An elliptic collocation-finite element method with interior penalties. SIAM Journal on Numerical Analysis, volume 5, p. 52-6, February 978. [4] Clint Dawson, Shuyu Sun, Mary F. Wheeler. Compatible algorithms for coupled flow and transport. Computer Methods in Applied Mechanics and Engineering, volume 93, p , June

Application of an Interval Backward Finite Difference Method for Solving the One-Dimensional Heat Conduction Problem

Application of an Interval Backward Finite Difference Method for Solving the One-Dimensional Heat Conduction Problem Application of an Interval Backward Finite Difference Method for Solving the One-Dimensional Heat Conduction Problem Malgorzata A. Jankowska 1, Andrzej Marciniak 2 and Tomasz Hoffmann 2 1 Poznan University

More information

PDE Project Course 1. Adaptive finite element methods

PDE Project Course 1. Adaptive finite element methods PDE Project Course 1. Adaptive finite element methods Anders Logg logg@math.chalmers.se Department of Computational Mathematics PDE Project Course 03/04 p. 1 Lecture plan Introduction to FEM FEM for Poisson

More information

Outline. 1 Introduction. 2 Algorithms. 3 Examples. Algorithm 1 General coordinate minimization framework. 1: Choose x 0 R n and set k 0.

Outline. 1 Introduction. 2 Algorithms. 3 Examples. Algorithm 1 General coordinate minimization framework. 1: Choose x 0 R n and set k 0. Outline Coordinate Minimization Daniel P. Robinson Department of Applied Mathematics and Statistics Johns Hopkins University November 27, 208 Introduction 2 Algorithms Cyclic order with exact minimization

More information

Lecture 4. Finite difference and finite element methods

Lecture 4. Finite difference and finite element methods Finite difference and finite element methods Lecture 4 Outline Black-Scholes equation From expectation to PDE Goal: compute the value of European option with payoff g which is the conditional expectation

More information

Chapter 5 Finite Difference Methods. Math6911 W07, HM Zhu

Chapter 5 Finite Difference Methods. Math6911 W07, HM Zhu Chapter 5 Finite Difference Methods Math69 W07, HM Zhu References. Chapters 5 and 9, Brandimarte. Section 7.8, Hull 3. Chapter 7, Numerical analysis, Burden and Faires Outline Finite difference (FD) approximation

More information

Introduction to Numerical PDEs

Introduction to Numerical PDEs Introduction to Numerical PDEs Varun Shankar February 16, 2016 1 Introduction In this chapter, we will introduce a general classification scheme for linear second-order PDEs, and discuss when they have

More information

Financial Risk Management

Financial Risk Management Financial Risk Management Professor: Thierry Roncalli Evry University Assistant: Enareta Kurtbegu Evry University Tutorial exercices #4 1 Correlation and copulas 1. The bivariate Gaussian copula is given

More information

King s College London

King s College London King s College London University Of London This paper is part of an examination of the College counting towards the award of a degree. Examinations are governed by the College Regulations under the authority

More information

Numerical schemes for SDEs

Numerical schemes for SDEs Lecture 5 Numerical schemes for SDEs Lecture Notes by Jan Palczewski Computational Finance p. 1 A Stochastic Differential Equation (SDE) is an object of the following type dx t = a(t,x t )dt + b(t,x t

More information

Pricing Algorithms for financial derivatives on baskets modeled by Lévy copulas

Pricing Algorithms for financial derivatives on baskets modeled by Lévy copulas Pricing Algorithms for financial derivatives on baskets modeled by Lévy copulas Christoph Winter, ETH Zurich, Seminar for Applied Mathematics École Polytechnique, Paris, September 6 8, 26 Introduction

More information

A distributed Laplace transform algorithm for European options

A distributed Laplace transform algorithm for European options A distributed Laplace transform algorithm for European options 1 1 A. J. Davies, M. E. Honnor, C.-H. Lai, A. K. Parrott & S. Rout 1 Department of Physics, Astronomy and Mathematics, University of Hertfordshire,

More information

Finite Element Method

Finite Element Method In Finite Difference Methods: the solution domain is divided into a grid of discrete points or nodes the PDE is then written for each node and its derivatives replaced by finite-divided differences In

More information

A Study on Numerical Solution of Black-Scholes Model

A Study on Numerical Solution of Black-Scholes Model Journal of Mathematical Finance, 8, 8, 37-38 http://www.scirp.org/journal/jmf ISSN Online: 6-44 ISSN Print: 6-434 A Study on Numerical Solution of Black-Scholes Model Md. Nurul Anwar,*, Laek Sazzad Andallah

More information

EC316a: Advanced Scientific Computation, Fall Discrete time, continuous state dynamic models: solution methods

EC316a: Advanced Scientific Computation, Fall Discrete time, continuous state dynamic models: solution methods EC316a: Advanced Scientific Computation, Fall 2003 Notes Section 4 Discrete time, continuous state dynamic models: solution methods We consider now solution methods for discrete time models in which decisions

More information

MAFS Computational Methods for Pricing Structured Products

MAFS Computational Methods for Pricing Structured Products MAFS550 - Computational Methods for Pricing Structured Products Solution to Homework Two Course instructor: Prof YK Kwok 1 Expand f(x 0 ) and f(x 0 x) at x 0 into Taylor series, where f(x 0 ) = f(x 0 )

More information

A local RBF method based on a finite collocation approach

A local RBF method based on a finite collocation approach Boundary Elements and Other Mesh Reduction Methods XXXVIII 73 A local RBF method based on a finite collocation approach D. Stevens & H. Power Department of Mechanical Materials and Manufacturing Engineering,

More information

Using radial basis functions for option pricing

Using radial basis functions for option pricing Using radial basis functions for option pricing Elisabeth Larsson Division of Scientific Computing Department of Information Technology Uppsala University Actuarial Mathematics Workshop, March 19, 2013,

More information

FINITE DIFFERENCE METHODS

FINITE DIFFERENCE METHODS FINITE DIFFERENCE METHODS School of Mathematics 2013 OUTLINE Review 1 REVIEW Last time Today s Lecture OUTLINE Review 1 REVIEW Last time Today s Lecture 2 DISCRETISING THE PROBLEM Finite-difference approximations

More information

Financial Market Models. Lecture 1. One-period model of financial markets & hedging problems. Imperial College Business School

Financial Market Models. Lecture 1. One-period model of financial markets & hedging problems. Imperial College Business School Financial Market Models Lecture One-period model of financial markets & hedging problems One-period model of financial markets a 4 2a 3 3a 3 a 3 -a 4 2 Aims of section Introduce one-period model with finite

More information

25 Increasing and Decreasing Functions

25 Increasing and Decreasing Functions - 25 Increasing and Decreasing Functions It is useful in mathematics to define whether a function is increasing or decreasing. In this section we will use the differential of a function to determine this

More information

King s College London

King s College London King s College London University Of London This paper is part of an examination of the College counting towards the award of a degree. Examinations are governed by the College Regulations under the authority

More information

Infinite Reload Options: Pricing and Analysis

Infinite Reload Options: Pricing and Analysis Infinite Reload Options: Pricing and Analysis A. C. Bélanger P. A. Forsyth April 27, 2006 Abstract Infinite reload options allow the user to exercise his reload right as often as he chooses during the

More information

Sparse Wavelet Methods for Option Pricing under Lévy Stochastic Volatility models

Sparse Wavelet Methods for Option Pricing under Lévy Stochastic Volatility models Sparse Wavelet Methods for Option Pricing under Lévy Stochastic Volatility models Norbert Hilber Seminar of Applied Mathematics ETH Zürich Workshop on Financial Modeling with Jump Processes p. 1/18 Outline

More information

d n U i dx n dx n δ n U i

d n U i dx n dx n δ n U i Last time Taylor s series on equally spaced nodes Forward difference d n U i d n n U i h n + 0 h Backward difference d n U i d n n U i h n + 0 h Centered difference d n U i d n δ n U i or 2 h n + 0 h2

More information

Research Article Exponential Time Integration and Second-Order Difference Scheme for a Generalized Black-Scholes Equation

Research Article Exponential Time Integration and Second-Order Difference Scheme for a Generalized Black-Scholes Equation Applied Mathematics Volume 1, Article ID 796814, 1 pages doi:11155/1/796814 Research Article Exponential Time Integration and Second-Order Difference Scheme for a Generalized Black-Scholes Equation Zhongdi

More information

Project 1: Double Pendulum

Project 1: Double Pendulum Final Projects Introduction to Numerical Analysis II http://www.math.ucsb.edu/ atzberg/winter2009numericalanalysis/index.html Professor: Paul J. Atzberger Due: Friday, March 20th Turn in to TA s Mailbox:

More information

Computational Finance Finite Difference Methods

Computational Finance Finite Difference Methods Explicit finite difference method Computational Finance Finite Difference Methods School of Mathematics 2018 Today s Lecture We now introduce the final numerical scheme which is related to the PDE solution.

More information

Interpolation. 1 What is interpolation? 2 Why are we interested in this?

Interpolation. 1 What is interpolation? 2 Why are we interested in this? Interpolation 1 What is interpolation? For a certain function f (x we know only the values y 1 = f (x 1,,y n = f (x n For a point x different from x 1,,x n we would then like to approximate f ( x using

More information

Short-time-to-expiry expansion for a digital European put option under the CEV model. November 1, 2017

Short-time-to-expiry expansion for a digital European put option under the CEV model. November 1, 2017 Short-time-to-expiry expansion for a digital European put option under the CEV model November 1, 2017 Abstract In this paper I present a short-time-to-expiry asymptotic series expansion for a digital European

More information

F A S C I C U L I M A T H E M A T I C I

F A S C I C U L I M A T H E M A T I C I F A S C I C U L I M A T H E M A T I C I Nr 38 27 Piotr P luciennik A MODIFIED CORRADO-MILLER IMPLIED VOLATILITY ESTIMATOR Abstract. The implied volatility, i.e. volatility calculated on the basis of option

More information

1 Explicit Euler Scheme (or Euler Forward Scheme )

1 Explicit Euler Scheme (or Euler Forward Scheme ) Numerical methods for PDE in Finance - M2MO - Paris Diderot American options January 2018 Files: https://ljll.math.upmc.fr/bokanowski/enseignement/2017/m2mo/m2mo.html We look for a numerical approximation

More information

Pricing Dynamic Solvency Insurance and Investment Fund Protection

Pricing Dynamic Solvency Insurance and Investment Fund Protection Pricing Dynamic Solvency Insurance and Investment Fund Protection Hans U. Gerber and Gérard Pafumi Switzerland Abstract In the first part of the paper the surplus of a company is modelled by a Wiener process.

More information

32.4. Parabolic PDEs. Introduction. Prerequisites. Learning Outcomes

32.4. Parabolic PDEs. Introduction. Prerequisites. Learning Outcomes Parabolic PDEs 32.4 Introduction Second-order partial differential equations (PDEs) may be classified as parabolic, hyperbolic or elliptic. Parabolic and hyperbolic PDEs often model time dependent processes

More information

Exact shape-reconstruction by one-step linearization in EIT

Exact shape-reconstruction by one-step linearization in EIT Exact shape-reconstruction by one-step linearization in EIT Bastian von Harrach harrach@ma.tum.de Department of Mathematics - M1, Technische Universität München, Germany Joint work with Jin Keun Seo, Yonsei

More information

Backpropagation. Deep Learning Theory and Applications. Kevin Moon Guy Wolf

Backpropagation. Deep Learning Theory and Applications. Kevin Moon Guy Wolf Deep Learning Theory and Applications Backpropagation Kevin Moon (kevin.moon@yale.edu) Guy Wolf (guy.wolf@yale.edu) CPSC/AMTH 663 Calculating the gradients We showed how neural networks can learn weights

More information

EFFICIENT MONTE CARLO ALGORITHM FOR PRICING BARRIER OPTIONS

EFFICIENT MONTE CARLO ALGORITHM FOR PRICING BARRIER OPTIONS Commun. Korean Math. Soc. 23 (2008), No. 2, pp. 285 294 EFFICIENT MONTE CARLO ALGORITHM FOR PRICING BARRIER OPTIONS Kyoung-Sook Moon Reprinted from the Communications of the Korean Mathematical Society

More information

PDE Methods for the Maximum Drawdown

PDE Methods for the Maximum Drawdown PDE Methods for the Maximum Drawdown Libor Pospisil, Jan Vecer Columbia University, Department of Statistics, New York, NY 127, USA April 1, 28 Abstract Maximum drawdown is a risk measure that plays an

More information

NUMERICAL METHODS OF PARTIAL INTEGRO-DIFFERENTIAL EQUATIONS FOR OPTION PRICE

NUMERICAL METHODS OF PARTIAL INTEGRO-DIFFERENTIAL EQUATIONS FOR OPTION PRICE Trends in Mathematics - New Series Information Center for Mathematical Sciences Volume 13, Number 1, 011, pages 1 5 NUMERICAL METHODS OF PARTIAL INTEGRO-DIFFERENTIAL EQUATIONS FOR OPTION PRICE YONGHOON

More information

A model reduction approach to numerical inversion for parabolic partial differential equations

A model reduction approach to numerical inversion for parabolic partial differential equations A model reduction approach to numerical inversion for parabolic partial differential equations Liliana Borcea Alexander V. Mamonov 2, Vladimir Druskin 3, Mikhail Zaslavsky 3 University of Michigan, Ann

More information

Comparative Study between Linear and Graphical Methods in Solving Optimization Problems

Comparative Study between Linear and Graphical Methods in Solving Optimization Problems Comparative Study between Linear and Graphical Methods in Solving Optimization Problems Mona M Abd El-Kareem Abstract The main target of this paper is to establish a comparative study between the performance

More information

Of Rocket Science, Finance, and Nuclear Data: REWIND (Ranking Experiments by Weighting for Improved Nuclear Data)

Of Rocket Science, Finance, and Nuclear Data: REWIND (Ranking Experiments by Weighting for Improved Nuclear Data) Of Rocket Science, Finance, and Nuclear Data: REWIND (Ranking Experiments by Weighting for Improved Nuclear Data) G. Palmiotti Idaho National Laboratory May 20, 2015 December 2012 SG39, Paris, France Introduction

More information

The Intermediate Value Theorem states that if a function g is continuous, then for any number M satisfying. g(x 1 ) M g(x 2 )

The Intermediate Value Theorem states that if a function g is continuous, then for any number M satisfying. g(x 1 ) M g(x 2 ) APPM/MATH 450 Problem Set 5 s This assignment is due by 4pm on Friday, October 25th. You may either turn it in to me in class or in the box outside my office door (ECOT 235). Minimal credit will be given

More information

Pricing Implied Volatility

Pricing Implied Volatility Pricing Implied Volatility Expected future volatility plays a central role in finance theory. Consequently, accurate estimation of this parameter is crucial to meaningful financial decision-making. Researchers

More information

hp-version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes

hp-version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes hp-version Discontinuous Galerkin Methods on Polygonal and Polyhedral Meshes Andrea Cangiani Department of Mathematics University of Leicester Joint work with: E. Georgoulis & P. Dong (Leicester), P. Houston

More information

Galerkin Least Square FEM for the European option price with CEV model

Galerkin Least Square FEM for the European option price with CEV model Galerkin Least Square FEM for the European option price with CEV model A Major Qualifying Project Submitted to the Faculty of Worcester Polytechnic Institute In partial fulfillment of requirements for

More information

A THREE-FACTOR CONVERGENCE MODEL OF INTEREST RATES

A THREE-FACTOR CONVERGENCE MODEL OF INTEREST RATES Proceedings of ALGORITMY 01 pp. 95 104 A THREE-FACTOR CONVERGENCE MODEL OF INTEREST RATES BEÁTA STEHLÍKOVÁ AND ZUZANA ZÍKOVÁ Abstract. A convergence model of interest rates explains the evolution of the

More information

AD in Monte Carlo for finance

AD in Monte Carlo for finance AD in Monte Carlo for finance Mike Giles giles@comlab.ox.ac.uk Oxford University Computing Laboratory AD & Monte Carlo p. 1/30 Overview overview of computational finance stochastic o.d.e. s Monte Carlo

More information

Solution 2.1. We determine the accumulation function/factor and use it as follows.

Solution 2.1. We determine the accumulation function/factor and use it as follows. Applied solutions The time value of money: Chapter questions Solution.. We determine the accumulation function/factor and use it as follows. a. The accumulation factor is A(t) =. t. b. The accumulation

More information

Chapter 3: Black-Scholes Equation and Its Numerical Evaluation

Chapter 3: Black-Scholes Equation and Its Numerical Evaluation Chapter 3: Black-Scholes Equation and Its Numerical Evaluation 3.1 Itô Integral 3.1.1 Convergence in the Mean and Stieltjes Integral Definition 3.1 (Convergence in the Mean) A sequence {X n } n ln of random

More information

Integrating Feynman Kac Eq

Integrating Feynman Kac Eq Integrating Feynman Kac Eq uations Using Hermite Qu intic Finite Elements A White Paper by Visual Numerics, Inc. November 008 Visual Numerics, Inc. 500 Wilcrest Drive, Suite 00 Houston, TX 7704 USA www.vni.com

More information

"Pricing Exotic Options using Strong Convergence Properties

Pricing Exotic Options using Strong Convergence Properties Fourth Oxford / Princeton Workshop on Financial Mathematics "Pricing Exotic Options using Strong Convergence Properties Klaus E. Schmitz Abe schmitz@maths.ox.ac.uk www.maths.ox.ac.uk/~schmitz Prof. Mike

More information

Advanced Numerical Methods for Financial Problems

Advanced Numerical Methods for Financial Problems Advanced Numerical Methods for Financial Problems Pricing of Derivatives Krasimir Milanov krasimir.milanov@finanalytica.com Department of Research and Development FinAnalytica Ltd. Seminar: Signal Analysis

More information

Numerical Evaluation of Multivariate Contingent Claims

Numerical Evaluation of Multivariate Contingent Claims Numerical Evaluation of Multivariate Contingent Claims Phelim P. Boyle University of California, Berkeley and University of Waterloo Jeremy Evnine Wells Fargo Investment Advisers Stephen Gibbs University

More information

Unit 2: Modeling in the Frequency Domain Part 2: The Laplace Transform

Unit 2: Modeling in the Frequency Domain Part 2: The Laplace Transform The Laplace Transform Unit 2: Modeling in the Frequency Domain Part 2: The Laplace Transform Engineering 5821: Control Systems I Faculty of Engineering & Applied Science Memorial University of Newfoundland

More information

Pricing American Options Using a Space-time Adaptive Finite Difference Method

Pricing American Options Using a Space-time Adaptive Finite Difference Method Pricing American Options Using a Space-time Adaptive Finite Difference Method Jonas Persson Abstract American options are priced numerically using a space- and timeadaptive finite difference method. The

More information

A High-order Front-tracking Finite Difference Method for Pricing American Options under Jump-Diffusion Models

A High-order Front-tracking Finite Difference Method for Pricing American Options under Jump-Diffusion Models A High-order Front-tracking Finite Difference Method for Pricing American Options under Jump-Diffusion Models Jari Toivanen Abstract A free boundary formulation is considered for the price of American

More information

THE OPTIMAL ASSET ALLOCATION PROBLEMFOR AN INVESTOR THROUGH UTILITY MAXIMIZATION

THE OPTIMAL ASSET ALLOCATION PROBLEMFOR AN INVESTOR THROUGH UTILITY MAXIMIZATION THE OPTIMAL ASSET ALLOCATION PROBLEMFOR AN INVESTOR THROUGH UTILITY MAXIMIZATION SILAS A. IHEDIOHA 1, BRIGHT O. OSU 2 1 Department of Mathematics, Plateau State University, Bokkos, P. M. B. 2012, Jos,

More information

arxiv: v1 [q-fin.pm] 13 Mar 2014

arxiv: v1 [q-fin.pm] 13 Mar 2014 MERTON PORTFOLIO PROBLEM WITH ONE INDIVISIBLE ASSET JAKUB TRYBU LA arxiv:143.3223v1 [q-fin.pm] 13 Mar 214 Abstract. In this paper we consider a modification of the classical Merton portfolio optimization

More information

A model reduction approach to numerical inversion for parabolic partial differential equations

A model reduction approach to numerical inversion for parabolic partial differential equations A model reduction approach to numerical inversion for parabolic partial differential equations Liliana Borcea Alexander V. Mamonov 2, Vladimir Druskin 2, Mikhail Zaslavsky 2 University of Michigan, Ann

More information

Exact shape-reconstruction by one-step linearization in EIT

Exact shape-reconstruction by one-step linearization in EIT Exact shape-reconstruction by one-step linearization in EIT Bastian von Harrach harrach@math.uni-mainz.de Zentrum Mathematik, M1, Technische Universität München, Germany Joint work with Jin Keun Seo, Yonsei

More information

Evaluation of Asian option by using RBF approximation

Evaluation of Asian option by using RBF approximation Boundary Elements and Other Mesh Reduction Methods XXVIII 33 Evaluation of Asian option by using RBF approximation E. Kita, Y. Goto, F. Zhai & K. Shen Graduate School of Information Sciences, Nagoya University,

More information

IEOR E4703: Monte-Carlo Simulation

IEOR E4703: Monte-Carlo Simulation IEOR E4703: Monte-Carlo Simulation Other Miscellaneous Topics and Applications of Monte-Carlo Martin Haugh Department of Industrial Engineering and Operations Research Columbia University Email: martin.b.haugh@gmail.com

More information

Random Variables and Probability Distributions

Random Variables and Probability Distributions Chapter 3 Random Variables and Probability Distributions Chapter Three Random Variables and Probability Distributions 3. Introduction An event is defined as the possible outcome of an experiment. In engineering

More information

Valuation of performance-dependent options in a Black- Scholes framework

Valuation of performance-dependent options in a Black- Scholes framework Valuation of performance-dependent options in a Black- Scholes framework Thomas Gerstner, Markus Holtz Institut für Numerische Simulation, Universität Bonn, Germany Ralf Korn Fachbereich Mathematik, TU

More information

Multiname and Multiscale Default Modeling

Multiname and Multiscale Default Modeling Multiname and Multiscale Default Modeling Jean-Pierre Fouque University of California Santa Barbara Joint work with R. Sircar (Princeton) and K. Sølna (UC Irvine) Special Semester on Stochastics with Emphasis

More information

Functional vs Banach space stochastic calculus & strong-viscosity solutions to semilinear parabolic path-dependent PDEs.

Functional vs Banach space stochastic calculus & strong-viscosity solutions to semilinear parabolic path-dependent PDEs. Functional vs Banach space stochastic calculus & strong-viscosity solutions to semilinear parabolic path-dependent PDEs Andrea Cosso LPMA, Université Paris Diderot joint work with Francesco Russo ENSTA,

More information

OPTIMAL PORTFOLIO CONTROL WITH TRADING STRATEGIES OF FINITE

OPTIMAL PORTFOLIO CONTROL WITH TRADING STRATEGIES OF FINITE Proceedings of the 44th IEEE Conference on Decision and Control, and the European Control Conference 005 Seville, Spain, December 1-15, 005 WeA11.6 OPTIMAL PORTFOLIO CONTROL WITH TRADING STRATEGIES OF

More information

arxiv: v1 [q-fin.cp] 1 Nov 2016

arxiv: v1 [q-fin.cp] 1 Nov 2016 Essentially high-order compact schemes with application to stochastic volatility models on non-uniform grids arxiv:1611.00316v1 [q-fin.cp] 1 Nov 016 Bertram Düring Christof Heuer November, 016 Abstract

More information

Optimal prepayment of Dutch mortgages*

Optimal prepayment of Dutch mortgages* 137 Statistica Neerlandica (2007) Vol. 61, nr. 1, pp. 137 155 Optimal prepayment of Dutch mortgages* Bart H. M. Kuijpers ABP Investments, P.O. Box 75753, NL-1118 ZX Schiphol, The Netherlands Peter C. Schotman

More information

The Pennsylvania State University. The Graduate School. Department of Industrial Engineering AMERICAN-ASIAN OPTION PRICING BASED ON MONTE CARLO

The Pennsylvania State University. The Graduate School. Department of Industrial Engineering AMERICAN-ASIAN OPTION PRICING BASED ON MONTE CARLO The Pennsylvania State University The Graduate School Department of Industrial Engineering AMERICAN-ASIAN OPTION PRICING BASED ON MONTE CARLO SIMULATION METHOD A Thesis in Industrial Engineering and Operations

More information

Chapter 7 One-Dimensional Search Methods

Chapter 7 One-Dimensional Search Methods Chapter 7 One-Dimensional Search Methods An Introduction to Optimization Spring, 2014 1 Wei-Ta Chu Golden Section Search! Determine the minimizer of a function over a closed interval, say. The only assumption

More information

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Final Exam

The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2017, Mr. Ruey S. Tsay. Solutions to Final Exam The University of Chicago, Booth School of Business Business 41202, Spring Quarter 2017, Mr. Ruey S. Tsay Solutions to Final Exam Problem A: (40 points) Answer briefly the following questions. 1. Describe

More information

Version A. Problem 1. Let X be the continuous random variable defined by the following pdf: 1 x/2 when 0 x 2, f(x) = 0 otherwise.

Version A. Problem 1. Let X be the continuous random variable defined by the following pdf: 1 x/2 when 0 x 2, f(x) = 0 otherwise. Math 224 Q Exam 3A Fall 217 Tues Dec 12 Version A Problem 1. Let X be the continuous random variable defined by the following pdf: { 1 x/2 when x 2, f(x) otherwise. (a) Compute the mean µ E[X]. E[X] x

More information

Trust Region Methods for Unconstrained Optimisation

Trust Region Methods for Unconstrained Optimisation Trust Region Methods for Unconstrained Optimisation Lecture 9, Numerical Linear Algebra and Optimisation Oxford University Computing Laboratory, MT 2007 Dr Raphael Hauser (hauser@comlab.ox.ac.uk) The Trust

More information

Solution of the problem of the identified minimum for the tri-variate normal

Solution of the problem of the identified minimum for the tri-variate normal Proc. Indian Acad. Sci. (Math. Sci.) Vol., No. 4, November 0, pp. 645 660. c Indian Academy of Sciences Solution of the problem of the identified minimum for the tri-variate normal A MUKHERJEA, and M ELNAGGAR

More information

SYLLABUS AND SAMPLE QUESTIONS FOR MSQE (Program Code: MQEK and MQED) Syllabus for PEA (Mathematics), 2013

SYLLABUS AND SAMPLE QUESTIONS FOR MSQE (Program Code: MQEK and MQED) Syllabus for PEA (Mathematics), 2013 SYLLABUS AND SAMPLE QUESTIONS FOR MSQE (Program Code: MQEK and MQED) 2013 Syllabus for PEA (Mathematics), 2013 Algebra: Binomial Theorem, AP, GP, HP, Exponential, Logarithmic Series, Sequence, Permutations

More information

European option pricing under parameter uncertainty

European option pricing under parameter uncertainty European option pricing under parameter uncertainty Martin Jönsson (joint work with Samuel Cohen) University of Oxford Workshop on BSDEs, SPDEs and their Applications July 4, 2017 Introduction 2/29 Introduction

More information

FX Smile Modelling. 9 September September 9, 2008

FX Smile Modelling. 9 September September 9, 2008 FX Smile Modelling 9 September 008 September 9, 008 Contents 1 FX Implied Volatility 1 Interpolation.1 Parametrisation............................. Pure Interpolation.......................... Abstract

More information

Exam M Fall 2005 PRELIMINARY ANSWER KEY

Exam M Fall 2005 PRELIMINARY ANSWER KEY Exam M Fall 005 PRELIMINARY ANSWER KEY Question # Answer Question # Answer 1 C 1 E C B 3 C 3 E 4 D 4 E 5 C 5 C 6 B 6 E 7 A 7 E 8 D 8 D 9 B 9 A 10 A 30 D 11 A 31 A 1 A 3 A 13 D 33 B 14 C 34 C 15 A 35 A

More information

Fractional PDE Approach for Numerical Solution of Some Jump-Diffusion Models

Fractional PDE Approach for Numerical Solution of Some Jump-Diffusion Models Fractional PDE Approach for Numerical Solution of Some Jump-Diffusion Models Andrey Itkin 1 1 HAP Capital and Rutgers University, New Jersey Math Finance and PDE Conference, New Brunswick 2009 A.Itkin

More information

American Options; an American delayed- Exercise model and the free boundary. Business Analytics Paper. Nadra Abdalla

American Options; an American delayed- Exercise model and the free boundary. Business Analytics Paper. Nadra Abdalla American Options; an American delayed- Exercise model and the free boundary Business Analytics Paper Nadra Abdalla [Geef tekst op] Pagina 1 Business Analytics Paper VU University Amsterdam Faculty of Sciences

More information

1 Explicit Euler Scheme (or Euler Forward Scheme )

1 Explicit Euler Scheme (or Euler Forward Scheme ) Numerical methods for PDE in Finance - M2MO - Paris Diderot American options January 2017 Files: https://ljll.math.upmc.fr/bokanowski/enseignement/2016/m2mo/m2mo.html We look for a numerical approximation

More information

An IMEX-method for pricing options under Bates model using adaptive finite differences Rapport i Teknisk-vetenskapliga datorberäkningar

An IMEX-method for pricing options under Bates model using adaptive finite differences Rapport i Teknisk-vetenskapliga datorberäkningar PROJEKTRAPPORT An IMEX-method for pricing options under Bates model using adaptive finite differences Arvid Westlund Rapport i Teknisk-vetenskapliga datorberäkningar Jan 2014 INSTITUTIONEN FÖR INFORMATIONSTEKNOLOGI

More information

Tests for Two Means in a Multicenter Randomized Design

Tests for Two Means in a Multicenter Randomized Design Chapter 481 Tests for Two Means in a Multicenter Randomized Design Introduction In a multicenter design with a continuous outcome, a number of centers (e.g. hospitals or clinics) are selected at random

More information

Log-Robust Portfolio Management

Log-Robust Portfolio Management Log-Robust Portfolio Management Dr. Aurélie Thiele Lehigh University Joint work with Elcin Cetinkaya and Ban Kawas Research partially supported by the National Science Foundation Grant CMMI-0757983 Dr.

More information

Cash Accumulation Strategy based on Optimal Replication of Random Claims with Ordinary Integrals

Cash Accumulation Strategy based on Optimal Replication of Random Claims with Ordinary Integrals arxiv:1711.1756v1 [q-fin.mf] 6 Nov 217 Cash Accumulation Strategy based on Optimal Replication of Random Claims with Ordinary Integrals Renko Siebols This paper presents a numerical model to solve the

More information

An option-theoretic valuation model for residential mortgages with stochastic conditions and discount factors

An option-theoretic valuation model for residential mortgages with stochastic conditions and discount factors Graduate Theses and Dissertations Iowa State University Capstones, Theses and Dissertations 2 An option-theoretic valuation model for residential mortgages with stochastic conditions and discount factors

More information

A Numerical Approach to the Estimation of Search Effort in a Search for a Moving Object

A Numerical Approach to the Estimation of Search Effort in a Search for a Moving Object Proceedings of the 1. Conference on Applied Mathematics and Computation Dubrovnik, Croatia, September 13 18, 1999 pp. 129 136 A Numerical Approach to the Estimation of Search Effort in a Search for a Moving

More information

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL)

Part 3: Trust-region methods for unconstrained optimization. Nick Gould (RAL) Part 3: Trust-region methods for unconstrained optimization Nick Gould (RAL) minimize x IR n f(x) MSc course on nonlinear optimization UNCONSTRAINED MINIMIZATION minimize x IR n f(x) where the objective

More information

A No-Arbitrage Theorem for Uncertain Stock Model

A No-Arbitrage Theorem for Uncertain Stock Model Fuzzy Optim Decis Making manuscript No (will be inserted by the editor) A No-Arbitrage Theorem for Uncertain Stock Model Kai Yao Received: date / Accepted: date Abstract Stock model is used to describe

More information

An inverse finite element method for pricing American options under linear complementarity formulations

An inverse finite element method for pricing American options under linear complementarity formulations Mathematics Applied in Science and Technology. ISSN 0973-6344 Volume 10, Number 1 (2018), pp. 1 17 Research India Publications http://www.ripublication.com/mast.htm An inverse finite element method for

More information

Quasi-Monte Carlo for Finance

Quasi-Monte Carlo for Finance Quasi-Monte Carlo for Finance Peter Kritzer Johann Radon Institute for Computational and Applied Mathematics (RICAM) Austrian Academy of Sciences Linz, Austria NCTS, Taipei, November 2016 Peter Kritzer

More information

Math 416/516: Stochastic Simulation

Math 416/516: Stochastic Simulation Math 416/516: Stochastic Simulation Haijun Li lih@math.wsu.edu Department of Mathematics Washington State University Week 13 Haijun Li Math 416/516: Stochastic Simulation Week 13 1 / 28 Outline 1 Simulation

More information

No ANALYTIC AMERICAN OPTION PRICING AND APPLICATIONS. By A. Sbuelz. July 2003 ISSN

No ANALYTIC AMERICAN OPTION PRICING AND APPLICATIONS. By A. Sbuelz. July 2003 ISSN No. 23 64 ANALYTIC AMERICAN OPTION PRICING AND APPLICATIONS By A. Sbuelz July 23 ISSN 924-781 Analytic American Option Pricing and Applications Alessandro Sbuelz First Version: June 3, 23 This Version:

More information

Numerical Solution of a Linear Black-Scholes Models: A Comparative Overview

Numerical Solution of a Linear Black-Scholes Models: A Comparative Overview IOSR Journal of Engineering (IOSRJEN) ISSN (e): 5-3, ISSN (p): 78-879 Vol. 5, Issue 8 (August. 5), V3 PP 45-5 www.iosrjen.org Numerical Solution of a Linear Black-Scholes Models: A Comparative Overview

More information

(RP13) Efficient numerical methods on high-performance computing platforms for the underlying financial models: Series Solution and Option Pricing

(RP13) Efficient numerical methods on high-performance computing platforms for the underlying financial models: Series Solution and Option Pricing (RP13) Efficient numerical methods on high-performance computing platforms for the underlying financial models: Series Solution and Option Pricing Jun Hu Tampere University of Technology Final conference

More information

Risk-Return Optimization of the Bank Portfolio

Risk-Return Optimization of the Bank Portfolio Risk-Return Optimization of the Bank Portfolio Ursula Theiler Risk Training, Carl-Zeiss-Str. 11, D-83052 Bruckmuehl, Germany, mailto:theiler@risk-training.org. Abstract In an intensifying competition banks

More information

BROWNIAN MOTION Antonella Basso, Martina Nardon

BROWNIAN MOTION Antonella Basso, Martina Nardon BROWNIAN MOTION Antonella Basso, Martina Nardon basso@unive.it, mnardon@unive.it Department of Applied Mathematics University Ca Foscari Venice Brownian motion p. 1 Brownian motion Brownian motion plays

More information

The Optimization Process: An example of portfolio optimization

The Optimization Process: An example of portfolio optimization ISyE 6669: Deterministic Optimization The Optimization Process: An example of portfolio optimization Shabbir Ahmed Fall 2002 1 Introduction Optimization can be roughly defined as a quantitative approach

More information

Richardson Extrapolation Techniques for the Pricing of American-style Options

Richardson Extrapolation Techniques for the Pricing of American-style Options Richardson Extrapolation Techniques for the Pricing of American-style Options June 1, 2005 Abstract Richardson Extrapolation Techniques for the Pricing of American-style Options In this paper we re-examine

More information