Graph signal processing for clustering

1 Graph signal processing for clustering. Nicolas Tremblay, PANAMA team, INRIA Rennes, with Rémi Gribonval; Signal Processing Laboratory 2, EPFL, Lausanne, with Pierre Vandergheynst.

2 What's clustering? N. Tremblay, Graph signal processing for clustering, Rennes, 13th of January / 26

5 Given a series of N objects: 1/ Find adapted descriptors. 2/ Cluster.

10 After step 1, one has: N vectors in d dimensions (the descriptor dimension), x_1, x_2, ..., x_N ∈ R^d, and their distance matrix D ∈ R^(N×N). The goal of clustering is to assign a label c(i) ∈ {1, ..., k} to each object i in order to organize / simplify / analyze the data. There exist two general types of methods: methods directly based on the x_i and/or D, like k-means or hierarchical clustering; and graph-based methods.

15 Graph construction from the distance matrix D. Create a graph G = (V, E): each node in V is one of the N objects; each pair of nodes (i, j) is connected if the associated distance D(i, j) is small enough. For example, two connectivity possibilities: Gaussian kernel: 1. connect all pairs of nodes with links of weight exp(−D(i, j)/σ); 2. remove all links of weight smaller than ε. k nearest neighbors: connect each node to its k nearest neighbors.
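The two connectivity possibilities above can be sketched as follows (a minimal illustration with numpy; the function names and the symmetrisation of the k-NN graph are assumptions, while the exp(−D(i, j)/σ) kernel and the ε-thresholding follow the slide):

```python
import numpy as np

def gaussian_graph(D, sigma, eps):
    """Weighted adjacency from a distance matrix: Gaussian kernel
    exp(-D(i,j)/sigma), then removal of links of weight below eps."""
    W = np.exp(-D / sigma)
    np.fill_diagonal(W, 0.0)      # no self-loops
    W[W < eps] = 0.0              # sparsify: drop weak links
    return W

def knn_graph(D, k):
    """Unweighted k-nearest-neighbour adjacency, symmetrised."""
    N = D.shape[0]
    W = np.zeros((N, N))
    for i in range(N):
        neighbours = np.argsort(D[i])[1:k + 1]   # skip node i itself
        W[i, neighbours] = 1.0
    return np.maximum(W, W.T)     # keep a link if either endpoint selected it
```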

17 The problem now reads: given the graph G representing the similarity between the N objects, find a partition of all nodes into k clusters. Many methods exist [Fortunato 10]: modularity (or other cost-function) optimisation methods [Newman 04]; random walk methods [Delvenne 10]; methods inspired by statistical physics [Krzakala 12] or information theory [Rosvall 07]; spectral methods...

18 Three useful matrices: the adjacency matrix W, the degree matrix S, and the Laplacian matrix L = S − W. [example matrices shown on slide]

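For a small toy graph, the three matrices can be computed as follows (a minimal sketch; the example edge list is an assumption):

```python
import numpy as np

# Toy graph: 4 nodes, edges (0-1, 1-2, 2-3, 3-0, 0-2)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
N = 4
W = np.zeros((N, N))            # adjacency matrix
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
S = np.diag(W.sum(axis=1))      # degree matrix (diagonal)
L = S - W                       # combinatorial Laplacian L = S - W
```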
22 The classical spectral clustering algorithm [von Luxburg 06]. Given the N-node graph G of adjacency matrix W: 1. Compute U_k = (u_1 u_2 ... u_k), the first k eigenvectors of L = S − W. 2. Consider each node i as a point in R^k: f_i = U_k^T δ_i. 3. Run k-means with the Euclidean distance D_ij = ||f_i − f_j|| and obtain k clusters.
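The three steps can be sketched as follows (assuming numpy/scipy; `eigsh` and `kmeans2` are stand-ins for whichever partial eigensolver and k-means implementation one prefers):

```python
import numpy as np
from scipy.sparse.linalg import eigsh
from scipy.cluster.vq import kmeans2

def spectral_clustering(W, k, seed=0):
    """Classical spectral clustering: first k eigenvectors of L = S - W,
    node i -> i-th row of U_k, then k-means on these N points of R^k."""
    S = np.diag(W.sum(axis=1))
    L = S - W
    # partial eigendecomposition: k smallest eigenvalues of the symmetric L
    vals, U_k = eigsh(L, k=k, which="SM")
    _, labels = kmeans2(U_k, k, minit="++", seed=seed)
    return labels
```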

23 What's the point of using a graph? N points in d = 2 dimensions. Result with k-means (k = 2), and result after creating a graph from the N points' interdistances and running the spectral clustering algorithm (with k = 2). [figures shown on slide]

26 Computation bottlenecks of the spectral clustering algorithm. When N and/or k become too large, there are two main bottlenecks in the algorithm: 1. the partial eigendecomposition of the Laplacian; 2. k-means. Our goal: circumvent both!

27 What's graph signal processing?

28 What's a graph signal? [Figures shown on slides 28-33: temperature curves (°C) over the months (J F M A M J J A S O N D) and over the hour of the day]

35 What's the graph Fourier matrix? [Hammond 11] The classical graph: L_cl [shown on slide]; all classical Fourier modes are eigenvectors of L_cl. Any graph: L; by analogy, any graph's Fourier modes are the eigenvectors of its Laplacian matrix L.

36 The graph Fourier matrix. L = S − W. Its eigenvectors U = (u_1 u_2 ... u_N) form the graph Fourier orthonormal basis. Its eigenvalues 0 = λ_1 ≤ λ_2 ≤ ... ≤ λ_N represent the graph frequencies: λ_i is the squared frequency associated with the Fourier mode u_i.

37 Illustration. Low frequency vs. high frequency Fourier modes. [figures shown on slide]

41 The Fourier transform. Given f ∈ R^N, a signal on a graph of size N, f̂ is obtained by decomposing f on the eigenvectors u_i: f̂ = (<u_1, f>, <u_2, f>, ..., <u_N, f>)^T, i.e. f̂ = U^T f. Inversely, the inverse Fourier transform reads: f = U f̂. Parseval's theorem stays valid: for all (g, h), <g, h> = <ĝ, ĥ>.
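The transform pair above can be sketched in a few lines (a minimal sketch; function names are assumptions):

```python
import numpy as np

def fourier_basis(L):
    """Graph Fourier basis: eigendecomposition of the symmetric Laplacian.
    Columns of U are the Fourier modes, lam the graph frequencies."""
    lam, U = np.linalg.eigh(L)
    return lam, U

def gft(U, f):
    """Forward transform: f_hat = U^T f, i.e. the projections <u_i, f>."""
    return U.T @ f

def igft(U, f_hat):
    """Inverse transform: f = U f_hat."""
    return U @ f_hat
```

Since U is orthonormal, `igft(U, gft(U, f))` recovers f, and inner products are conserved (Parseval).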

44 Filtering. Given a filter function g(λ) defined in the Fourier space, the signal filtered by g reads, in the Fourier space: f̂_g = (f̂(1) g(λ_1), f̂(2) g(λ_2), ..., f̂(N) g(λ_N))^T = Ĝ f̂, with Ĝ = diag(g(λ_1), g(λ_2), ..., g(λ_N)). In the node space, the filtered signal therefore reads: f_g = U Ĝ U^T f = G f.
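The node-space expression f_g = U Ĝ U^T f translates directly into code (a minimal sketch; the function name is an assumption):

```python
import numpy as np

def graph_filter(L, f, g):
    """Node-space filtering f_g = U G_hat U^T f, with
    G_hat = diag(g(lambda_1), ..., g(lambda_N))."""
    lam, U = np.linalg.eigh(L)
    return U @ np.diag(g(lam)) @ U.T @ f
```

For instance, an ideal low-pass filter g(λ) = 1 for λ ≤ c, 0 otherwise, is a projector: filtering twice equals filtering once.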

45 So where's the link?

47 Remember: the classical spectral clustering algorithm. Given the N-node graph G of adjacency matrix W: 1. Compute U_k = (u_1 u_2 ... u_k), the first k eigenvectors of L = S − W. 2. Consider each node i as a point in R^k: f_i = U_k^T δ_i. 3. Run k-means with the Euclidean distance D_ij = ||f_i − f_j|| and obtain k clusters. Let's work on the first bottleneck: estimate D_ij without partially diagonalizing the Laplacian matrix.

50 Ideal low-pass filtering. 1st step: assume we know U_k and λ_k. Given h_λk an ideal low-pass filter with cutoff λ_k, H_λk = U Ĥ_λk U^T = U_k U_k^T is its filter matrix. Let R = (r_1 r_2 ... r_η) ∈ R^(N×η) be a random Gaussian matrix. We define f̃_i = (H_λk R)^T δ_i ∈ R^η and D̃_ij = ||f̃_i − f̃_j||. Norm conservation theorem for the ideal filter: let ε > 0; if η > η_0 log(N) / ε², then, with probability > 1 − 1/N, we have: for all (i, j) ∈ [1, N]², (1 − ε) D_ij ≤ D̃_ij ≤ (1 + ε) D_ij.
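A sketch of this first step (assuming U_k is known, as the slide does; the 1/√η scaling of R, which makes E||f̃_i − f̃_j||² = D_ij², and the function name are assumptions):

```python
import numpy as np

def sketched_features(L, k, eta, rng):
    """Feature of node i = i-th row of H R, with H = U_k U_k^T the ideal
    low-pass filter matrix and R an N x eta Gaussian matrix of variance 1/eta.
    Distances between rows of the result approximate the spectral distances D_ij."""
    lam, U = np.linalg.eigh(L)
    U_k = U[:, :k]
    H = U_k @ U_k.T                                  # ideal low-pass projector
    R = rng.normal(size=(L.shape[0], eta)) / np.sqrt(eta)
    return H @ R                                     # row i = tilde f_i
```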

53 Non-ideal low-pass filtering. 2nd step: assume all we know is λ_k. In practice, we use a polynomial approximation of order m of h_λk: h̃_λk = Σ_{l=1}^{m} α_l λ^l ≃ h_λk. [plot: ideal filter vs. approximations with m = 100, 20, 5] Indeed, in this case, filtering a vector x reads: H̃_λk x = U h̃_λk(Λ) U^T x = U (Σ_{l=1}^{m} α_l Λ^l) U^T x = Σ_{l=1}^{m} α_l L^l x. This does not require the knowledge of U_k, and only involves matrix-vector multiplications [cost O(m |E|)]. The theorem stays (more or less) valid with this non-ideal filtering!
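Applying a polynomial of L through matrix-vector products only can be sketched as follows (a generic polynomial with a constant term α_0 is handled; the slide's sum, which starts at l = 1, is the special case α_0 = 0):

```python
import numpy as np

def poly_filter(L, X, alpha):
    """Y = sum_l alpha[l] * L^l @ X, computed with repeated mat-vec
    products only: no eigendecomposition is needed, and the cost is
    O(m |E|) per column when L is sparse."""
    P = X.copy()
    Y = alpha[0] * P          # l = 0 term
    for a in alpha[1:]:
        P = L @ P             # P now holds L^l X
        Y = Y + a * P
    return Y
```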

55 Last step: estimate λ_k. Goal: given L, estimate its k-th eigenvalue as fast as possible. We use eigencount techniques (also based on polynomial filtering of random vectors!): given an interval [0, b], get an approximation of the number of enclosed eigenvalues, and find λ_k by dichotomy on b.

61 Accelerated spectral algorithm. Given the N-node graph G of adjacency matrix W: 1. Estimate λ_k, the k-th eigenvalue of L. 2. Generate η random graph signals, stacked in a matrix R ∈ R^(N×η). 3. Filter them with H_λk and treat each node i as a point in R^η: f̃_i^T = δ_i^T H_λk R. 4. Run k-means with the Euclidean distance D̃_ij = ||f̃_i − f̃_j|| and obtain k clusters. Let's work on the second bottleneck: avoid running k-means in possibly very large dimension N (step 4).

62 Fast spectral algorithm? Given the N-node graph G of adjacency matrix W: 1. Estimate λ_k, the k-th eigenvalue of L. 2. Generate η random graph signals, stacked in a matrix R ∈ R^(N×η). 3. Filter them with H_λk and treat each node i as a point in R^η: f̃_i^T = δ_i^T H_λk R. 4. Randomly sample ρ ∼ k log k << N nodes out of N: f̃_i^r = (M H_λk R)^T δ_i^r, with M the ρ × N sampling matrix. 5. Run k-means in this reduced space with the Euclidean distance D̃_ij^r = ||f̃_i^r − f̃_j^r|| and obtain k clusters. 6. Interpolate the cluster indicator functions c_l^r on the whole graph: c̃_l = arg min_{x ∈ R^N} ||M x − c_l^r||_2^2 + μ x^T L x.
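The interpolation of step 6 has a closed-form solution: setting the gradient of ||Mx − c_l^r||² + μ xᵀLx to zero gives the linear system (MᵀM + μL) x = Mᵀ c_l^r. A minimal sketch (the function name and the value of μ are assumptions):

```python
import numpy as np

def interpolate_indicator(L, sample_idx, c_r, mu=1e-2):
    """Solve  min_x ||M x - c_r||^2 + mu * x^T L x  via its normal
    equations (M^T M + mu L) x = M^T c_r, with M the rho x N sampling
    matrix selecting the nodes in sample_idx."""
    N = L.shape[0]
    rho = len(sample_idx)
    M = np.zeros((rho, N))
    M[np.arange(rho), sample_idx] = 1.0
    A = M.T @ M + mu * L          # invertible on a connected graph with rho >= 1
    return np.linalg.solve(A, M.T @ np.asarray(c_r, dtype=float))
```

The μ xᵀLx term enforces smoothness on the graph, so the indicator values propagate from the sampled nodes to their cluster.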

66 Compressive spectral clustering: a summary. 1. Generate a feature vector for each node by filtering a few random Gaussian signals on G; 2. subsample the set of nodes; 3. cluster the reduced set of nodes; 4. interpolate the cluster indicator vectors back to the complete graph.

68 This work was done in collaboration with Gilles PUY and Rémi GRIBONVAL from the PANAMA team (INRIA), and Pierre VANDERGHEYNST from EPFL. Part of this work has been published (or submitted): circumventing the first bottleneck has been accepted at ICASSP 2016; the interpolation of k-bandlimited graph signals (an application of which helps us circumvent the second bottleneck) has been submitted to ACHA in November.

70 Perspectives and difficult questions. Two difficult questions (among others): 1. Given a positive semi-definite matrix, how to estimate its k-th eigenvalue, and only that one, as fast as possible? 2. How to subsample ρ nodes out of N while ensuring that clustering them into k classes yields the result one would have obtained by clustering all N nodes? Perspectives: 1. What if nodes are added one by one? 2. Rational filters instead of polynomial filters? 3. Approximating other spectral clustering algorithms?


More information

ECS171: Machine Learning

ECS171: Machine Learning ECS171: Machine Learning Lecture 15: Tree-based Algorithms Cho-Jui Hsieh UC Davis March 7, 2018 Outline Decision Tree Random Forest Gradient Boosted Decision Tree (GBDT) Decision Tree Each node checks

More information

Lattice based cryptography

Lattice based cryptography Lattice based cryptography Abderrahmane Nitaj University of Caen Basse Normandie, France Kuala Lumpur, Malaysia, June 23, 2014 Abderrahmane Nitaj (LMNO) Q AK ËAÓ Lattice based cryptography 1 / 54 Contents

More information

A model reduction approach to numerical inversion for parabolic partial differential equations

A model reduction approach to numerical inversion for parabolic partial differential equations A model reduction approach to numerical inversion for parabolic partial differential equations Liliana Borcea Alexander V. Mamonov 2, Vladimir Druskin 2, Mikhail Zaslavsky 2 University of Michigan, Ann

More information

Rough Heston models: Pricing, hedging and microstructural foundations

Rough Heston models: Pricing, hedging and microstructural foundations Rough Heston models: Pricing, hedging and microstructural foundations Omar El Euch 1, Jim Gatheral 2 and Mathieu Rosenbaum 1 1 École Polytechnique, 2 City University of New York 7 November 2017 O. El Euch,

More information

BARUCH COLLEGE MATH 2003 SPRING 2006 MANUAL FOR THE UNIFORM FINAL EXAMINATION

BARUCH COLLEGE MATH 2003 SPRING 2006 MANUAL FOR THE UNIFORM FINAL EXAMINATION BARUCH COLLEGE MATH 003 SPRING 006 MANUAL FOR THE UNIFORM FINAL EXAMINATION The final examination for Math 003 will consist of two parts. Part I: Part II: This part will consist of 5 questions similar

More information

Risk control of mean-reversion time in. statistical arbitrage

Risk control of mean-reversion time in. statistical arbitrage Risk control of mean-reversion time in statistical arbitrage Joongyeub Yeo George Papanicolaou December 17, 2017 Abstract This paper deals with the risk associated with the mis-estimation of mean-reversion

More information

A Hybrid Commodity and Interest Rate Market Model

A Hybrid Commodity and Interest Rate Market Model A Hybrid Commodity and Interest Rate Market Model K.F. Pilz and E. Schlögl University of Technology Sydney Australia September 5, Abstract A joint model of commodity price and interest rate risk is constructed

More information

Course information FN3142 Quantitative finance

Course information FN3142 Quantitative finance Course information 015 16 FN314 Quantitative finance This course is aimed at students interested in obtaining a thorough grounding in market finance and related empirical methods. Prerequisite If taken

More information

Structural similarities between Input-Output tables: a comparison of OECD economies.

Structural similarities between Input-Output tables: a comparison of OECD economies. Structural similarities between Input-Output tables: a comparison of OECD economies. John Holt a * a Department of Mathematics, University of Auckland, Auckland, New Zealand john.holt@holtdatascience.co.nz

More information

RISK-NEUTRAL VALUATION AND STATE SPACE FRAMEWORK. JEL Codes: C51, C61, C63, and G13

RISK-NEUTRAL VALUATION AND STATE SPACE FRAMEWORK. JEL Codes: C51, C61, C63, and G13 RISK-NEUTRAL VALUATION AND STATE SPACE FRAMEWORK JEL Codes: C51, C61, C63, and G13 Dr. Ramaprasad Bhar School of Banking and Finance The University of New South Wales Sydney 2052, AUSTRALIA Fax. +61 2

More information

Barrier Option. 2 of 33 3/13/2014

Barrier Option. 2 of 33 3/13/2014 FPGA-based Reconfigurable Computing for Pricing Multi-Asset Barrier Options RAHUL SRIDHARAN, GEORGE COOKE, KENNETH HILL, HERMAN LAM, ALAN GEORGE, SAAHPC '12, PROCEEDINGS OF THE 2012 SYMPOSIUM ON APPLICATION

More information

Network Structure and the Aggregation of Information: Theory and Evidence from Indonesia

Network Structure and the Aggregation of Information: Theory and Evidence from Indonesia VOL. VOL NO. ISSUE NETWORK STRUCTURE AND INFORMATION AGGREGATION 47 Network Structure and the Aggregation of Information: Theory and Evidence from Indonesia Vivi Alatas, Abhijit Banerjee, Arun G. Chandrasekhar,

More information

2.1 Mathematical Basis: Risk-Neutral Pricing

2.1 Mathematical Basis: Risk-Neutral Pricing Chapter Monte-Carlo Simulation.1 Mathematical Basis: Risk-Neutral Pricing Suppose that F T is the payoff at T for a European-type derivative f. Then the price at times t before T is given by f t = e r(t

More information

Dependence Structure and Extreme Comovements in International Equity and Bond Markets

Dependence Structure and Extreme Comovements in International Equity and Bond Markets Dependence Structure and Extreme Comovements in International Equity and Bond Markets René Garcia Edhec Business School, Université de Montréal, CIRANO and CIREQ Georges Tsafack Suffolk University Measuring

More information

2D penalized spline (continuous-by-continuous interaction)

2D penalized spline (continuous-by-continuous interaction) 2D penalized spline (continuous-by-continuous interaction) Two examples (RWC, Section 13.1): Number of scallops caught off Long Island Counts are made at specific coordinates. Incidence of AIDS in Italian

More information

Chapter 5 Finite Difference Methods. Math6911 W07, HM Zhu

Chapter 5 Finite Difference Methods. Math6911 W07, HM Zhu Chapter 5 Finite Difference Methods Math69 W07, HM Zhu References. Chapters 5 and 9, Brandimarte. Section 7.8, Hull 3. Chapter 7, Numerical analysis, Burden and Faires Outline Finite difference (FD) approximation

More information

IMPA Commodities Course : Forward Price Models

IMPA Commodities Course : Forward Price Models IMPA Commodities Course : Forward Price Models Sebastian Jaimungal sebastian.jaimungal@utoronto.ca Department of Statistics and Mathematical Finance Program, University of Toronto, Toronto, Canada http://www.utstat.utoronto.ca/sjaimung

More information

Sublinear Time Algorithms Oct 19, Lecture 1

Sublinear Time Algorithms Oct 19, Lecture 1 0368.416701 Sublinear Time Algorithms Oct 19, 2009 Lecturer: Ronitt Rubinfeld Lecture 1 Scribe: Daniel Shahaf 1 Sublinear-time algorithms: motivation Twenty years ago, there was practically no investigation

More information

Sparse Representations

Sparse Representations Sparse Representations Joel A. Tropp Department of Mathematics The University of Michigan jtropp@umich.edu Research supported in part by NSF and DARPA 1 Introduction Sparse Representations (Numerical Analysis

More information

A Hybrid Commodity and Interest Rate Market Model

A Hybrid Commodity and Interest Rate Market Model A Hybrid Commodity and Interest Rate Market Model University of Technology, Sydney June 1 Literature A Hybrid Market Model Recall: The basic LIBOR Market Model The cross currency LIBOR Market Model LIBOR

More information

Higher Order Freeness: A Survey. Roland Speicher Queen s University Kingston, Canada

Higher Order Freeness: A Survey. Roland Speicher Queen s University Kingston, Canada Higher Order Freeness: A Survey Roland Speicher Queen s University Kingston, Canada Second order freeness and fluctuations of random matrices: Mingo + Speicher: I. Gaussian and Wishart matrices and cyclic

More information

Pricing Financial Derivatives with Multi-Task Machine Learning and Mixed Effects Models

Pricing Financial Derivatives with Multi-Task Machine Learning and Mixed Effects Models Pricing Financial Derivatives with Multi-Task Machine Learning and Mixed Effects Models Adrian Chan Duke University April 25, 2012 Abstract This paper reviews machine learning methods on forecasting financial

More information

Monte Carlo Methods in Finance

Monte Carlo Methods in Finance Monte Carlo Methods in Finance Peter Jackel JOHN WILEY & SONS, LTD Preface Acknowledgements Mathematical Notation xi xiii xv 1 Introduction 1 2 The Mathematics Behind Monte Carlo Methods 5 2.1 A Few Basic

More information

Techniques for Calculating the Efficient Frontier

Techniques for Calculating the Efficient Frontier Techniques for Calculating the Efficient Frontier Weerachart Kilenthong RIPED, UTCC c Kilenthong 2017 Tee (Riped) Introduction 1 / 43 Two Fund Theorem The Two-Fund Theorem states that we can reach any

More information

Supplemental Online Appendix to Han and Hong, Understanding In-House Transactions in the Real Estate Brokerage Industry

Supplemental Online Appendix to Han and Hong, Understanding In-House Transactions in the Real Estate Brokerage Industry Supplemental Online Appendix to Han and Hong, Understanding In-House Transactions in the Real Estate Brokerage Industry Appendix A: An Agent-Intermediated Search Model Our motivating theoretical framework

More information

A potentially useful approach to model nonlinearities in time series is to assume different behavior (structural break) in different subsamples

A potentially useful approach to model nonlinearities in time series is to assume different behavior (structural break) in different subsamples 1.3 Regime switching models A potentially useful approach to model nonlinearities in time series is to assume different behavior (structural break) in different subsamples (or regimes). If the dates, the

More information

Time Series Analysis. Prof. R. Kruse, Chr. Braune Intelligent Data Analysis 1

Time Series Analysis. Prof. R. Kruse, Chr. Braune Intelligent Data Analysis 1 Time Series Analysis Prof. R. Kruse, Chr. Braune Intelligent Data Analysis 1 Time Series Motivation Decomposition Models Additive models, multiplicative models Global Approaches Regression With and without

More information

A GENERAL FORMULA FOR OPTION PRICES IN A STOCHASTIC VOLATILITY MODEL. Stephen Chin and Daniel Dufresne. Centre for Actuarial Studies

A GENERAL FORMULA FOR OPTION PRICES IN A STOCHASTIC VOLATILITY MODEL. Stephen Chin and Daniel Dufresne. Centre for Actuarial Studies A GENERAL FORMULA FOR OPTION PRICES IN A STOCHASTIC VOLATILITY MODEL Stephen Chin and Daniel Dufresne Centre for Actuarial Studies University of Melbourne Paper: http://mercury.ecom.unimelb.edu.au/site/actwww/wps2009/no181.pdf

More information

Log-Robust Portfolio Management

Log-Robust Portfolio Management Log-Robust Portfolio Management Dr. Aurélie Thiele Lehigh University Joint work with Elcin Cetinkaya and Ban Kawas Research partially supported by the National Science Foundation Grant CMMI-0757983 Dr.

More information

Multivariate Cox PH model with log-skew-normal frailties

Multivariate Cox PH model with log-skew-normal frailties Multivariate Cox PH model with log-skew-normal frailties Department of Statistical Sciences, University of Padua, 35121 Padua (IT) Multivariate Cox PH model A standard statistical approach to model clustered

More information

Using condition numbers to assess numerical quality in HPC applications

Using condition numbers to assess numerical quality in HPC applications Using condition numbers to assess numerical quality in HPC applications Marc Baboulin Inria Saclay / Université Paris-Sud, France INRIA - Illinois Petascale Computing Joint Laboratory 9th workshop, June

More information

Write legibly. Unreadable answers are worthless.

Write legibly. Unreadable answers are worthless. MMF 2021 Final Exam 1 December 2016. This is a closed-book exam: no books, no notes, no calculators, no phones, no tablets, no computers (of any kind) allowed. Do NOT turn this page over until you are

More information

Convergence of trust-region methods based on probabilistic models

Convergence of trust-region methods based on probabilistic models Convergence of trust-region methods based on probabilistic models A. S. Bandeira K. Scheinberg L. N. Vicente October 24, 2013 Abstract In this paper we consider the use of probabilistic or random models

More information

arxiv: v2 [q-fin.cp] 22 Jun 2017

arxiv: v2 [q-fin.cp] 22 Jun 2017 arxiv:1701.01429v2 [q-fin.cp] 22 Jun 2017 Chebyshev Reduced Basis Function applied to Option Valuation. Javier de Frutos and Víctor Gatón June 23, 2017 Abstract We present a numerical method for the frequent

More information

Implementing Candidate Graded Encoding Schemes from Ideal Lattices

Implementing Candidate Graded Encoding Schemes from Ideal Lattices Implementing Candidate Graded Encoding Schemes from Ideal Lattices Martin R. Albrecht 1, Catalin Cocis 2, Fabien Laguillaumie 3 and Adeline Langlois 4 1. Information Security Group, Royal Holloway, University

More information

12 The Bootstrap and why it works

12 The Bootstrap and why it works 12 he Bootstrap and why it works For a review of many applications of bootstrap see Efron and ibshirani (1994). For the theory behind the bootstrap see the books by Hall (1992), van der Waart (2000), Lahiri

More information

Market Risk Analysis Volume II. Practical Financial Econometrics

Market Risk Analysis Volume II. Practical Financial Econometrics Market Risk Analysis Volume II Practical Financial Econometrics Carol Alexander John Wiley & Sons, Ltd List of Figures List of Tables List of Examples Foreword Preface to Volume II xiii xvii xx xxii xxvi

More information

Cross-Section Performance Reversion

Cross-Section Performance Reversion Cross-Section Performance Reversion Maxime Rivet, Marc Thibault and Maël Tréan Stanford University, ICME mrivet, marcthib, mtrean at stanford.edu Abstract This article presents a way to use cross-section

More information

Genetics and/of basket options

Genetics and/of basket options Genetics and/of basket options Wolfgang Karl Härdle Elena Silyakova Ladislaus von Bortkiewicz Chair of Statistics Humboldt-Universität zu Berlin http://lvb.wiwi.hu-berlin.de Motivation 1-1 Basket derivatives

More information

symmys.com 3.2 Projection of the invariants to the investment horizon

symmys.com 3.2 Projection of the invariants to the investment horizon 122 3 Modeling the market In the swaption world the underlying rate (3.57) has a bounded range and thus it does not display the explosive pattern typical of a stock price. Therefore the swaption prices

More information

RESEARCH ARTICLE. The Penalized Biclustering Model And Related Algorithms Supplemental Online Material

RESEARCH ARTICLE. The Penalized Biclustering Model And Related Algorithms Supplemental Online Material Journal of Applied Statistics Vol. 00, No. 00, Month 00x, 8 RESEARCH ARTICLE The Penalized Biclustering Model And Related Algorithms Supplemental Online Material Thierry Cheouo and Alejandro Murua Département

More information

DASL a b. Benjamin Reish. Supplement to. Oklahoma State University. Stillwater, OK

DASL a b. Benjamin Reish. Supplement to. Oklahoma State University. Stillwater, OK Benjamin Reish Supplement to Concurrent Learning Adaptive Control for Systems with Unknown Sign of Control Effectiveness DASL a b a technical report from Oklahoma State University Stillwater, OK Report

More information

COSC 311: ALGORITHMS HW4: NETWORK FLOW

COSC 311: ALGORITHMS HW4: NETWORK FLOW COSC 311: ALGORITHMS HW4: NETWORK FLOW Solutions 1 Warmup 1) Finding max flows and min cuts. Here is a graph (the numbers in boxes represent the amount of flow along an edge, and the unadorned numbers

More information

Robustness, Canalyzing Functions and Systems Design

Robustness, Canalyzing Functions and Systems Design Robustness, Canalyzing Functions and Systems Design Johannes Rauh Nihat Ay SFI WORKING PAPER: 2012-11-021 SFI Working Papers contain accounts of scientific work of the author(s) and do not necessarily

More information

Risk profile clustering strategy in portfolio diversification

Risk profile clustering strategy in portfolio diversification Risk profile clustering strategy in portfolio diversification Cathy Yi-Hsuan Chen Wolfgang Karl Härdle Alla Petukhina Ladislaus von Bortkiewicz Chair of Statistics Humboldt-Universität zu Berlin lvb.wiwi.hu-berlin.de

More information

Chapter 8: CAPM. 1. Single Index Model. 2. Adding a Riskless Asset. 3. The Capital Market Line 4. CAPM. 5. The One-Fund Theorem

Chapter 8: CAPM. 1. Single Index Model. 2. Adding a Riskless Asset. 3. The Capital Market Line 4. CAPM. 5. The One-Fund Theorem Chapter 8: CAPM 1. Single Index Model 2. Adding a Riskless Asset 3. The Capital Market Line 4. CAPM 5. The One-Fund Theorem 6. The Characteristic Line 7. The Pricing Model Single Index Model 1 1. Covariance

More information

High-Frequency Data Analysis and Market Microstructure [Tsay (2005), chapter 5]

High-Frequency Data Analysis and Market Microstructure [Tsay (2005), chapter 5] 1 High-Frequency Data Analysis and Market Microstructure [Tsay (2005), chapter 5] High-frequency data have some unique characteristics that do not appear in lower frequencies. At this class we have: Nonsynchronous

More information

Cumulants and triangles in Erdős-Rényi random graphs

Cumulants and triangles in Erdős-Rényi random graphs Cumulants and triangles in Erdős-Rényi random graphs Valentin Féray partially joint work with Pierre-Loïc Méliot (Orsay) and Ashkan Nighekbali (Zürich) Institut für Mathematik, Universität Zürich Probability

More information

Equity correlations implied by index options: estimation and model uncertainty analysis

Equity correlations implied by index options: estimation and model uncertainty analysis 1/18 : estimation and model analysis, EDHEC Business School (joint work with Rama COT) Modeling and managing financial risks Paris, 10 13 January 2011 2/18 Outline 1 2 of multi-asset models Solution to

More information

Premia 14 HESTON MODEL CALIBRATION USING VARIANCE SWAPS PRICES

Premia 14 HESTON MODEL CALIBRATION USING VARIANCE SWAPS PRICES Premia 14 HESTON MODEL CALIBRATION USING VARIANCE SWAPS PRICES VADIM ZHERDER Premia Team INRIA E-mail: vzherder@mailru 1 Heston model Let the asset price process S t follows the Heston stochastic volatility

More information

CS134: Networks Spring Random Variables and Independence. 1.2 Probability Distribution Function (PDF) Number of heads Probability 2 0.

CS134: Networks Spring Random Variables and Independence. 1.2 Probability Distribution Function (PDF) Number of heads Probability 2 0. CS134: Networks Spring 2017 Prof. Yaron Singer Section 0 1 Probability 1.1 Random Variables and Independence A real-valued random variable is a variable that can take each of a set of possible values in

More information

Parametric Inference and Dynamic State Recovery from Option Panels. Nicola Fusari

Parametric Inference and Dynamic State Recovery from Option Panels. Nicola Fusari Parametric Inference and Dynamic State Recovery from Option Panels Nicola Fusari Joint work with Torben G. Andersen and Viktor Todorov July 2012 Motivation Under realistic assumptions derivatives are nonredundant

More information

Bayesian Finance. Christa Cuchiero, Irene Klein, Josef Teichmann. Obergurgl 2017

Bayesian Finance. Christa Cuchiero, Irene Klein, Josef Teichmann. Obergurgl 2017 Bayesian Finance Christa Cuchiero, Irene Klein, Josef Teichmann Obergurgl 2017 C. Cuchiero, I. Klein, and J. Teichmann Bayesian Finance Obergurgl 2017 1 / 23 1 Calibrating a Bayesian model: a first trial

More information

Table of Contents. Kocaeli University Computer Engineering Department 2011 Spring Mustafa KIYAR Optimization Theory

Table of Contents. Kocaeli University Computer Engineering Department 2011 Spring Mustafa KIYAR Optimization Theory 1 Table of Contents Estimating Path Loss Exponent and Application with Log Normal Shadowing...2 Abstract...3 1Path Loss Models...4 1.1Free Space Path Loss Model...4 1.1.1Free Space Path Loss Equation:...4

More information

Computational Statistics Handbook with MATLAB

Computational Statistics Handbook with MATLAB «H Computer Science and Data Analysis Series Computational Statistics Handbook with MATLAB Second Edition Wendy L. Martinez The Office of Naval Research Arlington, Virginia, U.S.A. Angel R. Martinez Naval

More information

Lecture 6. 1 Polynomial-time algorithms for the global min-cut problem

Lecture 6. 1 Polynomial-time algorithms for the global min-cut problem ORIE 633 Network Flows September 20, 2007 Lecturer: David P. Williamson Lecture 6 Scribe: Animashree Anandkumar 1 Polynomial-time algorithms for the global min-cut problem 1.1 The global min-cut problem

More information

Fact: The graph of a rational function p(x)/q(x) (in reduced terms) will be have no jumps except at the zeros of q(x), where it shoots off to ±.

Fact: The graph of a rational function p(x)/q(x) (in reduced terms) will be have no jumps except at the zeros of q(x), where it shoots off to ±. Rational functions Some of these are not polynomials. 5 1/x 4x 5 + 4x 2 x+1 x 1 (x + 3)(x + 2)() Nonetheless these non-polynomial functions are built out of polynomials. Maybe we can understand them in

More information

MAKING OPTIMISATION TECHNIQUES ROBUST WITH AGNOSTIC RISK PARITY

MAKING OPTIMISATION TECHNIQUES ROBUST WITH AGNOSTIC RISK PARITY Technical Note May 2017 MAKING OPTIMISATION TECHNIQUES ROBUST WITH AGNOSTIC RISK PARITY Introduction The alternative investment industry is becoming ever more accessible to those wishing to diversify away

More information

Application of an Interval Backward Finite Difference Method for Solving the One-Dimensional Heat Conduction Problem

Application of an Interval Backward Finite Difference Method for Solving the One-Dimensional Heat Conduction Problem Application of an Interval Backward Finite Difference Method for Solving the One-Dimensional Heat Conduction Problem Malgorzata A. Jankowska 1, Andrzej Marciniak 2 and Tomasz Hoffmann 2 1 Poznan University

More information

Rosario Nunzio Mantegna

Rosario Nunzio Mantegna Credit markets as networked markets: the cases of bank-firm credit relationships in Japan and emid interbank market Rosario Nunzio Mantegna Central European University, Budapest, Hungary Palermo University,

More information

SPECTRAL ANALYSIS OF STOCK-RETURN VOLATILITY, CORRELATION, AND BETA

SPECTRAL ANALYSIS OF STOCK-RETURN VOLATILITY, CORRELATION, AND BETA 15 IEEE Signal Processing and Signal Processing Education Workshop (SP/SPE) SPECTRAL ANALYSIS OF STOCK-RETURN VOLATILITY, CORRELATION, AND BETA A. Shomesh E. Chaudhuri Massachusetts Institute of Technology

More information

The Probabilistic Method - Probabilistic Techniques. Lecture 7: Martingales

The Probabilistic Method - Probabilistic Techniques. Lecture 7: Martingales The Probabilistic Method - Probabilistic Techniques Lecture 7: Martingales Sotiris Nikoletseas Associate Professor Computer Engineering and Informatics Department 2015-2016 Sotiris Nikoletseas, Associate

More information

Tangent Lévy Models. Sergey Nadtochiy (joint work with René Carmona) Oxford-Man Institute of Quantitative Finance University of Oxford.

Tangent Lévy Models. Sergey Nadtochiy (joint work with René Carmona) Oxford-Man Institute of Quantitative Finance University of Oxford. Tangent Lévy Models Sergey Nadtochiy (joint work with René Carmona) Oxford-Man Institute of Quantitative Finance University of Oxford June 24, 2010 6th World Congress of the Bachelier Finance Society Sergey

More information

UNIT 2. Greedy Method GENERAL METHOD

UNIT 2. Greedy Method GENERAL METHOD UNIT 2 GENERAL METHOD Greedy Method Greedy is the most straight forward design technique. Most of the problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset

More information