Large dense matrices arise from the discretization of many physical phenomena in the computational sciences. In statistics, very large dense covariance matrices are used to describe random fields and processes; one can, for instance, describe the distribution of dust particles in the atmosphere, the concentration of mineral resources in the earth's crust, or an uncertain permeability coefficient in reservoir modeling. As the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive, both in computational complexity and in physical memory requirements. Fortunately, these matrices can often be approximated by a class of data-sparse matrices called hierarchical matrices (H-matrices), in which various sub-blocks of the matrix are approximated by low-rank matrices. These matrices can be stored in memory that grows linearly with the problem size, and arithmetic operations on them, such as matrix-vector multiplication, can be completed in almost linear time. Originally, the H-matrix technique was developed for the approximation of stiffness matrices arising from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work, and we present work done on the matrix-vector operation on the GPU using the KSPARSE library.
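The core saving behind the low-rank sub-block idea can be sketched in a few lines of NumPy. This is an illustration only, not the KSPARSE API: a rank-k block B = U V^T can be applied to a vector through its factors in O(k(m+n)) work and storage, instead of the O(mn) cost of forming and applying the dense block.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 1000, 1000, 8  # block size and (small) numerical rank

# Low-rank factors standing in for one admissible H-matrix sub-block.
U = rng.standard_normal((m, k))
V = rng.standard_normal((n, k))
x = rng.standard_normal(n)

# Dense apply: form the block explicitly -- O(m*n) storage and work.
B = U @ V.T
y_dense = B @ x

# Low-rank apply: never form B -- O(k*(m+n)) storage and work.
y_lowrank = U @ (V.T @ x)

assert np.allclose(y_dense, y_lowrank)
```

An H-matrix matvec applies this factored product block by block over the hierarchical partition, which is what yields the almost-linear overall complexity mentioned above.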
|Original language||English (US)|
|Title of host publication||International Computational Science and Engineering Conference (ICSEC15)|
|Publisher||Extended abstract to the International Computational Science and Engineering Conference (ICSEC15)|
|State||Published - Mar 25 2015|
Bibliographical note: KAUST Repository Item: Exported on 2020-10-01
Acknowledgements: SRI Uncertainty Quantification Center at KAUST,
Extreme Computing Research Center at KAUST