TY - JOUR

T1 - Accelerated Cyclic Reduction: A Distributed-Memory Fast Solver for Structured Linear Systems

AU - Chavez Chavez, Gustavo Ivan

AU - Turkiyyah, George

AU - Zampini, Stefano

AU - Ltaief, Hatem

AU - Keyes, David E.

N1 - KAUST Repository Item: Exported on 2020-04-23
Acknowledgements: We thank the anonymous reviewers for their detailed comments and suggestions for this manuscript. The authors would also like to thank Ronald Kriemann from the Max-Planck-Institute for Mathematics in the Sciences for development and continuous support of HLibPro, Alexander Litvinenko from the King Abdullah University of Science and Technology (KAUST) for the enlightening discussions and advice, and Pieter Ghysels from the Lawrence Berkeley National Laboratory for his recommendations on the use of STRUMPACK. Support from the KAUST Supercomputing Laboratory and access to Shaheen is gratefully acknowledged. The work of all authors was supported by the Extreme Computing Research Center at KAUST.

PY - 2017/12/15

Y1 - 2017/12/15

N2 - We present Accelerated Cyclic Reduction (ACR), a distributed-memory fast solver for rank-compressible block tridiagonal linear systems arising from the discretization of elliptic operators, developed here for three dimensions. Algorithmic synergies between Cyclic Reduction and hierarchical matrix arithmetic operations result in a solver that has O(kN log N (log N + k²)) arithmetic complexity and O(kN log N) memory footprint, where N is the number of degrees of freedom and k is the rank of a block in the hierarchical approximation, and which exhibits substantial concurrency. We provide a baseline for performance and applicability by comparing with the multifrontal method with and without hierarchical semi-separable matrices, with algebraic multigrid and with the classic cyclic reduction method. Over a set of large-scale elliptic systems with features of nonsymmetry and indefiniteness, the robustness of the direct solvers extends beyond that of the multigrid solver, and relative to the multifrontal approach ACR has lower or comparable execution time and size of the factors, with substantially lower numerical ranks. ACR exhibits good strong and weak scaling in a distributed context and, as with any direct solver, is advantageous for problems that require the solution of multiple right-hand sides. Numerical experiments show that the rank k patterns are of O(1) for the Poisson equation and of O(n) for the indefinite Helmholtz equation. The solver is ideal in situations where low-accuracy solutions are sufficient, or otherwise as a preconditioner within an iterative method.

AB - We present Accelerated Cyclic Reduction (ACR), a distributed-memory fast solver for rank-compressible block tridiagonal linear systems arising from the discretization of elliptic operators, developed here for three dimensions. Algorithmic synergies between Cyclic Reduction and hierarchical matrix arithmetic operations result in a solver that has O(kN log N (log N + k²)) arithmetic complexity and O(kN log N) memory footprint, where N is the number of degrees of freedom and k is the rank of a block in the hierarchical approximation, and which exhibits substantial concurrency. We provide a baseline for performance and applicability by comparing with the multifrontal method with and without hierarchical semi-separable matrices, with algebraic multigrid and with the classic cyclic reduction method. Over a set of large-scale elliptic systems with features of nonsymmetry and indefiniteness, the robustness of the direct solvers extends beyond that of the multigrid solver, and relative to the multifrontal approach ACR has lower or comparable execution time and size of the factors, with substantially lower numerical ranks. ACR exhibits good strong and weak scaling in a distributed context and, as with any direct solver, is advantageous for problems that require the solution of multiple right-hand sides. Numerical experiments show that the rank k patterns are of O(1) for the Poisson equation and of O(n) for the indefinite Helmholtz equation. The solver is ideal in situations where low-accuracy solutions are sufficient, or otherwise as a preconditioner within an iterative method.

UR - http://hdl.handle.net/10754/626403

UR - http://www.sciencedirect.com/science/article/pii/S0167819117302041

UR - http://www.scopus.com/inward/record.url?scp=85042919840&partnerID=8YFLogxK

U2 - 10.1016/j.parco.2017.12.001

DO - 10.1016/j.parco.2017.12.001

M3 - Article

AN - SCOPUS:85042919840

VL - 74

SP - 65

EP - 83

JO - Parallel Computing

JF - Parallel Computing

SN - 0167-8191

ER -