Media type:
E-article
Titel:
Algorithm 967: A Distributed-Memory Fast Multipole Method for Volume Potentials
Contributors:
Malhotra, Dhairya;
Biros, George
Published:
Association for Computing Machinery (ACM), 2017
Published in: ACM Transactions on Mathematical Software
Description:
<jats:p>
The solution of a constant-coefficient elliptic Partial Differential Equation (PDE) can be computed using an integral transform: a convolution with the fundamental solution of the PDE, also known as a volume potential. We present a Fast Multipole Method (FMM) for computing volume potentials and use them to construct spatially adaptive solvers for the Poisson, Stokes, and low-frequency Helmholtz problems. Conventional N-body methods apply to discrete particle interactions. With volume potentials, one replaces the sums with volume integrals. Particle N-body methods can be used to accelerate such integrals, but it is more efficient to develop a special FMM. In this article, we discuss the efficient implementation of such an FMM. We use high-order piecewise Chebyshev polynomials and an octree data structure to represent the input and output fields, enabling spectrally accurate approximation of the near field, and we use the Kernel-Independent FMM (KIFMM) for the far-field approximation. For distributed-memory parallelism, we use space-filling curves, locally essential trees, and a hypercube-like communication scheme developed previously in our group. We present new near- and far-interaction traversals that optimize cache usage. Also, unlike particle N-body codes, we need a 2:1 balanced tree to allow for precomputations; we present a fast scheme for 2:1 balancing. Finally, we use vectorization, including the AVX instruction set on the Intel Sandy Bridge architecture, to achieve better than 50% of peak floating-point performance. We use task parallelism to employ the Xeon Phi on the Stampede platform at the Texas Advanced Computing Center (TACC). We achieve about 600
<jats:sc>gflop</jats:sc>
/s of double-precision performance on a single node. Our largest run on Stampede took 3.5s on 16K cores for a problem with 18
<jats:sc>e</jats:sc>
+9 unknowns and a highly nonuniform particle distribution (corresponding to an effective resolution exceeding 3
<jats:sc>e</jats:sc>
+23 unknowns, since we used 23 levels in our octree).
</jats:p>
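The abstract mentions space-filling curves as the basis for distributing octree leaves across processes. A minimal sketch of the underlying idea, a 3D Morton (Z-order) key that linearizes octree cells so contiguous key ranges can be assigned to ranks: the bit-interleaving constants are the standard 21-bits-per-axis ones, and the toy `partition` helper is illustrative, not the paper's implementation.

```python
def interleave3(x: int) -> int:
    """Spread the low 21 bits of x so two zero bits separate each bit."""
    x &= (1 << 21) - 1
    x = (x | x << 32) & 0x1F00000000FFFF
    x = (x | x << 16) & 0x1F0000FF0000FF
    x = (x | x << 8) & 0x100F00F00F00F00F
    x = (x | x << 4) & 0x10C30C30C30C30C3
    x = (x | x << 2) & 0x1249249249249249
    return x

def morton3(x: int, y: int, z: int) -> int:
    """Morton key of integer grid coordinates: bits of x, y, z interleaved."""
    return interleave3(x) | (interleave3(y) << 1) | (interleave3(z) << 2)

def partition(points, nranks):
    """Toy partitioner: sort cells along the Z-curve, then cut the ordered
    list into contiguous chunks, one per (hypothetical) process rank."""
    ordered = sorted(points, key=lambda p: morton3(*p))
    chunk = -(-len(ordered) // nranks)  # ceiling division
    return [ordered[i:i + chunk] for i in range(0, len(ordered), chunk)]
```

Because the Z-curve preserves spatial locality, cells that are close in key order tend to be close in space, which keeps each rank's subdomain compact and reduces the communication volume of the hypercube-like exchange described in the abstract.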