• Media type: E-article
  • Title: Exploiting parallelism in matrix-computation kernels for symmetric multiprocessor systems : Matrix-multiplication and matrix-addition algorithm optimizations by software pipelining and threads allocation
  • Contributors: D'Alberto, Paolo; Bodrato, Marco; Nicolau, Alexandru
  • Published: Association for Computing Machinery (ACM), 2011
  • Published in: ACM Transactions on Mathematical Software, 38 (2011) 1, pages 1-30
  • Language: English
  • DOI: 10.1145/2049662.2049664
  • ISSN: 0098-3500; 1557-7295
  • Keywords: Applied Mathematics ; Software
  • Description: We present a simple and efficient methodology for the development, tuning, and installation of matrix algorithms such as the hybrid Strassen and Winograd fast matrix multiply or their combination with the 3M algorithm for complex matrices (hybrid meaning a recursive algorithm, such as Strassen's, applied until a highly tuned BLAS matrix multiplication becomes the faster choice). We investigate how modern Symmetric Multiprocessor (SMP) architectures present old and new challenges that can be addressed by combining algorithm design with careful and natural exploitation of parallelism at the function level, such as function-call parallelism, function percolation, and function software pipelining. We make three contributions: first, we present a performance overview for double- and double-complex-precision matrices on state-of-the-art SMP systems; second, we introduce new algorithm implementations, namely a variant of the 3M algorithm and two new schedules of Winograd's matrix multiplication (achieving up to 20% speedup over regular matrix multiplication), one designed to minimize the number of matrix additions and the other to minimize the computation latency of matrix additions; third, we apply software pipelining and threads allocation to all the algorithms and show that this yields up to 10% further performance improvement.
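The two building blocks named in the abstract can be made concrete. A hybrid fast multiply recurses with Strassen's or Winograd's seven-multiplication scheme and hands small sub-blocks to a tuned BLAS routine; the 3M algorithm forms a complex matrix product from three real products instead of four. The following is a minimal sketch only, not taken from the paper: it uses plain Python with NumPy's matmul standing in for BLAS, a hypothetical CUTOFF constant, and matrix orders assumed to be powers of two. The authors' actual implementations, schedules, and parallelization (software pipelining, threads allocation) are more elaborate.

import numpy as np

CUTOFF = 256  # hypothetical cross-over point; in practice it is tuned per machine

def winograd(A, B):
    # Winograd's variant of Strassen's algorithm: 7 recursive multiplications and
    # 15 block additions per level; leaves are computed by a BLAS-backed matmul.
    n = A.shape[0]
    if n <= CUTOFF:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    S1 = A21 + A22; S2 = S1 - A11; S3 = A11 - A21; S4 = A12 - S2
    T1 = B12 - B11; T2 = B22 - T1; T3 = B22 - B12; T4 = T2 - B21
    M1 = winograd(A11, B11); M2 = winograd(A12, B21); M3 = winograd(S4, B22)
    M4 = winograd(A22, T4);  M5 = winograd(S1, T1);  M6 = winograd(S2, T2)
    M7 = winograd(S3, T3)
    U1 = M1 + M6; U2 = U1 + M7; U3 = U1 + M5
    return np.block([[M1 + M2, U3 + M3],
                     [U2 - M4, U2 + M5]])

def mul3m(A, B):
    # 3M algorithm: one complex product from 3 real products instead of 4.
    Ar, Ai, Br, Bi = A.real, A.imag, B.real, B.imag
    P1 = winograd(Ar, Br)
    P2 = winograd(Ai, Bi)
    P3 = winograd(Ar + Ai, Br + Bi)
    return (P1 - P2) + 1j * (P3 - P1 - P2)

if __name__ == "__main__":
    n = 512
    A = np.random.rand(n, n) + 1j * np.random.rand(n, n)
    B = np.random.rand(n, n) + 1j * np.random.rand(n, n)
    print(np.allclose(mul3m(A, B), A @ B))  # sanity check against plain matmul

The cross-over point is the key tuning parameter: below it the extra additions of the fast scheme cost more than they save, which is why the paper also studies schedules that minimize either the number or the latency of those matrix additions.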