• Media type: E-Article
  • Title: Optimizing High-Throughput Inference on Graph Neural Networks at Shared Computing Facilities with the NVIDIA Triton Inference Server
  • Contributor: Savard, Claire; Manganelli, Nicholas; Holzman, Burt; Gray, Lindsey; Perloff, Alexx; Pedro, Kevin; Stenson, Kevin; Ulmer, Keith
  • Published: Springer Science and Business Media LLC, 2024
  • Published in: Computing and Software for Big Science, 8 (2024) 1
  • Language: English
  • DOI: 10.1007/s41781-024-00123-2
  • ISSN: 2510-2036; 2510-2044
  • Description: Abstract: With machine learning applications now spanning a variety of computational tasks, multi-user shared computing facilities are devoting a rapidly increasing proportion of their resources to such algorithms. Graph neural networks (GNNs), for example, have provided astounding improvements in extracting complex signatures from data and are now widely used in a variety of applications, such as particle jet classification in high energy physics (HEP). However, GNNs also come with an enormous computational penalty that requires the use of GPUs to maintain reasonable throughput. At shared computing facilities, such as those used by physicists at Fermi National Accelerator Laboratory (Fermilab), methodical resource allocation and high throughput at the many-user scale are key to ensuring that resources are being used as efficiently as possible. These facilities, however, primarily provide CPU-only nodes, which proves detrimental to time-to-insight and computational throughput for workflows that include machine learning inference. In this work, we describe how a shared computing facility can use the NVIDIA Triton Inference Server to optimize its resource allocation and computing structure, recovering high throughput while scaling out to multiple users by massively parallelizing their machine learning inference. To demonstrate the effectiveness of this system in a realistic multi-user environment, we use the Fermilab Elastic Analysis Facility augmented with the Triton Inference Server to provide scalable and high-throughput access to a HEP-specific GNN and report on the outcome. (A hedged client-side sketch of this inference-as-a-service pattern follows the record below.)
  • Access State: Open Access
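
To make the inference-as-a-service pattern described in the abstract concrete, the sketch below shows how a client running on a CPU-only worker node could ship a batch of inputs to a remote Triton Inference Server over gRPC using the tritonclient Python package. This is a minimal illustration under stated assumptions, not the paper's actual deployment: the server URL (triton.example.org:8001), the model name (gnn_jet_tagger), the tensor names (INPUT__0, OUTPUT__0), and the input shape are all hypothetical placeholders.

    # Hedged sketch: remote GNN inference via NVIDIA Triton's gRPC client.
    # All endpoint/model/tensor names below are illustrative assumptions.
    import numpy as np
    import tritonclient.grpc as grpcclient

    # Connect to a (hypothetical) Triton endpoint exposed by the facility.
    client = grpcclient.InferenceServerClient(url="triton.example.org:8001")

    # A batch of per-jet input features; the shape and dtype must match
    # whatever the deployed model's configuration actually declares.
    features = np.random.rand(64, 100, 16).astype(np.float32)

    # Describe the request: one named input tensor and one requested output.
    inp = grpcclient.InferInput("INPUT__0", list(features.shape), "FP32")
    inp.set_data_from_numpy(features)
    out = grpcclient.InferRequestedOutput("OUTPUT__0")

    # Batching, scheduling, and GPU execution happen server-side; the
    # client only serializes tensors and deserializes the scores.
    result = client.infer(model_name="gnn_jet_tagger",
                          inputs=[inp], outputs=[out])
    scores = result.as_numpy("OUTPUT__0")
    print(scores.shape)

Because the GPU sits behind the server, many such clients can share it concurrently: server-side features such as dynamic batching and multiple model instances (configured per model on the server) are what allow a facility to recover high aggregate throughput from CPU-only user workflows.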