• Media type: E-article
  • Title: Brain-inspired Cognition in Next-generation Racetrack Memories
  • Contributors: Khan, Asif Ali; Ollivier, Sébastien; Longofono, Stephen; Hempel, Gerald; Castrillon, Jeronimo; Jones, Alex K.
  • Published: Association for Computing Machinery (ACM), 2022
  • Published in: ACM Transactions on Embedded Computing Systems
  • Language: English
  • DOI: 10.1145/3524071
  • ISSN: 1539-9087; 1558-3465
  • Keywords: Hardware and Architecture; Software
  • Description: Hyperdimensional computing (HDC) is an emerging brain-inspired computational framework that operates on vectors with thousands of dimensions to emulate cognition. Unlike conventional computational frameworks that operate on numbers, HDC, like the brain, uses high-dimensional random vectors and is capable of one-shot learning. HDC is based on a well-defined set of arithmetic operations and is highly error resilient. The core operations of HDC manipulate HD vectors in a bulk bit-wise fashion, offering many opportunities to exploit parallelism. Unfortunately, on conventional von Neumann architectures, the continuous movement of HD vectors between the processor and memory can make the cognition task prohibitively slow and energy intensive. Hardware accelerators improve these metrics only marginally. In contrast, even partial implementations of an HDC framework inside memory can provide considerable performance and energy gains, as demonstrated in prior work using memristors. This article presents an architecture based on racetrack memory (RTM) that conducts and accelerates the entire HDC framework within memory. The proposed solution requires minimal additional CMOS circuitry by leveraging transverse read (TR), a read operation that spans multiple domains in an RTM nanowire, to realize exclusive-or (XOR) and addition operations. To minimize the CMOS circuitry overhead, an RTM nanowire-based counting mechanism is proposed. Using language recognition as the example workload, the proposed RTM HDC system reduces energy consumption by 8.6× compared to the state-of-the-art in-memory implementation. Compared to a dedicated hardware design realized on an FPGA, RTM-based HDC processing demonstrates 7.8× and 5.3× improvements in overall runtime and energy consumption, respectively.
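The core HDC operations the abstract refers to (XOR binding, bit-wise majority bundling, and similarity between binary hypervectors) can be sketched in plain Python. This is only an illustrative software model: the dimensionality, random seed, and the toy n-gram text encoding used for the language-recognition example are assumptions for demonstration, not the paper's configuration or its in-memory RTM implementation.

```python
# Minimal software sketch of hyperdimensional computing (HDC) on binary
# hypervectors: XOR realizes binding, bit-wise majority (addition plus a
# threshold) realizes bundling, and Hamming similarity compares vectors.
# D, the seed, and the trigram text encoding are illustrative assumptions.
import random

D = 1024  # hypervector dimensionality (real HDC uses thousands of bits)

def random_hv(rng):
    """Draw a dense random binary hypervector."""
    return [rng.randint(0, 1) for _ in range(D)]

def bind(a, b):
    """Bind two hypervectors with element-wise XOR (self-inverse)."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(hvs):
    """Bundle hypervectors with a bit-wise majority vote (ties -> 0)."""
    half = len(hvs) / 2
    return [1 if sum(bits) > half else 0 for bits in zip(*hvs)]

def hamming_similarity(a, b):
    """Fraction of matching bits; 1.0 means identical, ~0.5 means random."""
    return sum(x == y for x, y in zip(a, b)) / D

def rotate(hv, n=1):
    """Cyclic shift, used here to encode position within an n-gram."""
    return hv[-n:] + hv[:-n]

rng = random.Random(42)
item_memory = {}  # one random hypervector per character

def letter_hv(c):
    if c not in item_memory:
        item_memory[c] = random_hv(rng)
    return item_memory[c]

def encode_text(text, n=3):
    """Encode text as the bundle of its bound, position-rotated n-grams."""
    grams = []
    for i in range(len(text) - n + 1):
        hv = letter_hv(text[i])
        for j in range(1, n):
            hv = bind(rotate(hv), letter_hv(text[i + j]))
        grams.append(hv)
    return bundle(grams)

# Toy one-shot "language" prototypes: each class is a single encoding.
prototypes = {
    "en": encode_text("the quick brown fox jumps over the lazy dog"),
    "de": encode_text("der schnelle braune fuchs springt ueber den hund"),
}

def classify(text):
    q = encode_text(text)
    return max(prototypes, key=lambda k: hamming_similarity(prototypes[k], q))
```

Note how the design mirrors the abstract: binding and bundling reduce to bulk XOR and bit-counting over long bit vectors, which is exactly the pattern the paper maps onto transverse reads and a nanowire-based counter instead of CPU instructions.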