• Media type: E-Article
  • Title: Differentiable Subset Pruning of Transformer Heads
  • Contributor: Li, Jiaoda; Cotterell, Ryan; Sachan, Mrinmaya
  • Imprint: MIT Press - Journals, 2021
  • Published in: Transactions of the Association for Computational Linguistics
  • Language: English
  • DOI: 10.1162/tacl_a_00436
  • ISSN: 2307-387X
  • Description: Abstract: Multi-head attention, a collection of several attention mechanisms that independently attend to different parts of the input, is the key ingredient in the Transformer. Recent work has shown, however, that a large proportion of the heads in a Transformer’s multi-head attention mechanism can be safely pruned away without significantly harming the performance of the model; such pruning leads to models that are noticeably smaller and faster in practice. Our work introduces a new head pruning technique that we term differentiable subset pruning. Intuitively, our method learns per-head importance variables and then enforces a user-specified hard constraint on the number of unpruned heads. The importance variables are learned via stochastic gradient descent. We conduct experiments on natural language inference and machine translation; we show that differentiable subset pruning performs comparably or better than previous works while offering precise control of the sparsity level.
  • Access State: Open Access
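
The abstract describes learning per-head importance variables while enforcing a hard constraint on the number of unpruned heads. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: the class name `HeadGate`, the parameter `k_heads`, and the straight-through top-k trick are all illustrative assumptions used only to make the concept concrete.

```python
# Hypothetical sketch: learn one importance score per attention head and keep
# exactly k heads. Not the method from the paper; names and the
# straight-through top-k selection are assumptions for illustration.
import torch
import torch.nn as nn


class HeadGate(nn.Module):
    """Learns per-head importance logits and emits a hard 0/1 mask that keeps
    exactly `k_heads` heads, while gradients still flow to all logits."""

    def __init__(self, n_heads: int, k_heads: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_heads))  # per-head importance
        self.k_heads = k_heads

    def forward(self) -> torch.Tensor:
        soft = torch.sigmoid(self.logits)                  # relaxed importances
        keep = torch.topk(self.logits, self.k_heads).indices
        hard = torch.zeros_like(soft)
        hard[keep] = 1.0                                   # hard subset constraint
        # Straight-through estimator: forward pass uses the hard mask,
        # backward pass uses the gradient of the soft importances.
        return hard + (soft - soft.detach())


# Usage: scale each head's output by its gate before the heads are combined.
gate = HeadGate(n_heads=12, k_heads=4)
mask = gate()                                  # shape (12,), exactly four ones
head_outputs = torch.randn(2, 12, 64)          # (batch, heads, d_head), dummy data
pruned = head_outputs * mask.view(1, -1, 1)    # pruned heads contribute nothing
```

In this sketch the sparsity level is controlled exactly by `k_heads`, which mirrors the abstract's point about precise control over the number of unpruned heads; how the importance variables are parameterized and trained in the actual paper may differ.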