• Media type: E-Article
  • Title: Lyrics segmentation via bimodal text–audio representation
  • Contributor: Fell, Michael; Nechaev, Yaroslav; Meseguer-Brocal, Gabriel; Cabrio, Elena; Gandon, Fabien; Peeters, Geoffroy
  • Imprint: Cambridge University Press (CUP), 2022
  • Published in: Natural Language Engineering
  • Language: English
  • DOI: 10.1017/s1351324921000024
  • ISSN: 1351-3249; 1469-8110
  • Keywords: Artificial Intelligence ; Linguistics and Language ; Language and Linguistics ; Software
  • Description: Song lyrics contain repeated patterns that have been proven to facilitate automated lyrics segmentation, with the final goal of detecting the building blocks (e.g., chorus, verse) of a song text. Our contribution in this article is twofold. First, we introduce a convolutional neural network (CNN)-based model that learns to segment the lyrics based on their repetitive text structure. We experiment with novel features to reveal different kinds of repetitions in the lyrics, for instance based on phonetical and syntactical properties. Second, using a novel corpus where the song text is synchronized to the audio of the song, we show that the text and audio modalities capture complementary structure of the lyrics and that combining both is beneficial for lyrics segmentation performance. For the purely text-based lyrics segmentation on a dataset of 103k lyrics, we achieve an F-score of 67.4%, improving on the state of the art (59.2% F-score). On the synchronized text–audio dataset of 4.8k songs, we show that the additional audio features improve segmentation performance to 75.3% F-score, significantly outperforming the purely text-based approaches.
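  • Note: The abstract describes a CNN that exploits repetitive text structure for segmentation. The snippet below is a minimal, hypothetical sketch of that general idea: build a line-level self-similarity matrix (SSM) over the lyrics and score each line as a potential segment boundary with a small CNN. The Jaccard word-overlap similarity, context window, and network shape are illustrative assumptions, not the authors' implementation or features.

    ```python
    # Hypothetical sketch, not the authors' released code: SSM over lyric lines
    # plus a small CNN that scores each line as a segment boundary.
    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def ssm(lines):
        """Self-similarity matrix over lyric lines (Jaccard word overlap, assumed feature)."""
        sets = [set(l.lower().split()) for l in lines]
        n = len(sets)
        m = np.zeros((n, n), dtype=np.float32)
        for i in range(n):
            for j in range(n):
                union = sets[i] | sets[j]
                m[i, j] = len(sets[i] & sets[j]) / len(union) if union else 0.0
        return m

    class BoundaryCNN(nn.Module):
        """Scores a window of SSM rows centred on a candidate boundary line."""
        def __init__(self):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveMaxPool2d((4, 4)),
            )
            self.fc = nn.Linear(8 * 4 * 4, 1)

        def forward(self, patch):              # patch: (batch, 1, 2*window, n_lines)
            h = self.conv(patch).flatten(1)
            return torch.sigmoid(self.fc(h))   # boundary probability per patch

    lines = ["hello darkness my old friend", "I've come to talk with you again",
             "hello darkness my old friend", "a vision softly creeping"]
    sim = torch.from_numpy(ssm(lines))
    window = 2
    # Pad the line axis so every line has a full window of context above and below.
    padded = F.pad(sim, (0, 0, window, window))
    patches = torch.stack([padded[i:i + 2 * window].unsqueeze(0)
                           for i in range(len(lines))])
    model = BoundaryCNN()
    print(model(patches).squeeze(-1))          # untrained boundary scores, one per line
    ```

    In this sketch, audio would enter as additional SSM channels (one per modality) stacked on the input, which is one plausible way to realize the bimodal text–audio combination the abstract reports.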