• Media type: E-Article
  • Title: End-to-end modeling and transfer learning for audiovisual emotion recognition in-the-wild
  • Contributor: Dresvyanskiy, Denis [Author]; Ryumina, Elena [Author]; Kaya, Heysem [Author]; Markitantov, Maxim [Author]; Karpov, Alexey [Author]; Minker, Wolfgang [Author]
  • Imprint: Universität Ulm, 2023-09-22
  • Language: English
  • DOI: https://doi.org/10.18725/OPARU-50389
  • Keywords: Affective Computing; face processing; deep learning architectures; multimodal representations; DDC 004 / Data processing & computer science; multimodal fusion; emotion recognition
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: As emotions play a central role in human communication, automatic emotion recognition has attracted increasing attention over the last two decades. While multimodal systems achieve high performance on lab-controlled data, they are still far from providing ecological validity on non-lab-controlled, i.e. “in-the-wild”, data. This work investigates audiovisual deep learning approaches to the in-the-wild emotion recognition problem. Inspired by the outstanding performance of end-to-end and transfer learning techniques, we explored the effectiveness of architectures in which a modality-specific Convolutional Neural Network (CNN) is followed by a Long Short-Term Memory Recurrent Neural Network (LSTM-RNN), using the Aff-Wild2 dataset under the Affective Behavior Analysis in-the-Wild (ABAW) challenge protocol. We deployed unimodal end-to-end and transfer learning approaches within a multimodal fusion system, which generated final predictions using a weighted score fusion scheme (sketched after this record). With the proposed deep-learning-based multimodal system, we reached a test set challenge performance measure of 48.1% on the ABAW 2020 Facial Expressions challenge, surpassing the first runner-up.
  • Version: publishedVersion
  • Access State: Open Access
  • Rights information: Attribution (CC BY)
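
The Description above outlines a per-modality CNN+LSTM pipeline whose unimodal scores are combined by weighted score fusion. Below is a minimal PyTorch sketch of that idea, not the authors' released code: the layer sizes, the 7-class expression output, the names CnnLstmClassifier and weighted_score_fusion, and the fusion weights are all illustrative assumptions.

    # Minimal sketch (assumed details, see note above): a modality-specific CNN
    # encodes each frame, an LSTM models the frame sequence, and unimodal class
    # scores are combined by weighted score fusion.
    import torch
    import torch.nn as nn

    class CnnLstmClassifier(nn.Module):
        """Per-modality model: CNN per frame, LSTM over the sequence."""
        def __init__(self, in_channels: int = 3, num_classes: int = 7,
                     cnn_dim: int = 128, lstm_dim: int = 64):
            super().__init__()
            # Small stand-in CNN; in the paper's transfer learning setting a
            # pretrained backbone would take this encoder's place.
            self.cnn = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, cnn_dim, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (B*T, cnn_dim)
            )
            self.lstm = nn.LSTM(cnn_dim, lstm_dim, batch_first=True)
            self.head = nn.Linear(lstm_dim, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, time, channels, height, width)
            b, t = x.shape[:2]
            feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)  # per-frame features
            out, _ = self.lstm(feats)
            return self.head(out[:, -1])   # class scores from the last time step

    def weighted_score_fusion(scores: list[torch.Tensor],
                              weights: list[float]) -> torch.Tensor:
        """Combine per-modality class scores with fixed fusion weights."""
        probs = [w * s.softmax(dim=-1) for s, w in zip(scores, weights)]
        return torch.stack(probs).sum(dim=0)

    # Toy usage: a video and an audio model fused with assumed weights.
    video_model = CnnLstmClassifier(in_channels=3)
    audio_model = CnnLstmClassifier(in_channels=1)  # e.g. log-mel "frames"
    video = torch.randn(2, 8, 3, 64, 64)            # (batch, frames, C, H, W)
    audio = torch.randn(2, 8, 1, 64, 64)
    fused = weighted_score_fusion(
        [video_model(video), audio_model(audio)], weights=[0.6, 0.4])
    prediction = fused.argmax(dim=-1)               # final emotion class per clip

In practice the fusion weights would be tuned on validation data rather than fixed by hand, and each modality's encoder would be pretrained and fine-tuned, which is the transfer learning component the abstract refers to.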