• Media type: E-Article
  • Title: Optimised preprocessing for automatic mouth gesture classification
  • Contributor: Brumm, Maren [Author]; Grigat, Rolf-Rainer [Author]
  • Corporation: Technische Universität Hamburg ; Technische Universität Hamburg, Vision Systems – Bildverarbeitungssysteme
  • Published: 2020
  • Published in: Proceedings of the LREC2020 9th Workshop on the Representation and Processing of Sign Languages: Sign Language Resources in the Service of the Language Community, Technological Challenges and Application Perspectives ; (2020), pages 27–32
  • Language: English
  • DOI: 10.15480/882.3614
  • Keywords: Sign Language Recognition/Generation ; Machine Translation ; Speech-to-Speech Translation ; Statistical and Machine Learning Methods
  • Footnote: Other corporate body: Technische Universität Hamburg
    Other corporate body: Technische Universität Hamburg, Vision Systems – Bildverarbeitungssysteme
  • Description: Mouth gestures are facial expressions in sign language that do not correspond to the lip patterns of a spoken language. Research on this topic has been limited so far. The aim of this work is to automatically classify mouth gestures from video material by training a neural network, which could render time-consuming manual annotation unnecessary and help advance the field of automatic sign language translation. The task is challenging, however, because little training data is available and different mouth gesture classes look similar. This paper focuses on preprocessing the data, in particular identifying the area of the face that is important for mouth gesture recognition. It further analyses the duration of mouth gestures and determines the optimal video clip length for classification. The experiments show that these steps improve the classification results significantly and help reach near-human accuracy. (A minimal sketch of the preprocessing steps follows this record.)
  • Access State: Open Access
  • Rights information: Attribution - Non Commercial (CC BY-NC)
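
Below is a minimal, illustrative Python sketch, not the authors' code, of the two preprocessing steps the abstract describes: cropping the facial region relevant for mouth gestures and normalising clips to a fixed length. The choice of OpenCV's stock Haar face detector, the lower-half-of-face crop, the 64x64 output size and the 32-frame target length are all assumptions made here for illustration; the paper determines the actual region and clip length empirically.

    import cv2
    import numpy as np

    TARGET_FRAMES = 32  # hypothetical clip length; the paper tunes this empirically

    # OpenCV's bundled Haar cascade, used here as a stand-in face detector.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def crop_mouth_region(frame):
        """Detect the largest face and keep its lower half, an assumed
        region of interest where mouth gestures occur."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest detection
        roi = frame[y + h // 2 : y + h, x : x + w]          # lower face half
        return cv2.resize(roi, (64, 64))

    def normalise_clip_length(frames, target=TARGET_FRAMES):
        """Uniformly resample a variable-length clip to a fixed frame count,
        so clips of different durations become comparable network inputs."""
        idx = np.linspace(0, len(frames) - 1, target).round().astype(int)
        return [frames[i] for i in idx]

    def preprocess_video(path):
        """Read a video, crop each frame to the mouth region and return a
        fixed-length (TARGET_FRAMES, 64, 64, 3) array, or None if no face."""
        cap = cv2.VideoCapture(path)
        frames = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            roi = crop_mouth_region(frame)
            if roi is not None:
                frames.append(roi)
        cap.release()
        return np.stack(normalise_clip_length(frames)) if frames else None

The uniform resampling in normalise_clip_length is one simple way to impose a fixed clip length; whether to resample, pad, or trim around the gesture is exactly the kind of design choice the paper evaluates.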