• Media type: E-article
  • Titel: Multimodal signal processing and machine learning for hearing instruments
  • Contributors: Zhang, Tao; McKinney, Martin
  • Published: Acoustical Society of America (ASA), 2018
  • Published in: The Journal of the Acoustical Society of America
  • Language: English
  • DOI: 10.1121/1.5035696
  • ISSN: 0001-4966; 1520-8524
  • Keywords: Acoustics and Ultrasonics; Arts and Humanities (miscellaneous)
  • Description: <jats:p>With advances in wireless technology and sensor miniaturization, more and more non-audio sensors are becoming available to and being integrated into hearing instruments. These sensors not only help improve speech understanding and sound quality and enhance hearing usability, but also expand hearing instruments' capabilities to health and wellness monitoring. However, the introduction of these sensors also presents a new set of challenges to researchers and engineers. Compared with traditional audio sensors for hearing instruments, these new sensor inputs can come from different modalities and often have different scales and sampling frequencies. In some cases, they are nonlinear or unsynchronized with each other. In this presentation, we will review these challenges in detail in the context of hearing instrument applications. Furthermore, we will demonstrate how multimodal signal processing and machine learning can be used to overcome these challenges and bring a greater degree of satisfaction to end users. Finally, future directions in multimodal signal processing and machine learning research for hearing instruments will be discussed.</jats:p>
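The abstract notes that multimodal sensor inputs often differ in scale and sampling frequency and may not be synchronized. A minimal sketch of how such streams can be brought onto a common time grid and a comparable scale is shown below; the sensor names, sampling rates, and signals are illustrative assumptions, not material from the presentation itself.

```python
import numpy as np

# Hypothetical example: align an accelerometer stream (100 Hz) with
# audio feature frames (50 Hz) and put both on a comparable scale.
# Rates and signals are illustrative, not from the presentation.

def zscore(x):
    """Normalize a signal to zero mean and unit variance."""
    return (x - x.mean()) / x.std()

def resample_to(t_src, x_src, t_dst):
    """Linearly interpolate x_src (sampled at t_src) onto the grid t_dst."""
    return np.interp(t_dst, t_src, x_src)

fs_imu, fs_audio = 100.0, 50.0           # sampling rates in Hz
t_imu = np.arange(0, 1, 1 / fs_imu)      # 1 s of IMU timestamps
t_audio = np.arange(0, 1, 1 / fs_audio)  # 1 s of audio-frame timestamps

imu = np.sin(2 * np.pi * 2 * t_imu) * 9.81  # accelerometer, m/s^2 scale
audio = np.sin(2 * np.pi * 2 * t_audio)     # audio feature, unit scale

# Bring the IMU stream onto the audio frame grid, then normalize both
# so a downstream classifier sees inputs on comparable scales.
imu_aligned = zscore(resample_to(t_imu, imu, t_audio))
audio_norm = zscore(audio)

# Joint feature matrix: one row per audio frame, one column per modality.
features = np.stack([imu_aligned, audio_norm], axis=1)
```

In practice, timestamp offsets between wireless sensors would also need to be estimated and compensated before interpolation; this sketch assumes the streams share a common clock.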