• Media type: E-Article
  • Title: Addressing Text-Dependent Speaker Verification Using Singing Speech
  • Contributor: Shi, Yan; Zhou, Juanjuan; Long, Yanhua; Li, Yijie; Mao, Hongwei
  • Imprint: MDPI AG, 2019
  • Published in: Applied Sciences
  • Language: English
  • DOI: 10.3390/app9132636
  • ISSN: 2076-3417
  • Keywords: Fluid Flow and Transfer Processes; Computer Science Applications; Process Chemistry and Technology; General Engineering; Instrumentation; General Materials Science
  • Description: Automatic speaker verification (ASV) has achieved significant progress in recent years. However, it remains very challenging to generalize ASV technologies to new, unknown, and spoofing conditions. Most previous studies focused on extracting speaker information from natural speech. This paper addresses speaker verification from another perspective: speaker identity information is exploited from singing speech. We first designed and released a new corpus for speaker verification based on singing and normal reading speech. Then, speaker discrimination between natural and singing speech was compared and analyzed in different feature spaces. Furthermore, the conventional Gaussian mixture model (GMM), dynamic time warping (DTW), and a state-of-the-art deep neural network (DNN) were investigated and used to build text-dependent ASV systems under different training-test conditions. Experimental results show that the voiceprint information in singing speech is more distinguishable than that in normal speech: a relative equal error rate (EER) reduction of more than 20% was obtained on both the gender-dependent and gender-independent 1 s-1 s evaluation tasks.
  • Access State: Open Access
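The abstract reports results as equal error rate (EER), the operating point at which the false-acceptance rate equals the false-rejection rate. As a minimal sketch (not the paper's evaluation code; the function name and the toy score lists are illustrative assumptions), the EER of a verification system can be estimated from genuine and impostor trial scores like this:

```python
def equal_error_rate(genuine, impostor):
    """Estimate the EER: the threshold point where the false-acceptance
    rate (FAR) and the false-rejection rate (FRR) coincide.

    Scores are similarity scores: higher means "more likely the same speaker".
    """
    # Candidate thresholds: every observed score.
    thresholds = sorted(genuine + impostor)
    best_gap, eer = float("inf"), None
    for t in thresholds:
        far = sum(s >= t for s in impostor) / len(impostor)  # impostors accepted
        frr = sum(s < t for s in genuine) / len(genuine)     # genuines rejected
        if abs(far - frr) < best_gap:                        # closest FAR/FRR crossing
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer


# Toy scores (hypothetical): perfectly separable trials give EER = 0.
print(equal_error_rate([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # 0.0

# One overlapping impostor score out of three pushes the EER to 1/3.
print(equal_error_rate([0.8, 0.6, 0.4], [0.7, 0.3, 0.2]))
```

Under this metric, the "relative 20% reduction" claimed in the abstract means (EER_baseline - EER_singing) / EER_baseline > 0.2, e.g. a baseline EER of 10% dropping below 8%.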