• Media type: E-Article
  • Title: Help Them Understand: Testing and Improving Voice User Interfaces
  • Contributor: Guglielmi, Emanuela; Rosa, Giovanni; Scalabrino, Simone; Bavota, Gabriele; Oliveto, Rocco
  • Published: Association for Computing Machinery (ACM), 2024
  • Published in: ACM Transactions on Software Engineering and Methodology (2024)
  • Language: English
  • DOI: 10.1145/3654438
  • ISSN: 1049-331X; 1557-7392
  • Description: Voice-based virtual assistants are becoming increasingly popular. Such systems provide frameworks that developers can use to build custom apps. End-users interact with these apps through a Voice User Interface (VUI), which lets them perform actions using natural-language commands. Testing such apps is not trivial: the same command can be expressed in many semantically equivalent ways. In this paper, we introduce VUI-UPSET, an approach that adapts chatbot-testing techniques to VUI testing. We conducted an empirical study to understand how VUI-UPSET compares to two state-of-the-art approaches (i.e., a chatbot-testing technique and ChatGPT) in terms of (i) the correctness of the generated paraphrases and (ii) their capability of revealing bugs. To this aim, we analyzed 14,898 generated paraphrases for 40 Alexa Skills. Our results show that VUI-UPSET generates more bug-revealing paraphrases than the two baselines, although ChatGPT generates the highest percentage of correct paraphrases. We also used the generated paraphrases to improve the skills, including in their voice interaction models (i) only the bug-revealing paraphrases or (ii) all the valid paraphrases. We observed that including only the bug-revealing paraphrases is sometimes not sufficient to make all the tests pass.