• Media type: E-Article
  • Title: Accessible Video Calling : Enabling Nonvisual Perception of Visual Conversation Cues
  • Contributor: Shi, Lei; Tomlinson, Brianna J.; Tang, John; Cutrell, Edward; McDuff, Daniel; Venolia, Gina; Johns, Paul; Rowan, Kael
  • Imprint: Association for Computing Machinery (ACM), 2019
  • Published in: Proceedings of the ACM on Human-Computer Interaction, 3 (2019) CSCW, pages 1-22
  • Language: English
  • DOI: 10.1145/3359233
  • ISSN: 2573-0142
  • Keywords: Computer Networks and Communications ; Human-Computer Interaction ; Social Sciences (miscellaneous)
  • Description: Nonvisually Accessible Video Calling (NAVC) is a prototype that detects visual conversation cues in a video call and uses audio cues to convey them to a user who is blind or low-vision. NAVC uses audio cues inspired by movie soundtracks to convey Attention, Agreement, Disagreement, Happiness, Thinking, and Surprise. When designing NAVC, we partnered with people who are blind or low-vision through a user-centered design process that included need-finding interviews and design reviews. To evaluate NAVC, we conducted a user study with 16 participants. The study provided feedback on the NAVC prototype and showed that the participants could easily discern some cues, like Attention and Agreement, but had trouble distinguishing others. The accuracy of the prototype in detecting conversation cues emerged as a key concern, especially in avoiding false positives and in detecting negative emotions, which tend to be masked in social conversations. This research identified challenges and design opportunities in using AI models to enable accessible video calling.