Abstract

The proliferation of voice-controlled personal-assistant devices poses accessibility barriers for Deaf and Hard of Hearing (DHH) users. An estimated 128 million people use a voice assistant in the United States, and 4.2 billion digital voice assistants are in use worldwide, e.g., through the more than 60,000 smart home devices that support Amazon's Alexa, one example of such a virtual assistant [97,134]. The coronavirus pandemic, ongoing at the time of this dissertation, has accelerated the uptake of home-based, voice-controlled devices. As artificial intelligence researchers and developers work on American Sign Language (ASL) recognition technologies, Human-Computer Interaction (HCI) research is needed to understand what DHH users may want from this technology and how best to design the interaction. This dissertation's contributions are organized into four parts:

1. Part I: DHH Interest. This part addresses the gap in knowledge about DHH users' experience with personal assistant devices, shedding light on their prior experience with this technology and their interest in devices that could understand sign-language commands. This insight supports ongoing technological advancement in sign-language recognition.

2. Part II: Dataset Collection. Given that an ASL data bottleneck constrains these technologies, this part investigates remote ASL data collection at scale, creating an online sign-language data collection platform and testing its viability. Extending this, a continuous-signing data collection platform was built and tested. This part also employs a remote data collection protocol using a Wizard-of-Oz prototype personal assistant device, allowing DHH users to interact spontaneously with such a device in sign language. The data collected through this protocol is described, and an in-person Wizard-of-Oz experiment is conducted to investigate aspects that could not be studied through the remote protocol.

3. Part III: DHH Behavior. Building on the remote Wizard-of-Oz methodology in Part II, this part presents an analysis of that data and addresses the gap in knowledge about ASL interaction with personal assistant devices. It also analyzes the in-person Wizard-of-Oz experiment mentioned in Part II, showing what we have learned about the linguistic properties of in-person interaction.

4. Part IV: Privacy Concerns. This final part addresses a theme that recurred throughout Parts I-III: privacy concerns. It utilizes state-of-the-art image-processing technology and guides the development of ASL-optimized face-disguise technology to protect DHH users' anonymity to their satisfaction. Lastly, a small interview study was appended to the in-person Wizard-of-Oz experiment to further confirm whether DHH users would be more comfortable using a personal assistant device if face-disguise technology were embedded.

Library of Congress Subject Headings

Intelligent personal assistants (Computer software)--Design; Gesture recognition (Computer science); American Sign Language--Data processing

Publication Date

2-2023

Document Type

Dissertation

Student Type

Graduate

Degree Name

Computing and Information Sciences (Ph.D.)

Department, Program, or Center

Computer Science (GCCIS)

Advisor

Matt Huenerfauth

Advisor/Committee Member

Kristen Shinohara

Advisor/Committee Member

Roshan Peiris

Campus

RIT – Main Campus

Plan Codes

COMPIS-PHD
