We discuss issues of Artificial Intelligence (AI) fairness for people with disabilities, with examples drawn from our research on human-computer interaction (HCI) for AI-based systems for people who are Deaf or Hard of Hearing (DHH). In particular, we discuss the need to include data from people with disabilities in training sets, the lack of interpretability of AI systems, the ethical responsibilities of access technology researchers and companies, the need for appropriate evaluation metrics for AI-based access technologies (to determine whether they are ready to be deployed and whether users can trust them), and the ways in which AI systems shape both human behavior and the set of abilities users need to interact successfully with computing systems.
Department, Program, or Center: School of Information (GCCIS)
Sushant Kafle, Abraham Glasser, Sedeeq Al-khazraji, Larwan Berke, Matthew Seita, and Matt Huenerfauth. 2020. Artificial intelligence fairness in the context of accessibility research on intelligent systems for people who are deaf or hard of hearing. SIGACCESS Access. Comput., 125, Article 4 (October 2019), 1 page. DOI: https://doi.org/10.1145/3386296.3386300
Campus: RIT – Main Campus