Abstract

To compare methods of displaying speech-recognition confidence in automatic captions, we analyzed eye-tracking and response data from deaf or hard-of-hearing participants viewing videos.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

Publication Date

2017

Comments

© 2017 The authors and California State University, Northridge

Document Type

Article

Department, Program, or Center

School of Information (GCCIS)

Campus

RIT – Main Campus
