American Sign Language (ASL) is a visual-gestural language used by many people who are deaf or hard of hearing. In this paper, we design a visual recognition system based on action recognition techniques to recognize individual ASL signs. Specifically, we focus on recognizing words in videos of continuous ASL signing. The proposed framework combines multiple signal modalities because ASL involves gestures of both hands, body movements, and facial expressions. We have collected a corpus of RGB + depth videos of multi-sentence ASL performances from both fluent signers and ASL students; this corpus served as the source of training and testing sets for the evaluation experiments reported in this paper. Experimental results demonstrate that the proposed framework can automatically recognize ASL.
Department, Program, or Center
Information Sciences and Technologies (GCCIS)
C. Zhang, Y. Tian and M. Huenerfauth, "Multi-modality American Sign Language recognition," 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, 2016, pp. 2881-2885. doi: 10.1109/ICIP.2016.7532886
RIT – Main Campus