Author

Divya Mandloi

Abstract

This thesis describes the ongoing development of an image processing technique for translating American Sign Language (ASL) finger-spelling to text. The present analysis is phase one of a broader effort, the Sign2 Project, which pursues a complete technological approach to translating ASL into digital audio and/or text. The methodology adopted here employs a gray-scale image processing technique to convert ASL finger-spelling to text: static images of the subject are processed and then matched against a statistical database of pre-processed images to recognize the specific set of signed letters. This phase of the Sign2 Project considers only the hand of the subject rather than the entire subject, as its scope is restricted to recognizing finger-spelling and not ASL as a whole. Because the approach is vision-based, the amount of processing is minimized compared to other approaches, making it a viable technique for real-time systems. Devices such as kiosks and PDAs could incorporate this technology to enable communication between hearing and non-hearing individuals who are geographically separated, with the short run times that real-time systems require. In this investigation, I describe the approach to the phase one problem and demonstrate the results derived, in which several words are distinguished and recognized with a fairly high degree of reliability.
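The matching step outlined above — comparing a gray-scale hand image against a database of pre-processed reference images — can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the nearest-template classifier, the mean-squared-error metric, and all names here are illustrative assumptions.

```python
import numpy as np

def classify_sign(image, templates):
    """Return the letter whose reference template best matches `image`.

    image: 2-D gray-scale array (H x W), pixel values 0-255.
    templates: dict mapping letter -> 2-D array of the same shape,
               e.g. the mean of pre-processed training images
               (a stand-in for the thesis's statistical database).
    """
    best_letter, best_err = None, float("inf")
    for letter, tmpl in templates.items():
        # Mean-squared pixel error between the query and this template.
        err = np.mean((image.astype(float) - tmpl.astype(float)) ** 2)
        if err < best_err:
            best_letter, best_err = letter, err
    return best_letter

# Toy example: two uniform 4x4 "templates" and a query close to "A".
templates = {
    "A": np.full((4, 4), 50.0),
    "B": np.full((4, 4), 200.0),
}
query = np.full((4, 4), 55.0)
print(classify_sign(query, templates))  # -> A
```

A vision-based pipeline of this shape keeps per-frame work to simple array arithmetic, which is consistent with the abstract's claim that minimal processing suits real-time deployment.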

Publication Date

2006

Document Type

Thesis

Student Type

Graduate

Degree Name

Telecommunications Engineering Technology (MS)

Department, Program, or Center

Electrical, Computer and Telecommunications Engineering Technology (CAST)

Advisor

Glenn, Chance

Comments

Note: imported from RIT’s Digital Media Library running on DSpace to RIT Scholar Works in December 2013.

Campus

RIT – Main Campus
