Author

Philip Hays

Abstract

Computer recognition of American Sign Language (ASL) is a computationally intensive task. Although it has generally been performed using powerful lab workstations, this research investigates transcription of static ASL signs using an application on a consumer-level mobile device. The application provides real-time sign-to-text translation by processing a live video stream to detect the ASL alphabet as well as custom signs that perform tasks on the device. In this work several avenues for classification and processing were explored to evaluate performance for mobile ASL transcription. The chosen classification algorithm uses locality preserving projections (LPP) with trained support vector machines (SVMs). Processing was investigated using either the mobile device alone or with cloud assistance. In comparison to the native mobile application, the cloud-assisted application increased classification speed, reduced memory usage, and kept network usage low while only slightly increasing the power required. The resulting distributed solution provides a hard-of-hearing person a new, natural way of interacting with the mobile device while also respecting the network, power, and processing constraints of that device.
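
The following is a minimal sketch, not the thesis's actual implementation, of the LPP-plus-SVM pipeline named in the abstract: a locality preserving projection is learned from labeled feature vectors and an SVM is trained in the reduced space. It assumes fixed-length feature vectors have already been extracted from video frames, and the dimensions, neighbor count, and kernel choice shown here are illustrative placeholders.

```python
# Hypothetical LPP + SVM pipeline sketch (NumPy / SciPy / scikit-learn).
# Feature extraction from the live video stream is assumed to have already
# produced fixed-length vectors; all sizes below are stand-in values.
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph
from sklearn.svm import SVC

def lpp_fit(X, n_components=10, n_neighbors=5, t=1.0):
    """Learn an LPP projection matrix from rows of X (n_samples x n_features)."""
    # Heat-kernel weights on a k-nearest-neighbor graph
    dists = kneighbors_graph(X, n_neighbors, mode="distance", include_self=False)
    W = dists.toarray()
    W[W > 0] = np.exp(-W[W > 0] ** 2 / t)
    W = np.maximum(W, W.T)                        # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])   # small ridge for stability
    # Smallest generalized eigenvectors give the locality-preserving directions
    _, vecs = eigh((A + A.T) / 2, (B + B.T) / 2)
    return vecs[:, :n_components]

# Usage with random stand-in data for ASL alphabet feature vectors
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 64))
y_train = rng.integers(0, 26, size=200)           # 26 letters of the ASL alphabet

P = lpp_fit(X_train, n_components=10)
clf = SVC(kernel="rbf").fit(X_train @ P, y_train)

X_test = rng.normal(size=(5, 64))
print(clf.predict(X_test @ P))                    # predicted letter indices
```

In a cloud-assisted arrangement like the one the abstract describes, the projection and trained SVM could live on the server while the device streams feature vectors, which is one way the reported memory and speed gains could arise; the split shown here is an assumption, not the thesis's measured configuration.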

Library of Congress Subject Headings

Finger spelling--Data processing; American Sign Language--Transliteration--Data processing; Computer vision; Pattern recognition systems; Mobile computing; Cloud computing

Publication Date

10-1-2012

Document Type

Thesis

Department, Program, or Center

Computer Engineering (KGCOE)

Advisor

Melton, Roy

Comments

Note: imported from RIT's Digital Media Library running on DSpace to RIT Scholar Works. Physical copy available through RIT's Wallace Library at: HV2477 .H39 2012

Campus

RIT – Main Campus
