Abstract

A silent speech interface is a system that enables people to communicate through speech without using their own speech sounds. Today, a variety of speech interfaces have been developed using biological signals such as eye movement and articulatory motion. These interfaces mainly support people with speech disorders in communicating with others, yet many speech disorders remain unaddressed by current technologies. A likely cause is the limited number of biological signals used for speech interfaces, so the unaddressed disorders could be served by identifying new biological signals for speech interface development. Therefore, we aim to find new biological signals that can be used to develop speech interfaces. The signals we focused on were vocal fold vibration and brain waves. After measuring the data and extracting features, we verified whether these data can be used to classify speech sounds with machine learning models: a Support Vector Machine for the vocal fold vibration and an Echo State Network for the brain waves. As a result, using the vocal fold vibration signals, Japanese vowels could be classified with 71% accuracy on average; using the brain waves, five different consonants were classified with 28.3% accuracy on average. These findings indicate that vocal fold vibration signals and brain waves may serve as new biological signals for speech interface development. This study also revealed several improvements to be considered in future work that may lead to higher classification accuracy.
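As a rough illustration of the vowel-classification step described above, the sketch below trains a Support Vector Machine on synthetic feature vectors for the five Japanese vowels. The features, class separation, and dimensionality here are hypothetical stand-ins; the thesis's actual vocal-fold-vibration features and data are not reproduced.

```python
# Hedged sketch: SVM classification of five vowel classes on SYNTHETIC
# features (not the thesis's real vocal fold vibration data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
vowels = ["a", "i", "u", "e", "o"]

# Stand-in data: 40 samples per vowel, 12-dimensional feature vectors
# clustered around a distinct per-class mean.
X = np.vstack([rng.normal(loc=i, scale=0.5, size=(40, 12))
               for i in range(len(vowels))])
y = np.repeat(vowels, 40)

# Hold out 25% of samples for evaluation, stratified by vowel class.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = SVC(kernel="rbf").fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In a real pipeline, `X` would be replaced by features extracted from the measured vibration signals, and accuracy would be estimated with cross-validation rather than a single split.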

Publication Date

2-2022

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)

Advisor

Andres Kwasinski

Advisor/Committee Member

Corey Merkel

Advisor/Committee Member

Minoru Nakazawa

Campus

RIT – Main Campus
