Abstract

Lip-syncing facial models is a large area of research within the animation industry. The mouth is a complex facial feature to animate, so multiple techniques have arisen to simplify the process. These techniques, however, can lead to unappealingly flat animations that lack full facial expression, or to eerie, over-expressive animations that make the viewer uneasy. This thesis proposes an animation system that produces natural speech movements while conveying facial expression, and compares its results to those of previous techniques. The system used a text transcript of the dialogue to generate a phoneme-to-blend-shape map that drove the facial model. An actor was motion captured to record the audio, provide speech motion data, and directly control facial expression in the regions of the face other than the mouth. The actor's speech motion and the phoneme-to-blend-shape map worked in conjunction to create a final lip-synced animation, which viewers compared against phonetically driven animation and animation created from motion capture alone. In this comparison, the proposed system's animation was the least preferred, while a dampened motion-capture animation was the most preferred.
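
For readers unfamiliar with the technique, the short Python sketch below illustrates what a phoneme-to-blend-shape map of the kind described in the abstract might look like. It is a minimal, hedged example: the phoneme symbols, blend-shape names, weights, and timing scheme are illustrative assumptions, not the actual rig or mapping used in the thesis.

# Illustrative sketch of a phoneme-to-blend-shape mapping.
# All phoneme symbols and blend-shape names here are hypothetical,
# not the thesis's actual rig or data.

# Each phoneme maps to a set of blend-shape weights in [0, 1].
PHONEME_TO_BLENDSHAPES = {
    "AA": {"jaw_open": 0.8, "lips_wide": 0.3},    # as in "father"
    "IY": {"lips_wide": 0.7, "jaw_open": 0.2},    # as in "see"
    "UW": {"lips_pucker": 0.9, "jaw_open": 0.3},  # as in "boot"
    "M":  {"lips_closed": 1.0},                   # bilabial closure
    "F":  {"lower_lip_tuck": 0.8},                # labiodental
}

def keyframes_for_phonemes(timed_phonemes):
    """Convert (phoneme, start_seconds) pairs into (time, shape, weight) keyframes."""
    keys = []
    for phoneme, start in timed_phonemes:
        weights = PHONEME_TO_BLENDSHAPES.get(phoneme, {})
        for shape, weight in weights.items():
            keys.append((start, shape, weight))
    return keys

if __name__ == "__main__":
    # Example: the word "me", /M IY/, with assumed timings.
    for key in keyframes_for_phonemes([("M", 0.00), ("IY", 0.12)]):
        print(key)

In a full pipeline of the kind the abstract outlines, keyframes generated this way for the mouth region would be combined with motion-capture data driving the rest of the face; how the two streams are weighted and blended is a design choice of the system and is not shown here.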

Library of Congress Subject Headings

Computer animation--Technique; Facial expression--Data processing; Lips--Movements--Computer simulation

Publication Date

5-30-2017

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Science (MS)

Department, Program, or Center

Computer Science (GCCIS)

Advisor

Joe Geigel

Advisor/Committee Member

Alejandro Perez Sanchez

Advisor/Committee Member

Reynold Bailey

Comments

Physical copy available from RIT's Wallace Library at TR897.7 .M34 2017

Campus

RIT – Main Campus

Plan Codes

COMPSCI-MS
