Abstract

Deep neural networks for video-based eye tracking have demonstrated resilience to noisy environments, stray reflections, and low resolution. However, training these networks requires a large number of manually annotated images. To alleviate the cumbersome process of manual labeling, computer graphics rendering is employed to automatically generate a large corpus of annotated eye images under various conditions. In this work, we introduce a synthetic eye image and video generation platform called RIT-Eyes that improves upon previous work by adding features such as an active deformable iris, an aspherical cornea, retinal retro-reflection, and gaze-coordinated eyelid deformations. To demonstrate the utility of our platform, we render images reflecting the gaze distributions represented in two publicly available eye image datasets, NVGaze and OpenEDS. We also render two datasets that mimic the characteristics of the Pupil Labs Core mobile eye tracker. Our platform enables users to render realistic eye images by providing parameters for camera position, illuminator position, and head and eye pose. The pipeline can also be used to render temporal sequences of realistic eye movements captured in datasets such as Gaze-in-Wild.
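
For illustration, the following is a minimal sketch of the kind of render configuration the abstract describes: camera position, illuminator positions, and head and eye pose supplied as parameters. The structure and parameter names here are hypothetical and do not reflect the actual RIT-Eyes interface.

```python
# Hypothetical sketch (not the actual RIT-Eyes API): bundling the parameters
# named in the abstract -- camera pose, illuminator positions, head pose,
# and gaze direction -- into a single render configuration.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class RenderConfig:
    camera_position: Vec3              # eye-camera location (assumed mm, eye-centered)
    camera_orientation: Vec3           # camera rotation as Euler angles (degrees)
    illuminator_positions: List[Vec3]  # IR illuminator locations for glint simulation
    head_pose: Vec3                    # head rotation (degrees)
    gaze_direction: Vec3               # horizontal/vertical gaze angles plus torsion
    pupil_diameter_mm: float = 4.0     # would drive a deformable-iris model

def render_eye_image(config: RenderConfig) -> None:
    """Placeholder for the renderer; a real pipeline would pass these
    parameters to the graphics engine and return an annotated eye image."""
    print(f"Rendering frame with gaze {config.gaze_direction} "
          f"and {len(config.illuminator_positions)} illuminator(s)")

if __name__ == "__main__":
    cfg = RenderConfig(
        camera_position=(20.0, -10.0, 35.0),
        camera_orientation=(0.0, 160.0, 0.0),
        illuminator_positions=[(15.0, -12.0, 30.0), (25.0, -12.0, 30.0)],
        head_pose=(0.0, 5.0, 0.0),
        gaze_direction=(10.0, -5.0, 0.0),
    )
    render_eye_image(cfg)
```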

Library of Congress Subject Headings

Eye tracking--Data processing; Eye--Imaging; Neural networks (Computer science); Machine learning

Publication Date

6-2020

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Science (MS)

Department, Program, or Center

Computer Science (GCCIS)

Advisor

Reynold Bailey

Campus

RIT – Main Campus

Plan Codes

COMPSCI-MS
