Video-based eye tracking techniques have become increasingly attractive in many research fields, such as visual perception and human-computer interface design. The technique primarily relies on the positional difference between the center of the eye's pupil and the first-surface reflection at the cornea, the corneal reflection (CR). This difference vector is mapped to determine an observer's point of regard (POR). Current head-mounted video-based eye trackers are limited in several respects, such as inadequate measurement range and misdetection of eye features (pupil and CR). This research first proposes a new "structured illumination" configuration, using multiple IREDs to illuminate the eye, to ensure that eye position can still be tracked even during extreme eye movements (up to ±45° horizontally and ±25° vertically). Eye features are then detected by a two-stage processing approach: first, candidate CRs and the pupil are isolated based on statistical information in the eye image; second, genuine CRs are distinguished by a novel CR location prediction technique based on the well-correlated relationship between the offset of the pupil and that of the CR. The optical relationship between the pupil and CR offsets derived in this thesis applies to the two typical illumination configurations (collimated and near-source) used in video-based eye tracking systems, and the relationship from the optical derivation matches that from experimental measurement well. Finally, two application studies of smooth pursuit dynamics were conducted, in a controlled static (laboratory) environment and in an unconstrained vibrating (car) environment.
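In systems of this kind, the mapping from the pupil-CR difference vector to the POR is typically obtained by fitting a low-order polynomial to data from a calibration grid. The sketch below illustrates that general idea only; the function names and the synthetic calibration data are illustrative assumptions, not the calibration procedure used in this thesis.

```python
import numpy as np

def fit_gaze_mapping(diff_vectors, screen_points):
    """Fit a second-order polynomial mapping from pupil-CR difference
    vectors (dx, dy) to screen coordinates, one polynomial per axis,
    via linear least squares."""
    dx, dy = diff_vectors[:, 0], diff_vectors[:, 1]
    # Design matrix with terms [1, dx, dy, dx*dy, dx^2, dy^2]
    A = np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx**2, dy**2])
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)
    return coeffs  # shape (6, 2): one column per screen axis

def map_gaze(diff_vector, coeffs):
    """Map a single pupil-CR difference vector to a POR estimate."""
    dx, dy = diff_vector
    basis = np.array([1.0, dx, dy, dx * dy, dx**2, dy**2])
    return basis @ coeffs

# Synthetic 9-point calibration grid (toy data for illustration)
screen = np.array([[x, y] for y in (0, 1, 2) for x in (0, 1, 2)], float)
diffs = 0.05 * screen - 0.1          # toy camera-space offsets
coeffs = fit_gaze_mapping(diffs, screen)
por = map_gaze(diffs[4], coeffs)     # recovers the center point [1, 1]
```

Because the polynomial absorbs mild nonlinearities in the camera geometry, this style of mapping works without an explicit 3D eye model, which is one reason it is common in head-mounted video-based trackers.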
In the first study, extended stimuli (color photographs subtending 2° and 17°, respectively) were found to enhance smooth pursuit movements induced by realistic images: the eye velocity for tracking a small dot (subtending <0.1°) saturated at about 64 deg/sec, while saturation occurred at higher velocities for the extended images. The difference in gain due to target size was significant between the dot and the two extended stimuli, while no statistical difference existed between the two extended stimuli. In the second study, the same two visual stimuli as in the first study were used. Visual performance was impaired dramatically by whole-body motion in the car, even when tracking a slowly moving target (2 deg/sec); the eye could not perform a pursuit task as smoothly as in the static environment, even though the unconstrained head motion in the unstable condition was expected to enhance visual performance.
Library of Congress Subject Headings
Eye--Movements--Data processing; Motion perception (Vision)--Computer simulation; Video recordings--Data processing
Department, Program, or Center
Chester F. Carlson Center for Imaging Science (COS)
Li, Feng, "Optimizations and applications in head-mounted video-based eye tracking" (2011). Thesis. Rochester Institute of Technology. Accessed from
RIT – Main Campus