This research aims to enhance a video-based eye tracker's ability to detect small eye movements. Chaudhary and Pelz (2019) laid an excellent foundation with their motion tracking of iris features to detect small eye movements, in which they successfully used classical handcrafted feature extraction methods such as the Scale Invariant Feature Transform (SIFT) to match features across iris image frames. They extracted features from the eye-tracking videos and then used a patented approach that tracks the geometric median of the feature displacement distribution. This approach excludes outliers, and velocity is approximated by scaling the displacement by the sampling rate. To detect microsaccades (small, rapid, involuntary eye movements that occur during fixation), that work applied a threshold to the estimated velocity. Our goal is to create a robust mathematical model of the 2D feature distribution within this approach. To this end, we worked in two steps. First, we studied a number of recent deep learning approaches, along with classical handcrafted feature extractors such as SIFT, to extract features from the eye-tracker videos collected by the Multidisciplinary Vision Research Lab (MVRL), and then identified the best matching process for our RIT-Eyes dataset. The aim is to make the feature extraction as robust as possible. Second, we show that deep learning methods detect more feature points on the iris images and match the extracted features frame by frame more accurately than the classical approach.
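The velocity estimate described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual implementation: it computes the geometric median of per-frame feature displacements (via Weiszfeld's iteration, a standard algorithm for the geometric median) and scales by the sampling rate, so that a few mismatched features barely shift the estimate. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def geometric_median(points, eps=1e-6, max_iter=200):
    # Weiszfeld's algorithm: an iteratively re-weighted mean that converges
    # to the point minimizing the sum of Euclidean distances to all samples.
    points = np.asarray(points, dtype=float)
    m = points.mean(axis=0)
    for _ in range(max_iter):
        # Clip distances away from zero to avoid division blow-up when the
        # current estimate lands on a data point.
        d = np.maximum(np.linalg.norm(points - m, axis=1), eps)
        w = 1.0 / d
        new_m = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(new_m - m) < eps:
            return new_m
        m = new_m
    return m

def frame_velocity(displacements, sampling_rate_hz):
    # Robust per-frame eye velocity: geometric median of the matched-feature
    # displacement distribution (pixels/frame), scaled by the sampling rate
    # to give pixels/second. Outlying (mismatched) features have little pull.
    return geometric_median(displacements) * sampling_rate_hz
```

A microsaccade detector along the lines described above would then simply flag frames where the norm of this velocity exceeds a chosen threshold.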
Library of Congress Subject Headings
Eye tracking--Data processing; Digital video--Data processing; Machine learning; Pattern recognition systems; Biometric identification
Applied Statistics (MS)
Jabin, Anisia, "Video-based iris feature extraction and matching using Deep Learning" (2020). Thesis. Rochester Institute of Technology. Accessed from
RIT – Main Campus