Description

The study of human perception has evolved from examining simple tasks performed under reduced laboratory conditions to the examination of complex, real-world behaviors. Virtual environments represent the next step in this evolution: they allow full stimulus control and repeatability for human subjects while providing a testbed for evaluating models of human behavior. Visual resolution varies dramatically across the visual field, dropping by orders of magnitude from central to peripheral vision. Humans move their gaze about a scene several times every second, projecting task-critical areas of the scene onto the central retina. These eye movements are made even when the immediate task does not require high spatial resolution. Such “attentionally-driven” eye movements are important because they provide an externally observable marker of the way subjects deploy their attention while performing complex, real-world tasks. Tracking subjects’ eye movements while they perform complex tasks in virtual environments therefore provides a window into perception. In addition to the ability to track subjects’ eyes in virtual environments, concurrent EEG recording provides a further indicator of cognitive state. We have developed a virtual reality laboratory in which head-mounted displays (HMDs) are instrumented with infrared video-based eyetrackers to monitor subjects’ eye movements while they perform a range of complex tasks, such as driving and manual tasks requiring careful eye-hand coordination. A go-kart mounted on a six-degree-of-freedom (6 DOF) motion platform provides kinesthetic feedback to subjects as they drive through a virtual town; a dual-haptic interface consisting of two SensAble Phantom extended-range devices allows free motion and realistic force feedback within a 1 m³ volume.
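To make the HMD eyetracking setup concrete, the minimal sketch below (not taken from the paper; the function names, coordinate conventions, and numeric values are assumptions for illustration) shows one common way such a pipeline recovers gaze in the virtual scene: the eye-in-head gaze direction reported by the eyetracker is rotated by the tracked head pose to give a world-space gaze ray, which is then intersected with scene geometry to find the point of regard.

```python
# Illustrative sketch only: recovering a world-space gaze ray from head pose
# plus eye-in-head gaze direction, then intersecting it with a scene plane.
import numpy as np

def gaze_ray_world(head_rotation, head_position, gaze_dir_eye):
    """Return (origin, unit direction) of the gaze ray in world coordinates.

    head_rotation : (3, 3) head-to-world rotation matrix from the HMD tracker
    head_position : (3,) eye/head position in world coordinates (metres)
    gaze_dir_eye  : (3,) gaze direction in head coordinates from the eyetracker
    """
    direction = np.asarray(head_rotation, dtype=float) @ np.asarray(gaze_dir_eye, dtype=float)
    return np.asarray(head_position, dtype=float), direction / np.linalg.norm(direction)

def intersect_plane(origin, direction, plane_point, plane_normal):
    """Point where the gaze ray meets a plane in the scene (None if it misses)."""
    denom = float(np.dot(plane_normal, direction))
    if abs(denom) < 1e-9:          # ray parallel to the plane
        return None
    t = float(np.dot(plane_normal, np.asarray(plane_point, dtype=float) - origin)) / denom
    return None if t < 0 else origin + t * direction

# Example: eyes 1.2 m above the ground, head aligned with world axes,
# gaze directed slightly downward and ahead; intersect with the ground plane.
origin, direction = gaze_ray_world(np.eye(3), [0.0, 1.2, 0.0], [0.0, -0.3, -1.0])
hit = intersect_plane(origin, direction, [0.0, 0.0, 0.0], [0.0, 1.0, 0.0])
print(hit)  # approximate ground-plane point of regard, roughly (0, 0, -4)
```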

Date of creation, presentation, or exhibit

5-24-1999

Comments

Copyright 1999 Society of Photo-Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited.

This work was supported in part by NIH Resource Grant P41 RR09283, NIH Grant EY05729, and an RIT College of Science Project Initiation Grant.

Note: imported from RIT’s Digital Media Library (running on DSpace) to RIT Scholar Works in February 2014.

Document Type

Conference Paper

Department, Program, or Center

Chester F. Carlson Center for Imaging Science (COS)

Campus

RIT – Main Campus
