The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is an established, first-principles-based scene simulation tool that produces synthetic multispectral and hyperspectral images from the visible through the longwave infrared (0.4 to 20 microns). Over the last few years, significant enhancements such as spectral polarimetric and active Light Detection and Ranging (LIDAR) models have been incorporated into the software, providing an extremely powerful tool for algorithm testing and sensor evaluation. However, the extensive time required to create large-scale scenes has limited DIRSIG’s ability to generate scenes “on demand.” To date, scene generation has been a laborious, time-intensive process, as the terrain model, CAD objects, and background maps must be created and attributed manually. To shorten this process, we are initiating a research effort that aims to reduce the man-in-the-loop requirements for several aspects of synthetic hyperspectral scene construction. Through a fusion of 3D LIDAR data with passive imagery, we are working to semi-automate several of the required tasks in the DIRSIG scene creation process. Many of the remaining tasks will also realize a shortened implementation time through this application of multi-modal imagery. This paper reports on the progress made thus far in achieving these objectives.

Date of creation, presentation, or exhibit

17-21 April 2006

“Semi-automated DIRSIG scene modeling from 3D LIDAR and passive imaging sources,” Proceedings of Laser Radar Technology and Applications XI, SPIE vol. 6214 (2006). Presented at the Defense and Security Symposium, Gaylord Palms Resort and Convention Center, Orlando, Florida, 17-21 April 2006. Copyright 2006 Society of Photo-Optical Instrumentation Engineers. This paper was published in Proceedings of Laser Radar Technology and Applications XI, SPIE vol. 6214, and is made available as an electronic reprint with permission of SPIE. One print or electronic copy may be made for personal use only. Systematic or multiple reproduction, distribution to multiple locations via electronic or other means, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. This work has been supported in part by the NGA under University Research Initiative HM1582-05-1-2005, “Automated Imagery Analysis and Scene Modeling.”

Note: imported from RIT’s Digital Media Library running on DSpace to RIT Scholar Works in February 2014.

Document Type

Conference Proceeding

Department, Program, or Center

Chester F. Carlson Center for Imaging Science (COS)


RIT – Main Campus