Author

Jie Zhang

Abstract

With the increasing availability of low-cost digital cameras with small or medium sized sensors, high-resolution airborne images are becoming more widely available, which improves the prospects for building three-dimensional models of urban areas. Highly accurate representations of buildings in urban areas are required for applications such as asset valuation and disaster recovery. Many automatic methods for modeling and reconstruction apply aerial images together with Light Detection and Ranging (LiDAR) data. When LiDAR data are not available, manual steps must be introduced, resulting in a semi-automated technique.

The automated extraction of 3D urban models can be aided by the automatic extraction of dense point clouds: the denser the point cloud, the easier the modeling and the higher the accuracy. In addition, oblique aerial imagery provides more facade information than nadir imagery, such as building height and texture. A method for the automatic extraction of dense point clouds from oblique images is therefore desired.

In this thesis, a modified workflow for the automated extraction of dense point clouds from oblique images is proposed and tested. The results show that the modified workflow works well: a very dense point cloud can be extracted from only two oblique images, with slightly higher accuracy in flat areas than the point cloud extracted by the original workflow.

The original workflow was established by previous research at the Rochester Institute of Technology (RIT) for point cloud extraction from nadir images. For oblique images, the first modification replaces the Scale-Invariant Feature Transform (SIFT) algorithm in the feature detection step with the Affine Scale-Invariant Feature Transform (ASIFT) algorithm. In the second modification, to obtain a very dense point cloud, the Semi-Global Matching (SGM) algorithm is implemented to compute the disparity map from a stereo image pair, from which pixels are reprojected back into a point cloud. A noise removal step is added as the third modification. The resulting point cloud is much denser than the result from the original workflow.
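As a rough illustration of the disparity-map-to-point-cloud idea described above (not the thesis implementation), the sketch below uses OpenCV's StereoSGBM matcher, which is a semi-global matching variant, on a rectified stereo pair. The file names and the reprojection matrix Q are assumed inputs (Q would normally come from stereo rectification).

```python
import cv2
import numpy as np

# Minimal sketch, assuming a rectified stereo pair "left.png"/"right.png"
# and a known 4x4 disparity-to-depth matrix Q (e.g., from cv2.stereoRectify).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block = 5
sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=256,        # search range; must be divisible by 16
    blockSize=block,
    P1=8 * block * block,      # penalty for small disparity changes (smoothness)
    P2=32 * block * block,     # penalty for large disparity jumps
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# compute() returns fixed-point disparities scaled by 16
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0

Q = np.load("Q.npy")                           # assumed precomputed
points_3d = cv2.reprojectImageTo3D(disparity, Q)

# Keep only pixels with a valid (positive) disparity to form the dense cloud
mask = disparity > 0.0
point_cloud = points_3d[mask]                  # (N, 3) array of X, Y, Z
print(point_cloud.shape)
```

Every valid pixel of the disparity map contributes a 3D point, which is why a matching-based dense step yields far more points than feature-based triangulation alone.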

Finally, an accuracy assessment is performed to evaluate the point cloud extracted from the modified workflow. In two flat areas, subsets of points are selected from the results of both the original and the modified workflow, and a plane is fitted to each subset. The Mean Squared Error (MSE) of the points with respect to the fitted plane is then compared. The point subsets from the modified workflow have slightly lower MSEs than those from the original workflow. This suggests that a much denser and more accurate point cloud can yield clearer roof borders for roof extraction and improve the prospects for 3D feature detection in 3D point cloud registration.
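The flat-area check described above can be sketched as a least-squares plane fit followed by an MSE of the residuals. The function and the point subsets below are hypothetical placeholders, not the thesis code.

```python
import numpy as np

def plane_mse(points):
    """Fit a plane z = a*x + b*y + c to an (N, 3) point subset from a
    nominally flat area and return the mean squared residual to that plane."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)   # [a, b, c]
    residuals = z - A @ coeffs
    return float(np.mean(residuals ** 2))

# Hypothetical usage: compare subsets over the same flat roof from each workflow
# original_subset, modified_subset = ...            # (N, 3) arrays
# print(plane_mse(original_subset), plane_mse(modified_subset))
```

A lower MSE for the modified workflow's subset indicates its points lie closer to a common plane in the flat region, which is how the comparison in the abstract is framed.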

Publication Date

11-20-2013

Document Type

Thesis

Student Type

Graduate

Degree Name

Imaging Science (MS)

Department, Program, or Center

Chester F. Carlson Center for Imaging Science (COS)

Advisor

John Kerekes

Comments

Physical copy available from RIT's Wallace Library at TA1637 .Z53 2013

Campus

RIT – Main Campus

Plan Codes

IMGS-MS
