Abstract

The widespread availability of small, inexpensive mobile computing devices, and the desire to connect them at any time and in any place, has driven the need for an accurate means of self-localization. Devices that operate primarily outdoors typically use GPS for localization. However, most mobile computing devices operate not only outdoors but also indoors, where GPS is usually unavailable, so other localization techniques must be used. Several indoor localization systems are commercially available, but most rely on specialized hardware that must be installed both in the mobile device and in the building where it operates. Deploying this additional infrastructure may be infeasible or costly. This work addresses the problem of indoor self-localization of mobile devices without specialized infrastructure, aiming to leverage existing assets rather than deploy new ones.

The problem of self-localization using single- and dual-sensor systems has been well studied. Dual-sensor systems are typically used when the limitations of a single sensor prevent it from meeting the required level of performance and accuracy; a second sensor is then used to complement and improve the measurements of the first. In some cases, more than two sensors can provide better results. This work explored the use of three sensors with complementary characteristics: a positional sensor, an inertial sensor, and a visual sensor. Positional information was obtained via radio localization, acceleration information was obtained via an accelerometer, and visual object identification was performed with a video camera. This system was selected as representative of typical ubiquitous computing devices, which will need to develop an awareness of their environment in order to provide users with contextually relevant information.

As part of this research, a prototype system consisting of a video camera, an accelerometer, and an 802.11g receiver was built. These sensors were chosen for their low cost and ubiquity, and for their ability to complement one another in a self-localization task using existing infrastructure. A discrete Kalman filter was designed to fuse the sensor information and obtain the best possible estimate of the system's position. Experimental results showed that the system, when provided with a reasonable initial position estimate, could determine its position with an average error of 8.26 meters.
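The abstract describes fusing radio-based position fixes, accelerometer readings, and camera observations with a discrete Kalman filter. The sketch below illustrates a generic discrete Kalman filter predict/update cycle for such a setup; it is a minimal example only, and the state layout, sample period, noise covariances, and measurement model are assumptions rather than the configuration used in the thesis.

```python
# Minimal discrete Kalman filter sketch (illustrative; assumed models/values).
# State: [x, y, vx, vy]. Accelerometer readings act as the control input,
# and a Wi-Fi (802.11g) position fix serves as the measurement.
import numpy as np

dt = 0.1  # assumed sample period in seconds

# Constant-velocity motion model driven by measured acceleration.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
B = np.array([[0.5 * dt**2, 0],
              [0, 0.5 * dt**2],
              [dt, 0],
              [0, dt]], dtype=float)
H = np.array([[1, 0, 0, 0],        # only position is observed directly
              [0, 1, 0, 0]], dtype=float)

Q = 0.05 * np.eye(4)               # assumed process noise covariance
R = 9.0 * np.eye(2)                # assumed radio-position noise (m^2)

def predict(x, P, accel):
    """Propagate the state using the accelerometer as control input."""
    x = F @ x + B @ accel
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the prediction with a radio-localization position fix."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: start from a rough initial position estimate and fuse one step.
x = np.array([0.0, 0.0, 0.0, 0.0])         # [x, y, vx, vy]
P = 10.0 * np.eye(4)                        # uncertain initial estimate
x, P = predict(x, P, accel=np.array([0.2, -0.1]))
x, P = update(x, P, z=np.array([1.1, -0.4]))
print(x[:2])                                # fused position estimate
```

In a three-sensor arrangement like the one described, the visual sensor would contribute an additional measurement update (for example, a position constraint from recognizing a known landmark); the single position-measurement update shown here is kept deliberately simple.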

Library of Congress Subject Headings

Ubiquitous computing; Mobile computing; Multisensor data fusion

Publication Date

11-15-2006

Document Type

Thesis

Department, Program, or Center

Computer Engineering (KGCOE)

Advisor

Cockburn, Juan

Advisor/Committee Member

Canosa, Roxanne

Comments

Note: imported from RIT’s Digital Media Library running on DSpace to RIT Scholar Works. Physical copy available through RIT's The Wallace Library at: QA76.5915 .Z36 2006

Campus

RIT – Main Campus
