Abstract

This thesis investigates a method for using contextual information in text recognition. The approach is based on the premise that, while reading, humans recognize words with missing or garbled characters by examining the surrounding characters and then selecting the appropriate one; the correct character is chosen using an inherent knowledge of the language and its spelling conventions, and this process can be modeled statistically. The approach taken in this thesis combines feature extraction techniques, neural networks, and hidden Markov modeling. Character recognition proceeds in three steps: pixel image preprocessing, neural network classification, and context interpretation. Pixel image preprocessing applies a feature extraction algorithm to the original bit-mapped images, producing a feature vector for each image that is input to a neural network. The neural network performs the initial classification of the characters by producing ten weights, one for each character class. The magnitude of a weight is interpreted as the confidence the network has in that choice: the greater the magnitude and the separation from the other weights, the more confident the network is of a given choice. The output of the neural network is the input to a context interpreter. The context interpreter uses hidden Markov modeling (HMM) techniques to determine the most probable classification for each character based on the characters that precede it and on character-pair statistics. The HMMs are built using a priori knowledge of the language: a statistical description of digram probabilities. Experimentation and verification of this method combine the development and use of a preprocessor program, a Cascade Correlation neural network, and an HMM context interpreter program. Results from these experiments show that the neural network correctly classified 88.2 percent of the characters. At the word level, 63 percent of the words were correctly identified. Adding hidden Markov modeling improved word recognition to 82.9 percent.
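The context-interpretation step described above can be pictured as Viterbi decoding over the network's outputs. The sketch below is a minimal illustration, not the thesis code: the digit alphabet, the probability tables, and the function name are assumptions, and the network's confidence values are simply treated as emission scores combined with digram transition probabilities.

```python
import math

# Illustrative sketch only (hypothetical names and tables, not the thesis
# implementation): combine per-character network confidences with digram
# statistics via Viterbi decoding to pick the most probable word.

ALPHABET = "0123456789"  # ten character classes, as described in the abstract


def viterbi_decode(net_scores, digram_prob, prior_prob):
    """Return the most probable character sequence.

    net_scores  -- one dict per character position, mapping each class to the
                   neural network's confidence (used here as an emission score)
    digram_prob -- digram_prob[a][b] = P(next char is b | previous char is a)
    prior_prob  -- prior_prob[a] = P(first char is a)
    Probabilities are assumed to be smoothed so none are exactly zero.
    """
    # Log-probability of the best path ending in each class at position 0.
    best = {c: math.log(prior_prob[c]) + math.log(net_scores[0][c])
            for c in ALPHABET}
    backpointers = []

    # Dynamic-programming pass over the remaining character positions.
    for scores in net_scores[1:]:
        new_best, pointers = {}, {}
        for c in ALPHABET:
            prev, p = max(((a, best[a] + math.log(digram_prob[a][c]))
                           for a in ALPHABET), key=lambda t: t[1])
            new_best[c] = p + math.log(scores[c])
            pointers[c] = prev
        backpointers.append(pointers)
        best = new_best

    # Trace back from the highest-scoring final class.
    last = max(best, key=best.get)
    path = [last]
    for pointers in reversed(backpointers):
        last = pointers[last]
        path.append(last)
    return "".join(reversed(path))
```

Under this reading, a low-confidence or garbled character is resolved in favor of whichever class makes the whole sequence most probable given the digram statistics, which is how the abstract's word-level accuracy can exceed what the per-character network scores alone would give.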

Library of Congress Subject Headings

Optical character recognition devices; Optical pattern recognition; Neural networks (Computer science); Hidden Markov models

Publication Date

1992

Document Type

Thesis

Department, Program, or Center

Computer Science (GCCIS)

Advisor

Not Listed

Comments

Note: imported from RIT’s Digital Media Library running on DSpace to RIT Scholar Works. Physical copy available through RIT's The Wallace Library at: TA1640 .E44 1992

Campus

RIT – Main Campus
