Abstract

Artificial neural networks (ANNs) are powerful machine learning tools with applications in many areas, including speech recognition, image classification, medical diagnosis, and spam filtering. It has been shown that ANNs can approximate any continuous function to any desired degree of accuracy given enough neurons and training time. However, there is no guarantee on the number of neurons required or the time it will take to train them; these are the main disadvantages of using ANNs. This thesis develops an algorithm that uses regression-based techniques to decrease the number of training epochs. A modification of the Delta Rule, combined with techniques established for regression training of single-layer networks, yields much faster training than standard gradient descent in many cases. The algorithm showed statistically significant improvements over standard backpropagation in the number of iterations, total training time, final error, and classifier accuracy in most cases. The algorithm was tested on several datasets of varying complexity, and the results are presented.
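The abstract does not spell out the thesis's modified update rule, so as background only, here is a minimal sketch of the standard Delta Rule (Widrow–Hoff) training loop for a single-layer linear unit that the thesis builds on. The learning rate, epoch count, and toy dataset are illustrative assumptions, not the thesis's actual configuration, and the regression-based modification itself is not shown.

```python
import numpy as np

def delta_rule_train(X, t, lr=0.1, epochs=100):
    """Train a single-layer linear unit with the standard Delta Rule.

    X: (n_samples, n_features) inputs; t: (n_samples,) targets.
    Each per-sample update is w += lr * (t_i - y_i) * x_i, i.e.
    stochastic gradient descent on squared error for a linear output.
    """
    # Append a bias column of ones so the bias is learned as a weight.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(Xb, t):
            y_i = w @ x_i                # linear activation
            w += lr * (t_i - y_i) * x_i  # Delta Rule update
    return w

# Illustrative usage on a toy problem (AND-like targets).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w = delta_rule_train(X, t)
# Rounded outputs approach the targets after training.
print(np.round(np.hstack([X, np.ones((4, 1))]) @ w))
```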

Library of Congress Subject Headings

Neural networks (Computer science); Machine learning; Computer algorithms; Learning classifier systems

Publication Date

7-2012

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Science (MS)

Department, Program, or Center

Computer Science (GCCIS)

Advisor

Zack Butler

Comments

Physical copy available from RIT's Wallace Library at QA76.87 .S44 2014

Campus

RIT – Main Campus
