We present a novel training algorithm for a feed-forward neural network with a single hidden layer of nodes (i.e., two layers of connection weights). Our algorithm is capable of training networks for hard problems, such as the classic two-spirals problem. The weights in the first layer are determined using a quasirandom number generator. These weights are frozen---they are never modified during the training process. The second layer of weights is trained as a simple linear discriminator using methods such as the pseudo-inverse, with possible iterations. We also study reducing the hidden layer, both by pruning low-weight nodes and by a genetic-algorithm search for good node subsets.
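A minimal sketch of the idea described in the abstract: the first-layer weights are drawn from a quasirandom (Halton) sequence and frozen, and the output layer is solved in closed form with the pseudo-inverse. The class and function names here (`halton`, `QuasirandomNet`) and the use of a tanh hidden layer are illustrative assumptions, not the paper's exact construction; the demo problem is XOR rather than the two-spirals task.

```python
import numpy as np

def halton(n, dims):
    """Generate n points of the Halton quasirandom sequence in [0, 1)^dims."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    out = np.empty((n, dims))
    for d in range(dims):
        base = primes[d]
        for i in range(n):
            f, x, k = 1.0, 0.0, i + 1
            while k > 0:
                f /= base
                x += f * (k % base)
                k //= base
            out[i, d] = x
    return out

class QuasirandomNet:
    """Single-hidden-layer net: frozen quasirandom first layer,
    pseudo-inverse-trained linear output layer (illustrative sketch)."""

    def __init__(self, n_inputs, n_hidden):
        # First-layer weights from a Halton sequence, mapped to [-1, 1];
        # the extra column is a bias weight. These are never updated.
        pts = halton(n_hidden, n_inputs + 1)
        self.W = 2.0 * pts - 1.0
        self.beta = None  # output weights, set by fit()

    def _hidden(self, X):
        # Append a constant-1 bias input, then apply the frozen layer.
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        return np.tanh(Xb @ self.W.T)

    def fit(self, X, y):
        # Train only the output layer: a least-squares linear fit
        # from hidden activations to targets via the pseudo-inverse.
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Tiny demonstration on XOR with targets in {-1, +1}.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1.0, 1.0, 1.0, -1.0])
net = QuasirandomNet(n_inputs=2, n_hidden=20).fit(X, y)
print(np.sign(net.predict(X)))
```

Because the hidden layer is fixed, "training" reduces to one linear solve; pruning the hidden layer, as the abstract describes, would then amount to dropping hidden nodes whose output weights in `beta` are small and re-solving.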
Department, Program, or Center: Chester F. Carlson Center for Imaging Science (COS)
Anderson, Peter; Gaborski, Roger; Ge, Ming; and Raghavendra, Sanjay, "Using quasirandom numbers in neural networks" (1995).
RIT – Main Campus