Artificial neural networks (ANNs) have become widely used over the past decade due to their excellent performance in applications such as object recognition and classification, time series forecasting, and machine translation. Key factors contributing to the success of ANNs in these domains include the availability of large datasets and steady improvements in powerful parallel processing architectures such as graphics processing units (GPUs). However, the growing size and complexity of modern ANN models preclude their deployment on resource-constrained hardware for edge applications. Neuromorphic computing seeks to address this challenge by using brain-inspired principles to co-design ANN hardware and algorithms for reduced size, weight, and power (SWaP) consumption. Unfortunately, standard gradient-based ANN training algorithms such as backpropagation incur a large overhead when implemented in neuromorphic hardware. In addition, the gradient calculations required by backpropagation are especially challenging to implement in neuromorphic hardware, which typically exhibits low precision, functional discontinuities, non-linear weights, and several other non-ideal behaviors. This thesis proposes a novel circuit design for training memristor-based neuromorphic systems using the weighted sum simultaneous perturbation algorithm (WSSPA). WSSPA is a zeroth-order ANN training algorithm that estimates loss gradients from perturbations of neuron inputs, avoiding the high-overhead circuitry needed for backpropagation. The training circuit optimizes the ANN by adjusting the conductances of the memristor-based weights using the loss gradient estimates. Current-mode design techniques, including the translinear principle, significantly reduce the number of transistors required compared with existing implementations, thereby improving SWaP efficiency.
The proposed circuit consists of a sample-and-hold circuit, an error calculation circuit, a memristor threshold voltage selector, and synapse input selector circuits. The design was implemented in LTspice using 45 nm predictive technology MOSFET models and a simple linear memristor model fit to published experimental data. The proposed hardware was demonstrated by training an ANN to perform Boolean logic functions such as XOR. Results show that the current-mode WSSPA circuit converges on the correct functionality within 16 training iterations, after which the ANN output current oscillates around the target. Areas for future work include making the feedback adjustment voltage dependent on the error magnitude rather than only its sign, which would allow the error to converge to zero after training and eliminate the output oscillations. In addition, future studies should confirm the scalability of the proposed design to larger neural networks.
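The zeroth-order training principle described above can be illustrated with a small software sketch. The snippet below uses a generic simultaneous-perturbation gradient estimate (in the style of SPSA, which perturbs weights rather than neuron inputs as WSSPA does) to train a single-layer network on the Boolean OR function. All function names, hyperparameters, and the choice of OR as the target are illustrative assumptions, not details from the thesis, which realizes the algorithm in analog memristor hardware.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, X, y):
    # Tiny one-layer "network": sigmoid of a weighted sum, mean squared error.
    out = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.mean((out - y) ** 2)

def perturbation_step(w, X, y, c=0.01, lr=0.5):
    # Perturb all weights simultaneously with random +/-1 signs.
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    # Two loss evaluations stand in for a full backward pass:
    # a central-difference estimate of the directional derivative,
    # scaled back along the perturbation direction.
    g = (loss(w + c * delta, X, y) - loss(w - c * delta, X, y)) / (2 * c) * delta
    return w - lr * g

# Boolean OR (linearly separable), with a constant bias column.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

w = np.zeros(3)
for _ in range(2000):
    w = perturbation_step(w, X, y)
# The loss typically drops well below its initial value of 0.25.
```

Only two loss evaluations are needed per update, regardless of the number of weights, which is why perturbation-based training is attractive for hardware that has no circuitry for a backward pass.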

Publication Date


Document Type


Student Type


Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)

Advisor


Cory E. Merkel

Advisor/Committee Member

Marcin Lukowiak

Advisor/Committee Member

Hussin Ketout

Campus


RIT – Main Campus