Abstract

Artificial neural networks (ANNs) have gained significant popularity in the last decade for solving narrow AI problems in domains such as healthcare, transportation, and defense. As ANNs become more ubiquitous, it is imperative to understand their associated safety, security, and privacy vulnerabilities. Recently, it has been shown that ANNs are susceptible to a number of adversarial evasion attacks: inputs that cause the ANN to make high-confidence misclassifications despite being almost indistinguishable from the data used to train and test the network. This thesis explores the degree to which finding such adversarial examples can be aided by side-channel information, specifically the power consumption of hardware implementations of ANNs.

A black-box threat scenario is assumed, in which the attacker has access to the ANN hardware's inputs, outputs, and topology, but not to the trained model parameters. The ANN parameters are extracted by training a surrogate model on a dataset derived from querying the black-box (oracle) model. The effect of the surrogate's training set size on the accuracy of the extracted parameters was examined. It was found that the distance between the surrogate and oracle parameters increased with larger training set sizes, while the angle between the two parameter vectors remained approximately constant at 90 degrees. However, the transferability of attacks from the surrogate to the oracle improved linearly with increased training set size when lower attack strengths were used.
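
As a point of reference, the two fidelity measures mentioned above can be written (in notation chosen here for illustration, not taken from the thesis) as the Euclidean distance and the angle between the flattened oracle and surrogate weight vectors \theta_O and \theta_S:

    d(\theta_O, \theta_S) = \lVert \theta_O - \theta_S \rVert_2, \qquad
    \angle(\theta_O, \theta_S) = \arccos\!\left( \frac{\theta_O \cdot \theta_S}{\lVert \theta_O \rVert_2 \, \lVert \theta_S \rVert_2} \right)

An angle of approximately 90 degrees means the surrogate and oracle weight vectors are nearly orthogonal, i.e., largely uncorrelated in weight space.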

Next, a novel method was developed to incorporate power consumption side-channel information from the oracle model into surrogate training, based on a Siamese neural network structure and a simplified power model. A comparison between surrogate models trained with and without power consumption data indicated that incorporating the side-channel information increases the fidelity of the model extraction by up to 30%. However, no improvement in the transferability of adversarial examples was found, indicating that the models remain behaviorally dissimilar despite being closer in weight space.
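
The abstract does not detail the power model, so the following is only a minimal sketch, assuming a Hamming-weight power proxy over fixed-point activations and a simple mean-squared penalty that ties the surrogate's predicted power to the power observed from the oracle; the names hamming_weight_power, power_loss, and bits are illustrative and not taken from the thesis.

    import numpy as np

    def hamming_weight_power(activations, bits=8):
        # Simplified power proxy: total Hamming weight of a layer's
        # fixed-point activation words (one value per input example).
        scale = (2 ** bits) - 1
        q = np.round(np.clip(activations, 0.0, 1.0) * scale).astype(np.uint32)
        set_bits = np.vectorize(lambda v: bin(int(v)).count("1"))(q)
        return set_bits.sum(axis=-1)

    def power_loss(surrogate_activations, observed_power):
        # Penalize mismatch between the surrogate's predicted power and the
        # power observed (or simulated) from the oracle; this term would be
        # added to the usual label-matching loss during surrogate training.
        predicted = hamming_weight_power(surrogate_activations)
        return np.mean((predicted - observed_power) ** 2)

The sketch only illustrates the power-model half of the idea; in the thesis this side-channel term is combined with a Siamese neural network structure for comparing the oracle and surrogate.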

Library of Congress Subject Headings

Neural networks (Computer science)--Security measures; Cyberterrorism--Computer simulation

Publication Date

1-2021

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)

Advisor

Cory E. Merkel

Advisor/Committee Member

Raymond Ptucha

Advisor/Committee Member

Marcin Lukowiak

Campus

RIT – Main Campus

Plan Codes

CMPE-MS
