Quantization of neural network models is becoming a necessary step in deploying artificial intelligence (AI) at the edge. The quantization process reduces the precision of model parameters, thereby lowering memory and computational costs. However, in doing so, this process also limits the model’s representational capacity, which can alter both its performance on nominal inputs (clean accuracy) and its robustness to adversarial attacks (adversarial accuracy). Few researchers have explored these two metrics simultaneously in the context of quantized neural networks, leaving several open questions about the security and trustworthiness of AI algorithms implemented on edge devices. This research explores the effects of different weight quantization schemes on both the clean and adversarial accuracies of neural network models subjected to memory constraints. Two models, VGG-16 and a 3-layer multilayer perceptron (MLP), were studied with the MNIST and CIFAR-10 image classification datasets. The weights of the models were quantized during training using the deterministic rounding technique. The models were quantized either homogeneously, with all weights quantized to the same precision, or heterogeneously, with weights quantized to different precisions. Several different bitwidths were used for homogeneous quantization, while several different probability mass function-based distributions of bitwidths were used for heterogeneous quantization. To the best of the author’s knowledge, this is the first work to study adversarial robustness under heterogeneous quantization based on different probability mass functions. Results show that clean accuracy generally increases when models are quantized homogeneously at higher bitwidths. For the heterogeneously quantized VGG-16, distributions containing a greater quantity of low-bitwidth weights perform worse than those that do not.
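The deterministic rounding scheme described above can be sketched as follows; the fixed-point range, clipping behavior, and function name are illustrative assumptions rather than the thesis's exact implementation.

```python
import numpy as np

def quantize_deterministic(w, bits):
    """Quantize weights to `bits` bits using deterministic
    (round-to-nearest) fixed-point quantization.

    Assumes weights are clipped to a signed fixed-point range;
    this is a sketch, not the thesis's exact code.
    """
    levels = 2 ** (bits - 1)                        # signed fixed-point step count
    w_clipped = np.clip(w, -1.0, 1.0 - 1.0 / levels)  # keep values representable
    return np.round(w_clipped * levels) / levels      # round to nearest grid point
```

For example, with 4 bits the grid spacing is 1/8, so a weight of 0.33 maps to 0.375; with 2 bits a weight of 0.9 is first clipped to 0.5.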
The performance of the heterogeneously quantized MLP, however, is generally consistent across distributions. Both models perform far better on the MNIST dataset than on CIFAR-10. On MNIST, the VGG-16 model displays higher levels of adversarial robustness when its quantization contains a greater quantity of lower-bitwidth weights. The adversarial robustness of the MLP, however, decreases with larger attack strength for all bitwidths. Neither model shows convincing levels of adversarial robustness on the CIFAR-10 dataset. Overall, the results of this research show that both clean and adversarial accuracies have complex dependencies on the total capacity of weight memory and the distribution of precisions among individual weights.
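A probability mass function-based distribution of bitwidths, as used for the heterogeneous quantization above, might be realized by sampling a precision per weight; the candidate bitwidths, PMF values, and function name below are hypothetical, not the thesis's actual distributions.

```python
import numpy as np

def assign_bitwidths(n_weights, bitwidths, pmf, seed=0):
    """Draw one bitwidth per weight from a probability mass
    function over candidate precisions.

    Illustrative sketch: the thesis's actual PMFs and candidate
    bitwidths may differ.
    """
    rng = np.random.default_rng(seed)
    # `p=pmf` weights each candidate precision by its probability mass
    return rng.choice(bitwidths, size=n_weights, p=pmf)
```

A PMF skewed toward low bitwidths (e.g. `[0.5, 0.3, 0.2]` over 2, 4, and 8 bits) yields a model where roughly half the weights are quantized to 2 bits, the regime where the VGG-16 results above degrade.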

Library of Congress Subject Headings

Geometric quantization--Evaluation; Neural networks (Computer science); Computer security

Publication Date


Document Type


Student Type


Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)


Advisor
Cory Merkel

Advisor/Committee Member

Amlan Ganguly

Advisor/Committee Member

Dongfang Liu


Campus
RIT – Main Campus

Plan Codes