Abstract

The discovery of adversarial examples, data points that humans recognize easily but that fool artificial classifiers, is relatively new in the world of machine learning. Corruptions imperceptible to the human eye are often sufficient to fool state-of-the-art classifiers. Resolving this problem has been the subject of a great deal of research in recent years as Deep Neural Networks become more prevalent in everyday systems. To this end, we propose InfoMixup, a novel method for improving the robustness of Deep Neural Networks without significantly affecting performance on clean samples. Our work focuses on image classification, a popular target in contemporary literature due to the proliferation of Deep Neural Networks in modern products. We show that our method achieves state-of-the-art improvements in robustness against a variety of attacks under several measures.

Publication Date

8-8-2022

Document Type

Thesis

Student Type

Graduate

Degree Name

Computer Engineering (MS)

Department, Program, or Center

Computer Engineering (KGCOE)

Advisor

Cory E. Merkel

Advisor/Committee Member

Andres Kwasinski

Advisor/Committee Member

Matthew Wright

Campus

RIT – Main Campus
