Abstract

Picture yourself resting or attending a meeting in the back seat while your car drives you home with all the privacy a driverless vehicle offers. Designing autonomous systems involves decoding environmental cues and making safe decisions. Convolutional Neural Networks (CNNs) are the leading choice for computer vision tasks due to their high performance and scalability across hardware platforms. They have long been run on Graphics Processing Units (GPUs) and Central Processing Units (CPUs); yet today there is an urgent need to accelerate CNNs on low-power hardware for real-time inference. This research aims to design a configurable hardware accelerator for 8-bit fixed-point audio and image CNN models. An audio network is developed to classify environmental sounds such as children playing in the streets, car horns, and sirens; an image network is designed to classify cars, lanes, road signs, traffic lights, and pedestrians. The two CNNs are quantized from a 32-bit floating-point to an 8-bit fixed-point format while maintaining high accuracy. The hardware accelerator is verified in SystemVerilog and compared against similar works.
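The float-to-fixed-point quantization mentioned above can be sketched as follows. This is a minimal illustration of signed 8-bit fixed-point (Q-format) quantization, not the thesis's actual procedure; the choice of 5 fractional bits (a Q2.5 split) is an assumption for the example.

```python
def quantize_q(values, frac_bits=5):
    """Map float values to signed 8-bit fixed-point codes.

    frac_bits (assumed 5 here) sets the fractional precision; codes are
    saturated to the int8 range [-128, 127].
    """
    scale = 1 << frac_bits
    return [max(-128, min(127, round(v * scale))) for v in values]

def dequantize_q(codes, frac_bits=5):
    """Recover approximate float values from the fixed-point codes."""
    scale = 1 << frac_bits
    return [c / scale for c in codes]

# Example: weights that are exactly representable round-trip losslessly;
# others incur a bounded quantization error of at most 1/(2*scale).
weights = [0.75, -1.5, 0.03125, 2.0]
codes = quantize_q(weights)
recovered = dequantize_q(codes)
```

An 8-bit format like this lets the accelerator replace floating-point multipliers with small integer multipliers and shifts, which is the usual motivation for fixed-point inference on low-power hardware.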

Publication Date

8-2022

Document Type

Thesis

Student Type

Graduate

Degree Name

Electrical Engineering (MS)

Department, Program, or Center

Department of Electrical and Microelectronic Engineering (KGCOE)

Advisor

Mark A. Indovina

Advisor/Committee Member

Dorin Patru

Advisor/Committee Member

Carlos Barrios

Campus

RIT – Main Campus
