Deep Convolutional Neural Network Based Object Detection Inference Acceleration Using FPGA



Author: Solomon Negussie Tesema
Languages: en

Book Description
Object detection is one of the most challenging yet essential computer vision research areas. It consists of labeling and localizing every object of interest in an input image using tightly fitting rectangular bounding boxes. Having passed through several evolutions and progressions, object detection nowadays builds on the success of image classification networks based on deep convolutional neural networks. However, as the depth and complexity of these networks increased, accuracy improved while detection speed dropped. Unfortunately, many computer vision applications, such as real-time object tracking on an embedded system, require lightweight, fast, and accurate object detection. As a result, object detection acceleration has become a hot research area, with much attention given to FPGA-based acceleration due to the FPGA's high energy efficiency, high data bandwidth, and flexible programmability.

This Ph.D. dissertation proposes incrementally improving object detection models by repurposing existing well-known object detectors into lighter, more accurate, and faster models. Our models achieve comparable accuracy while being lighter and faster than some of the top state-of-the-art detectors. We also propose and implement object detection inference acceleration using FPGA boards of different capacities and resources, focusing on resource- and energy-efficient implementations that preserve the detector's accuracy. Last but not least, we present several auxiliary contributions, such as a synthetic image generation and augmentation technique for training an object detector, which is critical for achieving a high-performance detector.

Overall, the work in this thesis has two parts: designing and implementing lightweight, accurate CPU- and GPU-based object detection models, and implementing high-throughput, energy- and resource-efficient object detection inference acceleration on an FPGA.
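The "tightly fit rectangular bounding boxes" mentioned in the description are conventionally scored against ground truth with intersection-over-union (IoU). The sketch below is an illustration of that standard metric, not code from the dissertation; the box format `(x_min, y_min, x_max, y_max)` and the function name are assumptions for the example:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x_min, y_min, x_max, y_max)."""
    # Corners of the overlapping region (if any).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A detection is typically counted as correct when its IoU with a ground-truth box of the same class exceeds a threshold (0.5 is a common choice), which is how "tightly fit" is made quantitative in benchmark evaluations.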