Author : Kyle D. Shiflett
Book Synopsis: Photonic Deep Neural Network Accelerators for Scaling to the Next Generation of High-performance Processing by Kyle D. Shiflett
Photonic Deep Neural Network Accelerators for Scaling to the Next Generation of High-performance Processing, written by Kyle D. Shiflett, was released in 2022.

Book excerpt: Improvements from electronic processor and interconnect performance scaling are narrowing due to fundamental challenges faced at the device level. Compounding the issue, increasing demand for large, accurate deep neural network models has placed significant pressure on the current generation of processors. The slowing of Moore's law and the breakdown of Dennard scaling leave no room for innovative solutions in traditional digital architectures to meet this demand. To address these scaling issues, architectures have moved away from general-purpose computation towards fixed-function hardware accelerators to handle demanding computation. Although electronic accelerators alleviate some of the pressure of deep neural network workloads, they are still burdened by electronic device and interconnect scaling problems. There is potential to further scale computer architectures by utilizing emerging technology, such as photonics. The low-loss interconnects and energy-efficient modulators provided by photonics could help drive future performance scaling. By taking advantage of the inherent parallelism of light, photonics could enable the next generation of high-bandwidth, bandwidth-dense interconnects and high-speed, energy-efficient processors. This dissertation investigates photonic architectures for communication and computation acceleration to meet the machine learning processing requirements of future systems. The benefits of photonics are explored for bit-level parallelism, data-level parallelism, and in-network computation.
The research performed in this dissertation shows that photonics has the potential to enable the next generation of deep neural network application performance by improving energy efficiency and reducing compute latency. The evaluations in this dissertation conclude that photonic accelerators can: (1) reduce energy-delay product by 73.9% at the bit level on convolutional neural network workloads; (2) improve throughput by 110× and improve energy-delay product by 74× on convolutional neural network workloads by exploiting data-level parallelism; and (3) improve network utilization while giving a 3.6× speedup and reducing energy-delay product by 9.3× by performing in-network computation.
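The results above are stated in terms of energy-delay product (EDP), the product of the energy consumed and the time taken by a computation, so lower is better. A minimal sketch of how the stated reductions compare, using a hypothetical normalized electronic baseline (the baseline numbers are illustrative assumptions, not figures from the dissertation):

```python
def edp(energy_j: float, delay_s: float) -> float:
    """Energy-delay product: energy times latency; lower is better."""
    return energy_j * delay_s

# Hypothetical normalized electronic baseline (illustrative only).
baseline = edp(energy_j=1.0, delay_s=1.0)

# Result (1): a 73.9% EDP reduction leaves 26.1% of the baseline.
bit_level = baseline * (1 - 0.739)

# Result (2): a 74x EDP improvement divides the baseline by 74.
data_level = baseline / 74

# Result (3): a 9.3x EDP reduction divides the baseline by 9.3.
in_network = baseline / 9.3

assert abs(bit_level - 0.261) < 1e-9
assert data_level < bit_level  # the 74x data-level gain is the largest
```

Note that a percentage reduction and a multiplicative improvement express the same idea differently: the 73.9% reduction in result (1) is roughly a 3.8× improvement in the multiplicative terms of results (2) and (3).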