Xizi Chen
Correspondence/Office Address:
Affiliation: The Hong Kong University of Science and Technology (HKUST)
Venue: 24th Asia and South Pacific Design Automation Conference (ASP-DAC, CCF Rank C)
Keywords: Convolutional Neural Network (CNN), output activation sparsity, computation saving
Abstract: Recently, the Resistive-RAM (RRAM) crossbar has been used in the design of accelerators for convolutional neural networks (CNNs) to address the memory wall issue. However, the intensive multiply-accumulate computations (MACs) executed at the crossbars during the inference phase remain the bottleneck for further improvement of energy efficiency and throughput. In this work, we explore several methods to reduce the computations required by RRAM-based CNN accelerators. First, the output sparsity resulting from the widely employed Rectified Linear Unit (ReLU) is exploited, and a significant portion of the computations is bypassed through early detection of negative output activations. Second, an adaptive approximation is proposed to terminate a MAC early when the sum of the partial results of the remaining computations is estimated to lie within a certain range of the intermediate accumulated result, and thus contributes insignificantly to the inference. To identify these redundant computations, a novel runtime estimation of the maximum and minimum values of each output activation is developed and applied during the MAC operation. Experimental results show that around 70% of the computations can be eliminated during inference with a negligible accuracy loss of less than 0.2%. As a result, the energy efficiency and throughput are improved by over 2.9 and 2.8 times, respectively, compared with state-of-the-art RRAM-based accelerators.
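For intuition, the following is a minimal Python sketch of the two termination checks described in the abstract, not the paper's hardware design. It assumes the inputs are post-ReLU activations bounded by a known x_max and that products are accumulated in fixed-size chunks, mirroring chunk-wise crossbar activation; the function name, chunk size, and eps tolerance are all illustrative.

import numpy as np

def early_terminating_mac(weights, inputs, x_max, chunk=16, eps=0.05):
    """Chunk-wise dot product with two early-exit checks.

    weights, inputs : equal-length 1-D sequences (one output activation's MAC)
    x_max           : known upper bound on the (non-negative) inputs
    chunk           : products accumulated per step (illustrative granularity)
    eps             : relative tolerance for the adaptive approximation
    Returns the pre-activation value; the caller applies ReLU afterwards.
    """
    weights = np.asarray(weights, dtype=float)
    inputs = np.asarray(inputs, dtype=float)
    acc = 0.0
    for k in range(0, len(weights), chunk):
        acc += float(np.dot(weights[k:k + chunk], inputs[k:k + chunk]))
        rest = weights[k + chunk:]
        # Inputs are post-ReLU and bounded by x_max, so the not-yet-computed
        # products lie between these two bounds (precomputable per filter).
        hi = x_max * rest[rest > 0].sum()   # most the rest can still add
        lo = x_max * rest[rest < 0].sum()   # most the rest can still subtract
        # (1) Early detection of a negative output activation: even in the
        # best case the result stays below zero, so ReLU zeros it anyway.
        if acc + hi < 0:
            return 0.0
        # (2) Adaptive approximation: the remaining contribution is within
        # eps of the intermediate accumulated result, i.e. insignificant.
        if max(hi, -lo) <= eps * abs(acc):
            return acc
    return acc

In the sketch, the bound pair (hi, lo) plays the role of the paper's runtime maximum/minimum estimation of each output activation; since it depends only on the remaining weights and x_max, it could be precomputed per filter offline.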
Remarks: China Computer Federation (CCF) Rank C
Co-authors: Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui
First Author: Xizi Chen
Translated Work: No
Publication Date: 2019-01-01