
Xizi Chen

Supervisor of Master's Candidates
Name (Simplified Chinese):Xizi Chen
Name (English):Xizi Chen
Name (Pinyin):chenxizi
Administrative Position:Full-time Teacher
Academic Titles:Full-time Teacher
Status:Employed
Education Level:Doctorate
Degree:Doctoral degree
Business Address:Room 413, Block B, No. 1 Comprehensive Building, Huazhong Agricultural University
Alma Mater:The Hong Kong University of Science and Technology
Teacher College:College of Informatics
School/Department:Huazhong Agricultural University
Discipline:Computer Architecture, Computer Applications Technology

Paper Publications
CompRRAE: RRAM-Based Convolutional Neural Network Accelerator with Reduced Computations Through a Runtime Activation Estimation
Release time:2021-09-08

Affiliation of Author(s):The Hong Kong University of Science and Technology (HKUST)

Journal:24th Asia and South Pacific Design Automation Conference (ASP-DAC, CCF-C)

Key Words:Convolutional Neural Network (CNN), output activation sparsity, computation saving

Abstract:Recently, Resistive-RAM (RRAM) crossbars have been used in the design of accelerators for convolutional neural networks (CNNs) to address the memory wall issue. However, the intensive multiply-accumulate computations (MACs) executed at the crossbars during the inference phase are still the bottleneck for further improvement of energy efficiency and throughput. In this work, we explore several methods to reduce the computations for RRAM-based CNN accelerators. First, the output sparsity resulting from the widely employed Rectified Linear Unit is exploited, and a significant portion of computations is bypassed through an early detection of negative output activations. Second, an adaptive approximation is proposed to terminate the MAC early when the sum of the partial results of the remaining computations is considered to be within a certain range of the intermediate accumulated result and thus has an insignificant contribution to the inference. In order to determine these redundant computations, a novel runtime estimation of the maximum and minimum values of each output activation is developed and used during the MAC operation. Experimental results show that around 70% of the computations can be reduced during inference with a negligible accuracy loss smaller than 0.2%. As a result, the energy efficiency and the throughput are improved by over 2.9 and 2.8 times, respectively, compared with the state-of-the-art RRAM-based accelerators.
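
Illustration of the early-termination idea described in the abstract: the sketch below is a simplified, hedged illustration and not the paper's implementation. It models a bit-serial MAC for a single output activation, bounds the contribution of the not-yet-processed input bit-planes using the sums of positive and negative weights, skips the remaining work when the upper bound is already negative (so the ReLU output must be zero), and terminates early when the remaining uncertainty is small relative to the accumulated value. The function name, the rel_tol parameter, and the particular bounding scheme are illustrative assumptions rather than the paper's exact runtime estimation.

import numpy as np

def bit_serial_mac_with_early_termination(weights, x, n_bits=8, rel_tol=0.05):
    # Illustrative sketch only: approximates the idea of bounding the final
    # output activation at runtime and stopping the MAC early.
    w_pos_sum = float(np.maximum(weights, 0.0).sum())  # used for the upper bound
    w_neg_sum = float(np.minimum(weights, 0.0).sum())  # used for the lower bound
    acc = 0.0

    for b in range(n_bits - 1, -1, -1):        # feed input bit-planes MSB-first
        bit_plane = (x >> b) & 1               # 0/1 vector for bit position b
        acc += (2 ** b) * float(np.dot(weights, bit_plane))

        # The remaining (lower) bit-planes can add at most (2^b - 1) per input,
        # so bound the final result with the positive / negative weight sums.
        remaining = (2 ** b) - 1
        upper = acc + remaining * w_pos_sum
        lower = acc + remaining * w_neg_sum

        if upper <= 0.0:
            return 0.0                         # negative output: ReLU yields 0, skip the rest
        if upper - lower <= rel_tol * abs(acc):
            break                              # remaining bits contribute too little

    return max(acc, 0.0)                       # apply ReLU to the (approximate) sum

# Example: one output activation of a convolution, flattened to a dot product.
rng = np.random.default_rng(0)
w = rng.standard_normal(64)
a = rng.integers(0, 256, size=64)              # 8-bit unsigned input activations
print(bit_serial_mac_with_early_termination(w, a))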

Note:CCF-C

Co-author:Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui

First Author:Xizi Chen

Translation or Not:no

Date of Publication:2019-01-01