Xizi Chen (陈夕子)

Supervisor of Master's Students
Name: Chen Xizi
English Name: Xizi Chen
Pinyin Name: chenxizi
Position: Full-time Faculty
Primary Appointment: Full-time Faculty
Professional Title: Associate Researcher
Employment Status: Active
Education: Doctorate
Degree: Doctoral Degree
Office: Room 413, Block B, No. 1 Comprehensive Building, Huazhong Agricultural University
Email:
Alma Mater: The Hong Kong University of Science and Technology
School: College of Informatics
Affiliation: College of Informatics
Disciplines: Computer System Architecture; Computer Application Technology
Other Contact Information

Mailing/Office Address:

Publications
SparseNN: An Energy-Efficient Neural Network Accelerator Exploiting Input and Output Sparsity
Posted: 2021-09-08

DOI: 10.23919/DATE.2018.8342010

Affiliation: Hong Kong University of Science and Technology (HKUST)

Venue: Design, Automation and Test in Europe Conference and Exhibition (DATE; CCF Rank B)

Keywords: Neural networks, training, sparsity, computer architecture, scheduling, prediction algorithms

Abstract: Contemporary Deep Neural Networks (DNNs) contain millions of synaptic connections across tens to hundreds of layers. This large computational complexity poses a challenge to hardware design. In this work, we leverage the intrinsic activation sparsity of DNNs to substantially reduce execution cycles and energy consumption. An end-to-end training algorithm is proposed to develop a lightweight (less than 5% overhead) run-time predictor of output activation sparsity on the fly. Furthermore, an energy-efficient hardware architecture, SparseNN, is proposed to exploit both input and output sparsity. SparseNN is a scalable architecture with distributed memories and processing elements connected through a dedicated on-chip network. Compared with state-of-the-art accelerators that exploit only input sparsity, SparseNN achieves a 10%-70% improvement in throughput and a power reduction of around 50%.
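
The abstract combines two mechanisms: skipping multiplications for zero input activations, and using a lightweight run-time predictor to avoid computing outputs that ReLU would zero out anyway. The NumPy sketch below is only a behavioral illustration of that dataflow, not the paper's hardware or its trained predictor; the low-rank W_pred, the name sparsenn_layer, and the threshold are assumptions made for this example.

```python
import numpy as np

def sparsenn_layer(x, W, b, W_pred, threshold=0.0):
    """Behavioral sketch of one fully connected ReLU layer that exploits
    (a) input sparsity: only nonzero activations are multiplied, and
    (b) predicted output sparsity: outputs forecast to be killed by ReLU
    are never computed. W_pred is a hypothetical stand-in for the paper's
    trained lightweight run-time predictor."""
    nz_in = np.flatnonzero(x)                     # input sparsity: skip zero activations
    score = W_pred.T @ x                          # cheap proxy for the pre-activations
    live_out = np.flatnonzero(score > threshold)  # outputs predicted to survive ReLU
    y = np.zeros(W.shape[1])
    # Full-precision MACs restricted to (nonzero inputs) x (predicted-live outputs).
    y[live_out] = x[nz_in] @ W[np.ix_(nz_in, live_out)] + b[live_out]
    return np.maximum(y, 0.0)                     # ReLU zeroes any mispredicted survivors

# Toy usage: a 256-in / 128-out layer with roughly 70% input sparsity;
# here the "predictor" is simply a rank-8 approximation of W.
rng = np.random.default_rng(0)
x = np.maximum(rng.standard_normal(256), 0.0) * (rng.random(256) < 0.3)
W = rng.standard_normal((256, 128)) * 0.1
b = rng.standard_normal(128) * 0.1
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W_pred = (U[:, :8] * S[:8]) @ Vt[:8]
y = sparsenn_layer(x, W, b, W_pred)
```

A misprediction in one direction only wastes work (a computed output that ReLU then zeroes), while the other direction silently drops a positive output; the end-to-end training described in the abstract exists to keep those errors rare while the predictor itself adds less than 5% overhead.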

Note: China Computer Federation (CCF) Rank B

Co-authors: Jingbo Jiang, Xizi Chen, Chi-Ying Tsui

First Author: Jingyang Zhu

Translated Work:

Publication Date: 2018-01-01