Xizi Chen
Correspondence/Office address:
DOI: 10.1109/DAC18072.2020.9218701
Affiliation: The Hong Kong University of Science and Technology (HKUST)
Venue: 57th Design Automation Conference (DAC, CCF Rank A)
Keywords: Convolutional Neural Network (CNN), pruning, weight sparsity, model compression
Abstract: The unstructured sparsity that results from pruning poses a challenge to the efficient implementation of deep learning models on existing regular architectures such as systolic arrays. Coarse-grained structured pruning, on the other hand, tends to incur higher accuracy loss than unstructured pruning when the pruned models are of the same size. In this work, we propose a compression method based on unstructured pruning and a novel weight permutation scheme. Through permutation, the sparse weight matrix is further compressed into a small, dense format that makes full use of the hardware resources. Compared with state-of-the-art works, the matrix compression rate is effectively improved from 5.88x to 10.28x. As a result, the throughput and energy efficiency are improved by 2.12 and 1.57 times, respectively.
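To make the idea in the abstract concrete, below is a minimal Python sketch of the first half of such a pipeline: magnitude-based unstructured pruning followed by a generic dense packing of each row's surviving nonzeros (values plus column indices). The paper's actual permutation scheme, which rearranges weights so nonzeros are balanced before packing, is not reproduced here; the function names (magnitude_prune, pack_rows_dense) and all parameter values are illustrative assumptions, not the authors' code.

import numpy as np

def magnitude_prune(w, sparsity=0.75):
    # Unstructured pruning: zero out the smallest-magnitude weights
    # so that roughly `sparsity` of all entries become zero.
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def pack_rows_dense(w):
    # Pack each row's surviving weights left-aligned into a dense
    # (rows x max_nnz) value matrix plus a matching column-index
    # matrix. The packed width is set by the densest row, which is
    # why balancing nonzeros across rows (the role a permutation
    # scheme plays) directly improves the achievable compression.
    nnz = np.count_nonzero(w, axis=1)
    width = int(nnz.max())
    vals = np.zeros((w.shape[0], width), dtype=w.dtype)
    idxs = np.zeros((w.shape[0], width), dtype=np.int32)
    for r in range(w.shape[0]):
        cols = np.nonzero(w[r])[0]
        vals[r, :cols.size] = w[r, cols]
        idxs[r, :cols.size] = cols
    return vals, idxs

rng = np.random.default_rng(0)
w = magnitude_prune(rng.standard_normal((64, 64)), sparsity=0.9)
vals, idxs = pack_rows_dense(w)
print("dense shape:", w.shape, "-> packed shape:", vals.shape)
print("matrix compression rate: %.2fx" % (w.size / vals.size))

In this toy version the compression rate is limited by the single densest row; the permutation step described in the abstract exists precisely to remove that bottleneck before packing.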
Notes: China Computer Federation (CCF) Rank A venue
Co-authors: Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui
First author: Xizi Chen
Translated work: No
Publication date: 2020-01-01