Xizi Chen
PostalAddress:
Email:
DOI number:10.1109/DAC18072.2020.9218701
Affiliation of Author(s):The Hong Kong University of Science and Technology (HKUST)
Conference:57th Design Automation Conference (DAC, CCF-A)
Key Words:Convolutional Neural Network (CNN), pruning, weight sparsity, model compression
Abstract:The unstructured sparsity that results from pruning poses a challenge to the efficient implementation of deep learning models on existing regular architectures such as systolic arrays. Coarse-grained structured pruning, on the other hand, tends to incur a higher accuracy loss than unstructured pruning when the pruned models are of the same size. In this work, we propose a compression method based on unstructured pruning and a novel weight permutation scheme. Through permutation, the sparse weight matrix is further compressed into a small, dense format that makes full use of the hardware resources. Compared to state-of-the-art works, the matrix compression rate is effectively improved from 5.88x to 10.28x. As a result, throughput and energy efficiency are improved by 2.12x and 1.57x, respectively.
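The abstract's core idea of permuting a sparse weight matrix into a dense format can be illustrated with a toy sketch. This is only an illustration of the general principle under a simplifying assumption (all rows share the same non-zero column pattern); it is not the paper's actual permutation scheme, and the function name `pack_by_permutation` is hypothetical.

```python
import numpy as np

def pack_by_permutation(W):
    """Toy sketch: if every row of W has the same non-zero column
    pattern, a single column permutation packs W into a dense block.
    Returns the dense block and the permutation of the original columns."""
    nonzero_cols = np.flatnonzero(np.any(W != 0, axis=0))  # columns with data
    zero_cols = np.flatnonzero(np.all(W == 0, axis=0))     # fully pruned columns
    perm = np.concatenate([nonzero_cols, zero_cols])       # permutation order
    dense = W[:, nonzero_cols]                             # compact dense block
    return dense, perm

# A pruned 3x5 weight matrix whose non-zeros sit in columns 1 and 4.
W = np.array([[0., 2., 0., 0., 5.],
              [0., 1., 0., 0., 3.],
              [0., 4., 0., 0., 6.]])
dense, perm = pack_by_permutation(W)
# dense is 3x2: the matrix is stored at a 5/2 = 2.5x compression rate,
# and the dense block maps directly onto a regular compute array.
```

In realistic pruned models the non-zero patterns differ across rows, which is why the paper needs a more elaborate permutation scheme; this sketch only shows why a dense packed format is attractive for hardware like systolic arrays.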
Note:China Computer Federation (CCF) Rank A
Co-author:Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui
First Author:Xizi Chen
Translation or Not:no
Date of Publication:2020-01-01