
Xizi Chen (陈夕子)

Name: 陈夕子 (Xizi Chen)
Name (pinyin): chenxizi
Position: Full-time faculty
Employment status: Active
Education: Ph.D.
Degree: Doctoral degree
Office: Room B413, First Comprehensive Building, Huazhong Agricultural University
Alma mater: The Hong Kong University of Science and Technology
College/Unit: College of Informatics
Disciplines: Computer System Architecture; Computer Application Technology


Publications
Tight Compression: Compressing CNN Model Tightly Through Unstructured Pruning and Simulated Annealing Based Permutation
Posted: 2021-09-08

DOI: 10.1109/DAC18072.2020.9218701

Affiliation: The Hong Kong University of Science and Technology (HKUST)

Venue: 57th Design Automation Conference (DAC, CCF-A)

Keywords: Convolutional Neural Network (CNN), pruning, weight sparsity, model compression

Abstract: The unstructured sparsity after pruning poses a challenge to the efficient implementation of deep learning models in existing regular architectures like systolic arrays. The coarse-grained structured pruning, on the other hand, tends to have higher accuracy loss than unstructured pruning when the pruned models are of the same size. In this work, we propose a compression method based on unstructured pruning and a novel weight permutation scheme. Through permutation, the sparse weight matrix is further compressed to a small and dense format to make full use of the hardware resources. Compared to the state-of-the-art works, the matrix compression rate is effectively improved from 5.88x to 10.28x. As a result, the throughput and energy efficiency are improved by 2.12 and 1.57 times, respectively.
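The core idea of permutation-based compression can be illustrated with a toy sketch. The paper uses simulated annealing to search for a good permutation; the greedy heuristic below (sorting rows and columns by nonzero count, a simplification not taken from the paper) merely shows how reordering clusters the surviving weights of a pruned matrix into a dense block:

```python
import numpy as np

def densify_by_permutation(w):
    """Reorder rows and columns of a pruned weight matrix by nonzero
    count, so the surviving weights cluster into a dense corner.
    A toy stand-in for the paper's simulated-annealing search."""
    nz = w != 0
    row_order = np.argsort(-nz.sum(axis=1), kind="stable")
    col_order = np.argsort(-nz.sum(axis=0), kind="stable")
    return w[np.ix_(row_order, col_order)], row_order, col_order

# A 4x4 matrix after unstructured pruning (9 of 16 weights removed).
w = np.array([[0., 2., 0., 0.],
              [1., 3., 0., 5.],
              [0., 0., 0., 0.],
              [4., 6., 0., 7.]])
packed, rows, cols = densify_by_permutation(w)
# The 2x3 top-left block of `packed` is now fully dense and holds
# 6 of the 7 nonzeros, so only that small block needs to be mapped
# onto the systolic array; the permutations are undone at inference.
```

The permutation only relabels rows and columns, so it is lossless: storing `row_order` and `col_order` alongside the dense block recovers the original matrix exactly.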

Note: CCF-A ranked venue (China Computer Federation)

Co-authors: Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui

First author: Xizi Chen


Publication date: 2020-01-01