
Xizi Chen

Supervisor of Master's Candidates
Name (Simplified Chinese):Xizi Chen
Name (English):Xizi Chen
Name (Pinyin):chenxizi
Administrative Position:Full-time Teacher
Academic Titles:Full-time Teacher
Status:Employed
Education Level:Doctorate
Degree:Doctoral degree
Business Address:Room 413, Block B, First Comprehensive Building, Huazhong Agricultural University
Alma Mater:The Hong Kong University of Science and Technology
Teacher College:College of Informatics
School/Department:Huazhong Agricultural University
Discipline:Computer Architecture; Computer Applications Technology

Paper Publications
Tight Compression: Compressing CNN Model Tightly Through Unstructured Pruning and Simulated Annealing Based Permutation
Release time:2021-09-08

DOI number:10.1109/DAC18072.2020.9218701

Affiliation of Author(s):The Hong Kong University of Science and Technology (HKUST)

Journal:57th Design Automation Conference (DAC, CCF-A)

Key Words:Convolutional Neural Network (CNN), pruning, weight sparsity, model compression

Abstract:The unstructured sparsity after pruning poses a challenge to the efficient implementation of deep learning models on existing regular architectures such as systolic arrays. Coarse-grained structured pruning, on the other hand, tends to incur higher accuracy loss than unstructured pruning when the pruned models are of the same size. In this work, we propose a compression method based on unstructured pruning and a novel weight permutation scheme. Through permutation, the sparse weight matrix is further compressed into a small, dense format that makes full use of the hardware resources. Compared to state-of-the-art works, the matrix compression rate is effectively improved from 5.88x to 10.28x. As a result, throughput and energy efficiency are improved by 2.12x and 1.57x, respectively.
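The abstract describes two stages: unstructured magnitude pruning, then a permutation that packs the sparse weight matrix into a small, dense format. A minimal NumPy sketch of the general idea follows; the `pack_rows` heuristic here is a hypothetical stand-in (ordering rows by nonzero count), not the paper's simulated-annealing permutation search.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    # Unstructured pruning: zero out the smallest-magnitude weights,
    # leaving an irregular (unstructured) sparsity pattern.
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) > thresh, w, 0.0)

def pack_rows(w):
    # Toy permutation: reorder rows so denser rows cluster together.
    # A crude illustration only; the paper searches over permutations
    # with simulated annealing to reach a much tighter dense packing.
    order = np.argsort(-(w != 0).sum(axis=1))
    return w[order], order

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned = magnitude_prune(w, 0.75)
packed, order = pack_rows(pruned)
print("sparsity:", float((pruned == 0).mean()))  # 0.75
```

The packed matrix keeps exactly the nonzeros of the pruned one; a hardware backend would then tile the dense region onto a regular array such as a systolic array.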

Note:China Computer Federation (CCF) Rank A

Co-author:Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui

First Author:Xizi Chen

Translation or Not:no

Date of Publication:2020-01-01