Article type: Research Article
Authors: Li, Wei [a] | He, Yongxing [a] | Zhang, Xiaoyu [a] | Tang, Yongchuan [a, b, *]
Affiliations: [a] College of Computer Science, Zhejiang University, Hangzhou, Zhejiang, China | [b] Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies, Hangzhou, Zhejiang, China
Correspondence: [*] Corresponding author: Yongchuan Tang, College of Computer Science, Zhejiang University, Hangzhou, Zhejiang 310027, China. E-mail: [email protected].
Abstract: With the help of network compression algorithms, deep neural networks can be deployed on low-power embedded systems and mobile devices such as drones, satellites, and smartphones. Filter pruning is a branch of network compression that reduces memory and computational cost by removing filters, and with them their parameters, from a model. Previous works prune filters under a “more-simple-less-important” criterion: filters with a smaller norm or sparser weights are pruned first. In this paper, by visualizing feature maps alongside their corresponding filters, we find that feature maps are not fully positively correlated with the sparsity of filter weights. Hence, we argue that pruning priority should be determined by redundancy rather than sparsity. The redundancy of a filter measures the extent to which its output duplicates the outputs of other filters. Based on this, we define a criterion called the redundancy index to rank filters and build it into our filter pruning strategy. Extensive experiments demonstrate the effectiveness of our approach on different model architectures, including VGGNet, GoogleNet, DenseNet, and ResNet. The models compressed with our strategy surpass the state of the art in floating-point operations (FLOPs) reduction, parameter reduction, and classification accuracy.
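To make the redundancy idea concrete, here is a minimal sketch of what a redundancy-index ranking could look like, using K-means clustering of filter feature maps as the keywords suggest. The function name `redundancy_ranking`, the use of batch-averaged flattened maps, and the cluster-size/distance scoring heuristic are illustrative assumptions, not the paper's actual definition.

```python
# Hypothetical sketch: rank filters by redundancy via K-means clustering of
# their flattened feature maps. The scoring rule below is an assumption for
# illustration, not the paper's redundancy index.
import numpy as np
from sklearn.cluster import KMeans

def redundancy_ranking(feature_maps: np.ndarray, n_clusters: int) -> np.ndarray:
    """Return filter indices ordered from most to least redundant.

    feature_maps: array of shape (num_filters, H * W), one flattened output
    map per filter, e.g. averaged over a batch of inputs.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(feature_maps)
    # Distance of each filter's map to its cluster centre: a map that sits
    # close to a centre shared with many other filters is "repeated" by
    # those filters, making it a better pruning candidate.
    dists = np.linalg.norm(feature_maps - km.cluster_centers_[labels], axis=1)
    cluster_sizes = np.bincount(labels, minlength=n_clusters)[labels]
    # Higher score = more redundant: a large cluster, close to its centre.
    redundancy = cluster_sizes / (1.0 + dists)
    return np.argsort(-redundancy)

# Example: 64 filters with 8x8 output maps; select the 16 most redundant.
maps = np.random.rand(64, 64)
prune_ids = redundancy_ranking(maps, n_clusters=8)[:16]
```

Under this sketch, pruning by redundancy rather than norm means a filter with large weights can still be removed if its output is near-duplicated by filters in the same cluster.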
Keywords: Filter pruning, clustering, K-means, dimensionality reduction, t-SNE
DOI: 10.3233/IDA-226810
Journal: Intelligent Data Analysis, vol. 27, no. 4, pp. 911-933, 2023