Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning

Title:
Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning
Journal Title:
2024 IEEE Conference on Artificial Intelligence (CAI)
Keywords:
Publication Date:
30 July 2024
Citation:
Gupta, M., Camci, E., Keneta, V. R., Vaidyanathan, A., Kanodia, R., James, A., Foo, C.-S., Wu, M., & Lin, J. (2024, June 25). Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning. 2024 IEEE Conference on Artificial Intelligence (CAI). https://doi.org/10.1109/cai59869.2024.00144
Abstract:
Pruning neural networks has become popular in the last decade, after it was shown that a large number of weights can be safely removed from modern neural networks without compromising accuracy. Numerous pruning methods have been proposed since, each claiming to be better than the prior art, but at the cost of increasingly complex pruning methodologies. These methodologies include utilizing importance scores, getting feedback through back-propagation, or having heuristics-based pruning rules, among others. In this work, we question whether this pattern of introducing complexity is really necessary to achieve better pruning results. We benchmark these SOTA techniques against a simple pruning baseline, namely, Global Magnitude Pruning (Global MP), which ranks weights in order of their magnitudes and prunes the smallest ones. Surprisingly, we find that vanilla Global MP performs very well against the SOTA techniques. When considering the sparsity-accuracy trade-off, Global MP performs better than all SOTA techniques at all sparsity ratios. When considering the FLOPs-accuracy trade-off, some SOTA techniques outperform Global MP at lower sparsity ratios; however, Global MP starts performing well at high sparsity ratios and performs very well at extremely high sparsity ratios. Moreover, we find that a common issue that many pruning algorithms run into at high sparsity rates, namely, layer-collapse, can be easily fixed in Global MP. We explore why layer-collapse occurs in networks and how it can be mitigated in Global MP by utilizing a technique called Minimum Threshold. We showcase the above findings on various models (WRN-28-8, ResNet-32, ResNet-50, MobileNet-V1, and FastGRNN) and multiple datasets (CIFAR-10, ImageNet, and HAR-2).
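
Illustration (not part of the record): the sketch below shows the general idea of global magnitude pruning with a per-layer minimum-keep guard, as described in the abstract, written in PyTorch. The function name global_magnitude_prune, the min_threshold parameter (expressed here as a fraction of each layer's weights), and the choice to skip one-dimensional parameters are illustrative assumptions, not the authors' implementation.

import torch

def global_magnitude_prune(model, sparsity, min_threshold=0.0):
    """Zero out the smallest-magnitude weights across all layers at once.

    Sketch only: `sparsity` is the target global fraction of weights to remove;
    `min_threshold` is the minimum fraction of weights each layer must keep,
    a guard against layer-collapse in the spirit of the paper's Minimum Threshold.
    """
    # Collect all weight magnitudes to compute one global cutoff.
    all_weights = torch.cat([p.detach().abs().flatten()
                             for _, p in model.named_parameters()
                             if p.dim() > 1])  # skip biases / norm parameters
    k = int(sparsity * all_weights.numel())
    cutoff = torch.kthvalue(all_weights, k).values if k > 0 else torch.tensor(0.0)

    for _, p in model.named_parameters():
        if p.dim() <= 1:
            continue
        mask = (p.detach().abs() > cutoff).float()
        # Minimum Threshold guard: if the global cutoff would prune this layer
        # below min_threshold, keep its largest min_threshold fraction instead.
        min_keep = int(min_threshold * p.numel())
        if min_keep > 0 and mask.sum() < min_keep:
            topk = torch.topk(p.detach().abs().flatten(), min_keep).indices
            mask = torch.zeros(p.numel(), device=p.device)
            mask[topk] = 1.0
            mask = mask.view_as(p)
        p.data.mul_(mask)  # zero out pruned weights in place
    return model

For example, global_magnitude_prune(model, sparsity=0.95, min_threshold=0.02) would remove 95% of weights globally while forcing every weight matrix to retain at least 2% of its entries, which is the kind of safeguard the abstract credits with preventing layer-collapse at extreme sparsity.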
License type:
Publisher Copyright
Funding Info:
There was no specific funding for this research.
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISBN:
979-8-3503-5409-6
Files uploaded:

ieeecai2024.pdf (283.72 KB, PDF)