Gupta, M., Camci, E., Keneta, V. R., Vaidyanathan, A., Kanodia, R., James, A., Foo, C.-S., Wu, M., & Lin, J. (2024, June 25). Is Complexity Required for Neural Network Pruning? A Case Study on Global Magnitude Pruning. 2024 IEEE Conference on Artificial Intelligence (CAI). https://doi.org/10.1109/cai59869.2024.00144
Abstract:
Pruning neural networks became popular in the last decade, when it was shown that a large number of weights can be safely removed from modern neural networks without compromising accuracy. Numerous pruning methods have been proposed since, each claiming to improve on prior art, but at the cost of increasingly complex pruning methodologies. These methodologies include, among others, utilizing importance scores, getting feedback through back-propagation, or applying heuristics-based pruning rules. In this work, we question whether this pattern of introducing complexity is really necessary to achieve better pruning results. We benchmark these SOTA techniques against a simple pruning baseline, namely, Global Magnitude Pruning (Global MP), which ranks weights by their magnitudes and prunes the smallest ones. Surprisingly, we find that vanilla Global MP performs very well against the SOTA techniques. When considering the sparsity-accuracy trade-off, Global MP performs better than all SOTA techniques at all sparsity ratios. When considering the FLOPs-accuracy trade-off, some SOTA techniques outperform Global MP at lower sparsity ratios; however, Global MP starts performing well at high sparsity ratios and performs very well at extremely high sparsity ratios. Moreover, we find that a common issue that many pruning algorithms run into at high sparsity rates, namely layer-collapse, can be easily fixed in Global MP. We explore why layer-collapse occurs in networks and how it can be mitigated in Global MP by utilizing a technique called Minimum Threshold. We showcase the above findings on various models (WRN-28-8, ResNet-32, ResNet-50, MobileNet-V1 and FastGRNN) and multiple datasets (CIFAR-10, ImageNet and HAR-2).
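
The sketch below illustrates the general idea described in the abstract: Global Magnitude Pruning ranks all weights across layers by absolute magnitude and removes the smallest fraction, while a per-layer Minimum Threshold keeps a floor of surviving weights to avoid layer-collapse. This is a minimal PyTorch sketch written from the abstract's description only; the function name, the `min_threshold` parameter (expressed here as a weight count per layer), and the masking details are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def global_magnitude_prune(model: nn.Module, sparsity: float, min_threshold: int = 0) -> None:
    """Sketch of Global Magnitude Pruning (Global MP) with a Minimum Threshold.

    Assumptions (not from the paper's code): prunable layers are Conv2d/Linear,
    and `min_threshold` is the minimum number of weights each layer must keep.
    """
    # Collect every prunable weight tensor across the whole network.
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Conv2d, nn.Linear))]

    # Single global threshold: the magnitude below which weights are pruned.
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    k = int(sparsity * all_mags.numel())
    if k == 0:
        return
    threshold = torch.kthvalue(all_mags, k).values

    for w in weights:
        mask = w.detach().abs() > threshold
        # Minimum Threshold: if global pruning would leave this layer with
        # fewer than `min_threshold` weights, retain its largest ones instead,
        # preventing the layer from collapsing entirely.
        if mask.sum() < min_threshold:
            keep = torch.topk(w.detach().abs().flatten(), min_threshold).indices
            mask = torch.zeros_like(mask).flatten()
            mask[keep] = True
            mask = mask.reshape(w.shape)
        w.data.mul_(mask.to(w.dtype))
```

As a usage illustration, one might call `global_magnitude_prune(model, sparsity=0.95, min_threshold=500)` to prune 95% of weights globally while guaranteeing every layer retains at least 500 weights; the concrete values here are hypothetical, not taken from the paper.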
License type:
Publisher Copyright
Funding Info:
No specific funding was received for this research.