Xu, K., Lee, A. H. X., Zhao, Z., Wang, Z., Wu, M., & Lin, W. (2023). MetaGrad: Adaptive Gradient Quantization with Hypernetworks. 2023 IEEE International Conference on Image Processing (ICIP). https://doi.org/10.1109/ICIP49359.2023.10222371
Abstract:
A popular track of network compression is quantization-aware training (QAT), which accelerates the forward pass during neural network training and inference. However, little prior effort has been made to quantize and accelerate the backward pass during training, even though it accounts for around half of the training time. This can be partly attributed to the fact that errors in low-precision gradients during the backward pass cannot be amortized by the training objective as in the QAT setting. In this work, we propose to solve this problem by incorporating the gradients into the computation graph of the next training iteration via a hypernetwork. Experiments with different CNN architectures on the CIFAR-10 dataset demonstrate that our hypernetwork-based approach effectively reduces the negative effect of gradient quantization noise and successfully quantizes the gradients to INT4, with only a 0.64 accuracy drop for VGG-16 on CIFAR-10.
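To make the abstract's notion of gradient quantization concrete, the sketch below illustrates one plausible way to fake-quantize parameter gradients to signed INT4 in PyTorch. This is an assumption-laden illustration of the general technique only, not the paper's MetaGrad implementation: it uses symmetric per-tensor scaling and tensor gradient hooks, and it omits the hypernetwork the paper uses to compensate for quantization noise. All function and variable names here are our own.

# Illustrative sketch (not the paper's MetaGrad code): fake-quantize
# parameter gradients to signed INT4 during the backward pass.
import torch
import torch.nn as nn

def int4_quantize(grad: torch.Tensor) -> torch.Tensor:
    # Symmetric per-tensor fake quantization to the signed INT4 range [-8, 7].
    qmax = 7.0
    scale = grad.abs().max().clamp(min=1e-8) / qmax
    return (grad / scale).round().clamp(-8, 7) * scale

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

# Attach a hook to each parameter so its gradient is quantized as soon as
# autograd produces it; the hook's return value replaces the raw gradient.
for p in model.parameters():
    p.register_hook(int4_quantize)

x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # each parameter's .grad now holds INT4-quantized values

In this naive form, the quantization noise injected into the gradients directly perturbs the weight updates; the paper's contribution is to route those gradients through a hypernetwork into the next iteration's computation graph so the noise can be learned around, which this sketch does not attempt.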
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR AI3 HTCO Seed Fund (Grant Reference No. A1892b0026) and the A*STAR GAP Fund (Grant Reference No. C211118009).