QLP: Deep Q-Learning for Pruning Deep Neural Networks

Title:
QLP: Deep Q-Learning for Pruning Deep Neural Networks
Journal Title:
IEEE Transactions on Circuits and Systems for Video Technology
Publication Date:
18 April 2022
Citation:
Camci, E., Gupta, M., Wu, M., & Lin, J. (2022). QLP: Deep Q-Learning for Pruning Deep Neural Networks. IEEE Transactions on Circuits and Systems for Video Technology. https://doi.org/10.1109/TCSVT.2022.3167951
Abstract:
We present a novel deep Q-learning based method, QLP, for pruning deep neural networks (DNNs). Given a DNN, our method intelligently determines favorable layer-wise sparsity ratios, which are then implemented via unstructured, magnitude-based weight pruning. In contrast to previous reinforcement learning (RL) based pruning methods, our method is not forced to prune a DNN within a single, sequential pass from the first layer to the last. It visits each layer multiple times and prunes it incrementally at each visit, achieving finer-grained pruning. Moreover, our method is not restricted to a subset of actions within the feasible action space: it has the flexibility to execute the whole range of sparsity ratios (0%–100%) for each layer. This enables aggressive pruning without compromising accuracy. Furthermore, our method does not require a complex state definition; it features a simple, generic definition composed of only the index and the density of each layer, which reduces the computational demand of observing the state at each interaction. Lastly, our method utilizes a carefully designed curriculum that enables learning targeted policies for each sparsity regime, which helps deliver better accuracy, especially at high sparsity levels. We conduct batched performance tests at compelling sparsity levels (up to 98%), present extensive ablation studies to justify our RL-related design choices, and compare our method with the state of the art, including RL-based and other pruning methods. Our method sets new state-of-the-art results in most of the experiments with ResNet-32 and ResNet-56 on the CIFAR-10 dataset, as well as ResNet-50 and MobileNet-v1 on the ILSVRC2012 (ImageNet) dataset.
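For illustration, the pruning primitive the abstract describes can be sketched in a few lines. This is a minimal sketch assuming NumPy, not the authors' code; prune_layer and layer_state are hypothetical names. It shows unstructured, magnitude-based pruning of one layer at an agent-chosen sparsity ratio, plus the simple state the abstract defines (layer index and remaining density):

import numpy as np

def prune_layer(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitudes."""
    if sparsity <= 0.0:
        return weights
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)  # magnitude cutoff
    return np.where(np.abs(weights) > threshold, weights, 0.0)

def layer_state(layer_index: int, weights: np.ndarray) -> tuple[int, float]:
    """State as described in the abstract: the layer's index and its current density."""
    density = float(np.count_nonzero(weights)) / weights.size
    return layer_index, density

# Example: the agent revisits layer 3 and requests 50% sparsity for this visit.
w = np.random.randn(64, 128)
w = prune_layer(w, sparsity=0.5)
print(layer_state(3, w))  # -> (3, ~0.5)

Because the state is just (index, density), the agent can revisit a layer and apply further pruning at a later visit, which is what enables the incremental, multi-pass schedule described above.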
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR AME Programmatic Funds (Grant Reference Nos. A1892b0026 and A19E3b0099).
Description:
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
1051-8215 (print)
1558-2205 (online)
Files uploaded:

tcsvt-final-single.pdf (PDF, 8.15 MB)