From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks

Title:
From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks
Journal Title:
IEEE Transactions on Neural Networks and Learning Systems
Publication Date:
14 June 2024
Citation:
Geng, X., Wang, Z., Chen, C., Xu, Q., Xu, K., Jin, C., Gupta, M., Yang, X., Chen, Z., Aly, M. M. S., Lin, J., Wu, M., & Li, X. (2024). From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks. IEEE Transactions on Neural Networks and Learning Systems, 1–21. https://doi.org/10.1109/tnnls.2024.3394494
Abstract:
Deep neural networks (DNNs) have been widely used in many artificial intelligence (AI) tasks. However, deploying them brings significant challenges due to the huge cost of memory, energy, and computation. To address these challenges, researchers have developed various model compression techniques such as model quantization and model pruning. Recently, there has been a surge of research on compression methods that achieve model efficiency while retaining performance. Furthermore, a growing body of work focuses on customizing DNN hardware accelerators to better leverage model compression techniques. In addition to efficiency, preserving security and privacy is critical for deploying DNNs. However, the vast and diverse body of related work can be overwhelming. This inspires us to conduct a comprehensive survey of recent research toward the goal of high-performance, cost-efficient, and safe deployment of DNNs. Our survey first covers the mainstream model compression techniques, such as model quantization, model pruning, knowledge distillation, and optimizations of nonlinear operations. We then introduce recent advances in designing hardware accelerators that can adapt to efficient model compression approaches. Additionally, we discuss how homomorphic encryption can be integrated to secure DNN deployment. Finally, we discuss several open issues, such as hardware evaluation, generalization, and the integration of various compression approaches. Overall, we aim to provide a big picture of efficient DNNs, from algorithms to hardware accelerators and security perspectives.
License type:
Publisher Copyright
Funding Info:
This research/project is supported by:
- A*STAR AME Programmatic Fund (Grant Reference no. A1892b0026)
- A*STAR AME Programmatic Fund (Grant Reference no. A19E3b0099)
- A*STAR MTC Programmatic Fund (Grant Reference no. M23L7b0021)
- National Research Foundation AME Young Individual Research Grant (Grant Reference no. A2084c0167)
- A*STAR Career Development Fund (Grant Reference no. C210812035)
Description:
© 2024 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
2162-237X
2162-2388
Files uploaded:

tnnls-2023-s27644.pdf (611.29 KB, PDF)