Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective

Title:
Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective
Journal Title:
IEEE Transactions on Dependable and Secure Computing
Publication Date:
30 May 2022
Citation:
Meng, M. H., Bai, G., Teo, S. G., Hou, Z., Xiao, Y., Lin, Y., & Dong, J. S. (2022). Adversarial Robustness of Deep Neural Networks: A Survey from a Formal Verification Perspective. IEEE Transactions on Dependable and Secure Computing, 1–1. https://doi.org/10.1109/tdsc.2022.3179131
Abstract:
Neural networks have been widely applied in security applications such as spam and phishing detection, intrusion prevention, and malware detection. As a black-box method, however, they often exhibit uncertainty and poor explainability in practice. Furthermore, neural networks themselves are often vulnerable to adversarial attacks. For these reasons, there is a high demand for trustworthy and rigorous methods to verify the robustness of neural network models. Adversarial robustness, which concerns the reliability of a neural network when dealing with maliciously manipulated inputs, is one of the most active topics in security and machine learning. In this work, we survey the existing literature on adversarial robustness verification for neural networks and collect 39 diverse research works across the machine learning, security, and software engineering domains. We systematically analyze their approaches, including how robustness is formulated, what verification techniques are used, and the strengths and limitations of each technique. We provide a taxonomy from a formal verification perspective for a comprehensive understanding of this topic, classifying the existing techniques by property specification, problem reduction, and reasoning strategy. We also demonstrate representative techniques from existing studies on a sample model. Finally, we discuss open questions for future research.
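
For orientation (this illustration is an editorial addition, not part of the repository record or the paper itself), local adversarial robustness of a classifier f around an input x_0 is commonly specified in the verification literature as

\[
\forall x' :\ \lVert x' - x_0 \rVert_\infty \le \epsilon \;\Rightarrow\; \arg\max_i f_i(x') = \arg\max_i f_i(x_0).
\]

The sketch below shows one representative way such a property can be checked soundly, using interval bound propagation, a reachability-style technique of the kind covered by surveys in this area. The toy network, weights, and epsilon are hypothetical and chosen only to keep the example self-contained; the paper's own sample model and technique selection may differ.

import numpy as np

def interval_affine(l, u, W, b):
    # Propagate the box [l, u] through the affine map x -> W @ x + b.
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ l + W_neg @ u + b, W_pos @ u + W_neg @ l + b

def interval_relu(l, u):
    # ReLU is monotone, so it can be applied element-wise to both bounds.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

def certify_linf_robustness(layers, x0, eps, target):
    # Sound but incomplete check: True means the robustness property is proved
    # for the whole L-infinity ball of radius eps around x0; False means
    # "unknown" (the interval bounds may simply be too loose).
    l, u = x0 - eps, x0 + eps
    for i, (W, b) in enumerate(layers):
        l, u = interval_affine(l, u, W, b)
        if i < len(layers) - 1:          # ReLU on hidden layers only
            l, u = interval_relu(l, u)
    others = [j for j in range(len(l)) if j != target]
    return bool(l[target] > max(u[j] for j in others))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 2-4-2 ReLU classifier, used purely for illustration.
    layers = [(rng.standard_normal((4, 2)), rng.standard_normal(4)),
              (rng.standard_normal((2, 4)), rng.standard_normal(2))]

    def forward(x):
        h = x
        for i, (W, b) in enumerate(layers):
            h = W @ h + b
            if i < len(layers) - 1:
                h = np.maximum(h, 0.0)
        return h

    x0 = np.array([0.5, -0.2])
    pred = int(np.argmax(forward(x0)))
    print("certified robust at eps=0.05:",
          certify_linf_robustness(layers, x0, 0.05, pred))

A True result certifies the property for every point in the perturbation ball; a False result is inconclusive, which is the usual trade-off of abstraction-based verification compared with exact (e.g., SMT- or MILP-based) approaches discussed in the survey.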
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation / National University of Singapore - Trustworthy Software Systems – Core Technologies Grant
Grant Reference no. : NSOE-TSS2019-05

This research / project is supported by the Ministry of Education - Tier 2
Grant Reference no. : T2EP20120-0019

This research / project is supported by the Ministry of Education - Tier 1
Grant Reference no. : T1-251RES1901

This research / project is supported by the Ministry of Education - Tier 3
Grant Reference no. : MOET32020-0004

This research / project is also supported by an A*STAR ACIS Scholarship; the University of Queensland under the NSRSG grant 4018264-617225 and the GSP Seed Funding; and CISCO Systems (USA) Pte Ltd and NUS under the Cisco/NUS Accelerated Digital Economy Corporate Laboratory (Award I21001E0002).
Description:
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
1545-5971
1941-0018
2160-9209
Files uploaded:

survey-verify-nn-tdsc.pdf (3.11 MB, PDF)