Resilience of Large Language Models for Noisy Instructions

Title:
Resilience of Large Language Models for Noisy Instructions
Journal Title:
Findings of the Association for Computational Linguistics: EMNLP 2024
Keywords:
Publication Date:
27 November 2024
Citation:
Wang, B., Wei, C., Liu, Z., Lin, G., & Chen, N. F. (2024). Resilience of Large Language Models for Noisy Instructions. Findings of the Association for Computational Linguistics: EMNLP 2024, 11939–11950. https://doi.org/10.18653/v1/2024.findings-emnlp.697
Abstract:
In the rapidly advancing domain of natural language processing (NLP), large language models (LLMs) have emerged as powerful tools for interpreting human commands and generating text across various tasks. Nonetheless, the resilience of LLMs in handling text containing inherent errors, stemming from human interactions and collaborative systems, has not been thoroughly explored. Our study investigates the resilience of LLMs against five common types of disruption: 1) ASR (Automatic Speech Recognition) errors, 2) OCR (Optical Character Recognition) errors, 3) grammatical mistakes, 4) typographical errors, and 5) distractive content. We investigate how these models react when such errors are deliberately embedded into instructions. Our findings reveal that while some LLMs show a degree of resistance to certain types of noise, their overall performance suffers significantly, underscoring the need for further investigation into enhancing model resilience. In response to the observed decline in performance, our study also evaluates a “re-pass” strategy, designed to purify the instructions of noise before the LLMs process them. Our analysis indicates that correcting noisy instructions, particularly for open-source LLMs, presents significant challenges.
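The paper's exact perturbation procedure is not reproduced on this record page; purely as an illustration of the kind of noise injection the abstract describes, a minimal sketch for one noise type (typographical errors) might look like the following. The function name, noise rate, and character-replacement scheme are hypothetical, not taken from the paper:

```python
import random

def inject_typos(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Simulate typographical noise by randomly perturbing characters.

    Each alphabetic character is, with probability `rate`, replaced by a
    random lowercase letter (a crude stand-in for keyboard slips). A fixed
    seed keeps the perturbation reproducible across runs.
    """
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < rate:
            out.append(rng.choice("abcdefghijklmnopqrstuvwxyz"))
        else:
            out.append(ch)
    return "".join(out)

clean = "Summarize the following article in one sentence."
noisy = inject_typos(clean, rate=0.15)
print(noisy)
```

A study like this one would then compare an LLM's task performance on `clean` versus `noisy` instructions; the "re-pass" idea corresponds to asking a model to restore `noisy` back toward `clean` before the task is attempted.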
License type:
Attribution 4.0 International (CC BY 4.0)
Funding Info:
This research / project is supported by the National Research Foundation, Singapore and Infocomm Media Development Authority, Singapore - National Large Language Models Funding Initiative
Grant Reference no. : SC20/24-734900
Description:
ACL materials are Copyright © 1963–2025 ACL. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License.
ACL Anthology ID:
2024.findings-emnlp.697
Files uploaded:

2024findings-emnlp697.pdf (328.32 KB, PDF)