Han, X., Li, R., Li, X., & Pan, J. Z. (2023). A divide and conquer framework for Knowledge Editing. Knowledge-Based Systems, 279, 110826. https://doi.org/10.1016/j.knosys.2023.110826
Abstract:
As pre-trained language models (LMs) play an important role in various Natural Language Processing
(NLP) tasks, it is becoming increasingly important to ensure that the knowledge learned by LMs is
valid and correct. Unlike conventional knowledge bases, LMs memorize knowledge implicitly in their
parameters, which makes it harder to correct knowledge that is incorrectly inferred or obsolete.
The task of Knowledge Editing is to correct errors in language models while avoiding the expensive
overhead of retraining the model from scratch. While existing methods have shown some
promising results, they fail on multiple edits because they ignore the conflicts between those edits.
In this paper, we propose a novel framework that divides and conquers edits with parallel editors.
Specifically, we design explicit and implicit multi-editor models that learn diverse editing strategies in
terms of dynamic structure and dynamic parameters, respectively, which allows conflicting
edits to be resolved in an efficient end-to-end manner.
Our main findings are: (i) state-of-the-art Knowledge Editing methods with multi-editing
capability, such as MEND and ENN, can hardly outperform the fine-tuning method; (ii) our proposed
models outperform the fine-tuning method on two widely used datasets for Knowledge Editing;
(iii) additional analytical experiments verify that our approach learns diverse editing strategies and
thus adapts better to multiple editing than state-of-the-art methods.
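To make the divide-and-conquer idea in the abstract concrete, the sketch below shows one way parallel editors could be realized: a gating network routes each requested edit to one of K editor hypernetworks, each of which predicts a parameter delta for the model being edited, so that conflicting edits can be handled by different editors. This is a minimal illustrative sketch, not the paper's method; all names (ParallelEditor, edit_dim, param_dim, num_editors) and the soft-routing design are assumptions.

# Minimal sketch of routing edits to parallel editors (hypothetical,
# not the architecture from Han et al. 2023).
import torch
import torch.nn as nn

class ParallelEditor(nn.Module):
    def __init__(self, edit_dim: int, param_dim: int, num_editors: int = 4):
        super().__init__()
        # Gate: scores each edit representation against the K editors.
        self.gate = nn.Linear(edit_dim, num_editors)
        # Each editor is a small hypernetwork mapping an edit
        # representation to a delta over the target parameters.
        self.editors = nn.ModuleList(
            nn.Sequential(nn.Linear(edit_dim, 128),
                          nn.ReLU(),
                          nn.Linear(128, param_dim))
            for _ in range(num_editors)
        )

    def forward(self, edit_repr: torch.Tensor) -> torch.Tensor:
        # edit_repr: (batch, edit_dim), one row per requested edit.
        weights = torch.softmax(self.gate(edit_repr), dim=-1)  # (batch, K)
        deltas = torch.stack([e(edit_repr) for e in self.editors],
                             dim=1)  # (batch, K, param_dim)
        # Soft routing: conflicting edits can land on different editors,
        # so their updates need not share a single editor's weights.
        return (weights.unsqueeze(-1) * deltas).sum(dim=1)  # (batch, param_dim)

# Usage: the predicted deltas would be added to the flattened weights
# of the layer being edited.
editor = ParallelEditor(edit_dim=64, param_dim=256)
batch_of_edits = torch.randn(8, 64)
param_deltas = editor(batch_of_edits)
print(param_deltas.shape)  # torch.Size([8, 256])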
License type:
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)
Funding Info:
National Natural Science Foundation of China (No. 61936012 and No. 62076155), National Key Research and Development Program of China (No. 2020AAA0106100), Chang Jiang Scholars Program (J2019032).