Document-Level Event Extraction via Information Interaction Based on Event Relation and Argument Correlation

Title:
Document-Level Event Extraction via Information Interaction Based on Event Relation and Argument Correlation
Journal Title:
The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
DOI:
Keywords:
Publication Date:
20 May 2024
Citation:
Bangze Pan, Yang Li, Suge Wang, Xiaoli Li, Deyu Li, Jian Liao and Jianxing Zheng, "Document-Level Event Extraction via Information Interaction Based on Event Relation and Argument Correlation", The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (COLING), 2024.
Abstract:
Document-level Event Extraction (DEE) is a vital task in NLP, as it seeks to automatically recognize and extract event information from a document. However, current approaches often overlook intricate relationships among events and subtle correlations among arguments within a document, which can significantly impact the effectiveness of event type recognition and the extraction of cross-sentence arguments in the DEE task. This paper proposes a novel Correlation Association Interactive Network (CAINet), comprising two key components: an event relationship graph and an argument correlation graph. In particular, the event relationship graph models the relationships among various events through structural associations among event nodes and sentence nodes, to improve the accuracy of event recognition. On the other hand, the argument correlation graph models the correlations among arguments by quantifying the strength of association among them, to effectively aggregate cross-sentence arguments, contributing to the overall success of DEE. Furthermore, we use large language models to conduct DEE experiments. Experimental results show that the proposed CAINet outperforms existing state-of-the-art models and large language models in terms of F1-score on two benchmark datasets.
License type:
Publisher Copyright
Funding Info:
National Key Research and Development Program of China (2022QY0300-01), National Natural Science Foundation of China (62376143, 62106130, 62076158, 62072294, 62272286), Natural Science Foundation of Shanxi Province, China (20210302124084), Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi, China (2021L284), and CCF-Zhipu AI Large Model Foundation of China (CCF-Zhipu202310).
Description:
ISSN:
Nil
Files uploaded:

File: 1438-paper.pdf (277.00 KB, PDF)