GLGR: Question-aware Global-to-Local Graph Reasoning for Multi-party Dialogue Reading Comprehension

Title:
GLGR: Question-aware Global-to-Local Graph Reasoning for Multi-party Dialogue Reading Comprehension
Journal Title:
Findings of the Association for Computational Linguistics: EMNLP 2023
Publication Date:
10 December 2023
Citation:
Li, Y., Zou, B., Fan, Y., Li, X., Aw, A. T., & Hong, Y. (2023). GLGR: Question-aware Global-to-Local Graph Reasoning for Multi-party Dialogue Reading Comprehension. Findings of the Association for Computational Linguistics: EMNLP 2023. https://doi.org/10.18653/v1/2023.findings-emnlp.122
Abstract:
Graph reasoning contributes to the integration of discretely distributed attentive information (clues) for Multi-party Dialogue Reading Comprehension (MDRC), primarily through multi-hop reasoning over global conversational structures. However, existing approaches rarely exploit the question itself to suppress noise during graph reasoning. More seriously, the local semantic structures within utterances are neglected, even though they help bridge semantically related clues. In this paper, we propose a question-aware global-to-local graph reasoning approach. It expands the canonical Interlocutor-Utterance graph by introducing a question node, enabling comprehensive global graph reasoning. More importantly, it constructs a semantic-role graph for each utterance and accordingly performs local graph reasoning conditioned on the semantic relations. We design a two-stage encoder network to implement progressive reasoning from the global graph to the local ones. Experiments on the benchmark datasets Molweni and FriendsQA show that our approach yields significant improvements over BERT and ELECTRA baselines, achieving 73.6% and 77.2% F1-scores on Molweni and FriendsQA, respectively, and outperforming state-of-the-art methods that employ different pretrained language models as backbones.
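To make the two-stage pipeline in the abstract concrete, the following is a minimal sketch in plain NumPy, not the authors' implementation: a global graph over interlocutors and utterances is expanded with a question node, one message-passing step is run, and the resulting utterance state conditions a per-utterance semantic-role subgraph for a local pass. Every identifier here (gnn_layer, link, the toy SRL parse of u2) is a hypothetical stand-in, and random vectors replace the BERT/ELECTRA encodings used in the paper.

# A minimal, illustrative sketch of question-aware global-to-local graph
# reasoning as described in the abstract. All names are hypothetical
# stand-ins, not the authors' code; random vectors stand in for
# BERT/ELECTRA utterance encodings.
import numpy as np

DIM = 16
rng = np.random.default_rng(0)

def gnn_layer(feats, adj):
    """One mean-aggregation message-passing step over an adjacency dict."""
    out = {}
    for node, vec in feats.items():
        neighbours = adj.get(node, [])
        msg = np.mean([feats[n] for n in neighbours], axis=0) if neighbours else np.zeros(DIM)
        out[node] = np.tanh(vec + msg)  # residual update (a hypothetical choice)
    return out

# Stage 1: global Interlocutor-Utterance graph, expanded with a question node.
speakers, utterances, question = ["spk_A", "spk_B"], ["u1", "u2", "u3"], "q"
nodes = speakers + utterances + [question]
feats = {n: rng.standard_normal(DIM) for n in nodes}
adj = {n: [] for n in nodes}

def link(a, b):  # undirected edge
    adj[a].append(b)
    adj[b].append(a)

link("spk_A", "u1"); link("spk_B", "u2"); link("spk_A", "u3")  # speaker edges
link("u1", "u2"); link("u2", "u3")                             # adjacent-utterance edges
for u in utterances:                                           # question-aware edges
    link(question, u)

feats = gnn_layer(feats, adj)  # global reasoning pass

# Stage 2: local semantic-role subgraph inside one utterance (a toy SRL
# parse of u2), conditioned on the globally-reasoned utterance state.
srl_nodes = ["pred:said", "arg0:Ross", "arg1:the_plan"]
srl_feats = {n: rng.standard_normal(DIM) for n in srl_nodes}
srl_feats["pred:said"] += feats["u2"]  # inject global state into the local graph
srl_adj = {
    "pred:said": ["arg0:Ross", "arg1:the_plan"],
    "arg0:Ross": ["pred:said"],
    "arg1:the_plan": ["pred:said"],
}
srl_feats = gnn_layer(srl_feats, srl_adj)  # local reasoning pass

print(srl_feats["arg1:the_plan"][:4])  # clue vector that would feed span prediction

In the paper these passes are implemented by a two-stage encoder whose final node states feed answer-span prediction; the single mean-aggregation layer above is only meant to show the global-to-local data flow.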
License type:
Attribution 4.0 International (CC BY 4.0)
Funding Info:
This research is supported by core funding from: I2R
Grant Reference no.: CR-2021-001

The research is also supported by the National Key R&D Program of China (2020YFB1313601) and the National Science Foundation of China (62376182, 62076174).
ACL Anthology ID:
2023.findings-emnlp.122
Files uploaded:
2023findings-emnlp122.pdf (752.07 KB, PDF)