Longyin Zhang, Bowei Zou, and Ai Ti Aw. 2024. Empowering Tree-structured Entailment Reasoning: Rhetorical Perception and LLM-driven Interpretability. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 5783–5793, Torino, Italia. ELRA and ICCL.
Abstract:
This study addresses the construction of entailment trees for science question answering (SQA) through a novel framework termed Tree-structured Entailment Reasoning (TER). Current research on entailment tree construction faces significant challenges, primarily the ambiguities and similarities among candidate science facts, which considerably complicate fact retrieval. Moreover, existing models exhibit limitations in modeling the sequence of reasoning states, understanding the intricate relations between neighboring entailment tree nodes, and generating intermediate conclusions. To this end, we explore enhancing TER performance from three aspects: first, improving retrieval capabilities by modeling and referring to the chained reasoning states; second, enhancing TER by infusing knowledge that bridges the gap between reasoning types and rhetorical relations; and third, exploring a task-specific large language model tuning scheme to mitigate deficiencies in intermediate conclusion generation. Experiments on the English EntailmentBank demonstrate the effectiveness of the proposed methods in augmenting the quality of tree-structured entailment reasoning to a certain extent.
License type:
Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Funding Info:
This research is supported by core funding from the Institute for Infocomm Research.
Grant Reference no.: CR-2021-001