Yunxiao Zhao, Zhiqiang Wang, Xiaoli Li, Jiye Liang, and Ru Li. 2024. AGR: Reinforced Causal Agent-Guided Self-explaining Rationalization. In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 510–518, Bangkok, Thailand. Association for Computational Linguistics.
Abstract:
Most existing rationalization approaches are susceptible to degeneration accumulation due to a lack of effective control over the model's learning direction during training. To address this issue, we propose AGR (Agent-Guided Rationalization), a novel approach that guides the model's next action based on its current training state. Specifically, we introduce causal intervention calculus to quantify the causal effects inherent in rationale training, and use a reinforcement learning process to refine the resulting learning bias. Furthermore, we pretrain an agent within this reinforced causal environment to guide the model's next step. We theoretically demonstrate that a good model needs the desired guidance, and empirically show the effectiveness of our approach, which outperforms existing state-of-the-art methods on the BeerAdvocate and HotelReview datasets.
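For context, the degeneration problem the abstract refers to arises in the standard extractor-predictor rationalization game, in which an extractor selects a token subset (the rationale) and a predictor classifies from that subset alone. Below is a minimal sketch of that base framework, not the paper's AGR implementation; all module names, dimensions, and the straight-through masking choice are illustrative assumptions.

```python
# Minimal extractor-predictor rationalization skeleton (the cooperative
# game that AGR's agent guidance targets). Illustrative sketch only;
# architecture and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn

class Extractor(nn.Module):
    """Scores each token and emits a hard rationale mask."""
    def __init__(self, vocab_size, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))        # (B, T, 2H)
        probs = torch.sigmoid(self.score(h).squeeze(-1))  # (B, T)
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # gradients flow through the soft probabilities.
        hard = (probs > 0.5).float()
        mask = hard + probs - probs.detach()
        return mask

class Predictor(nn.Module):
    """Classifies using only the tokens kept by the rationale mask."""
    def __init__(self, vocab_size, num_classes, emb_dim=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_classes)

    def forward(self, tokens, mask):
        x = self.emb(tokens) * mask.unsqueeze(-1)  # zero non-rationale tokens
        _, h = self.rnn(x)
        return self.out(h.squeeze(0))              # (B, num_classes)
```

Because the two modules are trained jointly with only the task loss as signal, the extractor can drift toward uninformative selections that the predictor nonetheless overfits to; per the abstract, AGR counteracts this by having a pretrained agent, informed by causal-effect estimates, steer each training step.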
License type:
Attribution 4.0 International (CC BY 4.0)
Funding Info:
This work was supported by the National Natural Science Foundation of China (Nos. 62376144, 62272285, 62076155) and the Science and Technology Co-operation and Exchange Special Project of Shanxi Province (No. 202204041101016).