Analyzing and Evaluating Faithfulness in Dialogue Summarization

Title:
Analyzing and Evaluating Faithfulness in Dialogue Summarization
Journal Title:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Keywords:
Publication Date:
07 December 2022
Citation:
Wang, B., Zhang, C., Zhang, Y., Chen, Y., & Li, H. (2022). Analyzing and Evaluating Faithfulness in Dialogue Summarization. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 4897–4908. https://doi.org/10.18653/v1/2022.emnlp-main.325
Abstract:
Dialogue summarization is abstractive in nature, making it prone to factual errors. The factual correctness of summaries is a top priority for practical applications. Many efforts have been made to improve faithfulness in text summarization, but there is a lack of systematic study of dialogue summarization systems. In this work, we first perform a fine-grained human analysis of the faithfulness of dialogue summaries and observe that over 35% of generated summaries are factually inconsistent with the source dialogues. Furthermore, we present a new model-level faithfulness evaluation method: it examines generation models with multiple-choice questions created by rule-based transformations. Experimental results show that our evaluation schema is a strong proxy for the factual correctness of summarization models. The human-annotated faithfulness samples and the evaluation toolkit are released to facilitate future research toward faithful dialogue summarization.
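The model-level evaluation described above can be illustrated with a minimal sketch. The details here are assumptions for illustration only: a hypothetical speaker-swap transformation stands in for the paper's rule-based transformations, and a user-supplied `score_fn` stands in for the summarization model's scoring of each option; the paper's actual rules and scoring may differ.

```python
import itertools

def make_speaker_swap_distractors(summary, speakers):
    """One hypothetical rule-based transformation: swap pairs of speaker
    names in a reference summary to create unfaithful distractor options."""
    distractors = []
    for a, b in itertools.combinations(speakers, 2):
        if a in summary and b in summary:
            # Swap the two names via a placeholder character.
            swapped = summary.replace(a, "\0").replace(b, a).replace("\0", b)
            if swapped != summary:
                distractors.append(swapped)
    return distractors

def model_level_accuracy(questions, score_fn):
    """Score each option with the model; the model is counted correct on a
    question when the faithful option receives the highest score.

    questions: iterable of (dialogue, options, answer_index) triples.
    score_fn:  callable(dialogue, option) -> float, e.g. a length-normalized
               log-likelihood from the summarization model (assumed here).
    """
    correct = 0
    total = 0
    for dialogue, options, answer_index in questions:
        scores = [score_fn(dialogue, opt) for opt in options]
        best = max(range(len(options)), key=scores.__getitem__)
        correct += int(best == answer_index)
        total += 1
    return correct / total
```

For example, `make_speaker_swap_distractors("Amy will bring the cake for Bob.", ["Amy", "Bob"])` yields a distractor with the two roles reversed, which can then be paired with the faithful summary as a multiple-choice question.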
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the Agency for Science, Technology and Research (A*STAR) - Advanced Manufacturing and Engineering (AME) Programmatic Funding Scheme
Grant Reference no. : A18A2b0046

This research / project is supported by the Agency for Science, Technology and Research (A*STAR) - National Robotics Program: Human-Robot Interaction Phase 1
Grant Reference no. : 1922500054

This research / project is supported by the Shenzhen Research Institute of Big Data - NA
Grant Reference no. : T00120220002
Description:
©2022 Association for Computational Linguistics. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License.
Anthology ID:
2022.emnlp-main.325