FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation

Title:
FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation
Journal Title:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Keywords:
Publication Date:
07 December 2022
Citation:
Zhang, C., D’Haro, L. F., Zhang, Q., Friedrichs, T., & Li, H. (2022). FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation. Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 3336–3355. https://doi.org/10.18653/v1/2022.emnlp-main.220
Abstract:
Recent model-based reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment. However, they either perform turn-level evaluation or look at a single dialogue quality dimension. One would expect a good evaluation metric to assess multiple quality dimensions at the dialogue level. To this end, we are motivated to propose a multi-dimensional dialogue-level metric, which consists of three sub-metrics with each targeting a specific dimension. The sub-metrics are trained with novel self-supervised objectives and exhibit strong correlations with human judgment for their respective dimensions. Moreover, we explore two approaches to combine the sub-metrics: metric ensemble and multitask learning. Both approaches yield a holistic metric that significantly outperforms individual sub-metrics. Compared to the existing state-of-the-art metric, the combined metrics achieve around 16% relative improvement on average across three high-quality dialogue-level evaluation benchmarks.
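The abstract above mentions two ways of combining the dimension-specific sub-metrics: metric ensemble and multitask learning. The sketch below illustrates only the ensemble idea, assuming the simplest combination (averaging sub-metric scores over a whole dialogue); the names Dialogue, SubMetric, and ensemble_score are illustrative placeholders, not the paper's released API.

# Minimal sketch of the "metric ensemble" idea from the abstract: each
# sub-metric scores one quality dimension of a full dialogue, and the
# holistic score averages the sub-metric outputs. Placeholder names only.

from typing import Callable, List

Dialogue = List[str]                      # a dialogue as an ordered list of utterances
SubMetric = Callable[[Dialogue], float]   # maps a dialogue to a score, e.g. in [0, 1]

def ensemble_score(dialogue: Dialogue, sub_metrics: List[SubMetric]) -> float:
    """Combine dimension-specific sub-metric scores into one holistic score
    by simple averaging (one plausible ensembling choice)."""
    scores = [metric(dialogue) for metric in sub_metrics]
    return sum(scores) / len(scores)

if __name__ == "__main__":
    # Dummy sub-metrics standing in for three trained dimension-specific models.
    dummy = lambda dialogue: 0.5
    holistic = ensemble_score(["Hi!", "Hello, how are you?"], [dummy, dummy, dummy])
    print(f"holistic score: {holistic:.2f}")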
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the Agency for Science, Technology and Research (A*STAR) - National Robotics Program: Human-Robot Interaction Phase 1
Grant Reference no. : 1922500054

This research / project is supported by the Agency for Science, Technology and Research (A*STAR) - Advanced Manufacturing and Engineering (AME) Programmatic Funding Scheme: Human Robot Collaborative AI
Grant Reference no. : A18A2b0046

This research / project is supported by the Shenzhen Research Institute of Big Data - NA
Grant Reference no. : T00120220002

This research / project is supported by the Robert Bosch (SEA) Pte Ltd - EDB’s Industrial Postgraduate Programme – II (EDB-IPP), project title: Applied Natural Language Processing
Grant Reference no. :
Description:
© 2022 Association for Computational Linguistics. Permission is granted to make copies for the purposes of teaching and research. Materials published in or after 2016 are licensed on a Creative Commons Attribution 4.0 International License.
ACL Anthology ID:
2022.emnlp-main.220