Topic-aware Pointer-Generator Networks for Summarizing Spoken Conversations

Title:
Topic-aware Pointer-Generator Networks for Summarizing Spoken Conversations
Journal Title:
IEEE Automatic Speech Recognition and Understanding Workshop
DOI:
Keywords:
Publication Date:
14 December 2019
Citation:
Abstract:
Due to the lack of publicly available resources, conversation summarization has received far less attention than text summarization. As the purpose of a conversation is to exchange information between at least two interlocutors, key information about a given topic is often scattered across multiple utterances and turns from different speakers. This phenomenon is more pronounced in spoken conversations, where speech characteristics such as backchanneling and false starts may interrupt the topical flow. Moreover, topic diffusion and (intra-utterance) topic drift are also more common in human-to-human conversations. These linguistic characteristics of dialogue topics make sentence-level extractive summarization approaches used for spoken documents ill-suited to summarizing conversations. Pointer-generator networks have demonstrated their strength at integrating extractive and abstractive capabilities through neural modeling in text summarization. To the best of our knowledge, they have not yet been adopted for summarizing conversations. In this work, we propose a topic-aware architecture that exploits the inherent hierarchical structure of conversations to further adapt the pointer-generator model. Our approach significantly outperforms competitive baselines, achieves more efficient learning outcomes, and attains more robust performance.
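For background, the pointer-generator model that the abstract refers to mixes copying source tokens with generating from a fixed vocabulary. The formulation below is the standard one from See et al. (2017), shown here only as context; it is not the topic-aware adaptation proposed in this paper. At decoding step t, a generation probability p_gen (a learned scalar in [0, 1]) interpolates the vocabulary distribution with the attention distribution over the source tokens:

P(w) = p_{\mathrm{gen}} \, P_{\mathrm{vocab}}(w) + (1 - p_{\mathrm{gen}}) \sum_{i : w_i = w} a_i^t

Here a_i^t is the attention weight on source token w_i at step t, so out-of-vocabulary words that appear in the conversation can still be copied into the summary even when P_vocab assigns them zero probability.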
License type:
Publisher Copyrights
Funding Info:
This research was supported by funding for Digital Health from the Institute for Infocomm Research (I2R) and the Science and Engineering Research Council (Project No. A1818g0044), A*STAR. This work was conducted using resources from the Human Language Technology unit at I2R. We thank R. E. Banchs, L. F. D’Haro, P. Krishnaswamy, H. Lim, F. A. Suhaimi and S. Ramasamy at I2R, and W. L. Chow, A. Ng, H. C. Oh, and S. C. Tong at Changi General Hospital for insightful discussions.
Description:
© 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISBN:

Files uploaded:
There are no attached files.