Contextual Paralinguistic Data Creation for Multi-Modal Speech-LLM: Data Condensation and Spoken QA Generation

Title:
Contextual Paralinguistic Data Creation for Multi-Modal Speech-LLM: Data Condensation and Spoken QA Generation
Journal Title:
Interspeech 2025
Publication Date:
17 August 2025
Citation:
Qiongqiong Wang, Hardik B Sailor, Tianchi Liu, and Ai Ti Aw, “Contextual paralinguistic data creation for multi-modal Speech-LLM: Data condensation and spoken QA generation,” in Proc. Interspeech, 2025.
Abstract:
Current speech-LLMs exhibit limited capability in contextual reasoning alongside paralinguistic understanding, primarily due to the lack of Question-Answer (QA) datasets that cover both aspects. We propose a novel framework for dataset generation from in-the-wild speech data that integrates contextual reasoning with paralinguistic information. It consists of pseudo paralinguistic label-based data condensation of in-the-wild speech and LLM-based Contextual Paralinguistic QA (CPQA) generation. Its effectiveness is validated by a strong correlation between evaluations of the Qwen2-Audio-7B-Instruct model on a dataset created by our framework and on a human-generated CPQA dataset. The results also reveal the speech-LLM’s limitations in handling empathetic reasoning tasks, highlighting the need for such datasets and more robust models. The proposed framework is the first of its kind and has potential for training more robust speech-LLMs with paralinguistic reasoning capabilities.
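As a rough illustration of the two-stage framework the abstract describes (condensing in-the-wild speech via pseudo paralinguistic labels, then generating CPQA pairs with an LLM), the pipeline's shape might look like the sketch below. All function names, label fields, and thresholds are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the two-stage pipeline; names and thresholds are
# illustrative assumptions, not the paper's actual implementation.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    transcript: str
    emotion: str        # pseudo paralinguistic label, e.g. from a pretrained classifier
    confidence: float   # classifier confidence in that label

def condense(clips, min_confidence=0.8):
    """Stage 1: keep only clips whose pseudo paralinguistic labels look reliable."""
    return [c for c in clips if c.confidence >= min_confidence]

def make_cpqa_prompt(clip):
    """Stage 2: build an LLM prompt that ties the transcript (context) to the
    paralinguistic label, so generated QA pairs require both to answer."""
    return (
        f"Transcript: {clip.transcript}\n"
        f"Speaker emotion: {clip.emotion}\n"
        "Write one question whose answer requires both the transcript content "
        "and the speaker's emotional state, followed by its answer."
    )

clips = [
    Clip("a", "I finally passed the exam!", "happy", 0.93),
    Clip("b", "The meeting moved to Friday.", "neutral", 0.41),
]
kept = condense(clips)
prompts = [make_cpqa_prompt(c) for c in kept]
```

In this sketch, `prompts` would be sent to an LLM to produce the CPQA pairs; the low-confidence clip is filtered out before prompt construction.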
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the National Research Foundation, Singapore, under the National Large Language Models Funding Initiative
Grant Reference no. : SC20/24-734900
Description:
ISSN:
2958-1796
Files uploaded:

camera-ready.pdf (694.28 KB, PDF)