Holistic Multi-modal Memory Network for Movie Question Answering

Title:
Holistic Multi-modal Memory Network for Movie Question Answering
Journal Title:
IEEE Transactions on Image Processing (TIP)
Publication Date:
02 August 2019
Citation:
A. Wang, A. T. Luu, C. Foo, H. Zhu, Y. Tay and V. Chandrasekhar, "Holistic Multi-modal Memory Network for Movie Question Answering," in IEEE Transactions on Image Processing. doi: 10.1109/TIP.2019.2931534
Abstract:
Answering questions using multi-modal context is a challenging problem as it requires a deep integration of diverse data sources. Existing approaches only consider a subset of all possible interactions among data sources during one attention hop. In this paper, we present a Holistic Multi-modal Memory Network (HMMN) framework that fully considers interactions between different input sources (multi-modal context, question) at each hop. In addition, to home in on relevant information, our framework takes answer choices into consideration during the context retrieval stage. Our HMMN framework effectively integrates information from the multi-modal context, question, and answer choices, enabling more informative context to be retrieved for question answering. Experimental results on the MovieQA and TVQA datasets validate the effectiveness of our HMMN framework. Extensive ablation studies show the importance of holistic reasoning and reveal the contributions of different attention strategies to model performance.
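For illustration, the PyTorch sketch below shows one plausible way such a "holistic" attention hop could look: the query jointly fuses the question and an answer-choice embedding before attending over the multi-modal memory, so every hop conditions on all input sources at once. The function name, dimensions, projection matrices, and fusion scheme are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def holistic_hop(context, question, answer, W_q, W_a):
    """One hypothetical holistic attention hop over a multi-modal memory.

    context:  (num_slots, d) fused video/subtitle memory slots
    question: (d,)           question embedding
    answer:   (d,)           one answer-choice embedding
    W_q, W_a: (d, d)         learned projections (passed in here)
    """
    # Fuse question and answer choice into one query so the hop
    # conditions on every input source at once.
    query = question @ W_q + answer @ W_a        # (d,)
    scores = context @ query                     # (num_slots,)
    weights = F.softmax(scores, dim=0)           # attention over memory slots
    retrieved = weights @ context                # weighted sum, (d,)
    return query + retrieved                     # residual update for next hop

# Toy usage: two hops over a random multi-modal memory, then score one choice.
d, slots = 64, 10
ctx = torch.randn(slots, d)                      # fused context slots
q, a = torch.randn(d), torch.randn(d)            # question / answer embeddings
W_q = torch.randn(d, d) * 0.1
W_a = torch.randn(d, d) * 0.1
h = holistic_hop(ctx, q, a, W_q, W_a)            # hop 1
h = holistic_hop(ctx, h, a, W_q, W_a)            # hop 2 reuses the answer choice
score = torch.dot(h, a)                          # compatibility with this choice
print(score.item())
```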
License type:
Publisher Copyrights
Funding Info:
A*STAR Deep Learning 2.0 Program (grant no. A1718G0045); A*STAR CHEEM (grant no. A1718G0048)
Description:
© 2019 IEEE
ISSN:
1057-7149 (print)
1941-0042 (electronic)
Files uploaded:
There are no attached files.