Depth and Video Segmentation Based Visual Attention for Embodied Question Answering

Title:
Depth and Video Segmentation Based Visual Attention for Embodied Question Answering
Journal Title:
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication Date:
04 January 2022
Citation:
Luo, H., Lin, G., Yao, Y., Liu, F., Liu, Z., & Tang, Z. (2022). Depth and Video Segmentation Based Visual Attention for Embodied Question Answering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1–1. https://doi.org/10.1109/tpami.2021.3139957
Abstract:
Embodied Question Answering (EQA) is a newly defined research area in which an agent is required to answer a user's questions by exploring a real-world environment. It has attracted increasing research interest due to its broad applications in personal assistants and in-home robots. Most existing methods perform poorly in terms of answering and navigation accuracy because they lack fine-grained semantic information, robustness to ambiguity, and 3D spatial information about the virtual environment. To tackle these problems, we propose a depth and segmentation based visual attention mechanism for Embodied Question Answering. First, we extract local semantic features by introducing a novel high-speed video segmentation framework. Then, guided by the extracted semantic features, a depth and segmentation based visual attention mechanism is proposed for the Visual Question Answering (VQA) sub-task. Further, a feature fusion strategy is designed to guide the navigator's training process without much additional computational cost. Ablation experiments show that our method effectively boosts the performance of the VQA module and the navigation module, leading to 4.9% and 5.6% overall improvements in EQA accuracy on the House3D and Matterport3D datasets, respectively.
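
To make the mechanism described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a depth- and segmentation-guided visual attention module, not the authors' implementation. It assumes RGB, depth, and segmentation feature maps of equal spatial size plus a question embedding; all names (DepthSegAttention, hidden_dim, etc.) are illustrative and not taken from the paper.

# Hypothetical sketch of depth- and segmentation-guided visual attention,
# loosely following the idea in the abstract. Names are illustrative.
import torch
import torch.nn as nn

class DepthSegAttention(nn.Module):
    def __init__(self, rgb_dim=128, depth_dim=32, seg_dim=32, q_dim=128, hidden_dim=128):
        super().__init__()
        # Project depth + segmentation features into a shared space used to
        # score each spatial location.
        self.guide_proj = nn.Conv2d(depth_dim + seg_dim, hidden_dim, kernel_size=1)
        self.rgb_proj = nn.Conv2d(rgb_dim, hidden_dim, kernel_size=1)
        self.q_proj = nn.Linear(q_dim, hidden_dim)
        self.score = nn.Conv2d(hidden_dim, 1, kernel_size=1)

    def forward(self, rgb_feat, depth_feat, seg_feat, q_emb):
        # rgb_feat:   (B, rgb_dim,   H, W) visual features
        # depth_feat: (B, depth_dim, H, W) depth-derived features
        # seg_feat:   (B, seg_dim,   H, W) segmentation-derived features
        # q_emb:      (B, q_dim)           question embedding
        guide = self.guide_proj(torch.cat([depth_feat, seg_feat], dim=1))
        vis = self.rgb_proj(rgb_feat)
        q = self.q_proj(q_emb).unsqueeze(-1).unsqueeze(-1)   # (B, hidden, 1, 1)
        # Combine visual, guidance, and question signals, then score each location.
        joint = torch.tanh(vis + guide + q)
        attn = torch.softmax(self.score(joint).flatten(2), dim=-1)  # (B, 1, H*W)
        # Attention-weighted sum of the visual features -> (B, rgb_dim)
        attended = torch.bmm(rgb_feat.flatten(2), attn.transpose(1, 2)).squeeze(-1)
        return attended, attn

In the paper, the segmentation features would come from the proposed high-speed video segmentation framework; here they are treated as a generic feature map of arbitrary channel width, and the attended vector would feed the downstream answer decoder.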
License type:
Publisher Copyright
Funding Info:
This research / project is supported by the National Research Foundation (NRF) - AI Singapore Programme
Grant Reference No.: AISG-RP-2018-003

This research / project is supported by the Ministry of Education (MOE) - Tier-1 research grants
Grant Reference Nos.: RG28/18 (S), RG22/19 (S), and RG95/20

It is also supported by the National Natural Science Foundation of China (Nos. 62102182, 61976116, and 61905114), the Natural Science Foundation of Jiangsu Province (No. BK20210327), the Fundamental Research Funds for the Central Universities (No. 30920021135), and the National Key R&D Program of China (2021YFF0602101).
Description:
© 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
0162-8828
1939-3539