Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos

Title:
Deep Multimodal Feature Analysis for Action Recognition in RGB+D Videos
Journal:
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publication Date:
05 April 2017
Abstract:
Single-modality action recognition on RGB or depth sequences has been extensively explored in recent years. It is generally accepted that each of these two modalities has different strengths and limitations for the task of action recognition. Analysis of RGB+D videos can therefore help us better study the complementary properties of the two modalities and achieve higher levels of performance. In this paper, we propose a new deep autoencoder-based shared-specific feature factorization network that separates the input multimodal signals into a hierarchy of components. Further, based on the structure of the features, a structured sparsity learning machine is proposed which utilizes mixed norms to apply regularization within components and group selection between them for better classification performance. Our experimental results show the effectiveness of our cross-modality feature analysis framework, which achieves state-of-the-art accuracy for action classification on five challenging benchmark datasets.
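The abstract mentions a structured sparsity learning machine that uses mixed norms for regularization within components and group selection between them. The paper's exact formulation is not reproduced on this page, but the general idea of mixed-norm (l2,1-style) group regularization can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the weight matrix `W`, the group partition `groups`, and the helper names are all assumed here for demonstration.

```python
import numpy as np

def l21_norm(W, groups):
    """Mixed l2/l1 norm: the l1 sum, over groups, of the l2 norm of each
    group's block of rows. Penalizing this encourages entire groups
    (feature components) to be switched off together."""
    return sum(np.linalg.norm(W[g]) for g in groups)

def group_soft_threshold(W, groups, lam):
    """Proximal step for the l2,1 penalty: each group's block is shrunk
    toward zero, and any block whose l2 norm falls below lam is zeroed
    out entirely -- this is what performs group selection."""
    W_new = W.copy()
    for g in groups:
        norm = np.linalg.norm(W[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        W_new[g] = scale * W[g]
    return W_new

# Toy example: rows 0 belongs to one component, rows 1-2 to another.
W = np.array([[3.0, 4.0],
              [0.1, 0.1],
              [0.0, 0.2]])
groups = [np.array([0]), np.array([1, 2])]
W_reg = group_soft_threshold(W, groups, lam=1.0)
# The weak second group is eliminated; the strong first group survives, shrunk.
```

In a classifier trained with this penalty, groups correspond to the shared and modality-specific components, so the optimizer can discard entire components that do not help classification.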
License type:
Publisher Copyright
Description:
(c) 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
ISSN:
0162-8828
Files uploaded:
07892950.pdf (599.70 KB, PDF)