Action Recognition for Depth Video using Multi-view Dynamic Images

Title:
Action Recognition for Depth Video using Multi-view Dynamic Images
Journal Title:
Information Sciences
Publication Date:
24 December 2018
Citation:
Yang Xiao, Jun Chen, Yancheng Wang, Zhiguo Cao, Joey Tianyi Zhou, Xiang Bai, Action recognition for depth video using multi-view dynamic images, Information Sciences, Volume 480, 2019, Pages 287-304, ISSN 0020-0255, https://doi.org/10.1016/j.ins.2018.12.050.
Abstract:
Dynamic imaging is a recently proposed action description paradigm that simultaneously captures motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging is more efficient and compact. Inspired by its success on RGB video, this study extends dynamic imaging to the depth domain. To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed: the raw depth video is densely projected with respect to different virtual imaging viewpoints by rotating a virtual camera within 3D space, and dynamic images are then extracted from the resulting multi-view depth videos to construct the multi-view dynamic images. In this way, more view-tolerant visual cues are captured. A novel CNN model is then proposed to learn features from the multi-view dynamic images. In particular, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers; this design enhances the tuning of the shallow convolutional layers by alleviating the vanishing-gradient problem. Moreover, because variation in where an action occurs within the frame may impair the CNN, an action proposal approach is also introduced. In experiments, the proposed approach achieves state-of-the-art performance on three challenging datasets.
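
For readers unfamiliar with dynamic imaging, the following is a minimal Python sketch of approximate rank pooling, the common way to build a dynamic image from a frame sequence (following Bilen et al.'s formulation); the coefficient formula and the [0, 255] normalization here are illustrative assumptions, not necessarily the paper's exact implementation:

```python
import numpy as np

def dynamic_image(frames):
    """Collapse a clip of frames (T, H, W) into one dynamic image via
    approximate rank pooling: later frames receive larger weights, so
    the weighted sum encodes the clip's temporal evolution."""
    frames = np.asarray(frames, dtype=np.float32)
    T = frames.shape[0]
    # Approximate rank-pooling coefficient for frame t (1-indexed):
    # alpha_t = 2t - T - 1 (illustrative simplification).
    alphas = 2.0 * np.arange(1, T + 1, dtype=np.float32) - T - 1
    di = np.tensordot(alphas, frames, axes=1)  # weighted sum -> (H, W)
    # Rescale to [0, 255] so the result can feed an image CNN (assumed step).
    di -= di.min()
    if di.max() > 0:
        di *= 255.0 / di.max()
    return di.astype(np.uint8)

# Example: 16 synthetic depth frames of size 64x64.
clip = np.random.rand(16, 64, 64).astype(np.float32)
print(dynamic_image(clip).shape)  # (64, 64)
```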
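
The multi-view CNN described in the abstract (shared convolutional layers, one fully connected head per virtual viewpoint) can be sketched as below. The ResNet-18 backbone, the 512-dimensional feature size, and the score-averaging fusion are assumptions chosen for illustration, not the paper's reported architecture:

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiViewDynamicImageCNN(nn.Module):
    """Sketch of the abstract's design: all views share the convolutional
    layers, while each virtual viewpoint gets its own fully connected head."""
    def __init__(self, num_views: int, num_classes: int):
        super().__init__()
        backbone = models.resnet18(weights=None)  # assumed backbone
        # Shared convolutional trunk: everything up to the final FC layer.
        self.shared_conv = nn.Sequential(*list(backbone.children())[:-1])
        # One FC head per view; gradients from every head flow into the
        # shared trunk, which is what helps tune the shallow conv layers.
        self.heads = nn.ModuleList(
            nn.Linear(512, num_classes) for _ in range(num_views)
        )

    def forward(self, views):
        # views: list of (B, 3, H, W) dynamic-image batches, one per view.
        logits = [
            head(self.shared_conv(x).flatten(1))  # (B, num_classes)
            for head, x in zip(self.heads, views)
        ]
        # Assumed fusion: average per-view scores for the final prediction.
        return torch.stack(logits).mean(dim=0)

model = MultiViewDynamicImageCNN(num_views=5, num_classes=20)
batch = [torch.randn(2, 3, 224, 224) for _ in range(5)]
print(model(batch).shape)  # torch.Size([2, 20])
```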
License type:
http://creativecommons.org/licenses/by-nc-nd/4.0/
Funding Info:
This work is jointly supported by the Natural Science Foundation of China (Grant Nos. 61502187, 61876211, 61602193, and 61702182), the National Key R&D Program of China (No. 2018YFB1004600), the International Science and Technology Cooperation Program of Hubei Province, China (Grant No. 2017AHB051), the HUST Interdisciplinary Innovation Team Foundation (Grant No. 2016JCTD120), and the Hunan Provincial Natural Science Foundation of China (Grant No. 2018JJ3254). Joey Tianyi Zhou is supported by Programmatic Grant No. A1687b0033 from the Singapore government’s Research, Innovation and Enterprise 2020 plan (Advanced Manufacturing and Engineering domain).
ISSN:
0020-0255 (print)
1872-6291 (online)