Pose-Invariant Kinematic Features for Action Recognition

Journal Title:
Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference 2017
Publication Date:
15 December 2017
Recognition of actions from videos is a difficult task due to factors such as dynamic backgrounds, occlusion, and pose variations. To tackle the pose-variation problem, we propose a simple method based on a novel set of pose-invariant kinematic features encoded in a human-body-centric space. The proposed framework begins with detection of the neck point, which serves as the origin of the body-centric space. We propose a deep-learning-based classifier that detects the neck point from the output of a fully connected network layer. With the help of the detected neck point, a propagation mechanism divides the foreground region into head, torso, and leg grids. The motion observed in each of these body-part grids is represented using a set of pose-invariant kinematic features. These features describe the motion of the foreground (body) region with respect to the motion of the detected neck point and are encoded, per view, in the body-centric space. Based on these features, pose-invariant action recognition can be achieved. Because a body-centric space is used, actions performed in non-upright human postures can also be handled easily. To test effectiveness on non-upright postures, a new dataset is introduced with 8 non-upright actions performed by 35 subjects in 3 different views. Experiments have been conducted on benchmark datasets and the newly proposed non-upright action dataset to identify limitations and gain insights into the proposed framework.
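The core idea of the abstract — describing each body-part grid's motion relative to the detected neck point so that the features become independent of global pose and position — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual feature set: the function name `relative_kinematics`, the choice of per-frame velocity, speed, and direction as features, and the dictionary-of-trajectories input format are all assumptions made for illustration.

```python
import numpy as np

def relative_kinematics(part_tracks, neck_track, fps=30.0):
    """Encode body-part motion in a neck-centred (body-centric) frame.

    part_tracks: dict mapping a part name (e.g. "head", "torso", "legs")
                 to a (T, 2) array of grid-centroid positions per frame.
    neck_track:  (T, 2) array of detected neck-point positions per frame.
    Returns a dict mapping each part name to a (T-1, 4) feature array of
    [vx, vy, speed, direction], all measured relative to the neck point,
    so a global translation of the whole body leaves them unchanged.
    """
    dt = 1.0 / fps
    features = {}
    for name, track in part_tracks.items():
        rel = track - neck_track                 # body-centric coordinates
        vel = np.diff(rel, axis=0) / dt          # relative velocity per frame
        speed = np.linalg.norm(vel, axis=1, keepdims=True)
        direction = np.arctan2(vel[:, 1], vel[:, 0])[:, None]
        features[name] = np.hstack([vel, speed, direction])
    return features
```

Subtracting the neck trajectory before differencing is what removes camera- or body-translation effects: a part that moves rigidly with the body yields zero relative velocity, while genuine limb motion survives.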