Rethinking Human Motion Prediction with Symplectic Integral

Title:
Rethinking Human Motion Prediction with Symplectic Integral
Journal Title:
2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publication Date:
16 September 2024
Citation:
Chen, H., Lyu, K., Liu, Z., Yin, Y., Yang, X., & Lyu, Y. (2024). Rethinking Human Motion Prediction with Symplectic Integral. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2134–2143. https://doi.org/10.1109/cvpr52733.2024.00208
Abstract:
Long-term and accurate forecasting is the long-standing pursuit of the human motion prediction task. Existing methods typically suffer from dramatic degradation in prediction accuracy as the prediction horizon increases. This comes down to two reasons: 1) insufficient numerical stability caused by unforeseen high noise and complex feature relationships in the data, and 2) inadequate modeling stability caused by unreasonable step sizes and undesirable parameter updates in the prediction. In this paper, we design a novel, symplectic integral-inspired framework named the symplectic integral neural network (SINN), which engages symplectic trajectories to optimize the pose representation and employs a stable symplectic operator to alternately model the dynamic context. Specifically, we design a Symplectic Representation Encoder that operates on an enhanced human pose representation to obtain trajectories on the symplectic manifold, ensuring numerical stability based on Hamiltonian mechanics and a symplectic spatial splitting algorithm. We further present the Symplectic Temporal Aggregation module, which splits the long-term prediction into multiple accurate short-term predictions generated by a symplectic operator to secure modeling stability. Moreover, our approach is model-agnostic and can be efficiently integrated with different physical dynamics models. The experimental results demonstrate that our method achieves a new state of the art, outperforming existing methods by 20.1% on Human3.6M, 16.7% on CMU Mocap, and 10.2% on 3DPW.
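The paper's own architecture is not reproduced in this record. As a general illustration of the numerical-stability property the abstract appeals to, the sketch below shows a minimal semi-implicit (symplectic) Euler step for a separable Hamiltonian H(q, p) = p²/2 + V(q), applied to a harmonic oscillator; unlike explicit Euler, its energy error stays bounded over long horizons. The function and variable names are illustrative, not taken from the paper.

```python
import math

def symplectic_euler(q, p, dt, d_potential):
    """One semi-implicit (symplectic) Euler step for H(q, p) = p^2/2 + V(q).

    Momentum is updated first using the current position, then the position
    is updated with the *new* momentum. This ordering preserves the
    symplectic structure, which is why the energy error stays bounded
    instead of drifting over long integration horizons.
    """
    p = p - dt * d_potential(q)   # kick: p_{n+1} = p_n - dt * V'(q_n)
    q = q + dt * p                # drift: q_{n+1} = q_n + dt * p_{n+1}
    return q, p

# Example: harmonic oscillator with V(q) = q^2 / 2, so V'(q) = q.
q, p = 1.0, 0.0
dt = 0.1
energies = []
for _ in range(10_000):
    q, p = symplectic_euler(q, p, dt, lambda x: x)
    energies.append(0.5 * p * p + 0.5 * q * q)

# The energy oscillates around the initial value 0.5 but remains bounded;
# an explicit Euler integrator's energy would grow without bound here.
drift = max(energies) - min(energies)
print(f"max energy drift over 10k steps: {drift:.4f}")
```

The same bounded-error behavior is what motivates composing long-term prediction from many short symplectic steps, as the abstract describes.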
License type:
Publisher Copyright
Funding Info:
This work is supported by the National Natural Science Foundation of China (No. 62372402).

This work is also supported by the Key R&D Program of Zhejiang Province (No. 2023C01217).
Description:
© 2024 IEEE.  Personal use of this material is permitted.  Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
2575-7075
Files uploaded:

cvpr2024-motion-prediction.pdf (2.36 MB, PDF, available on request)