MOS: A Low Latency and Lightweight Framework for Face Detection, Landmark Localization, and Head Pose Estimation

Title:
MOS: A Low Latency and Lightweight Framework for Face Detection, Landmark Localization, and Head Pose Estimation
Journal Title:
British Machine Vision Conference (BMVC) 2021
DOI:
Keywords:
Publication Date:
24 November 2021
Citation:
Cheng, J., Liu, Y., Gu, Z., Gao, S., Wang, D., Zeng, Y. MOS: A Low Latency and Lightweight Framework for Face Detection, Landmark Localization, and Head Pose Estimation. 32nd British Machine Vision Conference (BMVC), 2021.
Abstract:
With the emergence of service robots and surveillance cameras, dynamic face recognition (DFR) in the wild has received much attention in recent years. Face detection and head pose estimation are two important steps in DFR. Very often, the pose is estimated after face detection, but such sequential computation leads to higher latency. In this paper, we propose a low-latency and lightweight network for simultaneous face detection, landmark localization, and head pose estimation. Inspired by the observation that it is more challenging to locate the facial landmarks of faces with large pose angles, a pose loss is proposed to constrain the learning. Moreover, we propose an uncertainty multi-task loss that learns the weights of the individual tasks automatically. Another challenge is that robots often run on low-power hardware such as ARM-based computing cores, so lightweight networks must be used instead of heavy ones, which leads to a performance drop, especially on small and hard faces. We therefore propose online feedback sampling to augment the training samples across different scales, which automatically increases the diversity of the training data. Validation on the widely used WIDER FACE, AFLW, and AFLW2000 datasets shows that the proposed method achieves state-of-the-art performance under low computational resources.
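The uncertainty multi-task loss mentioned in the abstract can be illustrated with a minimal sketch. The formulation below follows the common homoscedastic-uncertainty weighting of Kendall et al. (CVPR 2018), where each task carries a learnable log-variance that scales its loss; the three-task setup (detection, landmarks, pose) and the function name are assumptions for illustration, not the paper's exact formulation.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with homoscedastic-uncertainty weights.

    A sketch of the kind of uncertainty multi-task loss the abstract
    describes (style of Kendall et al., CVPR 2018); the paper's exact
    formulation may differ. log_vars[i] plays the role of
    log(sigma_i^2) for task i and would be a trainable parameter in a
    real network, so the task weights are learned automatically.
    """
    # total = sum_i exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2).
    # The exp(-s_i) factor down-weights noisy tasks; the +s_i term
    # penalizes inflating sigma_i to ignore a task.
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# Example: hypothetical detection, landmark, and pose losses, with all
# log-variances at their initial value s_i = 0 (i.e. sigma_i = 1).
total = uncertainty_weighted_loss([1.0, 0.5, 2.0], [0.0, 0.0, 0.0])
# → 3.5 (reduces to the plain sum when all s_i = 0)
```

At initialization the loss reduces to an unweighted sum; during training, gradients on the log-variances rebalance the three tasks without hand-tuned weights.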
License type:
Publisher Copyright
Funding Info:
The work is supported in part by the Key-Area Research and Development Program of Guangdong Province, China, under grant 2019B010154003, and by the Program of the Guangdong Provincial Key Laboratory of Robot Localization and Navigation Technology, under grant 2020B121202
Description:
ISBN:
NA