Pose-invariant descriptor for facial emotion recognition

Journal:
Machine Vision and Applications
Publication Date:
19 October 2016
Shojaeilangari, S., Yau, WY. & Teoh, EK. Machine Vision and Applications (2016) 27: 1063. https://doi.org/10.1007/s00138-016-0794-2
Most facial emotion recognition algorithms assume that the face is near-frontal and that its pose remains fixed during the recognition process. Such constraints limit adoption in real-world applications, so a pose-invariant descriptor for emotion recognition is required. This work proposes a novel pose-invariant dynamic descriptor that encodes the relative movement of facial landmarks. The proposed feature set handles both speed variations and continuous head pose variations while the subject is expressing an emotion. In addition, the proposed method is fast and therefore suited to real-time implementation in real-world applications. Performance evaluation on three publicly available databases, the Cohn-Kanade (CK+) dataset, the Amsterdam Dynamic Facial Expression Set (ADFES), and the Audio/Visual Emotion Challenge (AVEC 2011) dataset, showed that the proposed method outperforms state-of-the-art methods.
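To make the idea of a descriptor built from relative landmark movement concrete, the sketch below normalizes tracked facial landmarks for translation and scale and then encodes frame-to-frame displacements. This is a minimal illustration of the general technique only; the function name, the centroid/scale normalization, and the array layout are all assumptions, not the paper's exact formulation (which also addresses rotation and speed variation).

```python
import numpy as np

def relative_motion_descriptor(landmarks):
    """Illustrative dynamic descriptor from landmark trajectories.

    landmarks: array of shape (T, N, 2) -- N facial landmarks
    tracked over T video frames. (Hypothetical layout; not the
    paper's actual feature definition.)
    """
    landmarks = np.asarray(landmarks, dtype=float)
    T, N, _ = landmarks.shape
    # Remove translation: centre each frame on its landmark centroid.
    centred = landmarks - landmarks.mean(axis=1, keepdims=True)
    # Remove scale: divide by the mean distance to the centroid.
    scale = np.linalg.norm(centred, axis=2).mean(axis=1, keepdims=True)
    centred = centred / scale[..., np.newaxis]
    # Relative movement: frame-to-frame displacement of each landmark
    # in the normalized coordinates.
    motion = np.diff(centred, axis=0)      # shape (T - 1, N, 2)
    return motion.reshape(T - 1, -1)       # one feature vector per transition
```

With this normalization, a face that merely translates or changes scale between frames produces a (near-)zero descriptor, while genuine non-rigid facial motion does not.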
License type:
Funding Info:
Agency for Science, Technology and Research (A*STAR), Singapore
Files uploaded:

File: samane1-mva-revised2-160515.pdf (453.87 KB, PDF)