Deep Future Gaze: Gaze Anticipation on Egocentric Videos Using Adversarial Networks

Journal Title:
Publication Date:
07 May 2021
We introduce the new problem of gaze anticipation on egocentric videos. This substantially extends the conventional gaze prediction problem to future frames by no longer confining it to the current frame. To solve this problem, we propose a new generative adversarial network based model, Deep Future Gaze (DFG). DFG generates multiple future frames conditioned on a single current frame and anticipates the corresponding future gazes over the next few seconds. It consists of two networks: a generator and a discriminator. The generator uses a two-stream spatiotemporal convolution architecture (3D-CNN) that explicitly untangles the foreground from the background to generate future frames; it then attaches another 3D-CNN that anticipates gaze from these synthetic frames. The discriminator plays against the generator by differentiating the generator's synthetic frames from real frames. Through this competition, the generator progressively improves the quality of the future frames and thus anticipates future gaze better. Experimental results on publicly available egocentric datasets show that DFG significantly outperforms all well-established baselines. Moreover, DFG achieves better gaze prediction on current frames than state-of-the-art methods, a benefit of the motion-discriminative representations learned during frame generation. We further contribute a new egocentric dataset (OST) for the object search task; DFG also achieves the best performance on this challenging dataset.
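The abstract's two-stream generator idea (a foreground stream, a background stream, and a mask compositing them into future frames) can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch, not the paper's actual architecture: layer sizes, the number of generated frames, and the treatment of the background as a static image are all assumptions for clarity.

```python
import torch
import torch.nn as nn

class TwoStreamGenerator(nn.Module):
    """Illustrative two-stream generator: composites a moving foreground
    and a static background via a learned soft mask (hypothetical sizes)."""

    def __init__(self, channels=3):
        super().__init__()
        # Foreground stream: 3D convolution over (time, height, width).
        self.fg = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        # Soft per-voxel mask selecting foreground vs. background.
        self.mask = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Background stream: 2D convolution on the single current frame,
        # later broadcast across the time axis as a static background.
        self.bg = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, frame, num_future=4):
        # frame: (batch, channels, H, W) -- the single current frame.
        clip = frame.unsqueeze(2).repeat(1, 1, num_future, 1, 1)
        fg = torch.tanh(self.fg(clip))            # (B, C, T, H, W)
        m = self.mask(clip)                       # (B, 1, T, H, W)
        bg = torch.tanh(self.bg(frame)).unsqueeze(2)  # (B, C, 1, H, W)
        # Composite: mask * foreground + (1 - mask) * background.
        return m * fg + (1 - m) * bg

gen = TwoStreamGenerator()
future = gen(torch.zeros(1, 3, 16, 16))
print(tuple(future.shape))  # (1, 3, 4, 16, 16)
```

In the full model, a second 3D-CNN would consume these synthetic frames to predict gaze maps, and a discriminator would be trained adversarially to distinguish generated clips from real ones.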
License type:
Funding Info:
JCO VIP REVIVE programme (1335h00098); National University of Singapore startup grant R-263-000C08-133; Ministry of Education of Singapore AcRF Tier One grant R-263-000-C21-112
(c) 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

Files uploaded:

File            Size       Format
1775-poster.pdf 1.66 MB    PDF
1775-supp.pdf   944.38 KB  PDF
1775.pdf        6.48 MB    PDF