Edge-GAN: Edge Conditioned Multi-View Face Image Generation

Title:
Edge-GAN: Edge Conditioned Multi-View Face Image Generation
Journal Title:
IEEE International Conference on Image Processing
Publication Date:
30 September 2020
Citation:
Abstract:
Reconstructing photorealistic multi-view images from an image with an arbitrary view has a wide range of applications in the field of face generation. However, most current pixel-based generation models cannot generate sufficiently realistic images. To address this problem, we propose an edge-conditioned multi-view image generation model called Edge-GAN. Edge-GAN utilizes edge information to guide image generation from the perspective of the target view, while details from the input image influence the target image. Edge-GAN combines the input image with the target pose information to generate a coarse image with an approximate target outline, which is then refined to higher quality using adversarial training. Experiments show that our Edge-GAN is able to generate high-quality images of people with convincing details.
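The abstract describes a two-stage, coarse-to-fine pipeline conditioned on target-view edge information. The following is a minimal sketch of such a pipeline in PyTorch, written only to illustrate the idea; the module names, channel sizes, and the way the edge/pose conditioning is concatenated are assumptions for illustration, not the authors' implementation.

# Illustrative two-stage coarse-to-fine generator conditioned on a target-view
# edge map. All architectural details here are assumptions, not Edge-GAN's code.
import torch
import torch.nn as nn

class CoarseGenerator(nn.Module):
    """Combines the source image with a target-view edge map to produce a coarse target image."""
    def __init__(self, img_ch=3, cond_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + cond_ch, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, img_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, src_img, target_edge):
        # Condition generation on the edge map of the target view.
        return self.net(torch.cat([src_img, target_edge], dim=1))

class RefinementGenerator(nn.Module):
    """Refines the coarse output, reusing the source image for appearance details."""
    def __init__(self, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch * 2, 64, 3, 1, 1), nn.ReLU(inplace=True),
            nn.Conv2d(64, img_ch, 3, 1, 1), nn.Tanh(),
        )

    def forward(self, coarse_img, src_img):
        return self.net(torch.cat([coarse_img, src_img], dim=1))

if __name__ == "__main__":
    src = torch.randn(1, 3, 128, 128)    # source face image (arbitrary view)
    edge = torch.randn(1, 1, 128, 128)   # edge map of the target view (assumed conditioning input)
    coarse = CoarseGenerator()(src, edge)          # stage 1: approximate target outline
    refined = RefinementGenerator()(coarse, src)   # stage 2: refinement (trained adversarially)
    print(refined.shape)  # torch.Size([1, 3, 128, 128])

In the paper's formulation the refinement stage is trained with an adversarial loss against a discriminator; the sketch above only shows the generator-side forward pass.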
License type:
http://creativecommons.org/licenses/by-nc-nd/4.0/
Funding Info:
There is no specific funding for this work.
Description:
“© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”
ISSN:
2381-8549
ISBN:
978-1-7281-6395-6
Files uploaded: