Edge-GAN: Edge Conditioned Multi-View Face Image Generation

Journal Title:
IEEE International Conference on Image Processing
Publication Date:
30 September 2020
Reconstructing photorealistic multi-view images from a single image with an arbitrary view has a wide range of applications in face generation. However, most current pixel-based generation models cannot produce sufficiently realistic images. To address this problem, we propose an edge-conditioned multi-view image generation model called Edge-GAN. Edge-GAN uses edge information to guide image generation from the perspective of the target view, while details of the input image condition the appearance of the target image. Edge-GAN combines the input image with the target pose information to generate a coarse image with an approximate target outline, which is then refined to higher quality using adversarial training. Experiments show that Edge-GAN generates high-quality face images with convincing details.
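The abstract conditions generation on edge maps of the target view. The paper's record here does not specify which edge detector is used, so as a purely illustrative sketch, the following assumes a simple Sobel gradient-magnitude filter to show how an edge map might be extracted from a grayscale face image before being fed to the generator:

```python
import numpy as np

def sobel_edges(img):
    """Extract a gradient-magnitude edge map from a 2-D grayscale image.

    Illustrative only: the choice of Sobel filtering is an assumption,
    not the detector used by Edge-GAN itself.
    """
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # valid-mode 3x3 correlation over the image
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)  # gradient magnitude

# Toy input: a vertical step edge between columns 3 and 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
```

On this toy input, the edge map responds only at the step boundary and is zero in the flat regions, which is the property that makes such maps useful as an outline-level conditioning signal.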
Funding Info:
There is no specific funding for this work.
“© 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”