Reconstructing photorealistic multi-view images from a single image with an arbitrary view has a wide range of applications in face generation. However, most current pixel-based generation models cannot produce sufficiently realistic images. To address this problem, we propose an edge-conditioned multi-view image generation model called Edge-GAN. Edge-GAN uses edge information to guide image generation from the perspective of the target view, while the details of the input image condition the appearance of the target image. Edge-GAN combines the input image with the target pose information to generate a coarse image with an approximate target outline, which is then refined to higher quality through adversarial training. Experiments show that our Edge-GAN generates high-quality images of people with convincing details.
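
The coarse-to-fine data flow described above can be sketched schematically. This is a minimal illustrative sketch, not the paper's implementation: the function names, the gradient-based edge extractor, the blending in the coarse stage, and the no-op refiner are all assumptions standing in for learned networks.

```python
import numpy as np

def extract_edges(image):
    # Hypothetical edge extractor: gradient magnitude as a simple
    # stand-in for whatever edge detector conditions the generator.
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy)

def coarse_generator(source_image, target_edges):
    # Stage 1 (illustrative): combine source appearance with the
    # target-view edge map to produce a rough image carrying the
    # approximate target outline. In Edge-GAN this is a learned network.
    return 0.5 * source_image + 0.5 * target_edges

def refiner(coarse_image):
    # Stage 2 (illustrative): in Edge-GAN this network is trained
    # adversarially to sharpen the coarse output; here it is a
    # placeholder that only clips to a valid intensity range.
    return np.clip(coarse_image, 0.0, 1.0)

source = np.random.rand(64, 64)       # input image (arbitrary view)
target_view = np.random.rand(64, 64)  # image supplying the target pose
edges = extract_edges(target_view)
coarse = coarse_generator(source, edges)
refined = refiner(coarse)
```

The point of the sketch is the two-stage structure: edge/pose conditioning produces a coarse outline first, and refinement is a separate stage applied afterward.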