Edge-Prior Image Inpainting with StyleGAN2
- Chong Fu
- Mengzhen Chi
- Xu Zheng
- Jialei Chen
- Qing Li
- C.-W. Sham
Chong Fu
Northeastern University
Corresponding Author: fuchong@mail.neu.edu.cn

Xu Zheng
Artificial Intelligence Thrust, Information Hub, Hong Kong University of Science and Technology (Guangzhou)

C.-W. Sham
School of Computer Science, The University of Auckland

Abstract
Image inpainting represents a fundamental task in computer vision,
primarily focusing on the generation of missing content within an image
to restore its integrity and aesthetics. Existing GAN-based approaches
often yield ambiguous content and entail high training costs. They
tend to concentrate narrowly on damaged regions, leading to distortions
along edges, which in turn hampers generalization. To overcome
these challenges and achieve high-fidelity image inpainting, we
introduce an image-editing algorithm into the image inpainting task by
designing two distinct networks. The first network, Edge-e4e, uses a
pretrained StyleGAN2 for global image generation, mitigating edge
distortions in damaged regions and reducing training costs.
Simultaneously, we incorporate contour information from the damaged areas
to ensure the correctness of the restored content. The second
network, the Appending network, includes two style-based encoders and a
generator to refine the images restored by the Edge-e4e network.
Specifically, we subtract the restored images from the input images along
the channel dimension to obtain a distortion map, which serves as a
prior for refining the restored images. The encoders extract features from
the input images and distortion map, while the generator is employed to
generate optimized images. To enhance the quality of the refined images, we
propose integrating plugin and modulate-plugin modules into the
Appending network for style extraction and fusion, leveraging the
available information from the input images and blending it into the
generator. Experimental results demonstrate that our algorithm achieves
high-fidelity restoration and excellent generalization, with optimal FID
and LPIPS metrics of 0.0631 and 0.875, respectively.
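
To make the distortion-map prior described above concrete, below is a minimal sketch in PyTorch. It is illustrative only: the tensor shapes (N, C, H, W) and the component names `edge_e4e`, `image_encoder`, `distortion_encoder`, and `generator` are assumptions for the example, not identifiers from the paper.

```python
import torch


def distortion_map(input_img: torch.Tensor, restored_img: torch.Tensor) -> torch.Tensor:
    """Channel-wise difference between the input image and the restored image.

    Both tensors are assumed to share the shape (N, C, H, W) and value range;
    the resulting map serves as the prior that guides the refinement stage.
    """
    return input_img - restored_img


# Hypothetical end-to-end flow (all component names are placeholders):
# restored   = edge_e4e(masked_img, edge_map)       # stage 1: StyleGAN2-based global inpainting
# style_img  = image_encoder(masked_img)            # stage 2: first style-based encoder
# style_dist = distortion_encoder(distortion_map(masked_img, restored))  # second encoder
# refined    = generator(style_img, style_dist)     # final optimized image
```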