Liu, Z., Li, Z., Wu, X., Liu, Z., & Chen, W. (2022). DSRGAN: Detail Prior-Assisted Perceptual Single Image Super-Resolution via Generative Adversarial Networks. IEEE Transactions on Circuits and Systems for Video Technology, 32(11), 7418–7431. https://doi.org/10.1109/tcsvt.2022.3188433
Abstract:
The generative adversarial network (GAN) has been successfully applied to perceptual single image super-resolution (SISR). However, because the GAN is data-driven, it has a fundamental limitation in restoring real high-frequency information for an unknown instance (or image) at test time. Conventional model-based methods, on the other hand, are better suited to instance adaptation because they operate on the statistics of each instance (or image) alone. Motivated by this, we propose a novel model-based algorithm that efficiently extracts the detail layer of an image. The detail layer represents the high-frequency information of the image and consists of edges and fine textures. It is seamlessly incorporated into the GAN and serves as prior knowledge that helps the GAN generate more realistic details. The proposed method, named DSRGAN, takes advantage of both the model-based conventional algorithm and the data-driven deep learning network. Experimental results demonstrate that DSRGAN outperforms state-of-the-art SISR methods on perceptual metrics while achieving comparable results on fidelity metrics. Following DSRGAN, it is feasible to incorporate other conventional image processing algorithms into a deep learning network to form a model-based deep SISR framework.
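The abstract does not specify how the detail layer is computed, but the idea of a high-frequency prior can be illustrated with a generic base/detail decomposition. The sketch below is an assumption for demonstration only (a Gaussian-smoothed base layer subtracted from the image), not DSRGAN's actual model-based extraction; the function name extract_detail_layer and the sigma value are hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): split an image into a
# smooth base layer and a high-frequency detail layer. The Gaussian base
# layer and sigma are assumptions chosen for demonstration.
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_detail_layer(image: np.ndarray, sigma: float = 2.0):
    """Return (base, detail), where detail = image - base.

    The detail layer carries edges and fine textures, i.e. the kind of
    high-frequency prior the abstract describes feeding into the GAN.
    """
    image = image.astype(np.float64)
    # Smooth each channel independently to obtain the low-frequency base layer.
    if image.ndim == 3:
        base = np.stack(
            [gaussian_filter(image[..., c], sigma) for c in range(image.shape[-1])],
            axis=-1,
        )
    else:
        base = gaussian_filter(image, sigma)
    detail = image - base  # high-frequency residual: edges and fine textures
    return base, detail

if __name__ == "__main__":
    # Toy usage on a random "image"; in practice this would be the
    # (upsampled) low-resolution input whose details guide the generator.
    img = np.random.rand(64, 64, 3)
    base, detail = extract_detail_layer(img, sigma=2.0)
    print(detail.shape, float(np.abs(detail).mean()))
```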
License type:
Publisher Copyright
Funding Info:
No specific funding was received for this research.