Fan, L., Wei, X., Zhou, M., Yan, J., Pu, H., Luo, J., & Li, Z. (2025). A Semantic-Aware Detail Adaptive Network for Image Enhancement. IEEE Transactions on Circuits and Systems for Video Technology, 35(2), 1787–1800. https://doi.org/10.1109/tcsvt.2024.3483191
Abstract:
Low-light images often suffer from varying degrees of visual degradation. Existing methods for recovering image texture detail do not adapt to the correlated texture directions of the image itself, leaving the network unable to handle the local texture characteristics of different images. To address this challenge, we propose a semantic-aware detail adaptive network (SDANet) that fully exploits image detail information. The network divides low-light images into high-frequency and low-frequency parts. A novel total-variation regularization module with adaptive weights learns different forms of noise, ensuring that the final high-frequency part adequately integrates the texture information of the image. Simultaneously, a detail-adaptive module is incorporated to restore finer details in the resulting image. SDANet effectively suppresses noise in real low-light images while preserving texture details, addresses the degradation of visible information, and outperforms other state-of-the-art methods. The code is available at https://github.com/cheer79/SDANet.
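The following is a minimal, illustrative sketch of two ideas mentioned in the abstract: splitting an image into low- and high-frequency parts, and a total-variation term whose per-pixel weights adapt to the image. It is not the authors' SDANet implementation (see the GitHub link above for the official code); the box-blur decomposition, the `adaptive_tv_loss` function, and the placeholder weight map are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F


def frequency_split(img: torch.Tensor, kernel_size: int = 5):
    """Split an image (B, C, H, W) into low- and high-frequency parts.

    A simple depthwise box blur is used here as a stand-in; the paper's
    decomposition is learned, not a fixed filter.
    """
    pad = kernel_size // 2
    weight = torch.ones(img.shape[1], 1, kernel_size, kernel_size,
                        device=img.device) / (kernel_size ** 2)
    low = F.conv2d(F.pad(img, (pad,) * 4, mode="reflect"),
                   weight, groups=img.shape[1])
    high = img - low
    return low, high


def adaptive_tv_loss(high: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Total-variation penalty on the high-frequency part, modulated by a
    per-pixel weight map (e.g. predicted by a small network) so that strongly
    textured regions are penalized less than flat, noise-dominated regions."""
    dx = high[..., :, 1:] - high[..., :, :-1]   # horizontal gradients
    dy = high[..., 1:, :] - high[..., :-1, :]   # vertical gradients
    wx = weights[..., :, 1:]
    wy = weights[..., 1:, :]
    return (wx * dx.abs()).mean() + (wy * dy.abs()).mean()


if __name__ == "__main__":
    img = torch.rand(1, 3, 64, 64)                        # stand-in low-light input
    low, high = frequency_split(img)
    weights = torch.sigmoid(torch.randn(1, 3, 64, 64))    # placeholder adaptive weights
    print("Adaptive TV loss:", adaptive_tv_loss(high, weights).item())
```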
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the Agency for Science, Technology and Research - Robotics Horizontal Technology Coordinating Office Project.
Grant Reference No.: C221518005