{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,19]],"date-time":"2026-02-19T06:34:07Z","timestamp":1771482847597,"version":"3.50.1"},"reference-count":59,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2025,5,20]],"date-time":"2025-05-20T00:00:00Z","timestamp":1747699200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>Low-light image enhancement remains a challenging task in computer vision due to the complex interplay of noise, asymmetrical artifacts, illumination non-uniformity, and detail preservation. Existing methods such as traditional histogram equalization, gamma correction, and Retinex-based approaches often struggle to balance contrast improvement and naturalness preservation. Deep learning methods such as CNNs and transformers have shown promise, but face limitations in modeling multi-scale illumination and long-range dependencies. To address these issues, we propose WIGformer, a novel wavelet-based illumination-guided transformer framework for low-light image enhancement. The proposed method extends the single-stage Retinex theory to explicitly model noise in both reflectance and illumination components. It introduces a wavelet illumination estimator with a Wavelet Feature Enhancement Convolution (WFEConv) module to capture multi-scale illumination features and an illumination feature-guided corruption restorer with an Illumination-Guided Enhanced Multihead Self-Attention (IGEMSA) mechanism. WIGformer leverages the symmetry properties of wavelet transforms to achieve multi-scale illumination estimation, ensuring balanced feature extraction across different frequency bands. 
The IGEMSA mechanism integrates adaptive feature refinement and illumination guidance to suppress noise and artifacts while preserving fine details. The same mechanism allows us to further exploit symmetrical dependencies between illumination and reflectance components, enabling robust and natural enhancement of low-light images. Extensive experiments on the LOL-V1, LOL-V2-Real, and LOL-V2-Synthetic datasets demonstrate that WIGformer outperforms existing methods and achieves state-of-the-art results, with a PSNR of up to 26.12 dB and an SSIM of 0.935. The qualitative results demonstrate WIGformer\u2019s superior capability to not only restore natural illumination but also maintain structural symmetry in challenging conditions, preserving balanced luminance distributions and geometric regularities that are characteristic of properly exposed natural scenes.<\/jats:p>","DOI":"10.3390\/sym17050798","type":"journal-article","created":{"date-parts":[[2025,5,20]],"date-time":"2025-05-20T11:50:31Z","timestamp":1747741831000},"page":"798","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["WIGformer: Wavelet-Based Illumination-Guided Transformer"],"prefix":"10.3390","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0009-0006-4202-9655","authenticated-orcid":false,"given":"Wensheng","family":"Cao","sequence":"first","affiliation":[{"name":"School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China"}]},{"given":"Tianyu","family":"Yan","sequence":"additional","affiliation":[{"name":"School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China"}]},{"given":"Zhile","family":"Li","sequence":"additional","affiliation":[{"name":"School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, 
China"}]},{"given":"Jiongyao","family":"Ye","sequence":"additional","affiliation":[{"name":"School of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,5,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1752","DOI":"10.1109\/TCE.2007.4429280","article-title":"Brightness Preserving Dynamic Histogram Equalization for Image Contrast Enhancement","volume":"53","author":"Ibrahim","year":"2007","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"3431","DOI":"10.1109\/TIP.2011.2157513","article-title":"Contextual and variational contrast enhancement","volume":"20","author":"Celik","year":"2011","journal-title":"IEEE Trans. Image Process."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"158","DOI":"10.1016\/j.dsp.2003.07.002","article-title":"A simple and effective histogram equalization approach to image enhancement","volume":"14","author":"Cheng","year":"2004","journal-title":"Digit. Signal Process."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1032","DOI":"10.1109\/TIP.2012.2226047","article-title":"Efficient contrast enhancement using adaptive gamma correction with weighting distribution","volume":"22","author":"Huang","year":"2012","journal-title":"IEEE Trans. Image Process."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"133","DOI":"10.1016\/j.displa.2009.03.006","article-title":"A real-time image processor with combining dynamic contrast ratio enhancement and inverse gamma correction for PDP","volume":"30","author":"Wang","year":"2009","journal-title":"Displays"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"108","DOI":"10.1038\/scientificamerican1277-108","article-title":"The retinex theory of color vision","volume":"237","author":"Land","year":"1977","journal-title":"Sci. 
Am."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"965","DOI":"10.1109\/83.597272","article-title":"A multiscale retinex for bridging the gap between color images and the human observation of scenes","volume":"6","author":"Jobson","year":"1997","journal-title":"IEEE Trans. Image Process."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"451","DOI":"10.1109\/83.557356","article-title":"Properties and performance of a center\/surround retinex","volume":"6","author":"Jobson","year":"1997","journal-title":"IEEE Trans. Image Process."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1117\/1.1636183","article-title":"Retinex processing for automatic image enhancement","volume":"13","author":"Rahman","year":"2004","journal-title":"J. Electron. Imaging"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"3538","DOI":"10.1109\/TIP.2013.2261309","article-title":"Naturalness preserved enhancement algorithm for non-uniform illumination images","volume":"22","author":"Wang","year":"2013","journal-title":"IEEE Trans. Image Process."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Fu, X., Zeng, D., Huang, Y., Zhang, X.P., and Ding, X. (2016, January 27\u201330). A weighted variational model for simultaneous reflectance and illumination estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.304"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.sigpro.2016.05.031","article-title":"A fusion-based enhancing method for weakly illuminated images","volume":"129","author":"Fu","year":"2016","journal-title":"Signal Process."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"982","DOI":"10.1109\/TIP.2016.2639450","article-title":"LIME: Low-light image enhancement via illumination map estimation","volume":"26","author":"Guo","year":"2016","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_14","unstructured":"Lv, F., Lu, F., Wu, J., and Lim, C. (2018, January 3\u20136). MBLLEN: Low-light image\/video enhancement using CNNs. Proceedings of the BMVC, Newcastle, UK."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Wang, R., Zhang, Q., Fu, C.W., Shen, X., Zheng, W.S., and Jia, J. (2019, January 15\u201320). Underexposed photo enhancement using deep illumination estimation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00701"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Moran, S., Marza, P., McDonagh, S., Parisot, S., and Slabaugh, G. (2020, January 13\u201319). DeepLPF: Deep local parametric filters for image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01284"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.H., and Shao, L. (2020, January 23\u201328). Learning enriched features for real image restoration and enhancement. Proceedings of the Computer Vision\u2013ECCV 2020: 16th European Conference, Glasgow, UK. Proceedings, Part XXV 16.","DOI":"10.1007\/978-3-030-58595-2_30"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Xu, X., Wang, R., Fu, C.W., and Jia, J. (2022, January 18\u201324). SNR-aware low-light image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01719"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1013","DOI":"10.1007\/s11263-020-01407-x","article-title":"Beyond brightening low-light images","volume":"129","author":"Zhang","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_20","unstructured":"Wei, C., Wang, W., Yang, W., and Liu, J. (2018). 
Deep retinex decomposition for low-light enhancement. arXiv."},{"key":"ref_21","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Jiang, Y., Jiang, J., Wang, X., Luo, P., and Gu, J. (2021, January 11\u201317). Star: A structure-aware lightweight transformer for real-time image enhancement. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00407"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Cai, Y., Bian, H., Lin, J., Wang, H., Timofte, R., and Zhang, Y. (2023, January 2\u20133). Retinexformer: One-stage retinex-based transformer for low-light image enhancement. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Paris, France.","DOI":"10.1109\/ICCV51070.2023.01149"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhao, Y., Gu, C., Lu, C., and Zhu, S. (2023, January 18\u201323). Spa-former: An effective and lightweight transformer for image shadow removal. Proceedings of the 2023 International Joint Conference on Neural Networks (IJCNN), Gold Coast, Australia.","DOI":"10.1109\/IJCNN54540.2023.10191081"},{"key":"ref_25","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Wang, Z., Cun, X., Bao, J., Zhou, W., Liu, J., and Li, H. (2022, January 18\u201324). Uformer: A general u-shaped transformer for image restoration. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01716"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Daubechies, I. (1992). Ten Lectures on Wavelets, SIAM.","DOI":"10.1137\/1.9781611970104"},{"key":"ref_28","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-net: Convolutional networks for biomedical image segmentation. Proceedings of the Medical Image Computing and Computer-Assisted Intervention\u2014MICCAI 2015: 18th International Conference, Munich, Germany. Proceedings, Part III 18."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wang, T., Lu, C., Sun, Y., Yang, M., Liu, C., and Ou, C. (2021). Automatic ECG classification using continuous wavelet transform and convolutional neural network. Entropy, 23.","DOI":"10.3390\/e23010119"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Huang, H., He, R., Sun, Z., and Tan, T. (2017, January 22\u201329). Wavelet-srnet: A wavelet-based cnn for multi-scale face super resolution. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.187"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Guo, T., Seyed Mousavi, H., Huu Vu, T., and Monga, V. (2017, January 21\u201326). Deep wavelet prediction for image super-resolution. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.148"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"255","DOI":"10.1016\/j.patcog.2016.11.015","article-title":"SAR image segmentation based on convolutional-wavelet neural network and Markov random field","volume":"64","author":"Duan","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_33","unstructured":"Williams, T., and Li, R. (May, January 30). Wavelet pooling for convolutional neural networks. 
Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3450626.3459836","article-title":"Swagan: A style-based wavelet-driven generative model","volume":"40","author":"Gal","year":"2021","journal-title":"ACM Trans. Graph. (TOG)"},{"key":"ref_35","first-page":"478","article-title":"Wavelet score-based generative modeling","volume":"35","author":"Guth","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Phung, H., Dao, Q., and Tran, A. (2023, January 17\u201324). Wavelet diffusion models are fast and scalable image generators. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00983"},{"key":"ref_37","first-page":"20592","article-title":"Wavelet feature maps compression for image-to-image CNNs","volume":"35","author":"Finder","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_38","unstructured":"Finder, S.E., Amoyal, R., Treister, E., and Freifeld, O. (October, January 29). Wavelet convolutions for large receptive fields. Proceedings of the European Conference on Computer Vision, Milan, Italy."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"2072","DOI":"10.1109\/TIP.2021.3050850","article-title":"Sparse gradient regularized deep retinex network for robust low-light image enhancement","volume":"30","author":"Yang","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_40","unstructured":"Liu, X., Wu, Z., Li, A., Vasluianu, F.A., Zhang, Y., Gu, S., Zhang, L., Zhu, C., Timofte, R., and Jin, Z. (2024, January 16\u201322). NTIRE 2024 challenge on low light image enhancement: Methods and results. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_42","first-page":"8024","article-title":"PyTorch: An imperative style, high-performance deep learning library","volume":"32","author":"Paszke","year":"2019","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_43","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_44","unstructured":"Loshchilov, I., and Hutter, F. (2016). SGDR: Stochastic gradient descent with warm restarts. arXiv."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Dong, X., Pang, Y., and Wen, J. (2010, January 26\u201330). Fast efficient algorithm for enhancement of low lighting video. Proceedings of the ACM SIGGRAPH 2010 Posters, Los Angeles, CA, USA.","DOI":"10.1145\/1836845.1836920"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"2828","DOI":"10.1109\/TIP.2018.2810539","article-title":"Structure-revealing low-light image enhancement via robust retinex model","volume":"27","author":"Li","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"650","DOI":"10.1016\/j.patcog.2016.06.008","article-title":"LLNet: A deep autoencoder approach to natural low-light image enhancement","volume":"61","author":"Lore","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Zhang, J., and Guo, X. (2019, January 21\u201325). Kindling the darkness: A practical low-light image enhancer. 
Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350926"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., and Cong, R. (2020, January 13\u201319). Zero-reference deep curve estimation for low-light image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00185"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"2340","DOI":"10.1109\/TIP.2021.3051462","article-title":"EnlightenGAN: Deep light enhancement without paired supervision","volume":"30","author":"Jiang","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Liu, R., Ma, L., Zhang, J., Fan, X., and Luo, Z. (2021, January 20\u201325). Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01042"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"1934","DOI":"10.1109\/TPAMI.2022.3167175","article-title":"Learning enriched features for fast image restoration and enhancement","volume":"45","author":"Zamir","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., and Yang, M.H. (2022, January 18\u201324). Restormer: Efficient transformer for high-resolution image restoration. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00564"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Ma, L., Ma, T., Liu, R., Fan, X., and Luo, Z. (2022, January 18\u201324). 
Toward fast, flexible, and robust low-light image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00555"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"109039","DOI":"10.1016\/j.patcog.2022.109039","article-title":"LAE-Net: A locally-adaptive embedding network for low-light image enhancement","volume":"133","author":"Liu","year":"2023","journal-title":"Pattern Recognition"},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"48","DOI":"10.1007\/s11263-022-01667-9","article-title":"Low-light image enhancement via breaking down the darkness","volume":"131","author":"Guo","year":"2023","journal-title":"Int. J. Comput. Vis."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"3610","DOI":"10.1007\/s11263-024-02065-z","article-title":"CRetinex: A progressive color-shift aware Retinex model for low-light image enhancement","volume":"132","author":"Xu","year":"2024","journal-title":"Int. J. Comput. 
Vis."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"110490","DOI":"10.1016\/j.patcog.2024.110490","article-title":"Lit me up: A reference free adaptive low light image enhancement for in-the-wild conditions","volume":"153","author":"Khan","year":"2024","journal-title":"Pattern Recognit."},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"111033","DOI":"10.1016\/j.patcog.2024.111033","article-title":"An illumination-guided dual attention vision transformer for low-light image enhancement","volume":"158","author":"Wen","year":"2025","journal-title":"Pattern Recognit."}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/5\/798\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:36:05Z","timestamp":1760031365000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/5\/798"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,20]]},"references-count":59,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2025,5]]}},"alternative-id":["sym17050798"],"URL":"https:\/\/doi.org\/10.3390\/sym17050798","relation":{},"ISSN":["2073-8994"],"issn-type":[{"value":"2073-8994","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,20]]}}}