{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,20]],"date-time":"2026-04-20T19:45:04Z","timestamp":1776714304015,"version":"3.51.2"},"reference-count":27,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2025,6,4]],"date-time":"2025-06-04T00:00:00Z","timestamp":1748995200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Applied Sciences"],"abstract":"<jats:p>Low-light conditions often lead to severe degradation in image quality, impairing critical computer vision tasks in applications such as surveillance and mobile imaging. In this paper, we propose a lightweight deep learning framework for low-light image enhancement, designed to balance visual quality with computational efficiency, with potential for deployment in latency-sensitive and resource-constrained environments. The architecture builds upon a UNet-inspired encoder\u2013decoder structure, enhanced with attention modules and trained using a combination of perceptual and structural loss functions. Our training strategy utilizes a hybrid dataset composed of both real low-light images and synthetically generated image pairs created through controlled exposure adjustment and noise modeling. Experimental results on benchmark datasets such as LOL and SID demonstrate that our model achieves a Peak Signal-to-Noise Ratio (PSNR) of up to 28.4 dB and a Structural Similarity Index (SSIM) of 0.88 while maintaining a small parameter footprint (~1.3 M) and low inference latency (~6 FPS on Jetson Nano). The proposed approach offers a promising solution for industrial applications such as real-time surveillance, mobile photography, and embedded vision systems.<\/jats:p>","DOI":"10.3390\/app15116330","type":"journal-article","created":{"date-parts":[[2025,6,4]],"date-time":"2025-06-04T11:14:12Z","timestamp":1749035652000},"page":"6330","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Low-Light Image Enhancement Using Deep Learning: A Lightweight Network with Synthetic and Benchmark Dataset Evaluation"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8872-5721","authenticated-orcid":false,"given":"Manuel J. C. S.","family":"Reis","sequence":"first","affiliation":[{"name":"Engineering Department, Institute of Electronics and Informatics Engineering of Aveiro (IEETA), University of Tr\u00e1s-os-Montes e Alto Douro, Quinta de Prados, 5000-801 Vila Real, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2025,6,4]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"982","DOI":"10.1109\/TIP.2016.2639450","article-title":"LIME: Low-Light Image Enhancement via Illumination Map Estimation","volume":"26","author":"Guo","year":"2017","journal-title":"IEEE Trans. Image Process."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"650","DOI":"10.1016\/j.patcog.2016.06.008","article-title":"LLNet: A Deep Autoencoder Approach to Natural Low-Light Image Enhancement","volume":"61","author":"Lore","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"115723","DOI":"10.1016\/j.image.2019.115723","article-title":"Underwater Image Enhancement Based on Conditional Generative Adversarial Network","volume":"81","author":"Yang","year":"2020","journal-title":"Signal Process. Image Commun."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"108411","DOI":"10.1016\/j.engappai.2024.108411","article-title":"INSPIRATION: A Reinforcement Learning-Based Human Visual Perception-Driven Image Enhancement Paradigm for Underwater Scenes","volume":"133","author":"Wang","year":"2024","journal-title":"Eng. Appl. Artif. Intell."},{"key":"ref_5","first-page":"5609317","article-title":"Large Foundation Model Empowered Discriminative Underwater Image Enhancement","volume":"63","author":"Wang","year":"2025","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1364\/JOSA.61.000001","article-title":"Lightness and Retinex Theory","volume":"61","author":"Land","year":"1971","journal-title":"JOSA"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"451","DOI":"10.1109\/83.557356","article-title":"Properties and Performance of a Center\/Surround Retinex","volume":"6","author":"Jobson","year":"1997","journal-title":"IEEE Trans. Image Process."},{"key":"ref_8","unstructured":"Wei, C., Wang, W., Yang, W., and Liu, J. (2018). Deep Retinex Decomposition for Low-Light Enhancement. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., and Cong, R. (2020, January 13\u201319). Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00185"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1076","DOI":"10.1109\/TCSVT.2021.3073371","article-title":"RetinexDIP: A Unified Deep Framework for Low-Light Image Enhancement","volume":"32","author":"Zhao","year":"2022","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Chen, C., Chen, Q., Xu, J., and Koltun, V. (2018, January 18\u201323). Learning to See in the Dark. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00347"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"2340","DOI":"10.1109\/TIP.2021.3051462","article-title":"EnlightenGAN: Deep Light Enhancement Without Paired Supervision","volume":"30","author":"Jiang","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_13","unstructured":"Zhang, Y., Di, X., Wu, J., Fu, R., Li, Y., Wang, Y., Xu, Y., Yang, G., and Wang, C. (2023). A Fast and Lightweight Network for Low-Light Image Enhancement. arXiv."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Ma, L., Ma, T., Liu, R., Fan, X., and Luo, Z. (2022, January 18\u201324). Toward Fast, Flexible, and Robust Low-Light Image Enhancement. Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00555"},{"key":"ref_15","unstructured":"Wang, K., Cui, Z., Jia, J., Xu, H., Wu, G., Zhuang, Y., Chen, L., Hu, Z., and Qian, Y. (2022). Linear Array Network for Low-Light Image Enhancement. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"4364","DOI":"10.1109\/TIP.2019.2910412","article-title":"Low-Light Image Enhancement via a Deep Hybrid Network","volume":"28","author":"Ren","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Xu, K., Yang, X., Yin, B., and Lau, R.W.H. (2020, January 13\u201319). Learning to Restore Low-Light Images via Decomposition-and-Enhancement. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00235"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"355","DOI":"10.1016\/S0734-189X(87)80186-X","article-title":"Adaptive Histogram Equalization and Its Variations","volume":"39","author":"Pizer","year":"1987","journal-title":"Comput. Vis. Graph. Image Process."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"965","DOI":"10.1109\/83.597272","article-title":"A Multiscale Retinex for Bridging the Gap between Color Images and the Human Observation of Scenes","volume":"6","author":"Jobson","year":"1997","journal-title":"IEEE Trans. Image Process."},{"key":"ref_20","unstructured":"Felsberg, M., Heyden, A., and Kr\u00fcger, N. (2017). A New Image Contrast Enhancement Algorithm Using Exposure Fusion Framework. Computer Analysis of Images and Patterns, Springer International Publishing."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Yang, W., Wang, S., Fang, Y., Wang, Y., and Liu, J. (2020, January 13\u201319). From Fidelity to Perceptual Quality: A Semi-Supervised Approach for Low-Light Image Enhancement. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00313"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. (2018, January 18\u201323). MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Han, K., Wang, Y., Tian, Q., Guo, J., Xu, C., and Xu, C. (2020, January 13\u201319). GhostNet: More Features From Cheap Operations. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00165"},{"key":"ref_24","unstructured":"Tan, M., and Le, Q. (2019, January 9\u201315). EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-Excitation Networks. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_26","unstructured":"Simonyan, K., and Zisserman, A. (2015). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., Efros, A.A., Shechtman, E., and Wang, O. (2018, January 18\u201323). The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00068"}],"container-title":["Applied Sciences"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2076-3417\/15\/11\/6330\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:46:45Z","timestamp":1760032005000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2076-3417\/15\/11\/6330"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,4]]},"references-count":27,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2025,6]]}},"alternative-id":["app15116330"],"URL":"https:\/\/doi.org\/10.3390\/app15116330","relation":{},"ISSN":["2076-3417"],"issn-type":[{"value":"2076-3417","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,4]]}}}