{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T09:51:19Z","timestamp":1765273879723,"version":"3.46.0"},"reference-count":45,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T00:00:00Z","timestamp":1765238400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Research Foundation of South Africa","award":["PMDS230505102760"],"award-info":[{"award-number":["PMDS230505102760"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Low-light video enhancement remains challenging, largely because paired low-light video data are difficult to acquire. This paper proposes Zero-3DCE, a 3D extension of Zero-DCE. Zero-3DCE differs from Zero-DCE by (i) introducing 3D separable convolutions for temporal consistency, (ii) integrating spatial attention for region-specific enhancement, and (iii) combining MS-SSIM and edge-based losses for structural preservation. Separable convolutions capture spatio-temporal information while maintaining real-time speed, and a spatial attention network guides the model to regions that require enhancement by adaptively weighting spatial regions across all channels. Coupled with YOLOv11m, Zero-3DCE improves detection accuracy under low-light conditions. The model is trained on a combination of single-frame and multi-frame data. Results show that Zero-3DCE outperforms other low-light enhancers on both 2D and 3D data while achieving real-time speeds. 
Zero-3DCE outperforms Zero-DCE by +3.4 dB in PSNR and achieves up to 0.11 higher SSIM, demonstrating significant perceptual and structural enhancement.<\/jats:p>","DOI":"10.3390\/a18120775","type":"journal-article","created":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T09:02:50Z","timestamp":1765270970000},"page":"775","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Zero-3DCE: A Low-Light Video Enhancement for More Robust Computer Vision Tasks"],"prefix":"10.3390","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7804-9598","authenticated-orcid":false,"given":"Mpilo Mbulelo","family":"Tatana","sequence":"first","affiliation":[{"name":"Department of Electronic and Computer Engineering, Durban University of Technology, Durban 4001, South Africa"}]},{"given":"Rito Clifford","family":"Maswanganyi","sequence":"additional","affiliation":[{"name":"Department of Electronic and Computer Engineering, Durban University of Technology, Durban 4001, South Africa"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6641-8894","authenticated-orcid":false,"given":"Philani","family":"Khumalo","sequence":"additional","affiliation":[{"name":"Department of Electronic and Computer Engineering, Durban University of Technology, Durban 4001, South Africa"}]}],"member":"1968","published-online":{"date-parts":[[2025,12,9]]},"reference":[{"key":"ref_1","unstructured":"Cai, Y., Bian, H., Lin, J., Wang, H., Timofte, R., and Zhang, Y. (2023, January 1\u20136). Retinexformer: One-stage retinex-based transformer for low-light image enhancement. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Paris, France."},{"key":"ref_2","unstructured":"Liu, Y., Huang, T., Dong, W., Wu, F., Li, X., and Shi, G. (2023, January 1\u20136). Low-light image enhancement with multi-stage residue quantization and brightness-aware attention. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Paris, France."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Xu, X., Wang, R., and Lu, J. (2023, January 17\u201324). Low-light image enhancement via structure modeling and guidance. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00954"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Liu, F., and Fan, L. (2025). A review of advancements in low-light image enhancement using deep learning. arXiv.","DOI":"10.1016\/j.neucom.2025.131052"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Yu, C., Han, G., Pan, M., Wu, X., and Deng, A. (2025). Zero-TCE: Zero Reference Tri-Curve Enhancement for Low-Light Images. Appl. Sci., 15.","DOI":"10.3390\/app15020701"},{"key":"ref_6","unstructured":"He, J., Xue, M., Ning, A., and Song, C. (2024). Zero-reference lighting estimation diffusion model for low-light image enhancement. arXiv."},{"key":"ref_7","unstructured":"Khanam, R., and Hussain, M. (2024). Yolov11: An overview of the key architectural enhancements. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., and Cong, R. (2020, January 14\u201319). Zero-reference deep curve estimation for low-light image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Washington, DC, USA.","DOI":"10.1109\/CVPR42600.2020.00185"},{"key":"ref_9","unstructured":"Lv, F., Lu, F., Wu, J., and Lim, C. (2018, January 3\u20136). MBLLEN: Low-Light Image\/Video Enhancement Using CNNs. Proceedings of the BMVC, Newcastle, UK."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"48","DOI":"10.1007\/s11263-022-01667-9","article-title":"Low-light image enhancement via breaking down the darkness","volume":"131","author":"Guo","year":"2023","journal-title":"Int. J. Comput. 
Vis."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"44","DOI":"10.5815\/ijmecs.2018.05.06","article-title":"Implementation of gray level image transformation techniques","volume":"10","author":"Baidoo","year":"2018","journal-title":"Int. J. Mod. Educ. Comput. Sci."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"8093","DOI":"10.1007\/s11042-022-12087-9","article-title":"Low light enhancement algorithm for color images using intuitionistic fuzzy sets with histogram equalization","volume":"81","author":"Jebadass","year":"2022","journal-title":"Multimed. Tools Appl."},{"key":"ref_13","unstructured":"Chen, Y., Wen, C., Liu, W., and He, W. (2023). A depth iterative illumination estimation network for low-light image enhancement based on retinex theory. Sci. Rep., 13."},{"key":"ref_14","unstructured":"Zhang, Y., Zhang, J., and Guo, X. (2019, January 21\u201325). Kindling the darkness: A practical low-light image enhancer. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"650","DOI":"10.1016\/j.patcog.2016.06.008","article-title":"LLNet: A deep autoencoder approach to natural low-light image enhancement","volume":"61","author":"Lore","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"2340","DOI":"10.1109\/TIP.2021.3051462","article-title":"Enlightengan: Deep light enhancement without paired supervision","volume":"30","author":"Jiang","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_17","unstructured":"Zhang, J., Li, H., and Huo, Z. (2024). Unsupervised Boosted Fusion Network for Single Low-light Image Enhancement, IEEE Access."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Ming, F., Wei, Z., and Zhang, J. (2023). Unsupervised low-light image enhancement in the fourier transform domain. Appl. 
Sci., 14.","DOI":"10.3390\/app14010332"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"107003","DOI":"10.1016\/j.engappai.2023.107003","article-title":"A semi-supervised network framework for low-light image enhancement","volume":"126","author":"Chen","year":"2023","journal-title":"Eng. Appl. Artif. Intell."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"127106","DOI":"10.1016\/j.eswa.2025.127106","article-title":"Low-light image enhancement with quality-oriented pseudo labels via semi-supervised contrastive learning","volume":"276","author":"Jiang","year":"2025","journal-title":"Expert Syst. Appl."},{"key":"ref_21","first-page":"4225","article-title":"Learning to enhance low-light image via zero-reference deep curve estimation","volume":"44","author":"Li","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_22","unstructured":"Zhang, L., Zhang, L., Liu, X., Shen, Y., Zhang, S., and Zhao, S. (2019, January 21\u201325). Zero-shot restoration of back-lit images using deep internal learning. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"4703","DOI":"10.1007\/s11263-024-02084-w","article-title":"Temporally consistent enhancement of low-light videos via spatial-temporal compatible learning","volume":"132","author":"Zhu","year":"2024","journal-title":"Int. J. Comput. Vis."},{"key":"ref_24","first-page":"1","article-title":"AdaEnlight: Energy-aware low-light video stream enhancement on mobile devices","volume":"6","author":"Liu","year":"2023","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_25","unstructured":"Zhang, G., Zhang, Y., Yuan, X., and Fu, Y. (2024, January 17\u201321). Binarized low-light raw video enhancement. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Washington, DC, USA."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Wang, Z., Simoncelli, E.P., and Bovik, A.C. (2003). Multiscale structural similarity for image quality assessment. The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, IEEE.","DOI":"10.1109\/ACSSC.2003.1292216"},{"key":"ref_27","unstructured":"Gandhi, V., and Gandhi, S. (2025). Fine-Tuning Without Forgetting: Adaptation of YOLOv8 Preserves COCO Performance. arXiv."},{"key":"ref_28","first-page":"31","article-title":"Improving Vehicle Detection in Challenging Datasets: YOLOv5s and Frozen Layers Analysis","volume":"5","author":"Rafi","year":"2023","journal-title":"Int. J. Inform. Comput."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"5372","DOI":"10.1109\/TIP.2013.2284059","article-title":"Contrast enhancement based on layered difference representation of 2D histograms","volume":"22","author":"Lee","year":"2013","journal-title":"IEEE Trans. Image Process."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"982","DOI":"10.1109\/TIP.2016.2639450","article-title":"LIME: Low-light image enhancement via illumination map estimation","volume":"26","author":"Guo","year":"2016","journal-title":"IEEE Trans. Image Process."},{"key":"ref_31","unstructured":"Wei, C., Wang, W., Yang, W., and Liu, J. (2018). Deep retinex decomposition for low-light enhancement. arXiv."},{"key":"ref_32","unstructured":"Zheng, S., Ma, Y., Pan, J., Lu, C., and Gupta, G. (2022). Low-light image and video enhancement: A comprehensive survey and beyond. arXiv."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"9396","DOI":"10.1109\/TPAMI.2021.3126387","article-title":"Low-light image and video enhancement using deep learning: A survey","volume":"44","author":"Li","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"ref_34","unstructured":"Anantrasirichai, N., Lin, R., Malyugina, A., and Bull, D. (2024). BVI-Lowlight: Fully Registered Benchmark Dataset for Low-Light Video Enhancement. arXiv."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"3507","DOI":"10.1109\/TIP.2023.3286254","article-title":"DTCM: Joint optimization of dark enhancement and action recognition in videos","volume":"32","author":"Tu","year":"2023","journal-title":"IEEE Trans. Image Process."},{"key":"ref_36","unstructured":"Yang, W., Wang, S., Fang, Y., Wang, Y., and Liu, J. (2020, January 14\u201319). From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Washington, DC, USA."},{"key":"ref_37","unstructured":"Liu, R., Ma, L., Zhang, J., Fan, X., and Luo, Z. (2021, January 20\u201325). Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA."},{"key":"ref_38","unstructured":"Jin, Y., Yang, W., and Tan, R.T. (2022, January 23\u201327). Unsupervised night image enhancement: When layer decomposition meets light-effects suppression. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel."},{"key":"ref_39","first-page":"79734","article-title":"Global structure-aware diffusion process for low-light image enhancement","volume":"36","author":"Hou","year":"2024","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Brateanu, A., Balmez, R., Orhei, C., Ancuti, C., and Ancuti, C. (2025). Enhancing Low-Light Images with Kolmogorov\u2013Arnold Networks in Transformer Attention. 
Sensors, 25.","DOI":"10.3390\/s25020327"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1007\/s11063-024-11565-5","article-title":"Rethinking zero-DCE for low-light image enhancement","volume":"56","author":"Mi","year":"2024","journal-title":"Neural Process. Lett."},{"key":"ref_42","unstructured":"Verner, K. (2024, September 10). Human Detection Dataset CCTV Footage of Humans. Available online: https:\/\/www.kaggle.com\/datasets\/constantinwerner\/human-detection-dataset."},{"key":"ref_43","unstructured":"Sharma, A. (2024, September 10). Weapon Detection Dataset Weapon Detection Including KNIFE, Gun, Pistol etc. Available online: https:\/\/www.kaggle.com\/datasets\/ankan1998\/weapon-detection-dataset."},{"key":"ref_44","unstructured":"School (2024, September 10). Person, Weapon Datasets Dataset. Available online: https:\/\/universe.roboflow.com\/school-fin7c\/person-weapon-datasets."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"5737","DOI":"10.1109\/TIP.2020.2981922","article-title":"Advancing image understanding in poor visibility environments: A collective benchmark study","volume":"29","author":"Yang","year":"2020","journal-title":"IEEE Trans. 
Image Process."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/12\/775\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,9]],"date-time":"2025-12-09T09:47:01Z","timestamp":1765273621000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/18\/12\/775"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,9]]},"references-count":45,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["a18120775"],"URL":"https:\/\/doi.org\/10.3390\/a18120775","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2025,12,9]]}}}