{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T02:12:38Z","timestamp":1760235158029,"version":"build-2065373602"},"reference-count":40,"publisher":"MDPI AG","issue":"15","license":[{"start":{"date-parts":[[2021,7,22]],"date-time":"2021-07-22T00:00:00Z","timestamp":1626912000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Tianjin Intelligent Security Industry Chain Technology Adaptation and Application Project","award":["18ZXZNGX00320"],"award-info":[{"award-number":["18ZXZNGX00320"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Due to the non-uniform illumination conditions, images captured by sensors often suffer from uneven brightness, low contrast and noise. In order to improve the quality of the image, in this paper, a multi-path interaction network is proposed to enhance the R, G, B channels, and then the three channels are combined into the color image and further adjusted in detail. In the multi-path interaction network, the feature maps in several encoding\u2013decoding subnetworks are used to exchange information across paths, while a high-resolution path is retained to enrich the feature representation. Meanwhile, in order to avoid the possible unnatural results caused by the separation of the R, G, B channels, the output of the multi-path interaction network is corrected in detail to obtain the final enhancement results. Experimental results show that the proposed method can effectively improve the visual quality of low-light images, and the performance is better than the state-of-the-art methods.<\/jats:p>","DOI":"10.3390\/s21154986","type":"journal-article","created":{"date-parts":[[2021,7,22]],"date-time":"2021-07-22T22:37:14Z","timestamp":1626993434000},"page":"4986","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Low-Light Image Enhancement Based on Multi-Path Interaction"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6039-7386","authenticated-orcid":false,"given":"Bai","family":"Zhao","sequence":"first","affiliation":[{"name":"School of Microelectronics, Tianjin University, Tianjin 300072, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3932-9228","authenticated-orcid":false,"given":"Xiaolin","family":"Gong","sequence":"additional","affiliation":[{"name":"School of Microelectronics, Tianjin University, Tianjin 300072, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2388-6831","authenticated-orcid":false,"given":"Jian","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China"},{"name":"National Ocean Technology Center, Tianjin 300112, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8344-1351","authenticated-orcid":false,"given":"Lingchao","family":"Zhao","sequence":"additional","affiliation":[{"name":"School of Microelectronics, Tianjin University, Tianjin 300072, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,7,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"4901","DOI":"10.1109\/JSEN.2020.2966034","article-title":"Fusion of 3D LIDAR and camera data for object detection in autonomous vehicle applications","volume":"20","author":"Zhao","year":"2020","journal-title":"IEEE Sens. J."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"117746","DOI":"10.1109\/ACCESS.2020.3005386","article-title":"Detection of fruit-bearing branches and localization of litchi clusters for vision-based harvesting robots","volume":"8","author":"Li","year":"2020","journal-title":"IEEE Access"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"36","DOI":"10.1016\/j.rcim.2019.03.001","article-title":"Real-time detection of surface deformation and strain in recycled aggregate concrete-filled steel tubular columns via four-ocular vision","volume":"59","author":"Tang","year":"2019","journal-title":"Robot. Comput. Integr. Manuf."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1236021","DOI":"10.1155\/2020\/1236021","article-title":"Vision-based three-dimensional reconstruction and monitoring of large-scale steel tubular structures","volume":"2020","author":"Tang","year":"2020","journal-title":"Adv. Civ. Eng."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Zhang, T., Chowdhery, A., Bahl, P., Jamieson, K., and Banerjee, S. (2015, January 7\u201311). The design and implementation of a wireless video surveillance system. Proceedings of the Annual International Conference on Mobile Computing and Networking, Paris, France.","DOI":"10.1145\/2789168.2790123"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"106237","DOI":"10.1016\/j.compag.2021.106237","article-title":"3D global mapping of large-scale unstructured orchard integrating eye-in-hand stereo vision and SLAM","volume":"187","author":"Chen","year":"2021","journal-title":"Comput. Electron. Agric."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Rashed, H., Ramzy, M., Vaquero, V., El Sallab, A., Sistu, G., and Yogamani, S. (2019, January 27\u201328). FuseMODNet: Real-Time Camera and LiDAR Based Moving Object Detection for Robust Low-Light Autonomous Driving. Proceedings of the IEEE International Conference on Computer Vision Workshop, Seoul, Korea.","DOI":"10.1109\/ICCVW.2019.00293"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Wang, R., Zhang, Q., Fu, C.W., Shen, X., Zheng, W.S., and Jia, J. (2019, January 15\u201320). Underexposed photo enhancement using deep illumination estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00701"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Ai, S., and Kwon, J. (2020). Extreme low-light image enhancement for surveillance cameras using attention U-Net. Sensors, 20.","DOI":"10.3390\/s20020495"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3458281","article-title":"Exploring Image Enhancement for Salient Object Detection in Low Light Images","volume":"17","author":"Xu","year":"2021","journal-title":"ACM Trans. Multimed. Comput. Commun. Appl."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Ma, S., Ma, H., Xu, Y., Li, S., Lv, C., and Zhu, M. (2018). A low-light sensor image enhancement algorithm based on HSI color model. Sensors, 18.","DOI":"10.3390\/s18103583"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"889","DOI":"10.1109\/83.841534","article-title":"Adaptive image contrast enhancement using generalizations of histogram equalization","volume":"9","author":"Stark","year":"2000","journal-title":"IEEE Trans. Image Process."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"593","DOI":"10.1109\/TCE.2007.381734","article-title":"A dynamic histogram equalization for image contrast enhancement","volume":"53","author":"Kabir","year":"2007","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1752","DOI":"10.1109\/TCE.2007.4429280","article-title":"Brightness preserving dynamic histogram equalization for image contrast enhancement","volume":"53","author":"Ibrahim","year":"2007","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_15","unstructured":"Ying, Z., Li, G., and Gao, W. (2017). A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"968","DOI":"10.1109\/TCSVT.2018.2828141","article-title":"LECARM: Low-light image enhancement using the camera response model","volume":"29","author":"Ren","year":"2018","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"108","DOI":"10.1038\/scientificamerican1277-108","article-title":"The retinex theory of color vision","volume":"237","author":"Land","year":"1977","journal-title":"Sci. Am."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Fu, X., Zeng, D., Huang, Y., Zhang, X.P., and Ding, X. (2016, January 27\u201330). A weighted variational model for simultaneous reflectance and illumination estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.304"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"982","DOI":"10.1109\/TIP.2016.2639450","article-title":"LIME: Low-light image enhancement via illumination map estimation","volume":"26","author":"Guo","year":"2016","journal-title":"IEEE Trans. Image Process."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"965","DOI":"10.1109\/83.597272","article-title":"A multiscale retinex for bridging the gap between color images and the human observation of scenes","volume":"6","author":"Jobson","year":"1997","journal-title":"IEEE Trans. Image Process."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"82","DOI":"10.1016\/j.sigpro.2016.05.031","article-title":"A fusion-based enhancing method for weakly illuminated images","volume":"129","author":"Fu","year":"2016","journal-title":"Signal Process."},{"key":"ref_22","unstructured":"Dong, X., Wang, G., Pang, Y., Li, W., Wen, J., Meng, W., and Lu, Y. (2011, January 11\u201315). Fast efficient algorithm for enhancement of low lighting video. Proceedings of the IEEE International Conference on Multimedia and Expo, Barcelona, Spain."},{"key":"ref_23","unstructured":"Wei, C., Wang, W., Yang, W., and Liu, J. (2018, January 3\u20136). Deep retinex decomposition for low-light enhancement. Proceedings of the British Machine Vision Conference, Newcastle, UK."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Xu, K., Yang, X., Yin, B., and Lau, R.W.H. (2020, January 13\u201319). Learning to restore low-light images via decomposition-and-enhancement. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00235"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"284","DOI":"10.1016\/j.jvcir.2019.04.008","article-title":"End-to-end single image enhancement based on a dual network cascade model","volume":"61","author":"Chen","year":"2019","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_26","unstructured":"Lv, F., Lu, F., Wu, J., and Lim, C. (2018, January 3\u20136). MBLLEN: Low-Light Image\/Video Enhancement Using CNNs. Proceedings of the British Machine Vision Conference, Newcastle, UK."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"7984","DOI":"10.1109\/TIP.2020.3008396","article-title":"Lightening network for low-light image enhancement","volume":"29","author":"Wang","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"650","DOI":"10.1016\/j.patcog.2016.06.008","article-title":"LLNet: A deep autoencoder approach to natural low-light image enhancement","volume":"61","author":"Lore","year":"2017","journal-title":"Pattern Recognit."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"2340","DOI":"10.1109\/TIP.2021.3051462","article-title":"Enlightengan: Deep light enhancement without paired supervision","volume":"30","author":"Jiang","year":"2021","journal-title":"IEEE Trans. Image Process."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Chen, Y.S., Wang, Y.C., Kao, M.H., and Chuang, Y.Y. (2018, January 13\u201319). Deep photo enhancer: Unpaired learning for image enhancement from photographs with gans. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR.2018.00660"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Liu, Y., Wang, Z., Zeng, Y., Zeng, H., and Zhao, D. (2021, January 13). PD-GAN: Perceptual-Details GAN for Extremely Noisy Low Light Image Enhancement. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Toronto, ON, Canada.","DOI":"10.1109\/ICASSP39728.2021.9413433"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S., and Cong, R. (2020, January 13\u201319). Zero-reference deep curve estimation for low-light image enhancement. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00185"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"2175","DOI":"10.1007\/s11263-021-01466-8","article-title":"Attention guided low-light image enhancement with a large scale low-light simulation dataset","volume":"129","author":"Lv","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-net: Convolutional networks for biomedical image segmentation. Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention, Munich, Germany.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Jiang, Z., Li, H., Liu, L., Men, A., and Wang, H. (2021). A Switched View of Retinex: Deep Self-Regularized Low-Light Image Enhancement. arXiv.","DOI":"10.1016\/j.neucom.2021.05.025"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Sun, K., Xiao, B., Liu, D., and Wang, J. (2019, January 15\u201320). Deep high-resolution representation learning for human pose estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00584"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"2049","DOI":"10.1109\/TIP.2018.2794218","article-title":"Learning a deep single image contrast enhancer from multi-exposure images","volume":"27","author":"Cai","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Wang, W., Wei, C., Yang, W., and Liu, J. (2018, January 15\u201319). GLADNet: Low-light enhancement network with global awareness. Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition, Xi\u2019an, China.","DOI":"10.1109\/FG.2018.00118"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/LSP.2012.2227726","article-title":"Making a \u201ccompletely blind\u201d image quality analyzer","volume":"20","author":"Mittal","year":"2012","journal-title":"IEEE Signal Process. Lett."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/15\/4986\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:33:21Z","timestamp":1760164401000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/15\/4986"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,7,22]]},"references-count":40,"journal-issue":{"issue":"15","published-online":{"date-parts":[[2021,8]]}},"alternative-id":["s21154986"],"URL":"https:\/\/doi.org\/10.3390\/s21154986","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2021,7,22]]}}}