{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T20:13:13Z","timestamp":1773346393041,"version":"3.50.1"},"reference-count":54,"publisher":"MDPI AG","issue":"3","license":[{"start":{"date-parts":[[2023,1,25]],"date-time":"2023-01-25T00:00:00Z","timestamp":1674604800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["61971007"],"award-info":[{"award-number":["61971007"]}]},{"name":"National Natural Science Foundation of China","award":["61571013"],"award-info":[{"award-number":["61571013"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Convolutional neural network (CNN)-based object detection algorithms for autonomous driving achieve excellent results on conventional datasets, but detector performance can degrade severely in low-light, foggy environments. Existing methods struggle to balance low-light image enhancement and object detection. To alleviate this problem, this paper proposes IDOD-YOLOV7, an object detection framework for foggy traffic environments. The network is based on joint learning of the IDOD image-defogging module (AOD + SAIP) and the YOLOV7 detection module. Specifically, for low-light foggy images, we improve image quality through joint optimization of image defogging (AOD) and image enhancement (SAIP), where the parameters of the SAIP module are predicted by a miniature CNN and the AOD module performs defogging by optimizing the atmospheric scattering model. The experimental results show that the IDOD module not only improves defogging quality for low-light foggy images but also achieves better results on objective evaluation metrics such as PSNR and SSIM. 
IDOD and YOLOV7 are trained jointly in an end-to-end manner, so that object detection is performed while image enhancement proceeds in a weakly supervised fashion. Finally, a low-light foggy traffic image dataset (FTOD) was built by physical fogging to address the domain-transfer problem. Training the IDOD-YOLOV7 network on the real FTOD dataset improves the robustness of the model. We performed experiments comparing our method visually and quantitatively with several state-of-the-art methods to demonstrate its superiority. The IDOD-YOLOV7 algorithm not only suppresses artifacts in low-light foggy images and improves their visual quality but also improves the perception of autonomous driving systems in low-light foggy environments.<\/jats:p>","DOI":"10.3390\/s23031347","type":"journal-article","created":{"date-parts":[[2023,1,26]],"date-time":"2023-01-26T01:30:30Z","timestamp":1674696630000},"page":"1347","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":99,"title":["IDOD-YOLOV7: Image-Dehazing YOLOV7 for Object Detection in Low-Light Foggy Traffic Environments"],"prefix":"10.3390","volume":"23","author":[{"given":"Yongsheng","family":"Qiu","sequence":"first","affiliation":[{"name":"School of Electrical and Control Engineering, North China University of Technology, Beijing 100144, China"}]},{"given":"Yuanyao","family":"Lu","sequence":"additional","affiliation":[{"name":"School of Information Science and Technology, North China University of Technology, Beijing 100144, China"}]},{"given":"Yuantao","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Electrical and Control Engineering, North China University of Technology, Beijing 100144, China"}]},{"given":"Haiyang","family":"Jiang","sequence":"additional","affiliation":[{"name":"School of Electrical and Control Engineering, North China University of Technology, Beijing 100144, 
China"}]}],"member":"1968","published-online":{"date-parts":[[2023,1,25]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"18840","DOI":"10.1109\/ACCESS.2019.2897283","article-title":"A PSO and BFO-based learning strategy applied to faster R-CNN for object detection in autonomous driving","volume":"7","author":"Wang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks","volume":"39","author":"Ren","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S.E., Fu, C., and Berg, A.C. (2016, January 11\u201314). SSD: Single Shot MultiBox Detector. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"318","DOI":"10.1109\/TPAMI.2018.2858826","article-title":"Focal Loss for Dense Object Detection","volume":"42","author":"Lin","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"110530","DOI":"10.1016\/j.measurement.2021.110530","article-title":"Detection of coal and gangue based on improved YOLOv5. 1 which embedded scSE module","volume":"188","author":"Yan","year":"2022","journal-title":"Measurement"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1721","DOI":"10.1109\/TPAMI.2015.2491937","article-title":"Adherent raindrop modeling, detection and removal in video","volume":"38","author":"You","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Cao, J., Song, C., Song, S., Peng, S., Wang, D., Shao, Y., and Xiao, F. (2020). 
Front Vehicle Detection Algorithm for Smart Car Based on Improved SSD Model. Sensors, 20.","DOI":"10.3390\/s20164646"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"19","DOI":"10.1007\/s11220-021-00342-6","article-title":"Modified Cascade RCNN Based on Contextual Information for Vehicle Detection","volume":"22","author":"Han","year":"2021","journal-title":"Sens. Imaging"},{"key":"ref_9","first-page":"2623","article-title":"DSNet: Joint Semantic Learning for Object Detection in Inclement Weather Conditions","volume":"43","author":"Huang","year":"2020","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Dong, H., Pan, J., Xiang, L., Hu, Z., Zhang, X., Wang, F., and Yang, M.H. (2020, January 16\u201320). Multi-scale boosted dehazing network with dense feature fusion. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00223"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Sindagi, V.A., Oza, P., Yasarla, R., and Patel, V.M. (2020, January 23\u201328). Prior-based domain adaptive object detection for hazy and rainy conditions. Proceedings of the European Conference on Computer Vision, Glasgow, UK.","DOI":"10.1007\/978-3-030-58568-6_45"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Chen, Y., Li, W., Sakaridis, C., Dai, D., and Gool, L.V. (2018, January 18\u201323). Domain Adaptive Faster R-CNN for Object Detection in the Wild. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00352"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"2502","DOI":"10.1109\/TMM.2021.3082687","article-title":"Uncertainty-Aware Unsupervised Domain Adaptation in Object Detection","volume":"24","author":"Guan","year":"2021","journal-title":"IEEE Trans. 
Multimed."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"102946","DOI":"10.1016\/j.trc.2020.102946","article-title":"Domain adaptation from daytime to nighttime: A situation-sensitive vehicle detection and traffic flow parameter estimation framework","volume":"124","author":"Li","year":"2021","journal-title":"Transp. Res. Part C Emerg. Technol."},{"key":"ref_15","unstructured":"Li, J., Xu, R., Ma, J., Zou, Q., Ma, J., and Yu, H. (2022). Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather. arXiv."},{"key":"ref_16","unstructured":"Wang, C., Bochkovskiy, A., and Liao, H.M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv."},{"key":"ref_17","unstructured":"Narasimhan, S.G., and Nayar, S.K. (2000, January 13\u201315). Chromatic framework for vision in bad weather. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Hilton Head, SC, USA."},{"key":"ref_18","unstructured":"He, K., Sun, J., and Tang, X.J. (2009, January 20\u201325). Single Image Haze Removal Using Dark Channel Prior. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Miami, FL, USA."},{"key":"ref_19","unstructured":"Nayar, S.K., and Narasimhan, S.G. (September, January 31). Vision in bad weather. Proceedings of the Seventh IEEE International Conference on Computer Vision, Cairo, Egypt."},{"key":"ref_20","unstructured":"Li, B., Peng, X., Wang, Z., Xu, J., and Feng, D. (2017). An All-in-One Network for Dehazing and Beyond. arXiv."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1985","DOI":"10.1109\/TIP.2019.2948279","article-title":"Fast Single Image Dehazing Using Saturation Based Transmission Map Estimation","volume":"29","author":"Kim","year":"2020","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"263","DOI":"10.1007\/s11263-011-0508-1","article-title":"Bayesian Defogging","volume":"98","author":"Nishino","year":"2011","journal-title":"Int. J. Comput. Vis."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"3522","DOI":"10.1109\/TIP.2015.2446191","article-title":"A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior","volume":"24","author":"Zhu","year":"2015","journal-title":"IEEE Trans. Image Process."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Li, B., Peng, X., Wang, Z., Xu, J., and Feng, D. (2017, January 22\u201329). AOD-Net: All-in-One Dehazing Network. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.511"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"5187","DOI":"10.1109\/TIP.2016.2598681","article-title":"DehazeNet: An End-to-End System for Single Image Haze Removal","volume":"25","author":"Cai","year":"2016","journal-title":"IEEE Trans. Image Process."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Liu, X., Ma, Y., Shi, Z., and Chen, J. (November, January 27). Grid DehazeNet: Attention-Based Multi-Scale Network for Image Dehazing. Proceedings of the 2019 IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea.","DOI":"10.1109\/ICCV.2019.00741"},{"key":"ref_27","first-page":"1","article-title":"Content Feature and Style Feature Fusion Network for Single Image Dehazing","volume":"46","author":"Yang","year":"2020","journal-title":"Acta Autom. Sin."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Chen, D., He, M., Fan, Q., Liao, J., Zhang, L., Hou, D., Yuan, L., and Hua, G. (2019, January 7\u201311). Gated Context Aggregation Network for Image Dehazing and Deraining. 
Proceedings of the 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA.","DOI":"10.1109\/WACV.2019.00151"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Yang, H., Yang, C.H., and Tsai, Y.J. (2020, January 4\u20138). Y-Net: Multi-Scale Feature Aggregation Network with Wavelet Structure Similarity Loss Function For Single Image Dehazing. Proceedings of the ICASSP 2020\u20142020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053920"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Guo, C., Li, C., Guo, J., Loy, C.C., Hou, J., Kwong, S.T., and Cong, R. (2020, January 13\u201319). Zero-Reference Deep Curve Estimation for Low-Light Image Enhancement. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00185"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Liu, W., Ren, G., Yu, R., Guo, S., Zhu, J., and Zhang, L. (2021). Image-Adaptive YOLO for Object Detection in Adverse Weather Conditions. arXiv.","DOI":"10.1609\/aaai.v36i2.20072"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"e772","DOI":"10.7717\/peerj-cs.772","article-title":"SVA-SSD: Saliency visual attention single shot detector for building detection in low contrast high-resolution satellite images","volume":"7","author":"Shahin","year":"2021","journal-title":"PeerJ Comput. Sci."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"e1145","DOI":"10.7717\/peerj-cs.1145","article-title":"Lightweight multi-scale network for small object detection","volume":"8","author":"Li","year":"2022","journal-title":"PeerJ Comput. 
Sci."},{"key":"ref_34","first-page":"26","article-title":"Exposure: A White-Box Photo Post-Processing Framework","volume":"37","author":"Hu","year":"2017","journal-title":"ACM Trans"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"505","DOI":"10.1109\/83.826787","article-title":"Image enhancement via adaptive unsharp masking","volume":"9","author":"Polesel","year":"2000","journal-title":"IEEE Trans. Image Process."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Mosleh, A., Sharma, A., Onzon, E., Mannan, F., Robidoux, N., and Heide, F. (2020, January 13\u201319). Hardware-in-the-Loop End-to-End Optimization of Camera Image Processing Pipelines. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00755"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Lv, H., Yan, H., Liu, K., Zhou, Z., and Jing, J. (2022). YOLOv5-AC: Attention Mechanism-Based Lightweight YOLOv5 for Track Pedestrian Detection. Sensors, 22.","DOI":"10.3390\/s22155903"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Hussain, M., Al-Aqrabi, H., Munawar, M., Hill, R., and Alsboui, T.A. (2022). Domain Feature Mapping with YOLOv7 for Automated Edge-Based Pallet Racking Inspections. Sensors, 22.","DOI":"10.3390\/s22186927"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Gui, F., Yu, S., Zhang, H., and Zhu, H. (2021, January 17\u201319). Coal Gangue Recognition Algorithm Based on Improved YOLOv5. Proceedings of the 2021 IEEE 2nd International Conference on Information Technology, Big Data and Artificial Intelligence 2021, Chongqing, China.","DOI":"10.1109\/ICIBA52610.2021.9687869"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Ren, W., Liu, S., Zhang, H., Pan, J., Cao, X., and Yang, M. (2016, January 11\u201314). Single Image Dehazing via Multi-Scale Convolutional Neural Networks. 
Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46475-6_10"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Ren, W., Ma, L., Zhang, J., Pan, J., Cao, X., Liu, W., and Yang, M. (2018, January 18\u201323). Gated Fusion Network for Single Image Dehazing. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00343"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Qin, X., Wang, Z., Bai, Y., Xie, X., and Jia, H. (2019). FFA-Net: Feature Fusion Attention Network for Single Image Dehazing. arXiv.","DOI":"10.1609\/aaai.v34i07.6865"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Ding, L., and Sharma, G. (2017, January 17\u201320). HazeRD: An outdoor scene dataset and benchmark for single image dehazing. Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China.","DOI":"10.1109\/ICIP.2017.8296874"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Qu, Y., Chen, Y., Huang, J., and Xie, Y. (2019, January 15\u201320). Enhanced Pix2pix Dehazing Network. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00835"},{"key":"ref_45","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An Incremental Improvement. arXiv."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet classification with deep convolutional neural networks","volume":"60","author":"Krizhevsky","year":"2012","journal-title":"Commun. ACM"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Lin, T., Maire, M., Belongie, S.J., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft COCO: Common Objects in Context. 
Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Zhu, X., Lyu, S., Wang, X., and Zhao, Q. (2021, January 11\u201317). TPH-YOLOv5: Improved YOLOv5 Based on Transformer Prediction Head for Object Detection on Drone-captured Scenarios. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada.","DOI":"10.1109\/ICCVW54120.2021.00312"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Chen, Z., Wang, Y., Yang, Y., and Liu, D. (2021, January 20\u201325). PSD: Principled Synthetic-to-Real Dehazing Guided by Physical Priors. Proceedings of the 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00710"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Zhu, J., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"1754","DOI":"10.1007\/s11263-021-01431-5","article-title":"You Only Look Yourself: Unsupervised and Untrained Single Image Dehazing Neural Network","volume":"129","author":"Li","year":"2020","journal-title":"Int. J. Comput. Vis."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"492","DOI":"10.1109\/TIP.2018.2867951","article-title":"Benchmarking Single-Image Dehazing and Beyond","volume":"28","author":"Li","year":"2017","journal-title":"IEEE Trans. Image Process."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Ancuti, C.O., Ancuti, C., Timofte, R., and Vleeschouwer, C.D. (2018, January 24\u201327). I-HAZE: A dehazing benchmark with real hazy and haze-free indoor images. 
Proceedings of the Advanced Concepts for Intelligent Vision Systems Conference, Poitiers, France.","DOI":"10.1109\/CVPRW.2018.00119"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/3\/1347\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:15:27Z","timestamp":1760120127000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/3\/1347"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,1,25]]},"references-count":54,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["s23031347"],"URL":"https:\/\/doi.org\/10.3390\/s23031347","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,1,25]]}}}