{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T19:58:00Z","timestamp":1777492680945,"version":"3.51.4"},"reference-count":33,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2020,1,8]],"date-time":"2020-01-08T00:00:00Z","timestamp":1578441600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>This study analyzes the influence of visibility in a foggy weather environment on the accuracy of machine vision obstacle detection in assisted driving. We present a foggy-day imaging model and analyze its image characteristics, then set up the faster region-based convolutional neural network (Faster R-CNN) as the basic network for target detection in the simulation experiment and use Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) data to train the network for detection and classification. PreScan software is used to build weather and traffic scenes based on the foggy imaging model, and we study machine vision object detection under four weather conditions\u2014clear (no fog), light fog, medium fog, and heavy fog\u2014by simulation experiment. The experimental results show that the detection recall is 91.55%, 85.21%, 72.54~64.79%, and 57.75% in the no fog, light fog, medium fog, and heavy fog environments, respectively. Real scenes in medium fog and heavy fog environments were then used to verify the simulation experiment. Through this study, we can determine the influence of bad weather on machine vision detection results, and hence further research can improve the safety of assisted driving.<\/jats:p>","DOI":"10.3390\/s20020349","type":"journal-article","created":{"date-parts":[[2020,1,9]],"date-time":"2020-01-09T03:07:11Z","timestamp":1578539231000},"page":"349","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":55,"title":["Analysis of the Influence of Foggy Weather Environment on the Detection Effect of Machine Vision Obstacles"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6228-3016","authenticated-orcid":false,"given":"Zhaohui","family":"Liu","sequence":"first","affiliation":[{"name":"Department of Transportation Engineering, College of Transportation, Shandong University of Science and Technology, Qingdao 266590, China"}]},{"given":"Yongjiang","family":"He","sequence":"additional","affiliation":[{"name":"Department of Transportation Engineering, College of Transportation, Shandong University of Science and Technology, Qingdao 266590, China"}]},{"given":"Chao","family":"Wang","sequence":"additional","affiliation":[{"name":"Department of Transportation Engineering, College of Transportation, Shandong University of Science and Technology, Qingdao 266590, China"}]},{"given":"Runze","family":"Song","sequence":"additional","affiliation":[{"name":"Department of Transportation Engineering, College of Transportation, Shandong University of Science and Technology, Qingdao 266590, China"}]}],"member":"1968","published-online":{"date-parts":[[2020,1,8]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"62880","DOI":"10.1109\/ACCESS.2018.2876702","article-title":"Design of Intelligent Road Recognition and Warning System for Vehicles Based on Binocular Vision","volume":"6","author":"Han","year":"2018","journal-title":"IEEE Access"},{"key":"ref_2","first-page":"1097","article-title":"ImageNet Classification with Deep Convolutional Neural Networks","volume":"25","author":"Krizhevsky","year":"2012","journal-title":"Adv. Neural. Inform. Process Syst."},{"key":"ref_3","unstructured":"Simonyan, K., and Andrew, Z. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE CVPR, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_5","unstructured":"Ross, G., Jeff, D., Trevor, D., and Jitendra, M. (2014, January 23\u201328). Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. Proceedings of the 2014 IEEE CVPR, Columbus, OH, USA."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1904","DOI":"10.1109\/TPAMI.2015.2389824","article-title":"Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition","volume":"37","author":"He","year":"2015","journal-title":"IEEE Trans. Pattern Anal."},{"key":"ref_7","unstructured":"Ross, G. (2015, January 7\u201313). Fast R-CNN. Proceedings of the 2015 IEEE ICCV, Santiago, Chile."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"(2017). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal., 39, 1137\u20131149.","DOI":"10.1109\/TPAMI.2016.2577031"},{"key":"ref_9","unstructured":"Jifeng, D., Yi, L., Kaiming, H., and Jian, S. (2016, January 5\u201310). R-FCN: Object Detection via Region-based Fully Convolutional Networks. Proceedings of the NIPS\u201916, Barcelona, Spain."},{"key":"ref_10","unstructured":"Quanfu, F., Lisa, B., and John, S. (2016, January 19\u201322). A closer look at Faster R-CNN for vehicle detection. Proceedings of the 2016 IEEE IV, Gothenburg, Sweden."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1089","DOI":"10.3390\/s19051089","article-title":"Anchor Generation Optimization and Region of Interest Assignment for Vehicle Detection","volume":"19","author":"Ye","year":"2019","journal-title":"Sensors"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Gao, Y., Guo, S., Huang, K., Chen, J., Gong, Q., Zou, Y., Bai, T., and Overett, G. (2017, January 11\u201314). Scale optimization for full-image-CNN vehicle detection. Proceedings of the 2017 IEEE IV, Los Angeles, CA, USA.","DOI":"10.1109\/IVS.2017.7995812"},{"key":"ref_13","unstructured":"Mduduzi, M., Chunming, T., and Owolawi, P. (2018, January 6\u20137). Preprocessed Faster RCNN for Vehicle Detection. Proceedings of the 2018 ICONIC, Plaine Magnien, Mauritius."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"104","DOI":"10.1186\/s12940-016-0189-x","article-title":"Adverse weather conditions and fatal motor vehicle crashes in the United States, 1994\u20132012","volume":"15","author":"Shubhayu","year":"2016","journal-title":"Environ. Health"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"466","DOI":"10.1002\/met.1714","article-title":"Hazardous weather conditions and multiple-vehicle chain-reaction crashes in the United States","volume":"25","author":"David","year":"2018","journal-title":"Met. Appl."},{"key":"ref_16","first-page":"71","article-title":"Analyzing the effect of fog weather conditions on driver lane-keeping performance using the SHRP2 naturalistic driving study data","volume":"68","author":"Anik","year":"2018","journal-title":"J. Phys. Saf. Res."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"379","DOI":"10.1016\/j.trc.2018.03.018","article-title":"Utilizing naturalistic driving data for in-depth analysis of driver lane-keeping behavior in rain: Non-parametric mars and parametric logistic regression modeling approaches","volume":"90","author":"Ghasemzadeh","year":"2018","journal-title":"Transp. Res. C Emerg."},{"key":"ref_18","unstructured":"Sinan, H., and Andreas, R. (2017, January 16\u201319). Introduction to rain and fog attenuation on automotive surround sensors. Proceedings of the 2017 IEEE 20th ITSC, Yokohama, Japan."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"103","DOI":"10.1109\/MVT.2019.2892497","article-title":"The Impact of Adverse Weather Conditions on Autonomous Vehicles: How Rain, Snow, Fog, and Hail Affect the Performance of a Self-Driving Car","volume":"14","author":"Shizhe","year":"2019","journal-title":"IEEE Veh. Technol. Mag."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Kutila, M., Pyykonen, P., Holzhuter, H., Colomb, M., and Duthon, P. (2018, January 4\u20137). Automotive LiDAR performance verification in fog and rain. Proceedings of the 21st ITSC, Maui, HI, USA.","DOI":"10.1109\/ITSC.2018.8569624"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Lee, U., Jung, J., Shin, S., Jeong, Y., Park, K., and Kweon, I.-S. (2016, January 9\u201314). EureCar Turbo: A Self-Driving Car that can Handle Adverse Weather Conditions. Proceedings of the IEEE\/RSJ IROS, Daejeon, Korea.","DOI":"10.1109\/IROS.2016.7759359"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"4287","DOI":"10.1109\/ACCESS.2018.2790407","article-title":"Multi-Traffic Scene Perception Based on Supervised Learning","volume":"6","author":"Lishen","year":"2018","journal-title":"IEEE Access"},{"key":"ref_23","unstructured":"Allach, S., Ahmed, M., and Anouar, A.B. (2018, January 10\u201311). A new architecture based on convolutional neural networks (CNN) for assisting the driver in fog environment. Proceedings of the SCA \u201918: 3rd International Conference on Smart City, Tetouan, Morocco."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"165","DOI":"10.1109\/ACCESS.2015.2511558","article-title":"Review of Video and Image Defogging Algorithms and Related Studies on Image Restoration and Enhancement","volume":"4","author":"Xu","year":"2016","journal-title":"IEEE Access"},{"key":"ref_25","first-page":"341","article-title":"Single Image Dehazing Using Dark Channel Prior and Minimal Atmospheric Veil","volume":"10","author":"Xiao","year":"2016","journal-title":"KSII T. Internet Inf."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Anwar, I., Arun, K., and Gajendra, S. (2017, January 2\u20133). Visibility enhancement with single image fog removal scheme using a post-processing technique. Proceedings of the 2017 4th SPIN IEEE, Noida, India.","DOI":"10.1109\/SPIN.2017.8049960"},{"key":"ref_27","first-page":"1084","article-title":"Optics of the Atmosphere. Scattering by Molecules and Particles","volume":"196","author":"Mccartney","year":"1997","journal-title":"IEEE J. Quantum Electron."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"233","DOI":"10.1023\/A:1016328200723","article-title":"Vision and the atmosphere","volume":"48","author":"Narasimhan","year":"2002","journal-title":"Int. J. Comput. Vis."},{"key":"ref_29","unstructured":"Kaiming, H., Georgia, G., Piotr, D., and Ross, G. (2017, January 22\u201329). Mask R-CNN. Proceedings of the 2017 IEEE ICCV, Venice, Italy."},{"key":"ref_30","unstructured":"Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago (2019, October 20). The KITTI Vision Benchmark Suite. Available online: http:\/\/www.cvlibs.net\/datasets\/kitti\/eval_object.php?obj_benchmark=2d."},{"key":"ref_31","unstructured":"Manak, E.A. (2019, December 05). Evaluating Object Detection Models: Guide to Performance Metrics. Available online: https:\/\/manalelaidouni.github.io\/manalelaidouni.github.io\/Evaluating-Object-Detection-Models-Guide-to-Performance-Metrics.html."},{"key":"ref_32","unstructured":"Berkeley Artificial Intelligence Research (2019, October 05). BDD100K: A Large-scale Diverse Driving Video Database. Available online: https:\/\/bdd-data.berkeley.edu\/."},{"key":"ref_33","unstructured":"Yu, F., Xiang, W., Cheng, Y., Liu, F., Liao, M., Madhavan, V., and Darrell, T. (2019, October 05). BDD100K: A Diverse Driving Video Database with Scalable Annotation Tooling. Available online: https:\/\/arxiv.org\/abs\/1805.04687."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/2\/349\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,13]],"date-time":"2025-10-13T13:29:03Z","timestamp":1760362143000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/2\/349"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,1,8]]},"references-count":33,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2020,1]]}},"alternative-id":["s20020349"],"URL":"https:\/\/doi.org\/10.3390\/s20020349","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,1,8]]}}}