{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T22:10:50Z","timestamp":1775599850218,"version":"3.50.1"},"reference-count":38,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2022,3,23]],"date-time":"2022-03-23T00:00:00Z","timestamp":1647993600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U21B6001, 62020106002, 61735017, 61822510"],"award-info":[{"award-number":["U21B6001, 62020106002, 61735017, 61822510"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"National Key Basic Research Program of China","award":["2021YFC2401403"],"award-info":[{"award-number":["2021YFC2401403"]}]},{"name":"Major scientific Research project of Zhejiang laboratory","award":["2019MC0AD02"],"award-info":[{"award-number":["2019MC0AD02"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The perception module plays an important role in vehicles equipped with advanced driver-assistance systems (ADAS). This paper presents a multi-sensor data fusion system based on the polarization color stereo camera and the forward-looking light detection and ranging (LiDAR), which achieves the multiple target detection, recognition, and data fusion. The You Only Look Once v4 (YOLOv4) network is utilized to achieve object detection and recognition on the color images. The depth images are obtained from the rectified left and right images based on the principle of the epipolar constraints, then the obstacles are detected from the depth images using the MeanShift algorithm. 
The pixel-level polarization images are extracted from the raw polarization-grey images, and the water hazards are then detected from them. The PointPillars network is employed to detect objects from the point cloud. Calibration and synchronization between the sensors are accomplished. The experimental results show that the data fusion enriches the detection results, provides high-dimensional perceptual information, and extends the effective detection range. Meanwhile, the detection results remain stable under diverse range and illumination conditions.<\/jats:p>","DOI":"10.3390\/s22072453","type":"journal-article","created":{"date-parts":[[2022,3,23]],"date-time":"2022-03-23T22:08:06Z","timestamp":1648073286000},"page":"2453","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":19,"title":["Unifying Obstacle Detection, Recognition, and Fusion Based on the Polarization Color Stereo Camera and LiDAR for the ADAS"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7279-4258","authenticated-orcid":false,"given":"Ningbo","family":"Long","sequence":"first","affiliation":[{"name":"Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China"}]},{"given":"Han","family":"Yan","sequence":"additional","affiliation":[{"name":"Science and Technology on Space Intelligent Control Laboratory, Beijing Institute of Control Engineering, Beijing 100094, China"}]},{"given":"Liqiang","family":"Wang","sequence":"additional","affiliation":[{"name":"Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China"},{"name":"College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3593-5482","authenticated-orcid":false,"given":"Haifeng","family":"Li","sequence":"additional","affiliation":[{"name":"Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China"},{"name":"College of 
Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China"}]},{"given":"Qing","family":"Yang","sequence":"additional","affiliation":[{"name":"Research Center for Humanoid Sensing, Zhejiang Lab, Hangzhou 311100, China"},{"name":"College of Optical Science and Engineering, Zhejiang University, Hangzhou 310027, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,3,23]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"113816","DOI":"10.1016\/j.eswa.2020.113816","article-title":"Self-driving cars: A survey","volume":"165","author":"Badue","year":"2021","journal-title":"Expert Syst. Appl."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep learning","volume":"521","author":"LeCun","year":"2015","journal-title":"Nature"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"3212","DOI":"10.1109\/TNNLS.2018.2876865","article-title":"Object Detection with Deep Learning: A Review","volume":"30","author":"Zhao","year":"2019","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1016\/j.neucom.2020.01.085","article-title":"Recent advances in deep learning for object detection","volume":"396","author":"Wu","year":"2020","journal-title":"Neurocomputing"},{"key":"ref_5","unstructured":"Wang, W., Lai, Q., Fu, H., Shen, J., Ling, H., and Yang, R. (2021). Salient Object Detection in the Deep Learning Era: An In-Depth Survey. IEEE Trans. Pattern Anal. Mach. Intell., 1."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"435","DOI":"10.1080\/15599610802438680","article-title":"Review of Stereo Vision Algorithms: From Software to Hardware","volume":"2","author":"Lazaros","year":"2008","journal-title":"Int. J. 
Optomechatronics"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"4802","DOI":"10.1364\/OE.416130","article-title":"Polarization-driven semantic segmentation via efficient attention-bridged fusion","volume":"29","author":"Xiang","year":"2021","journal-title":"Opt. Express"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Xie, B., Pan, H., Xiang, Z., and Liu, J. (2007, January 5\u20138). Polarization-Based Water Hazards Detection for Autonomous Off-Road Navigation. Proceedings of the 2007 International Conference on Mechatronics and Automation, Harbin, China.","DOI":"10.1109\/ICMA.2007.4303800"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Yang, K., Wang, K., Cheng, R., Hu, W., Huang, X., and Bai, J. (2017). Detecting Traversable Area and Water Hazards for the Visually Impaired with a pRGB-D Sensor. Sensors, 17.","DOI":"10.3390\/s17081890"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Nguyen, C.V., Milford, M., and Mahony, R. (June, January 29). 3D tracking of water hazards with polarized stereo cameras. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.","DOI":"10.1109\/ICRA.2017.7989616"},{"key":"ref_11","first-page":"1621","article-title":"DIOR: A Hardware-assisted Weather Denoising Solution for LiDAR Point Clouds","volume":"1","author":"Roriz","year":"2021","journal-title":"IEEE Sens. J."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Zhu, Y., Zheng, C., Yuan, C., Huang, X., and Hong, X. (June, January 30). CamVox: A Low-cost and Accurate Lidar-assisted Visual SLAM System. Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2019an, China.","DOI":"10.1109\/ICRA48506.2021.9561149"},{"key":"ref_13","first-page":"58","article-title":"Low-cost Retina-like Robotic Lidars Based on Incommensurable Scanning","volume":"1","author":"Liu","year":"2021","journal-title":"IEEE\/ASME Trans. 
Mechatron."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"579","DOI":"10.1016\/j.procs.2021.02.100","article-title":"A survey of LiDAR and camera fusion enhancement","volume":"183","author":"Zhong","year":"2021","journal-title":"Procedia Comput. Sci."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Mai, N.A.M., Duthon, P., Khoudour, L., Crouzil, A., and Velastin, S.A. (2021, January 17\u201319). Sparse LiDAR and Stereo Fusion (SLS-Fusion) for Depth Estimationand 3D Object Detection 2021. Proceedings of the 11th International Conference of Pattern Recognition Systems (ICPRS 2021), Online Conference.","DOI":"10.1049\/icp.2021.1442"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"044102","DOI":"10.1063\/1.5093279","article-title":"Unifying obstacle detection, recognition, and fusion based on millimeter wave radar and RGB-depth sensors for the visually impaired","volume":"90","author":"Long","year":"2019","journal-title":"Rev. Sci. Instrum."},{"key":"ref_17","unstructured":"Long, N., Wang, K., Cheng, R., Yang, K., and Bai, J. (2018, January 10\u201313). Fusion of Millimeter Wave Radar and RGB-Depth Sensors for Assisted Navigation of the Visually Impaired. Proceedings of the Millimetre Wave and Terahertz Sensors and Technology XI, Berlin, Germany."},{"key":"ref_18","unstructured":"Bochkovskiy, A., Wang, C.-Y., and Liao, H.-Y.M. (2004). YOLOv4: Optimal Speed and Accuracy of Object Detection 2020. arXiv."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"107874","DOI":"10.1016\/j.patcog.2021.107874","article-title":"Mean-shift outlier detection and filtering","volume":"115","author":"Yang","year":"2021","journal-title":"Pattern Recognit."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Lang, A.H., Vora, S., Caesar, H., Zhou, L., Yang, J., and Beijbom, O. (2019, January 15\u201320). PointPillars: Fast Encoders for Object Detection from Point Clouds. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01298"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"90","DOI":"10.1016\/j.isprsjprs.2019.10.015","article-title":"A Frustum-based probabilistic framework for 3D object detection by fusion of LiDAR and camera data","volume":"159","author":"Gong","year":"2020","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhang, C., Zhan, Q., Wang, Q., Wu, H., He, T., and An, Y. (2020). Autonomous Dam Surveillance Robot System Based on Multi-Sensor Fusion. Sensors, 20.","DOI":"10.3390\/s20041097"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Wang, L., Chen, T., Anklam, C., and Goldluecke, B. (November, January 19). High Dimensional Frustum PointNet for 3D Object Detection from Camera, LiDAR, and Radar. Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA.","DOI":"10.1109\/IV47402.2020.9304655"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Ku, J., Mozifian, M., Lee, J., Harakeh, A., and Waslander, S.L. (2018, January 1\u20135). Joint 3D Proposal Generation and Object Detection from View Aggregation. Proceedings of the 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain.","DOI":"10.1109\/IROS.2018.8594049"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"125","DOI":"10.1016\/j.robot.2018.11.002","article-title":"LIDAR\u2013camera fusion for road detection using fully convolutional neural networks","volume":"111","author":"Caltagirone","year":"2019","journal-title":"Rob. Auton. Syst."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Zhuang, Z., Li, R., Jia, K., Wang, Q., Li, Y., and Tan, M. (2021, January 11\u201317). Perception-Aware Multi-Sensor Fusion for 3D LiDAR Semantic Segmentation. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Online Conference.","DOI":"10.1109\/ICCV48922.2021.01597"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"3585","DOI":"10.1109\/LRA.2019.2928261","article-title":"A Joint Optimization Approach of LiDAR-Camera Fusion for Accurate Dense 3-D Reconstructions","volume":"4","author":"Zhen","year":"2019","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Gu, S., Zhang, Y., Tang, J., Yang, J., and Kong, H. (2019, January 20\u201324). Road Detection through CRF based LiDAR-Camera Fusion. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8793585"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"34536","DOI":"10.1364\/OE.402947","article-title":"Snapshot multispectral imaging using a pixel-wise polarization color image sensor","volume":"28","author":"Ono","year":"2020","journal-title":"Opt. Express"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Wang, C.-Y., Bochkovskiy, A., and Liao, H.-Y.M. (2021, January 20\u201325). Scaled-YOLOv4: Scaling Cross Stage Partial Network. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Online Conference.","DOI":"10.1109\/CVPR46437.2021.01283"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Fleet, D., Pajdla, T., Schiele, B., and Tuytelaars, T. (2014). Microsoft COCO: Common Objects in Context. Computer Vision\u2014ECCV 2014, Springer International Publishing.","DOI":"10.1007\/978-3-319-10599-4"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Qi, C.R., Liu, W., Wu, C., Su, H., and Guibas, L.J. (2018, January 18\u201323). Frustum PointNets for 3D Object Detection From RGB-D Data. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00102"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Zhou, Y., and Tuzel, O. (2018, January 18\u201323). VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00472"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"1330","DOI":"10.1109\/34.888718","article-title":"A flexible new technique for camera calibration","volume":"22","author":"Zhang","year":"2000","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1016\/j.patrec.2018.02.028","article-title":"A simple, robust and fast method for the perspective-n-point Problem","volume":"108","author":"Wang","year":"2018","journal-title":"Pattern Recognit. Lett."},{"key":"ref_36","unstructured":"Agarwal, S., and Mierle, K. (2022, February 22). Others Ceres Solver. Available online: http:\/\/ceres-solver.org."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"1717","DOI":"10.1109\/TITS.2012.2202229","article-title":"Track-to-Track Fusion With Asynchronous Sensors Using Information Matrix Fusion for Surround Environment Perception","volume":"13","author":"Aeberhard","year":"2012","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"013028","DOI":"10.1117\/1.JEI.28.1.013028","article-title":"Assisting the visually impaired: Multitarget warning through millimeter wave radar and RGB-depth sensors","volume":"28","author":"Long","year":"2019","journal-title":"J. Electron. 
Imaging"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/7\/2453\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:41:19Z","timestamp":1760136079000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/7\/2453"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,23]]},"references-count":38,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2022,4]]}},"alternative-id":["s22072453"],"URL":"https:\/\/doi.org\/10.3390\/s22072453","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,23]]}}}