{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,24]],"date-time":"2026-01-24T19:28:40Z","timestamp":1769282920450,"version":"3.49.0"},"reference-count":23,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2021,5,26]],"date-time":"2021-05-26T00:00:00Z","timestamp":1621987200000},"content-version":"vor","delay-in-days":145,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100004663","name":"Ministry of Science and Technology, Taiwan","doi-asserted-by":"publisher","award":["109-2221-E-027-082"],"award-info":[{"award-number":["109-2221-E-027-082"]}],"id":[{"id":"10.13039\/501100004663","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Journal of Sensors"],"published-print":{"date-parts":[[2021,1]]},"abstract":"<jats:p>Recently, self\u2010driving cars became a big challenge in the automobile industry. After the DARPA challenge, which introduced the design of a self\u2010driving system that can be classified as SAR Level 3 or higher levels, driven to focus on self\u2010driving cars more. Later on, using these introduced design models, a lot of companies started to design self\u2010driving cars. Various sensors, such as radar, high\u2010resolution cameras, and LiDAR are important in self\u2010driving cars to sense the surroundings. LiDAR acts as an eye of a self\u2010driving vehicle, by offering 64 scanning channels, 26.9\u00b0 vertical field view, and a high\u2010precision 360\u00b0 horizontal field view in real\u2010time. The LiDAR sensor can provide 360\u00b0 environmental depth information with a detection range of up to 120 meters. In addition, the left and right cameras can further assist in obtaining front image information. 
In this way, a model of the environment surrounding the self\u2010driving car can be obtained accurately, which allows the self\u2010driving algorithm to perform route planning. Collision avoidance is critical for self\u2010driving, and LiDAR, with its horizontal and vertical fields of view, helps avoid collisions. Publicly available datasets provide different kinds of data, such as point clouds and color images, that can be used for object recognition. In this paper, we use two publicly available datasets, namely, KITTI and PASCAL VOC. First, the KITTI dataset provides depth data for the LiDAR segmentation (LS) of objects obtained from LiDAR point clouds. The objects segmented from the LiDAR point cloud are used to find the regions of interest (ROI) on the images. We then trained the YOLOv4 neural network for object detection on the PASCAL VOC dataset. For evaluation, we used the region\u2010of\u2010interest images as input to YOLOv4. By combining these techniques, we can both segment and detect objects. 
Ultimately, our algorithm constructs the LiDAR point cloud and detects objects in the image simultaneously in real\u2010time.<\/jats:p>","DOI":"10.1155\/2021\/5576262","type":"journal-article","created":{"date-parts":[[2021,5,26]],"date-time":"2021-05-26T23:50:42Z","timestamp":1622073042000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":34,"title":["Real\u2010Time Object Detection for LiDAR Based on LS\u2010R\u2010YOLOv4 Neural Network"],"prefix":"10.1155","volume":"2021","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9599-6415","authenticated-orcid":false,"given":"Yu-Cheng","family":"Fan","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9768-7798","authenticated-orcid":false,"given":"Chitra Meghala","family":"Yelamandala","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7482-7153","authenticated-orcid":false,"given":"Ting-Wei","family":"Chen","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7618-1678","authenticated-orcid":false,"given":"Chun-Ju","family":"Huang","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2021,5,26]]},"reference":[{"key":"e_1_2_9_1_2","doi-asserted-by":"publisher","DOI":"10.1109\/34.3900"},{"key":"e_1_2_9_2_2","doi-asserted-by":"crossref","unstructured":"BimbrawK. Autonomous cars: past present and future a review of the developments in the last century the present scenario and the expected future of autonomous vehicle technology 2015 12th international conference on informatics in control automation and robotics (ICINCO) 2015 Colmar France 191\u2013198 https:\/\/doi.org\/10.5220\/0005540501910198.","DOI":"10.5220\/0005540501910198"},{"key":"e_1_2_9_3_2","doi-asserted-by":"crossref","unstructured":"DouillardB. UnderwoodJ. KuntzN. VlaskineV. QuadrosA. MortonP. andFrenkelA. 
On the segmentation of 3D LIDAR point clouds 2011 IEEE International Conference on Robotics and Automation 2011 Shanghai China 2798\u20132805 https:\/\/doi.org\/10.1109\/ICRA.2011.5979818 2-s2.0-84862851702.","DOI":"10.1109\/ICRA.2011.5979818"},{"key":"e_1_2_9_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/JDT.2014.2331064"},{"key":"e_1_2_9_5_2","doi-asserted-by":"publisher","DOI":"10.3390\/app9214500"},{"key":"e_1_2_9_6_2","doi-asserted-by":"crossref","unstructured":"GirshickR. DonahueJ. DarrellT. andMalikJ. Rich feature hierarchies for accurate object detection and semantic segmentation Proceedings of the IEEE conference on computer vision and pattern recognition 2014 Columbus USA 580\u2013587 https:\/\/doi.org\/10.1109\/CVPR.2014.81 2-s2.0-84911400494.","DOI":"10.1109\/CVPR.2014.81"},{"key":"e_1_2_9_7_2","doi-asserted-by":"crossref","unstructured":"AbushahmaR. I. H. AliM. A. M. Al-SanjaryO. I. andTahirN. M. Region-based convolutional neural network as object detection in images 2019 IEEE 7th Conference on Systems Process and Control (ICSPC) 2019 Melaka Malaysia 264\u2013268 https:\/\/doi.org\/10.1109\/ICSPC47137.2019.9068011.","DOI":"10.1109\/ICSPC47137.2019.9068011"},{"key":"e_1_2_9_8_2","doi-asserted-by":"crossref","unstructured":"AzamS. RafiqueA. andJeonM. Vehicle pose detection using region based convolutional neural network 2016 International Conference on Control Automation and Information Sciences (ICCAIS) 2016 Ansan Korea (South) 194\u2013198 https:\/\/doi.org\/10.1109\/ICCAIS.2016.7822459 2-s2.0-85013806774.","DOI":"10.1109\/ICCAIS.2016.7822459"},{"key":"e_1_2_9_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2016.2577031"},{"key":"e_1_2_9_10_2","doi-asserted-by":"crossref","unstructured":"RedmonJ. DivvalaS. GirshickR. andFarhadiA. 
You only look once: unified real-time object detection 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016 Las Vegas NV USA 779\u2013788 https:\/\/doi.org\/10.1109\/CVPR.2016.91 2-s2.0-84986308404.","DOI":"10.1109\/CVPR.2016.91"},{"key":"e_1_2_9_11_2","doi-asserted-by":"crossref","unstructured":"RedmonJ.andFarhadiA. YOLO9000: better faster stronger 2017 IEEE conference on computer vision and pattern recognition 2017 Honolulu USA 6517\u20136525 https:\/\/doi.org\/10.1109\/CVPR.2017.690 2-s2.0-85041900441.","DOI":"10.1109\/CVPR.2017.690"},{"key":"e_1_2_9_12_2","unstructured":"RedmonJ.andFarhadiA. Yolov3: an incremental improvement 2018 https:\/\/arxiv.org\/abs\/1804.02767v1."},{"key":"e_1_2_9_13_2","doi-asserted-by":"crossref","unstructured":"LinT.-Y. DollarP. GirshickR. HeK. HariharanB. andBelongieS. Feature pyramid networks for object detection Proceedings of the IEEE conference on computer vision and pattern recognition 2017 Honolulu USA 936\u2013944 https:\/\/doi.org\/10.1109\/CVPR.2017.106 2-s2.0-85041898381.","DOI":"10.1109\/CVPR.2017.106"},{"key":"e_1_2_9_14_2","unstructured":"BochkovskiyA. WangC. Y. andLiaoH. Y. M. Yolov4: optimal speed and accuracy of object detection 2020 https:\/\/arxiv.org\/abs\/2004.10934."},{"key":"e_1_2_9_15_2","doi-asserted-by":"crossref","unstructured":"WangC. Y. LiaoH. Y. M. YehI. H. WuY. H. ChenP. Y. andHsiehJ. W. CSPNet: a new backbone that can enhance learning capability of CNN Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition workshops 2019 Los Alamitos USA 1571\u20131580 https:\/\/doi.org\/10.1109\/CVPRW50498.2020.00203.","DOI":"10.1109\/CVPRW50498.2020.00203"},{"key":"e_1_2_9_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2020.02.067"},{"key":"e_1_2_9_17_2","doi-asserted-by":"crossref","unstructured":"LiuS. QiL. QinH. ShiJ. andJiaJ. 
Path aggregation network for instance segmentation 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition 2018 Salt Lake City UT USA 8759\u20138768 https:\/\/doi.org\/10.1109\/CVPR.2018.00913 2-s2.0-85060854014.","DOI":"10.1109\/CVPR.2018.00913"},{"key":"e_1_2_9_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/JSEN.2020.2966034"},{"key":"e_1_2_9_19_2","doi-asserted-by":"publisher","DOI":"10.3390\/sym12020324"},{"key":"e_1_2_9_20_2","doi-asserted-by":"crossref","unstructured":"LaddhaA. KocamazM. K. Navarro-SermentL. E. andHebertM. Map-supervised road detection 2016 IEEE Intelligent Vehicles Symposium (IV) 2016 Gothenburg Sweden 118\u2013123 https:\/\/doi.org\/10.1109\/IVS.2016.7535374 2-s2.0-84983356617.","DOI":"10.1109\/IVS.2016.7535374"},{"key":"e_1_2_9_21_2","doi-asserted-by":"crossref","unstructured":"KocamazM. K. GongJ. andPiresB. R. Vision-based counting of pedestrians and cyclists 2016 IEEE Winter Conference on Applications of Computer Vision (WACV) 2016 Lake Placid NY USA 1\u20138 https:\/\/doi.org\/10.1109\/WACV.2016.7477685 2-s2.0-84977651878.","DOI":"10.1109\/WACV.2016.7477685"},{"key":"e_1_2_9_22_2","doi-asserted-by":"publisher","DOI":"10.3390\/s19245412"},{"key":"e_1_2_9_23_2","volume-title":"AAAI 2006 Evaluation Methods for Machine Learning Workshop","author":"Japkowicz N.","year":"2006"}],"container-title":["Journal of 
Sensors"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/js\/2021\/5576262.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/js\/2021\/5576262.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2021\/5576262","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,8,6]],"date-time":"2024-08-06T00:35:45Z","timestamp":1722904545000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/2021\/5576262"}},"subtitle":[],"editor":[{"given":"Ismail","family":"Butun","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2021,1]]},"references-count":23,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,1]]}},"alternative-id":["10.1155\/2021\/5576262"],"URL":"https:\/\/doi.org\/10.1155\/2021\/5576262","archive":["Portico"],"relation":{},"ISSN":["1687-725X","1687-7268"],"issn-type":[{"value":"1687-725X","type":"print"},{"value":"1687-7268","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,1]]},"assertion":[{"value":"2021-01-31","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-04-29","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-05-26","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"5576262"}}