{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,4]],"date-time":"2026-04-04T09:02:48Z","timestamp":1775293368656,"version":"3.50.1"},"reference-count":41,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2023,4,24]],"date-time":"2023-04-24T00:00:00Z","timestamp":1682294400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100006013","name":"Emirates Center for Mobility Research of the United Arab Emirates University","doi-asserted-by":"publisher","award":["31R271"],"award-info":[{"award-number":["31R271"]}],"id":[{"id":"10.13039\/501100006013","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100006013","name":"Emirates Center for Mobility Research of the United Arab Emirates University","doi-asserted-by":"publisher","award":["AARE20-368"],"award-info":[{"award-number":["AARE20-368"]}],"id":[{"id":"10.13039\/501100006013","id-type":"DOI","asserted-by":"publisher"}]},{"name":"ASPIRE Award for Research Excellence","award":["31R271"],"award-info":[{"award-number":["31R271"]}]},{"name":"ASPIRE Award for Research Excellence","award":["AARE20-368"],"award-info":[{"award-number":["AARE20-368"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Autonomous vehicles (AVs) are predicted to change transportation; however, it is still difficult to maintain robust situation awareness in a variety of driving situations. To enhance AV perception, methods to integrate sensor data from the camera, radar, and LiDAR sensors have been proposed. However, due to rigidity in their fusion implementations, current techniques are not sufficiently robust in challenging driving scenarios (such as inclement weather, poor light, and sensor obstruction). 
These techniques fall into two main groups: (i) early fusion, which is ineffective when sensor data are distorted or noisy, and (ii) late fusion, which cannot jointly exploit features from multiple sensors and hence yields sub-optimal estimates. To overcome these limitations, this paper proposes a flexible selective sensor fusion framework that learns to recognize the current driving environment and fuses the optimal sensor combination, enhancing robustness without sacrificing efficiency. The proposed framework dynamically emulates early fusion, late fusion, and mixtures of both, allowing a quick decision on the best fusion approach. The framework includes versatile modules for pre-processing heterogeneous data (numeric, alphanumeric, image, and audio), selecting appropriate features, and efficiently fusing the selected features. Further, versatile object detection and classification models are proposed to detect and categorize objects accurately, and advanced ensembling, gating, and filtering techniques are introduced to select the optimal detection and classification model. Innovative methodologies are also proposed to construct an accurate context and decision rules. Widely used datasets such as KITTI, nuScenes, and RADIATE are employed in the experimental analysis to evaluate the proposed models. 
The proposed model performed well in both data-level and decision-level fusion activities and also outperformed other fusion models in terms of accuracy and efficiency.<\/jats:p>","DOI":"10.3390\/rs15092256","type":"journal-article","created":{"date-parts":[[2023,4,25]],"date-time":"2023-04-25T01:37:01Z","timestamp":1682386621000},"page":"2256","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":26,"title":["Multilevel Data and Decision Fusion Using Heterogeneous Sensory Data for Autonomous Vehicles"],"prefix":"10.3390","volume":"15","author":[{"given":"Henry Alexander","family":"Ignatious","sequence":"first","affiliation":[{"name":"College of Information Technology, United Arab Emirates University, Al Ain P.O. Box 15551, United Arab Emirates"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7488-0915","authenticated-orcid":false,"given":"Hesham","family":"El-Sayed","sequence":"additional","affiliation":[{"name":"College of Information Technology, United Arab Emirates University, Al Ain P.O. Box 15551, United Arab Emirates"},{"name":"Emirates Center for Mobility Research, United Arab Emirates University, Al Ain P.O. Box 15551, United Arab Emirates"}]},{"given":"Parag","family":"Kulkarni","sequence":"additional","affiliation":[{"name":"College of Information Technology, United Arab Emirates University, Al Ain P.O. Box 15551, United Arab Emirates"}]}],"member":"1968","published-online":{"date-parts":[[2023,4,24]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Rosique, F., Navarro, P.J., Fern\u00e1ndez, C., and Padilla, A. (2019). A systematic review of perception system and simulators for autonomous vehicles research. Sensors, 19.","DOI":"10.3390\/s19030648"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Pendleton, S.D., Andersen, H., Du, X., Shen, X., Meghjani, M., Eng, Y.H., Rus, D., and Ang Jr, M.H. (2017). 
Perception, planning, control, and coordination for autonomous vehicles. Machines, 5.","DOI":"10.3390\/machines5010006"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"9961","DOI":"10.1109\/TITS.2021.3096854","article-title":"A review and comparative study on probabilistic object detection in autonomous driving","volume":"23","author":"Feng","year":"2021","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"3728","DOI":"10.1007\/s10489-021-02653-3","article-title":"A human-like decision intelligence for obstacle avoidance in autonomous vehicle parking","volume":"52","author":"Nakrani","year":"2022","journal-title":"Appl. Intell."},{"key":"ref_5","unstructured":"Gupta, S., and Snigdh, I. (2022). Autonomous and Connected Heavy Vehicle Technology, Elsevier."},{"key":"ref_6","unstructured":"Bar-Shalom, Y., Li, X.R., and Kirubarajan, T. (2004). Estimation with Applications to Tracking and Navigation: Theory Algorithms and Software, John Wiley & Sons."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Chen, C., Rosa, S., Miao, Y., Lu, C.X., Wu, W., Markham, A., and Trigoni, N. (2019, January 15\u201320). Selective sensor fusion for neural visual-inertial odometry. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01079"},{"key":"ref_8","unstructured":"Chen, C., Rosa, S., Xiaoxuan Lu, C., Trigoni, N., and Markham, A. (2019). Selectfusion: A generic framework to selectively learn multisensory fusion. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Lee, S., Lee, D., Choi, P., and Park, D. (2020). Accuracy\u2013power controllable LiDAR sensor system with 3D object recognition for autonomous vehicle. Sensors, 20.","DOI":"10.3390\/s20195706"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Gokhale, V., Barrera, G.M., and Prasad, R.V. (2021, January 14\u201323). 
FEEL: Fast, energy-efficient localization for autonomous indoor vehicles. Proceedings of the ICC 2021-IEEE International Conference on Communications, Virtual Event.","DOI":"10.1109\/ICC42927.2021.9500500"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"16","DOI":"10.1016\/j.inffus.2015.01.002","article-title":"Context-based information fusion: A survey and discussion","volume":"25","author":"Snidaro","year":"2015","journal-title":"Inf. Fusion"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"5742","DOI":"10.3390\/s140405742","article-title":"Context-aware personal navigation using embedded sensor fusion in smartphones","volume":"14","author":"Saeedi","year":"2014","journal-title":"Sensors"},{"key":"ref_13","unstructured":"Board, N. (2020). Collision between a sport utility vehicle operating with partial driving automation and a crash attenuator mountain view, california. Accessed Oct., 30."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/TGRS.2019.2930246","article-title":"Context-aware convolutional neural network for object detection in VHR remote sensing imagery","volume":"58","author":"Gong","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"9","DOI":"10.22381\/CRLSJ13120211","article-title":"Autonomous vehicle decision-making algorithms and data-driven mobilities in networked transport systems","volume":"13","author":"Taylor","year":"2021","journal-title":"Contemp. Readings Law Soc. Justice"},{"key":"ref_16","unstructured":"Alexander, H., El-Sayed, H., Khan, M.A., and Kulkarni, P. (Sensors, 2023). 
Analyzing Factors Influencing Situation Awareness in Autonomous Vehicles\u2014A Survey, Sensors, Accepted for publication."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"27","DOI":"10.22381\/CRLSJ14220222","article-title":"The Algorithmic Governance of Autonomous Driving Behaviors: Multi-Sensor Data Fusion, Spatial Computing Technologies, and Movement Tracking Tools","volume":"14","author":"Kovacova","year":"2022","journal-title":"Contemp. Readings Law Soc. Justice"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Choi, J.D., and Kim, M.Y. (2022). A sensor fusion system with thermal infrared camera and LiDAR for autonomous vehicles and deep learning based object detection. ICT Express.","DOI":"10.1016\/j.icte.2021.12.016"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2293","DOI":"10.1177\/0954407019867492","article-title":"A multi-sensor fusion and object tracking algorithm for self-driving vehicles","volume":"233","author":"Yi","year":"2019","journal-title":"Proc. Inst. Mech. Eng. Part D J. Automob. Eng."},{"key":"ref_20","unstructured":"Mei, P., Karimi, H.R., Ma, F., Yang, S., and Huang, C. (2021, January 2\u20134). A Multi-sensor Information Fusion Method for Autonomous Vehicle Perception System. Proceedings of the Science and Technologies for Smart Cities: 7th EAI International Conference, SmartCity360\u00b0, Virtual Event."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1016\/j.comcom.2022.04.024","article-title":"6Blocks: 6G-enabled trust management scheme for decentralized autonomous vehicles","volume":"191","author":"Bhattacharya","year":"2022","journal-title":"Comput. Commun."},{"key":"ref_22","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2015, January 7\u201312). Faster R-CNN: Towards real-time object detection with region proposal networks. 
Proceedings of the Annual Conference on Neural Information Processing Systems 2015, Montreal, QC, Canada."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"3782","DOI":"10.1109\/TITS.2019.2892405","article-title":"A survey on 3d object detection methods for autonomous driving applications","volume":"20","author":"Arnold","year":"2019","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Nobis, F., Geisslinger, M., Weber, M., Betz, J., and Lienkamp, M. (2019, January 15\u201317). A deep learning-based radar and camera sensor fusion architecture for object detection. Proceedings of the 2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF), Solutions, Bonn, Germany.","DOI":"10.1109\/SDF.2019.8916629"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Shahian Jahromi, B., Tulabandhula, T., and Cetin, S. (2019). Real-time hybrid multi-sensor fusion framework for perception in autonomous vehicles. Sensors, 19.","DOI":"10.3390\/s19204357"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Xu, D., Anguelov, D., and Jain, A. (2018, January 18\u201322). Pointfusion: Deep sensor fusion for 3d bounding box estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00033"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Aljundi, R., Chakravarty, P., and Tuytelaars, T. (2017, January 21\u201326). Expert gate: Lifelong learning with a network of experts. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.753"},{"key":"ref_28","unstructured":"Mullapudi, R.T., Mark, W.R., Shazeer, N., and Fatahalian, K. (2018, January 18\u201322). Hydranets: Specialized dynamic architectures for efficient inference. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA."},{"key":"ref_29","unstructured":"Li, Y., Chen, Y., Wang, N., and Zhang, Z. (November, January 27). Scale-aware trident networks for object detection. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Wei, Z., Zhang, F., Chang, S., Liu, Y., Wu, H., and Feng, Z. (2022). MmWave Radar and Vision Fusion for Object Detection in Autonomous Driving: A Review. Sensors, 22.","DOI":"10.3390\/s22072542"},{"key":"ref_31","unstructured":"Hallyburton, R.S., Liu, Y., Cao, Y., Mao, Z.M., and Pajic, M. (2022, January 10\u201312). Security analysis of camera-lidar fusion against black-box attacks on autonomous vehicles. Proceedings of the 31st USENIX Security Symposium (USENIX SECURITY), Boston, MA, USA."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Ahmed, K., Baig, M.H., and Torresani, L. (2016, January 11\u201314). Network of experts for large-scale image categorization. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46478-7_32"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Ye, E., Spiegel, P., and Althoff, M. (2020, January 20\u201323). Cooperative raw sensor data fusion for ground truth generation in autonomous driving. Proceedings of the 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), Rhodes, Greece.","DOI":"10.1109\/ITSC45102.2020.9294477"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"24771","DOI":"10.1109\/JSEN.2021.3116057","article-title":"Improved Shape-Based Distance Method for Correlation Analysis of Multi-Radar Data Fusion in Self-Driving Vehicle","volume":"21","author":"Ren","year":"2021","journal-title":"IEEE Sensors J."},{"key":"ref_35","unstructured":"Liu, W., Liu, Y., and Bucknall, R. (2022). 
Filtering based multi-sensor data fusion algorithm for a reliable unmanned surface vehicle navigation. J. Mar. Eng. Technol., 1\u201317."},{"key":"ref_36","unstructured":"Alexander, H., El-Sayed, H., Khan, M.A., and Kulkarni, P. (Big Data, 2023). A versatile hybrid image fusion model to fuse multispectral image data, Big Data, Currently under review."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"72757","DOI":"10.1109\/ACCESS.2020.2987414","article-title":"DyReT: A Dynamic Rule Framing Engine Equipped With Trust Management for Vehicular Networks","volume":"8","author":"Alexander","year":"2020","journal-title":"IEEE Access"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"14643","DOI":"10.1109\/ACCESS.2022.3145972","article-title":"On the Integration of Enabling Wireless Technologies and Sensor Fusion for Next-Generation Connected and Autonomous Vehicles","volume":"10","author":"Butt","year":"2022","journal-title":"IEEE Access"},{"key":"ref_39","unstructured":"(2019, July 19). nuScenes. Available online: https:\/\/www.nuscenes.org\/nuscenes."},{"key":"ref_40","unstructured":"(2019, July 19). KITTI. Available online: https:\/\/paperswithcode.com\/dataset\/kitti."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Malawade, A.V., Mortlock, T., and Al Faruque, M.A. (2022, January 4\u20136). HydraFusion: Context-aware selective sensor fusion for robust and efficient autonomous vehicle perception. 
Proceedings of the 2022 ACM\/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS), Virtual.","DOI":"10.1109\/ICCPS54341.2022.00013"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/9\/2256\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:22:42Z","timestamp":1760124162000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/9\/2256"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,24]]},"references-count":41,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2023,5]]}},"alternative-id":["rs15092256"],"URL":"https:\/\/doi.org\/10.3390\/rs15092256","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,4,24]]}}}