{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,9]],"date-time":"2026-02-09T00:44:35Z","timestamp":1770597875453,"version":"3.49.0"},"reference-count":64,"publisher":"SAGE Publications","issue":"3","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["ICA"],"published-print":{"date-parts":[[2022,6,21]]},"abstract":"<jats:p>Autonomous vehicles are equipped with complementary sensors to perceive the environment accurately. Deep learning models have proven to be the most effective approach for computer vision problems. Therefore, in autonomous driving, it is essential to design reliable networks to fuse data from different sensors. In this work, we develop a novel data fusion architecture using camera and LiDAR data for object detection in autonomous driving. Given the sparsity of LiDAR data, developing multi-modal fusion models is a challenging task. Our proposal integrates an efficient LiDAR sparse-to-dense completion network into the pipeline of object detection models, achieving a more robust performance at different times of the day. The Waymo Open Dataset has been used for the experimental study, which is the most diverse detection benchmark in terms of weather and lighting conditions. The depth completion network is trained with the KITTI depth dataset, and transfer learning is used to obtain dense maps on Waymo. With the enhanced LiDAR data and the camera images, we explore early and middle fusion approaches using popular object detection models. The proposed data fusion network provides a significant improvement compared to single-modal detection at all times of the day, and outperforms previous approaches that upsample depth maps with classical image processing algorithms. 
Our multi-modal and multi-source approach achieves a 1.5, 7.5, and 2.1 mean AP increase at day, night, and dawn\/dusk, respectively, using four different object detection meta-architectures.<\/jats:p>","DOI":"10.3233\/ica-220681","type":"journal-article","created":{"date-parts":[[2022,5,20]],"date-time":"2022-05-20T15:47:14Z","timestamp":1653061634000},"page":"241-258","source":"Crossref","is-referenced-by-count":24,"title":["Object detection using depth completion and camera-LiDAR fusion for autonomous driving"],"prefix":"10.1177","volume":"29","author":[{"given":"Manuel","family":"Carranza-Garc\u00eda","sequence":"first","affiliation":[]},{"given":"F. Javier","family":"Gal\u00e1n-Sales","sequence":"additional","affiliation":[]},{"given":"Jos\u00e9 Mar\u00eda","family":"Luna-Romera","sequence":"additional","affiliation":[]},{"given":"Jos\u00e9 C.","family":"Riquelme","sequence":"additional","affiliation":[]}],"member":"179","reference":[{"key":"10.3233\/ICA-220681_ref1","doi-asserted-by":"crossref","first-page":"877","DOI":"10.1111\/mice.12540","article-title":"Modeling and field experiments on autonomous vehicle lane changing with surrounding human-driven vehicles","volume":"36","author":"Wang","year":"2020","journal-title":"Computer-Aided Civil and Infrastructure Engineering"},{"issue":"2","key":"10.3233\/ICA-220681_ref2","doi-asserted-by":"crossref","first-page":"123","DOI":"10.3233\/ICA-220675","article-title":"An integrated low-cost system for object detection in underwater environments","volume":"29","author":"Foresti","year":"2022","journal-title":"Integrated Computer-Aided Engineering"},{"issue":"3","key":"10.3233\/ICA-220681_ref3","doi-asserted-by":"crossref","first-page":"273","DOI":"10.3233\/ICA-180596","article-title":"Multi-object tracking with discriminant correlation filter based deep learning tracker","volume":"26","author":"Yang","year":"2019","journal-title":"Integrated Computer-Aided 
Engineering"},{"issue":"7","key":"10.3233\/ICA-220681_ref4","doi-asserted-by":"crossref","first-page":"890","DOI":"10.1111\/mice.12572","article-title":"Reinforcement learning-based bird-view automated vehicle control to avoid crossing traffic","volume":"36","author":"Wang","year":"2021","journal-title":"Computer-Aided Civil and Infrastructure Engineering"},{"issue":"7","key":"10.3233\/ICA-220681_ref5","doi-asserted-by":"crossref","first-page":"858","DOI":"10.1111\/mice.12506","article-title":"A simulation-based optimization model for infrastructure planning for electric autonomous vehicle sharing","volume":"36","author":"Zhao","year":"2021","journal-title":"Computer-Aided Civil and Infrastructure Engineering"},{"key":"10.3233\/ICA-220681_ref6","doi-asserted-by":"crossref","first-page":"305","DOI":"10.1111\/mice.12495","article-title":"A deep learning algorithm for simulating autonomous driving considering prior knowledge and temporal information","volume":"35","author":"Chen","year":"2019","journal-title":"Computer-Aided Civil and Infrastructure Engineering"},{"key":"10.3233\/ICA-220681_ref8","doi-asserted-by":"crossref","unstructured":"Caesar H, et al. Nuscenes: A multimodal dataset for autonomous driving. 2020; 11618-11628.","DOI":"10.1109\/CVPR42600.2020.01164"},{"key":"10.3233\/ICA-220681_ref9","unstructured":"Hesai, Scale. PandaSet: Public large-scale dataset for autonomous driving. 2019. (Accessed 7 February 2022). 
Available online: https:\/\/scale.com\/open-datasets\/pandaset."},{"issue":"3","key":"10.3233\/ICA-220681_ref10","doi-asserted-by":"crossref","first-page":"1341","DOI":"10.1109\/TITS.2020.2972974","article-title":"Deep multi-modal object detection and semantic segmentation for autonomous driving: Datasets, methods, and challenges","volume":"22","author":"Feng","year":"2021","journal-title":"IEEE Transactions on Intelligent Transportation Systems"},{"issue":"12","key":"10.3233\/ICA-220681_ref11","doi-asserted-by":"crossref","first-page":"1549","DOI":"10.1111\/mice.12749","article-title":"Deep learning-based object identification with instance segmentation and pseudo-liDAR point cloud for work zone safety","volume":"36","author":"Shen","year":"2021","journal-title":"Computer-Aided Civil and Infrastructure Engineering"},{"key":"10.3233\/ICA-220681_ref12","doi-asserted-by":"crossref","first-page":"352","DOI":"10.1016\/j.measurement.2014.09.063","article-title":"3D displacement measurement model for health monitoring of structures using a motion capture system","volume":"59","author":"Park","year":"2015","journal-title":"Measurement"},{"key":"10.3233\/ICA-220681_ref13","doi-asserted-by":"crossref","first-page":"576","DOI":"10.1016\/j.asoc.2017.05.029","article-title":"Evolutionary learning based sustainable strain sensing model for structural health monitoring of high-rise buildings","volume":"58","author":"Oh","year":"2017","journal-title":"Applied Soft Computing"},{"key":"10.3233\/ICA-220681_ref14","doi-asserted-by":"crossref","unstructured":"Kalenjuk S, Lienhart W, Rebhan M. Processing of mobile laser scanning data for large-scale deformation monitoring of anchored retaining structures along highways. Computer-Aided Civil and Infrastructure Engineering. 2021; 36(6): 678-694.","DOI":"10.1111\/mice.12656"},{"key":"10.3233\/ICA-220681_ref15","doi-asserted-by":"crossref","unstructured":"Rashed H, Ramzy M, Vaquero V, El\u00a0Sallab A, Sistu G, Yogamani S. 
FuseMODNet: Real-time camera and liDAR based moving object detection for robust low-light autonomous driving. Proceedings\u00a0\u2013 International Conference on Computer Vision Workshop, ICCVW. 2019; 2393-2402.","DOI":"10.1109\/ICCVW.2019.00293"},{"key":"10.3233\/ICA-220681_ref16","doi-asserted-by":"crossref","unstructured":"Geiger A, Lenz P, Urtasun R. Are we ready for autonomous driving? The KITTI Vision Benchmark Suite. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition. 2012; 3354-3361.","DOI":"10.1109\/CVPR.2012.6248074"},{"key":"10.3233\/ICA-220681_ref17","doi-asserted-by":"crossref","unstructured":"Ku J, Harakeh A, Waslander SL. In defense of classical image processing: Fast depth completion on the CPU. 15th Conference on Computer and Robot Vision (CRV). 2018; 16-22.","DOI":"10.1109\/CRV.2018.00013"},{"issue":"6","key":"10.3233\/ICA-220681_ref18","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster r-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2017","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"10.3233\/ICA-220681_ref19","doi-asserted-by":"crossref","unstructured":"Zhang S, Chi C, Yao Y, Lei Z, Li SZ. Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection. 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2020; 9756-9765.","DOI":"10.1109\/CVPR42600.2020.00978"},{"issue":"2","key":"10.3233\/ICA-220681_ref20","doi-asserted-by":"crossref","first-page":"318","DOI":"10.1109\/TPAMI.2018.2858826","article-title":"Focal loss for dense object detection","volume":"42","author":"Lin","year":"2020","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"10.3233\/ICA-220681_ref21","doi-asserted-by":"crossref","unstructured":"Chen Q, Wang Y, Yang T, Zhang X, Cheng J, Sun J. 
You only look one-level feature. 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2021; 13034-13043.","DOI":"10.1109\/CVPR46437.2021.01284"},{"key":"10.3233\/ICA-220681_ref22","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016; 770-778.","DOI":"10.1109\/CVPR.2016.90"},{"key":"10.3233\/ICA-220681_ref23","doi-asserted-by":"crossref","unstructured":"Lin T, Doll\u00e1r P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017; 936-944.","DOI":"10.1109\/CVPR.2017.106"},{"key":"10.3233\/ICA-220681_ref24","doi-asserted-by":"crossref","unstructured":"Tian Z, Shen C, Chen H, He T. FCOS: Fully convolutional one-stage object detection. in: 2019 IEEE\/CVF International Conference on Computer Vision (ICCV). 2019; 9626-9635.","DOI":"10.1109\/ICCV.2019.00972"},{"key":"10.3233\/ICA-220681_ref25","unstructured":"Zhou X, Wang D, Kr\u00e4henb\u00fchl P. Objects as points. CoRR. 2019; abs\/1904.07850."},{"key":"10.3233\/ICA-220681_ref26","first-page":"213","article-title":"End-to-end object detection with transformers","author":"Carion","year":"2020","journal-title":"Computer Vision\u00a0\u2013 ECCV 2020"},{"key":"10.3233\/ICA-220681_ref27","doi-asserted-by":"crossref","first-page":"740","DOI":"10.1007\/978-3-319-10602-1_48","article-title":"Microsoft COCO: Common objects in context","author":"Lin","year":"2014","journal-title":"Computer Vision\u00a0\u2013 ECCV 2014"},{"key":"10.3233\/ICA-220681_ref28","doi-asserted-by":"crossref","unstructured":"Xie S, Girshick R, Doll\u00e1r P, Tu Z, He K. Aggregated residual transformations for deep neural networks. 2017.","DOI":"10.1109\/CVPR.2017.634"},{"key":"10.3233\/ICA-220681_ref29","doi-asserted-by":"crossref","unstructured":"Cai Z, Vasconcelos N. 
Cascade r-CNN: Delving into high quality object detection. in: 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 2018; 6154-6162.","DOI":"10.1109\/CVPR.2018.00644"},{"issue":"1","key":"10.3233\/ICA-220681_ref30","doi-asserted-by":"crossref","first-page":"81","DOI":"10.3233\/ICA-200636","article-title":"Improving multi-class boosting-based object detection","volume":"28","author":"Buenaposada","year":"2021","journal-title":"Integrated Computer-Aided Engineering"},{"issue":"1","key":"10.3233\/ICA-220681_ref31","doi-asserted-by":"crossref","first-page":"89","DOI":"10.3390\/rs13010089","article-title":"On the performance of one-stage and two-stage object detectors in autonomous vehicles using camera data","volume":"13","author":"Carranza-Garc\u00eda","year":"2021","journal-title":"Remote Sensing"},{"key":"10.3233\/ICA-220681_ref32","doi-asserted-by":"crossref","first-page":"229","DOI":"10.1016\/j.neucom.2021.04.001","article-title":"Enhancing object detection for autonomous driving by optimizing anchor generation and addressing class imbalance","volume":"449","author":"Carranza-Garc\u00eda","year":"2021","journal-title":"Neurocomputing"},{"key":"10.3233\/ICA-220681_ref33","doi-asserted-by":"crossref","first-page":"1089","DOI":"10.3390\/s19051089","article-title":"Anchor generation optimization and region of interest assignment for vehicle detection","volume":"19","author":"Wang","year":"2019","journal-title":"Sensors"},{"key":"10.3233\/ICA-220681_ref34","first-page":"1","article-title":"Vehicle detection and tracking in adverse weather using a deep learning framework","author":"Hassaballah","year":"2020","journal-title":"IEEE Transactions on Intelligent Transportation Systems"},{"issue":"4","key":"10.3233\/ICA-220681_ref35","doi-asserted-by":"crossref","first-page":"973","DOI":"10.1109\/TPAMI.2017.2700460","article-title":"Towards reaching human performance in pedestrian detection","volume":"40","author":"Zhang","year":"2018","journal-title":"IEEE 
Transactions on Pattern Analysis and Machine Intelligence"},{"key":"10.3233\/ICA-220681_ref36","doi-asserted-by":"crossref","unstructured":"Lian J, Yin Y, Li L, Wang Z, Zhou Y. Small object detection in traffic scenes based on attention feature fusion. Sensors. 2021; 21(9).","DOI":"10.3390\/s21093031"},{"key":"10.3233\/ICA-220681_ref37","doi-asserted-by":"crossref","first-page":"332","DOI":"10.1016\/j.neucom.2018.08.009","article-title":"Evaluation of deep neural networks for traffic sign detection systems","volume":"316","author":"Arcos-Garc\u00eda","year":"2018","journal-title":"Neurocomputing"},{"key":"10.3233\/ICA-220681_ref38","doi-asserted-by":"crossref","unstructured":"Uhrig J, Schneider N, Schneider L, Franke U, Brox T, Geiger A. Sparsity invariant CNNs. in: 2017 International Conference on 3D Vision (3DV). 2017; 11-20.","DOI":"10.1109\/3DV.2017.00012"},{"key":"10.3233\/ICA-220681_ref39","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/ACCESS.2020.3045681","article-title":"DepthNet: Real-time liDAR point cloud depth completion for autonomous vehicles","volume":"8","author":"Bai","year":"2020","journal-title":"IEEE Access"},{"key":"10.3233\/ICA-220681_ref40","doi-asserted-by":"crossref","unstructured":"Lu K, Barnes N, Anwar S, Zheng L. From depth what can you see? Depth Completion Via Auxiliary Image Reconstruction. In: 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 
2020; 11303-11312.","DOI":"10.1109\/CVPR42600.2020.01132"},{"issue":"2","key":"10.3233\/ICA-220681_ref41","doi-asserted-by":"crossref","first-page":"1808","DOI":"10.1109\/LRA.2021.3060396","article-title":"DenseLiDAR: A real-time pseudo dense depth guided depth completion network","volume":"6","author":"Gu","year":"2021","journal-title":"IEEE Robotics and Automation Letters"},{"key":"10.3233\/ICA-220681_ref42","doi-asserted-by":"crossref","first-page":"79801","DOI":"10.1109\/ACCESS.2020.2990212","article-title":"Deep architecture with cross guidance between single image and sparse liDAR data for depth completion","volume":"8","author":"Lee","year":"2020","journal-title":"IEEE Access"},{"key":"10.3233\/ICA-220681_ref43","doi-asserted-by":"crossref","unstructured":"Ma F, Cavalheiro GV, Karaman S. Self-supervised sparse-to-dense: Self-supervised depth completion from liDAR and monocular camera. in: 2019 International Conference on Robotics and Automation (ICRA). 2019; 3288-3295.","DOI":"10.1109\/ICRA.2019.8793637"},{"key":"10.3233\/ICA-220681_ref44","doi-asserted-by":"crossref","unstructured":"Xu Y, Zhu X, Shi J, Zhang G, Bao H, Li H. Depth completion from sparse liDAR data with depth-normal constraints. in: 2019 IEEE\/CVF International Conference on Computer Vision (ICCV). 2019; 2811-2820.","DOI":"10.1109\/ICCV.2019.00290"},{"key":"10.3233\/ICA-220681_ref45","doi-asserted-by":"crossref","unstructured":"Tang J, Tian F, Feng W, Li J, Tan P. Learning guided convolutional network for depth completion. IEEE Transactions on Image Processing. 2021; 30: 1116-1129.","DOI":"10.1109\/TIP.2020.3040528"},{"key":"10.3233\/ICA-220681_ref46","doi-asserted-by":"crossref","unstructured":"Hu M, Wang S, Li B, Ning S, Fan L, Gong X. PENet: Towards precise and efficient image guided depth completion. 2021 IEEE International Conference on Robotics and Automation (ICRA). 
2021; 13656-13662.","DOI":"10.1109\/ICRA48506.2021.9561035"},{"key":"10.3233\/ICA-220681_ref47","doi-asserted-by":"crossref","unstructured":"Premebida C, Carreira JA, Batista J, Nunes U. Pedestrian detection combining RGB and dense LIDAR data. in: 2014 IEEE\/RSJ International Conference on Intelligent Robots and Systems. 2014; 4112-4117.","DOI":"10.1109\/IROS.2014.6943141"},{"key":"10.3233\/ICA-220681_ref48","unstructured":"Guo ZX, Liao WZ, Xiao YF, Veelaert P, Philips W. Deep learning fusion of RGB and depth images for pedestrian detection. in: 30th British Machine Vision Conference (BMVC), Proceedings. 2019; 1-13."},{"key":"10.3233\/ICA-220681_ref49","doi-asserted-by":"crossref","unstructured":"Ophoff T, Van\u00a0Beeck K, Goedem\u00e9 T. Exploring RGB+depth fusion for real-time object detection. Sensors. 2019; 19(4).","DOI":"10.3390\/s19040866"},{"key":"10.3233\/ICA-220681_ref50","doi-asserted-by":"crossref","unstructured":"Kim J, Kim J, Cho J. An advanced object classification strategy using YOLO through camera and liDAR sensor fusion. in: 2019 13th International Conference on Signal Processing and Communication Systems (ICSPCS). 2019; 1-5.","DOI":"10.1109\/ICSPCS47537.2019.9008742"},{"key":"10.3233\/ICA-220681_ref51","doi-asserted-by":"crossref","first-page":"1549","DOI":"10.1109\/IWCMC48107.2020.9148512","article-title":"Fusion strategy of multi-sensor based object detection for self-driving vehicles","author":"Li","year":"2020","journal-title":"2020 International Wireless Communications and Mobile Computing (IWCMC)"},{"key":"10.3233\/ICA-220681_ref52","doi-asserted-by":"crossref","unstructured":"Pfeuffer A, Dietmayer K. Optimal sensor data fusion architecture for object detection in adverse weather conditions. in: 2018 21st International Conference on Information Fusion (FUSION). 
2018; 1-8.","DOI":"10.23919\/ICIF.2018.8455757"},{"key":"10.3233\/ICA-220681_ref53","doi-asserted-by":"crossref","first-page":"172","DOI":"10.1016\/j.inffus.2021.07.004","article-title":"SaccadeFork: A lightweight multi-sensor fusion-based target detector","volume":"77","author":"Ouyang","year":"2022","journal-title":"Information Fusion"},{"key":"10.3233\/ICA-220681_ref54","doi-asserted-by":"crossref","unstructured":"Geng K, Dong G, Yin G, Hu J. Deep dual-modal traffic objects instance segmentation method using camera and LIDAR data for autonomous driving. Remote Sensing. 2020; 12(20).","DOI":"10.3390\/rs12203274"},{"key":"10.3233\/ICA-220681_ref55","doi-asserted-by":"crossref","first-page":"41799","DOI":"10.1109\/ACCESS.2021.3063692","article-title":"ISETAuto: Detecting vehicles with depth and radiance information","volume":"9","author":"Liu","year":"2021","journal-title":"IEEE Access"},{"key":"10.3233\/ICA-220681_ref56","doi-asserted-by":"crossref","unstructured":"Islam MM, Newaz AAR, Karimoddini A. A pedestrian detection and tracking framework for autonomous cars: Efficient fusion of camera and liDAR data. in: 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC). 2021; 1287-1292.","DOI":"10.1109\/SMC52423.2021.9658639"},{"key":"10.3233\/ICA-220681_ref57","unstructured":"Liu R, Lehman J, Molino P, Petroski\u00a0Such F, Frank E, Sergeev A, et al. An intriguing failing of convolutional neural networks and the coordconv solution. in: Advances in Neural Information Processing Systems. 2018."},{"issue":"10","key":"10.3233\/ICA-220681_ref58","doi-asserted-by":"crossref","first-page":"1345","DOI":"10.1109\/TKDE.2009.191","article-title":"A survey on transfer learning","volume":"22","author":"Pan","year":"2010","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"key":"10.3233\/ICA-220681_ref59","unstructured":"Chen K, Wang J, Pang J, Cao Y, Xiong Y, Li X, et al. MMDetection: Open MMLab detection toolbox and benchmark. CoRR. 
2019; abs\/1906.07155."},{"key":"10.3233\/ICA-220681_ref60","unstructured":"Carranza-Garc\u00eda M. Multi-modal fusion for 2D object detection in autonomous driving. 2022. (Accessed 28 March 2022). https:\/\/github.com\/carranza96\/waymo-detection-fusion."},{"key":"10.3233\/ICA-220681_ref61","doi-asserted-by":"crossref","unstructured":"He K, Girshick R, Dollar P. Rethinking imageNet pre-training. Proceedings of the IEEE International Conference on Computer Vision. 2019; 4917-4926.","DOI":"10.1109\/ICCV.2019.00502"},{"key":"10.3233\/ICA-220681_ref62","doi-asserted-by":"crossref","unstructured":"Shivakumar SS, Nguyen T, Miller ID, Chen SW, Kumar V, Taylor CJ. DFuseNet: Deep fusion of RGB and sparse depth information for image guided dense depth completion. in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC). 2019; 13-20.","DOI":"10.1109\/ITSC.2019.8917294"},{"key":"10.3233\/ICA-220681_ref63","doi-asserted-by":"crossref","unstructured":"Chodosh N, Wang CY, Lucey S. Deep convolutional compressed sensing for liDAR depth completion. in: Asian Conference on Computer Vision (ACCV). 
2018.","DOI":"10.1007\/978-3-030-20887-5_31"},{"issue":"3","key":"10.3233\/ICA-220681_ref64","doi-asserted-by":"crossref","first-page":"197","DOI":"10.3233\/ICA-2010-0345","article-title":"Enhanced probabilistic neural network with local decision circles: A robust classifier","volume":"17","author":"Ahmadlou","year":"2010","journal-title":"Integr Comput-Aided Eng"},{"issue":"12","key":"10.3233\/ICA-220681_ref65","doi-asserted-by":"crossref","first-page":"8675","DOI":"10.1007\/s00521-019-04359-7","article-title":"A dynamic ensemble learning algorithm for neural networks","volume":"32","author":"Alam","year":"2020","journal-title":"Neural Comput Appl"}],"container-title":["Integrated Computer-Aided Engineering"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/ICA-220681","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,11]],"date-time":"2025-03-11T09:42:20Z","timestamp":1741686140000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/ICA-220681"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,6,21]]},"references-count":64,"journal-issue":{"issue":"3"},"URL":"https:\/\/doi.org\/10.3233\/ica-220681","relation":{},"ISSN":["1069-2509","1875-8835"],"issn-type":[{"value":"1069-2509","type":"print"},{"value":"1875-8835","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,6,21]]}}}