{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T21:37:53Z","timestamp":1740173873078,"version":"3.37.3"},"reference-count":57,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,3,7]],"date-time":"2022-03-07T00:00:00Z","timestamp":1646611200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,3,7]],"date-time":"2022-03-07T00:00:00Z","timestamp":1646611200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Henry Ford Foundation Finland"},{"DOI":"10.13039\/501100002341","name":"Academy of Finland","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100002341","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Big Data"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>With the emergence of intelligent and connected transportation systems, driver perception and on-board safety systems could be extended with roadside camera units. Computer vision can be utilised to detect road users, conveying their presence to vehicles that cannot perceive them. However, accurate object detection algorithms are typically computationally heavy, depending on delay-prone cloud computation or expensive local hardware. Similar problems are faced in many intelligent transportation applications, in which road users are detected with a roadside camera. We propose utilising Motion Detection and Classification (MoDeCla) for road user detection. The approach is computationally lightweight and capable of running in real-time on an inexpensive single-board computer. 
To validate the applicability of MoDeCla in intelligent transportation applications, a detection benchmark was carried out on manually labelled data gathered from surveillance cameras overseeing urban areas in Espoo, Finland. Separate datasets were gathered during winter and summer, enabling comparison of the detectors in significantly different weather conditions. Compared to state-of-the-art object detectors, MoDeCla performed detection an order of magnitude faster, yet achieved similar accuracy. The most impactful deficiency of MoDeCla was errors in bounding box placement. Car headlights and long dark shadows were found especially difficult for the motion detection, which caused incorrect bounding boxes. Future improvements are also required for separately detecting overlapping road users.<\/jats:p>","DOI":"10.1186\/s40537-022-00581-8","type":"journal-article","created":{"date-parts":[[2022,3,7]],"date-time":"2022-03-07T09:02:58Z","timestamp":1646643778000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Motion detection and classification: ultra-fast road user detection"],"prefix":"10.1186","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0865-1775","authenticated-orcid":false,"given":"Risto","family":"Ojala","sequence":"first","affiliation":[]},{"given":"Jari","family":"Veps\u00e4l\u00e4inen","sequence":"additional","affiliation":[]},{"given":"Kari","family":"Tammi","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,3,7]]},"reference":[{"key":"581_CR1","unstructured":"WHO: The Top 10 Causes of Death. World Health Organization (WHO). World Health Organization (WHO). https:\/\/www.who.int\/news-room\/fact-sheets\/detail\/the-top-10-causes-of-death. 2018. Accessed 12 May 2020."},{"key":"581_CR2","unstructured":"National Highway Traffic Safety Administration: TRAFFIC SAFETY FACTS 2017. 
National Highway Traffic Safety Administration. https:\/\/crashstats.nhtsa.dot.gov\/Api\/Public\/ViewPublication\/812806. 2019. Accessed 09 Apr 2021."},{"key":"581_CR3","unstructured":"European Road Safety Observatory: Annual Accident Report 2018. European Road Safety Observatory. https:\/\/ec.europa.eu\/transport\/road_safety\/sites\/roadsafety\/files\/pdf\/statistics\/dacota\/asr2018.pdf. 2018. Accessed 09 Apr 2021."},{"key":"581_CR4","doi-asserted-by":"crossref","unstructured":"Ojala R, Veps\u00e4l\u00e4inen J, Hanhirova J, Hirvisalo V, Tammi K. Novel convolutional neural network-based roadside unit for accurate pedestrian localisation. In: IEEE Transactions on Intelligent Transportation Systems; 2019.","DOI":"10.1109\/TITS.2019.2932802"},{"issue":"3","key":"581_CR5","doi-asserted-by":"publisher","first-page":"920","DOI":"10.1109\/TITS.2011.2119372","volume":"12","author":"N Buch","year":"2011","unstructured":"Buch N, Velastin SA, Orwell J. A review of computer vision techniques for the analysis of urban traffic. IEEE Trans Intell Transport Syst. 2011;12(3):920\u201339.","journal-title":"IEEE Trans Intell Transport Syst"},{"issue":"4","key":"581_CR6","doi-asserted-by":"publisher","first-page":"416","DOI":"10.1109\/TITS.2005.858786","volume":"6","author":"S Atev","year":"2005","unstructured":"Atev S, Arumugam H, Masoud O, Janardan R, Papanikolopoulos NP. A vision-based approach to collision prediction at traffic intersections. IEEE Trans Intell Transport Syst. 2005;6(4):416\u201323.","journal-title":"IEEE Trans Intell Transport Syst"},{"key":"581_CR7","unstructured":"NVIDIA: Jetson Nano Developer Kit. 2021. NVIDIA. https:\/\/developer.nvidia.com\/embedded\/jetson-nano-developer-kit. Accessed 09 Apr 2021."},{"issue":"1","key":"581_CR8","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-017-0110-7","volume":"5","author":"C Kim","year":"2018","unstructured":"Kim C, Lee J, Han T, Kim Y-M. 
A hybrid framework combining background subtraction and deep neural networks for rapid person detection. J Big Data. 2018;5(1):1\u201324.","journal-title":"J Big Data"},{"issue":"3","key":"581_CR9","doi-asserted-by":"publisher","first-page":"209","DOI":"10.1049\/iet-its.2013.0012","volume":"8","author":"Y Zhang","year":"2014","unstructured":"Zhang Y, Yao D, Qiu TZ, Peng L. Scene-based pedestrian safety performance model in mixed traffic situation. IET Intell Transport Syst. 2014;8(3):209\u201318.","journal-title":"IET Intell Transport Syst"},{"issue":"11","key":"581_CR10","doi-asserted-by":"publisher","first-page":"1447","DOI":"10.1049\/iet-its.2019.0665","volume":"14","author":"Z Zhou","year":"2020","unstructured":"Zhou Z, Peng Y, Cai Y. Vision-based approach for predicting the probability of vehicle-pedestrian collisions at intersections. IET Intell Transport Syst. 2020;14(11):1447\u201355.","journal-title":"IET Intell Transport Syst"},{"key":"581_CR11","doi-asserted-by":"publisher","first-page":"105356","DOI":"10.1016\/j.ssci.2021.105356","volume":"142","author":"IV Pustokhina","year":"2021","unstructured":"Pustokhina IV, Pustokhin DA, Vaiyapuri T, Gupta D, Kumar S, Shankar K. An automated deep learning based anomaly detection in pedestrian walkways for vulnerable road users safety. Safety Sci. 2021;142:105356.","journal-title":"Safety Sci"},{"issue":"6","key":"581_CR12","doi-asserted-by":"publisher","first-page":"1973","DOI":"10.1109\/TITS.2017.2740303","volume":"19","author":"Y Zhou","year":"2017","unstructured":"Zhou Y, Liu L, Shao L, Mellor M. Fast automatic vehicle annotation for urban traffic surveillance. IEEE Trans Intell Transport Syst. 2017;19(6):1973\u201384.","journal-title":"IEEE Trans Intell Transport Syst"},{"key":"581_CR13","doi-asserted-by":"crossref","unstructured":"Zhang B, Zhang J. A traffic surveillance system for obtaining comprehensive information of the passing vehicles based on instance segmentation. 
In: IEEE Transactions on Intelligent Transportation Systems; 2020.","DOI":"10.1109\/TITS.2020.3001154"},{"issue":"1","key":"581_CR14","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-018-0157-0","volume":"5","author":"B Sharma","year":"2018","unstructured":"Sharma B, Kumar S, Tiwari P, Yadav P, Nezhurina MI. Ann based short-term traffic flow forecasting in undivided two lane highway. J Big Data. 2018;5(1):1\u201316.","journal-title":"J Big Data"},{"issue":"511\u2013518","key":"581_CR15","first-page":"3","volume":"1","author":"P Viola","year":"2001","unstructured":"Viola P, Jones M, et al. Rapid object detection using a boosted cascade of simple features. CVPR (1). 2001;1(511\u2013518):3.","journal-title":"CVPR (1)"},{"key":"581_CR16","doi-asserted-by":"crossref","unstructured":"Dalal N, Triggs B. Histograms of oriented gradients for human detection. In: 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR\u201905), vol. 1; 2005. pp. 886\u201393.","DOI":"10.1109\/CVPR.2005.177"},{"key":"581_CR17","unstructured":"Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, p. 1097\u2013105; 2012."},{"key":"581_CR18","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016; 770\u2013778.","DOI":"10.1109\/CVPR.2016.90"},{"key":"581_CR19","doi-asserted-by":"crossref","unstructured":"Luebke D. Cuda: Scalable parallel programming for high-performance scientific computing. In: 2008 5th IEEE International Symposium on Biomedical Imaging: from Nano to Macro, 2008. p. 836\u20138.","DOI":"10.1109\/ISBI.2008.4541126"},{"key":"581_CR20","doi-asserted-by":"crossref","unstructured":"Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C. 
Mobilenetv2: Inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2018. p. 4510\u20134520.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"581_CR21","unstructured":"Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv preprint arXiv:1602.07360 2016."},{"key":"581_CR22","doi-asserted-by":"crossref","unstructured":"Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C-Y, Berg AC. Ssd: Single shot multibox detector. In: European Conference on Computer Vision. New York: Springer; 2016. p. 21\u201337.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"581_CR23","doi-asserted-by":"crossref","unstructured":"Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016. p. 779\u201388.","DOI":"10.1109\/CVPR.2016.91"},{"key":"581_CR24","doi-asserted-by":"crossref","unstructured":"Redmon J, Farhadi A. Yolo9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017. p. 7263\u201371.","DOI":"10.1109\/CVPR.2017.690"},{"key":"581_CR25","unstructured":"Redmon J, Farhadi A. Yolov3: An incremental improvement. arXiv preprint arXiv:1804.02767 2018."},{"key":"581_CR26","unstructured":"Bochkovskiy A, Wang C-Y, Liao H-YM. Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934 2020."},{"key":"581_CR27","doi-asserted-by":"crossref","unstructured":"Girshick R, Donahue J, Darrell T, Malik J. Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014. p. 580\u20137.","DOI":"10.1109\/CVPR.2014.81"},{"key":"581_CR28","doi-asserted-by":"crossref","unstructured":"Girshick R. Fast r-cnn. 
In: Proceedings of the IEEE International Conference on Computer Vision, pp. 1440\u20131448, 2015.","DOI":"10.1109\/ICCV.2015.169"},{"key":"581_CR29","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Doll\u00e1r P, Zitnick CL. Microsoft coco: Common objects in context. In: European Conference on Computer Vision. New York: Springer; 2014. p. 740\u201355.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"581_CR30","doi-asserted-by":"crossref","unstructured":"Zhao Q, Sheng T, Wang Y, Tang Z, Chen Y, Cai L, Ling H. M2det: A single-shot object detector based on multi-level feature pyramid network. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019. p. 9259\u201366.","DOI":"10.1609\/aaai.v33i01.33019259"},{"key":"581_CR31","doi-asserted-by":"crossref","unstructured":"Tan M, Pang R, Le QV. Efficientdet: Scalable and efficient object detection. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. 2020. p. 10781\u201390.","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"581_CR32","doi-asserted-by":"crossref","unstructured":"Du X, El-Khamy M, Lee J, Davis L. Fused dnn: A deep neural network fusion approach to fast and robust pedestrian detection. In: 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). 2017. p. 953\u201361.","DOI":"10.1109\/WACV.2017.111"},{"key":"581_CR33","doi-asserted-by":"crossref","unstructured":"Doll\u00e1r P, Wojek C, Schiele B, Perona P. Pedestrian detection: A benchmark. In: 2009 IEEE Conference on Computer Vision and Pattern Recognition. 2009. p. 304\u201311.","DOI":"10.1109\/CVPR.2009.5206631"},{"key":"581_CR34","doi-asserted-by":"crossref","unstructured":"Wang L, Lu Y, Wang H, Zheng Y, Ye H, Xue X. Evolving boxes for fast vehicle detection. In: 2017 IEEE International Conference on Multimedia and Expo (ICME). 2017. p. 
1135\u201340.","DOI":"10.1109\/ICME.2017.8019461"},{"key":"581_CR35","doi-asserted-by":"publisher","first-page":"102907","DOI":"10.1016\/j.cviu.2020.102907","volume":"193","author":"L Wen","year":"2020","unstructured":"Wen L, Du D, Cai Z, Lei Z, Chang M-C, Qi H, Lim J, Yang M-H, Lyu S. Ua-detrac: A new benchmark and protocol for multi-object detection and tracking. Computer Vision Image Understand. 2020;193:102907.","journal-title":"Computer Vision Image Understand"},{"issue":"10","key":"581_CR36","doi-asserted-by":"publisher","first-page":"1319","DOI":"10.1049\/iet-its.2019.0367","volume":"14","author":"PAP Ferraz","year":"2020","unstructured":"Ferraz PAP, de Oliveira BAG, Ferreira FMF, da Silva Martins CAP. Three-stage rgbd architecture for vehicle and pedestrian detection using convolutional neural networks and stereo vision. IET Intell Transport Syst. 2020;14(10):1319\u201327.","journal-title":"IET Intell Transport Syst"},{"key":"581_CR37","doi-asserted-by":"crossref","unstructured":"Benenson R, Omran M, Hosang J, Schiele B. Ten years of pedestrian detection, what have we learned? In: European Conference on Computer Vision. New York: Springer; 2014. p. 613\u201327.","DOI":"10.1007\/978-3-319-16181-5_47"},{"issue":"4","key":"581_CR38","doi-asserted-by":"publisher","first-page":"1773","DOI":"10.1109\/TITS.2013.2266661","volume":"14","author":"S Sivaraman","year":"2013","unstructured":"Sivaraman S, Trivedi MM. Looking at vehicles on the road: A survey of vision-based vehicle detection, tracking, and behavior analysis. IEEE Trans Intell Transport Syst. 2013;14(4):1773\u201395.","journal-title":"IEEE Trans Intell Transport Syst"},{"key":"581_CR39","doi-asserted-by":"crossref","unstructured":"Benenson R, Mathias M, Timofte R, Van\u00a0Gool L. Pedestrian detection at 100 frames per second. In: 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012. p. 
2903\u201310.","DOI":"10.1109\/CVPR.2012.6248017"},{"key":"581_CR40","doi-asserted-by":"crossref","unstructured":"Doll\u00e1r P, Tu Z, Perona P, Belongie S. Integral channel features 2009.","DOI":"10.5244\/C.23.91"},{"issue":"2","key":"581_CR41","doi-asserted-by":"publisher","first-page":"153","DOI":"10.1007\/s11263-005-6644-8","volume":"63","author":"P Viola","year":"2005","unstructured":"Viola P, Jones MJ, Snow D. Detecting pedestrians using patterns of motion and appearance. Int J Computer Vision. 2005;63(2):153\u201361.","journal-title":"Int J Computer Vision"},{"issue":"1","key":"581_CR42","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1006\/jcss.1997.1504","volume":"55","author":"Y Freund","year":"1997","unstructured":"Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. J Computer Syst Sci. 1997;55(1):119\u201339.","journal-title":"J Computer Syst Sci"},{"issue":"1","key":"581_CR43","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1109\/6979.994794","volume":"3","author":"S Gupte","year":"2002","unstructured":"Gupte S, Masoud O, Martin RF, Papanikolopoulos NP. Detection and classification of vehicles. IEEE Trans Intell Transport Syst. 2002;3(1):37\u201347.","journal-title":"IEEE Trans Intell Transport Syst"},{"key":"581_CR44","unstructured":"Bai H, Wu J, Liu C. Motion and haar-like features based vehicle detection. In: 2006 12th International Multi-Media Modelling Conference; 2006. p. 4."},{"key":"581_CR45","doi-asserted-by":"crossref","unstructured":"Zhang Z, Cai Y, Huang K, Tan T. Real-time moving object classification with automatic scene division. In: 2007 IEEE International Conference on Image Processing, vol. 5, p. 149, 2007.","DOI":"10.1109\/ICIP.2007.4379787"},{"issue":"7","key":"581_CR46","doi-asserted-by":"publisher","first-page":"773","DOI":"10.1016\/j.patrec.2005.11.005","volume":"27","author":"Z Zivkovic","year":"2006","unstructured":"Zivkovic Z, Van Der Heijden F. 
Efficient adaptive density estimation per image pixel for the task of background subtraction. Pattern Recognit Lett. 2006;27(7):773\u201380.","journal-title":"Pattern Recognit Lett"},{"issue":"1","key":"581_CR47","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1016\/0734-189X(85)90016-7","volume":"30","author":"S Suzuki","year":"1985","unstructured":"Suzuki S, et al. Topological structural analysis of digitized binary images by border following. Computer Vision Graphics Image Processing. 1985;30(1):32\u201346.","journal-title":"Computer Vision Graphics Image Processing"},{"key":"581_CR48","unstructured":"Bradski G. The OpenCV Library. Dr. Dobb\u2019s Journal of Software Tools. 2000."},{"key":"581_CR49","unstructured":"Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L et al. Pytorch: An imperative style, high-performance deep learning library. In: Advances in Neural Information Processing Systems, 2019. p. 8024\u201335."},{"key":"581_CR50","unstructured":"NVIDIA: NVIDIA TensorRT. NVIDIA. https:\/\/developer.nvidia.com\/tensorrt. Accessed 09 Apr 2021."},{"key":"581_CR51","doi-asserted-by":"crossref","unstructured":"Deng Y, Luo P, Loy CC, Tang X. Pedestrian attribute recognition at far distance. In: Proceedings of the 22nd ACM International Conference on Multimedia. 2014. p. 789\u201392.","DOI":"10.1145\/2647868.2654966"},{"issue":"10","key":"581_CR52","doi-asserted-by":"publisher","first-page":"5129","DOI":"10.1109\/TIP.2018.2848705","volume":"27","author":"Z Luo","year":"2018","unstructured":"Luo Z, Branchaud-Charron F, Lemaire C, Konrad J, Li S, Mishra A, Achkar A, Eichel J, Jodoin P-M. Mio-tcd: A new benchmark dataset for vehicle classification and localization. IEEE Trans Image Process. 2018;27(10):5129\u201341.","journal-title":"IEEE Trans Image Process"},{"key":"581_CR53","unstructured":"Krizhevsky A, Hinton G, et al. Learning multiple layers of features from tiny images. 
Citeseer: Technical report; 2009."},{"key":"581_CR54","unstructured":"Franklin D. Deep Learning Inference Benchmarking Instructions. NVIDIA. NVIDIA. 2019; https:\/\/forums.developer.nvidia.com\/t\/deep-learning-inference-benchmarking-instructions\/73291. Accessed 09 Apr 2021."},{"key":"581_CR55","unstructured":"Jung J. TensorRT Demos. https:\/\/github.com\/jkjung-avt\/tensorrt_demos. 2021. Accessed 21 Sept 2021."},{"key":"581_CR56","unstructured":"Qijie Z. M2Det. https:\/\/github.com\/qijiezhao\/M2Det. 2019. Accessed 21 Sept 2021."},{"key":"581_CR57","unstructured":"Yet Another EfficientDet Pytorch. https:\/\/github.com\/zylo117\/Yet-Another-EfficientDet-Pytorch. 2020. Accessed 21 Sept 2021."}],"container-title":["Journal of Big Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-022-00581-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s40537-022-00581-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-022-00581-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,9,19]],"date-time":"2024-09-19T21:25:15Z","timestamp":1726781115000},"score":1,"resource":{"primary":{"URL":"https:\/\/journalofbigdata.springeropen.com\/articles\/10.1186\/s40537-022-00581-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,7]]},"references-count":57,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["581"],"URL":"https:\/\/doi.org\/10.1186\/s40537-022-00581-8","relation":{},"ISSN":["2196-1115"],"issn-type":[{"type":"electronic","value":"2196-1115"}],"subject":[],"published":{"date-parts":[[2022,3,7]]},"assertion":[{"value":"3 November 
2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 February 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 March 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"28"}}