{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T15:31:20Z","timestamp":1772724680242,"version":"3.50.1"},"reference-count":61,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T00:00:00Z","timestamp":1704240000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T00:00:00Z","timestamp":1704240000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100012166","name":"National Key R &D Program of China","doi-asserted-by":"crossref","award":["2020AAA0104500"],"award-info":[{"award-number":["2020AAA0104500"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62276153"],"award-info":[{"award-number":["62276153"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100020721","name":"Guoqiang Institute, Tsinghua University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100020721","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J AUDIO SPEECH MUSIC PROC."],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Acoustic scene classification (ASC) is the process of identifying the acoustic environment or scene from which an audio signal is recorded. In this work, we propose an encoder-decoder-based approach to ASC, which is borrowed from the SegNet in image semantic segmentation tasks. 
We also propose a novel feature normalization method named Mixup Normalization, which combines channel-wise instance normalization and the Mixup method to learn scene-relevant information and discard device-specific information. In addition, we propose an event extraction block, which can extract the accurate semantic segmentation region from the segmentation network, to imitate the effect of image segmentation on audio features. With four data augmentation techniques, our best single system achieved an average accuracy of 71.26% on different devices in the Detection and Classification of Acoustic Scenes and Events (DCASE) 2020 ASC Task 1A dataset. This result exceeds the DCASE 2020 challenge Task 1A baseline system by a margin of at least 17%. It has lower complexity and higher performance compared with other state-of-the-art CNN models, without using any supplementary data other than the official challenge dataset.<\/jats:p>","DOI":"10.1186\/s13636-023-00323-5","type":"journal-article","created":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T13:02:25Z","timestamp":1704286945000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Deep semantic learning for acoustic scene classification"],"prefix":"10.1186","volume":"2024","author":[{"given":"Yun-Fei","family":"Shao","sequence":"first","affiliation":[]},{"given":"Xin-Xin","family":"Ma","sequence":"additional","affiliation":[]},{"given":"Yong","family":"Ma","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3841-1959","authenticated-orcid":false,"given":"Wei-Qiang","family":"Zhang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,1,3]]},"reference":[{"issue":"3","key":"323_CR1","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1109\/MSP.2014.2326181","volume":"32","author":"D Barchiesi","year":"2015","unstructured":"D. 
Barchiesi, D. Giannoulis, D. Stowell, M. Plumbley, Acoustic scene classification: classifying environments from the sounds they produce. IEEE Signal Process. Mag. 32(3), 16\u201334 (2015)","journal-title":"IEEE Signal Process. Mag."},{"key":"323_CR2","unstructured":"Y. Han, J. Park, Convolutional neural networks with binaural representations and background subtraction for acoustic scene classification. Tech. Rep., DCASE 2017 Challenge (2017)"},{"key":"323_CR3","unstructured":"H. Zeinali, L. Burget, J. Cernocky, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2018 Workshop (DCASE2018). Convolutional neural networks and x-vector embedding for DCASE2018 acoustic scene classification challenge (Zenodo, Geneve, 2018)"},{"key":"323_CR4","unstructured":"Y. Sakashita, M. Aono, Acoustic scene classification by ensemble of spectrograms based on adaptive temporal divisions. Tech. Rep., DCASE 2018 Challenge (2018)"},{"key":"323_CR5","unstructured":"DCASE. Detection and classification of acoustic scenes and events 2020 task 1a (2020), https:\/\/dcase.community\/challenge2020\/task-acoustic-scene-classification-results-a"},{"issue":"12","key":"323_CR6","doi-asserted-by":"publisher","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","volume":"39","author":"V Badrinarayanan","year":"2017","unstructured":"V. Badrinarayanan, A. Kendall, R. Cipolla, SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481\u20132495 (2017)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"323_CR7","unstructured":"X. Ma, Y. Shao, Y. Ma, W.Q. Zhang, in Proceedings of Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). Deep semantic encoder-decoder network for acoustic scene classification with multiple devices (IEEE, Piscataway, NJ, 2020), pp. 365\u2013370"},{"key":"323_CR8","unstructured":"X. Ma, Y. Shao, Y. Ma, W.-Q. 
Zhang, THUEE submission for DCASE 2020 challenge task1a. Tech. Rep., DCASE 2020 Challenge (2020)"},{"key":"323_CR9","unstructured":"S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, (2015), https:\/\/arxiv.org\/abs\/1502.03167"},{"key":"323_CR10","doi-asserted-by":"crossref","unstructured":"D. Ulyanov, A. Vedaldi, V. Lempitsky, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR),\u00a0Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. (IEEE, Piscataway, 2017), pp. 4105\u20134113","DOI":"10.1109\/CVPR.2017.437"},{"key":"323_CR11","unstructured":"DCASE. Detection and classification of acoustic scenes and events (2020), http:\/\/dcase.community\/"},{"key":"323_CR12","unstructured":"DCASE. Detection and classification of acoustic scenes and events challenge 2018 (2018), http:\/\/dcase.community\/challenge2018\/task-acoustic-scene-classification-results-a"},{"key":"323_CR13","unstructured":"DCASE. Detection and classification of acoustic scenes and events challenge 2019 (2019), http:\/\/dcase.community\/challenge2019\/task-acoustic-scene-classification#subtask-a"},{"key":"323_CR14","unstructured":"DCASE. Detection and classification of acoustic scenes and events challenge 2020 (2020), http:\/\/dcase.community\/challenge2020\/task-acoustic-scene-classification#subtask-a"},{"key":"323_CR15","doi-asserted-by":"crossref","unstructured":"J.T. Geiger, B. Schuller, G. Rigoll, in 2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics. Large-scale audio feature extraction and SVM for acoustic scene classification (IEEE, Piscataway, 2013), pp. 1\u20134","DOI":"10.1109\/WASPAA.2013.6701857"},{"key":"323_CR16","doi-asserted-by":"crossref","unstructured":"S. Mun, S. Park, Y. Lee, H. Ko, Deep neural network bottleneck feature for acoustic scene classification. Tech. 
Rep., DCASE 2016 Challenge (2016)","DOI":"10.21437\/Interspeech.2016-1112"},{"key":"323_CR17","unstructured":"G. Vikaskumar, S. Waldekar, D. Paul, G. Saha, Acoustic scene classification using block-based MFCC features. Tech. Rep., DCASE 2016 Challenge (2016)"},{"key":"323_CR18","unstructured":"W. Zheng, J. Yi, X. Xing, X. Liu, S. Peng, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017). Acoustic scene classification using deep convolutional neural network and multiple spectrograms fusion (Zenodo, Geneve, 2017)"},{"key":"323_CR19","unstructured":"A. Schindler, T. Lidy, A. Rauber, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017). Multi-temporal resolution convolutional neural networks for acoustic scene classification (Zenodo, Geneve, 2017)"},{"key":"323_CR20","unstructured":"H. Wang, Y. Zou, D. Chong, Acoustic scene classification with spectrogram processing strategies, (2020), https:\/\/arxiv.org\/abs\/2007.03781"},{"key":"323_CR21","doi-asserted-by":"crossref","unstructured":"J. Sun, X. Liu, X. Mei, J. Zhao, M.D. Plumbley, V. K\u0131l\u0131\u00e7, W. Wang, in 30th European Signal Processing Conference (EUSIPCO). Deep neural decision forest for acoustic scene classification (IEEE, Piscataway, 2022), pp. 772\u2013776","DOI":"10.23919\/EUSIPCO55093.2022.9909575"},{"key":"323_CR22","unstructured":"DCASE. Low-complexity acoustic scene classification with multiple devices (2021), https:\/\/dcase.community\/challenge2021\/task-acoustic-scene-classification-results-a"},{"issue":"3","key":"323_CR23","doi-asserted-by":"publisher","first-page":"279","DOI":"10.1109\/LSP.2017.2657381","volume":"24","author":"J Salamon","year":"2017","unstructured":"J. Salamon, J.P. Bello, Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Process. Lett. 24(3), 279\u2013283 (2017)","journal-title":"IEEE Signal Process. 
Lett."},{"key":"323_CR24","unstructured":"S.H. Bae, I. Choi, N.S. Kim, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017 Workshop (DCASE2017). Acoustic scene classification using parallel combination of LSTM and CNN (Zenodo, Geneve, 2017)"},{"key":"323_CR25","doi-asserted-by":"crossref","unstructured":"S. Phaye, E. Benetos, Y. Wang, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Subspectralnet-using sub-spectrogram based convolutional neural networks for acoustic scene classification (IEEE, Piscataway, 2019), pp. 825\u2013829","DOI":"10.1109\/ICASSP.2019.8683288"},{"key":"323_CR26","doi-asserted-by":"crossref","unstructured":"M.D. McDonnell, W. Gao, in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Acoustic scene classification using deep residual networks with late fusion of separated high and low frequency paths (IEEE, Piscataway, 2020), pp. 141\u2013145","DOI":"10.1109\/ICASSP40776.2020.9053274"},{"key":"323_CR27","doi-asserted-by":"crossref","unstructured":"H. Wang, Y. Zou, W. Wang, in Interspeech 2021. SpecAugment++: A hidden space data augmentation method for acoustic scene classification (ISCA, Baixas, 2021), pp. 551\u2013555","DOI":"10.21437\/Interspeech.2021-140"},{"key":"323_CR28","first-page":"2672","volume":"3","author":"IJ Goodfellow","year":"2014","unstructured":"I.J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, Generative adversarial networks. Adv. Neural Inf. Process Syst. 3, 2672\u20132680 (2014)","journal-title":"Adv. Neural Inf. Process Syst."},{"key":"323_CR29","unstructured":"S. Mun, S. Park, D. Han, H. Ko, Generative adversarial network based acoustic scene training set augmentation and selection using SVM hyper-plane. Tech. Rep., DCASE 2017 Challenge (2017)"},{"key":"323_CR30","doi-asserted-by":"crossref","unstructured":"H. Hu, C.H.H. Yang, X. Xia, X. Bai, X. Tang, Y. Wang, S. 
Niu, L. Chai, J. Li, H. Zhu, F. Bao, Y. Zhao, S.M. Siniscalchi, Y. Wang, J. Du, C.H. Lee, Device-robust acoustic scene classification based on two-stage categorization and data augmentation. Tech. Rep., DCASE 2020 Challenge (2020)","DOI":"10.1109\/ICASSP39728.2021.9414835"},{"key":"323_CR31","unstructured":"S. Suh, S. Park, Y. Jeong, T. Lee, Designing acoustic scene classification models with CNN variants. Tech. Rep., DCASE 2020 Challenge (2020)"},{"key":"323_CR32","doi-asserted-by":"crossref","unstructured":"W. Gao, M.D. McDonnell, Acoustic scene classification using deep residual networks with focal loss and mild domain adaptation. Tech. Rep., DCASE 2020 Challenge (2020)","DOI":"10.1109\/ICASSP40776.2020.9053274"},{"key":"323_CR33","unstructured":"A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, MobileNets: Efficient convolutional neural networks for mobile vision applications, (2017), https:\/\/arxiv.org\/abs\/1704.04861"},{"key":"323_CR34","unstructured":"B. Kim, S. Yang, J. Kim, S. Chang, QTI submission to DCASE 2021: Residual normalization for device-imbalanced acoustic scene classification with efficient design. Tech. Rep., DCASE 2021 Challenge (2021)"},{"key":"323_CR35","unstructured":"K. Koutini, S. Jan, G. Widmer, CPJKU submission to DCASE21: Cross-device audio scene classification with wide sparse frequency-damped CNNs. Tech. Rep., DCASE 2021 Challenge (2021)"},{"key":"323_CR36","unstructured":"H.S. Heo, J.w. Jung, H.j. Shim, B.J. Lee, Clova submission for the DCASE 2021 challenge: Acoustic scene classification using light architectures and device augmentation. Tech. Rep., DCASE 2021 Challenge (2021)"},{"key":"323_CR37","unstructured":"H. Yen, C.H.H. Yang, H. Hu, S.M. Siniscalchi, Q. Wang, Y. Wang, X. Xia, Y. Zhao, Y. Wu, Y. Wang, J. Du, C.H. 
Lee, A lottery ticket hypothesis framework for low-complexity device-robust neural acoustic scene classification, (2021), https:\/\/arxiv.org\/abs\/2107.01461"},{"key":"323_CR38","unstructured":"J. Frankle, M. Carbin, The lottery ticket hypothesis: Finding sparse, trainable neural networks, (2018), https:\/\/arxiv.org\/abs\/1803.03635"},{"key":"323_CR39","doi-asserted-by":"crossref","unstructured":"T.Y. Lin, P. Doll\u00e1r, R. Girshick, K. He, B. Hariharan, S. Belongie, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Feature pyramid networks for object detection (IEEE, Piscataway, 2017), pp. 936\u2013944","DOI":"10.1109\/CVPR.2017.106"},{"issue":"4","key":"323_CR40","doi-asserted-by":"publisher","first-page":"640","DOI":"10.1109\/TPAMI.2016.2572683","volume":"39","author":"E Shelhamer","year":"2017","unstructured":"E. Shelhamer, J. Long, T. Darrell, Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(4), 640\u2013651 (2017)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"323_CR41","doi-asserted-by":"crossref","unstructured":"O. Ronneberger, P. Fischer, T. Brox, in 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). U-Net: Convolutional networks for biomedical image segmentation (Springer,\u00a0Berlin, 2015), pp. 234\u2013241","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"323_CR42","first-page":"357","volume":"4","author":"LC Chen","year":"2014","unstructured":"L.C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A.L. Yuille, Semantic image segmentation with deep convolutional nets and fully connected CRFs (2014), https:\/\/arxiv.org\/abs\/1412.7062","journal-title":"Comput. Sci."},{"issue":"4","key":"323_CR43","doi-asserted-by":"publisher","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","volume":"40","author":"LC Chen","year":"2018","unstructured":"L.C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A.L. 
Yuille, DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 40(4), 834\u2013848 (2018)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"323_CR44","doi-asserted-by":"crossref","unstructured":"L.C. Chen, Y. Zhu, G. Papandreou, F. Schroff, H. Adam, in European conference on computer vision (ECCV). Encoder-decoder with atrous separable convolution for semantic image segmentation (Springer, Berlin, 2018), pp. 801\u2013818","DOI":"10.1007\/978-3-030-01234-2_49"},{"key":"323_CR45","unstructured":"Q. Kong, Y. Cao, H. Liu, K. Choi, Y. Wang, Decoupling magnitude and phase estimation with deep resunet for music source separation, (2021), https:\/\/arxiv.org\/abs\/2109.05418"},{"key":"323_CR46","doi-asserted-by":"crossref","unstructured":"A. Cohen-Hadria, A. Roebel, G. Peeters, in 27th European Signal Processing Conference (EUSIPCO). Improving singing voice separation using deep u-net and wave-u-net with data augmentation (Springer, Berlin, 2019), pp. 1\u20135","DOI":"10.23919\/EUSIPCO.2019.8902810"},{"key":"323_CR47","unstructured":"Y. Liu, B. Thoshkahna, A. Milani, T. Kristjansson, Voice and accompaniment separation in music using self-attention convolutional neural network, (2020), https:\/\/arxiv.org\/abs\/2003.08954"},{"key":"323_CR48","doi-asserted-by":"crossref","unstructured":"H. Huang, K. Wang, Y. Hu, S. Li, in 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Encoder-decoder based pitch tracking and joint model training for mandarin tone classification (IEEE, Piscataway, 2021), pp. 6943\u20136947","DOI":"10.1109\/ICASSP39728.2021.9413888"},{"key":"323_CR49","doi-asserted-by":"publisher","first-page":"125868","DOI":"10.1109\/ACCESS.2019.2938007","volume":"7","author":"H Meng","year":"2019","unstructured":"H. Meng, T. Yan, F. Yuan, H. 
Wei, Speech emotion recognition from 3D log-mel spectrograms with deep learning network. IEEE Access 7, 125868\u2013125881 (2019)","journal-title":"IEEE Access"},{"issue":"6","key":"323_CR50","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1145\/3065386","volume":"60","author":"A Krizhevsky","year":"2017","unstructured":"A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional neural networks. Commun. ACM 60(6), 84\u201390 (2017)","journal-title":"Commun. ACM"},{"key":"323_CR51","unstructured":"H. Zhang, M. Cisse, Y.N. Dauphin, D. Lopez-Paz, Mixup: Beyond empirical risk minimization, (2017), https:\/\/arxiv.org\/abs\/1710.09412"},{"key":"323_CR52","doi-asserted-by":"crossref","unstructured":"K. Wilkinghoff, F. Kurth, Open-set acoustic scene classification with deep convolutional autoencoders. Tech. Rep., DCASE 2019 Challenge (2019)","DOI":"10.33682\/340j-wd27"},{"key":"323_CR53","doi-asserted-by":"crossref","unstructured":"D.S. Park, W. Chan, Y. Zhang, C.C. Chiu, B. Zoph, E.D. Cubuk, Q.V. Le, in Interspeech. SpecAugment: A simple data augmentation method for automatic speech recognition (ISCA, Baixas, 2019), pp. 2613\u20132617","DOI":"10.21437\/Interspeech.2019-2680"},{"key":"323_CR54","unstructured":"D. Ulyanov, V. Lebedev, A. Vedaldi, V. Lempitsky, Texture networks: Feed-forward synthesis of textures and stylized images. Proc. 33rd Int. Conf. Int. Conf. Mach. Learn. 48, 1349-1357 (2016)"},{"key":"323_CR55","doi-asserted-by":"crossref","unstructured":"X. Huang, S. Belongie, in 2017 IEEE International Conference on Computer Vision (ICCV). Arbitrary style transfer in real-time with adaptive instance normalization (IEEE, Piscataway, NJ, 2017), pp. 1510\u20131519","DOI":"10.1109\/ICCV.2017.167"},{"key":"323_CR56","doi-asserted-by":"crossref","unstructured":"D. Jung, S. Yang, J. Choi, C. Kim, in 2020 IEEE International Conference on Image Processing (ICIP). Arbitrary style transfer using graph instance normalization. 
(IEEE, Piscataway, 2020), pp. 1596\u20131600","DOI":"10.1109\/ICIP40778.2020.9191195"},{"key":"323_CR57","unstructured":"I. Loshchilov, F. Hutter, SGDR: Stochastic gradient descent with warm restarts, (2016), https:\/\/arxiv.org\/abs\/1608.03983"},{"key":"323_CR58","doi-asserted-by":"crossref","unstructured":"A. Dang, T.H. Vu, J.C. Wang, in IEEE International Conference on Consumer Electronics (ICCE). Acoustic scene classification using convolutional neural networks and multi-scale multi-feature extraction (IEEE, Piscataway, 2018), pp. 1\u20134","DOI":"10.1109\/ICCE.2018.8326315"},{"key":"323_CR59","unstructured":"L. Jie, Acoustic scene classification with residual networks and attention mechanism. Tech. Rep., DCASE 2020 Challenge (2020)"},{"key":"323_CR60","unstructured":"K. Koutini, F. Henkel, H. Eghbal-zadeh, G. Widmer, CPJKU submissions to DCASE20: Low-complexity cross-device acoustic scene classification with RF-regularized CNNs. Tech. Rep., DCASE 2020 Challenge (2020)"},{"key":"323_CR61","unstructured":"Y. Liu, J. Liang, L. Zhao, J. Liu, K. Zhao, W. Liu, L. Zhang, T. Xu, C. Shi, DCASE 2021 task 1 subtask a: Low-complexity acoustic scene classification. Tech. 
Rep., DCASE 2021 Challenge (2021)"}],"container-title":["EURASIP Journal on Audio, Speech, and Music Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13636-023-00323-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s13636-023-00323-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13636-023-00323-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T13:05:38Z","timestamp":1704287138000},"score":1,"resource":{"primary":{"URL":"https:\/\/asmp-eurasipjournals.springeropen.com\/articles\/10.1186\/s13636-023-00323-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,3]]},"references-count":61,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,12]]}},"alternative-id":["323"],"URL":"https:\/\/doi.org\/10.1186\/s13636-023-00323-5","relation":{},"ISSN":["1687-4722"],"issn-type":[{"value":"1687-4722","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,3]]},"assertion":[{"value":"16 April 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 December 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 January 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing 
interests"}}],"article-number":"1"}}