{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,27]],"date-time":"2026-03-27T22:40:44Z","timestamp":1774651244418,"version":"3.50.1"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T00:00:00Z","timestamp":1704240000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T00:00:00Z","timestamp":1704240000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Internet Things"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In the era of automated and digitalized information, advanced computer applications deal with a major part of the data that comprises audio-related information. Advancements in technology have ushered in a new era where cutting-edge devices can deliver comprehensive insights into audio content, leveraging sophisticated algorithms such such as Mel Frequency Cepstral Coefficients (MFCCs) and Short-Time Fourier Transform (STFT) to extract and provide pertinent information. Our study helps in not only efficient audio file management and audio file retrievals but also plays a vital role in security, the robotics industry, and investigations. Beyond its industrial applications, our model exhibits remarkable versatility in the corporate sector, particularly in tasks like siren sound detection and more. Embracing this capability holds the promise of catalyzing the development of advanced automated systems, paving the way for increased efficiency and safety across various corporate domains. 
The primary aim of our experiment is to create highly efficient audio classification models that can be seamlessly automated and deployed within the industrial sector, addressing critical needs for enhanced productivity and performance. Despite the dynamic nature of environmental sounds and the presence of noise, the audio classification model we present proves efficient and accurate. The novelty of our research work lies in comparing two different audio datasets with similar characteristics and in classifying the audio signals into several categories using various machine learning techniques on MFCC and STFT features extracted from the audio signals. We have also evaluated the results before and after noise removal to analyze the effect of noise on precision, recall, specificity, and F1-score. Our experiment shows that the ANN model outperforms the other six audio models, with accuracies of 91.41% and 91.27% on the respective datasets.<\/jats:p>","DOI":"10.1007\/s43926-023-00049-y","type":"journal-article","created":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T19:03:33Z","timestamp":1704308613000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":57,"title":["Comparative analysis of audio classification with MFCC and STFT features using machine learning techniques"],"prefix":"10.1007","volume":"4","author":[{"given":"Mahendra Kumar","family":"Gourisaria","sequence":"first","affiliation":[]},{"given":"Rakshit","family":"Agrawal","sequence":"additional","affiliation":[]},{"given":"Manoj","family":"Sahni","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7676-9014","authenticated-orcid":false,"given":"Pradeep 
Kumar","family":"Singh","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,1,3]]},"reference":[{"key":"49_CR1","doi-asserted-by":"publisher","first-page":"1142","DOI":"10.1109\/TASL.2009.2017438","volume":"17","author":"S Chu","year":"2009","unstructured":"Chu S, Narayanan S, Kuo C-CJ. Environmental sound recognition with time-frequency audio features. IEEE Trans Audio Speech Lang Process. 2009;17:1142\u201358.","journal-title":"IEEE Trans Audio Speech Lang Process"},{"key":"49_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s43926-021-00007-6","volume":"1","author":"I Ahmad","year":"2021","unstructured":"Ahmad I. \u201cWelcome from Editor-in-Chief: discover Internet-of-Things editorial\u201d, inaugural issue. Discov Internet Things. 2021;1:1.","journal-title":"Discov Internet Things"},{"key":"49_CR3","doi-asserted-by":"crossref","unstructured":"E. Alexandre, L. Caudra, M. Rosa, and F. Lopez-Ferreras, \u201cFeature selection for sound classification in hearing aids through restricted search driven by genetic algorithms,\u201d IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, no. 8, pp. 2249\u20132256, Oct. 2007.L. Ballan, A. Bazzica, M. Bertini, A. D. Bimbo, G. Serra, \u201cDeep networks for audio event classification in soccer videos,\u201d In Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 474\u2013477, 2009.","DOI":"10.1109\/TASL.2007.905139"},{"key":"49_CR4","unstructured":"Vacher M, Serignat J-F, and Chaillot S. \u201cSound classification in a smart room environment: an approach using GMM and HMM methods,\u201d In Proceedings of the IEEE Conference on Speech Technology and Human-Computer Dialogue, pp. 135\u2013146, 2007."},{"key":"49_CR5","doi-asserted-by":"crossref","unstructured":". Ahmad I,. Swaminathan V, Aved A, &. Khalid S, \u201cAn overview of rate control techniques in HEVC and SHVC video encoding. 
Multimedia Tools and Applications\u201d, vol. 81, no. 24, 2022.","DOI":"10.1007\/s11042-021-11249-5"},{"issue":"2","key":"49_CR6","doi-asserted-by":"publisher","first-page":"202","DOI":"10.1109\/TCSVT.2005.856899","volume":"16","author":"I Ahmad","year":"2006","unstructured":"Ahmad I, Luo J. On using game theory for perceptually tuned rate control algorithm for video coding. IEEE Trans Circuits Syst Video Technol. 2006;16(2):202\u20138.","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"49_CR7","doi-asserted-by":"crossref","unstructured":"L. Ballan, A. Bazzica, M. Bertini, A. D. Bimbo, G. Serra, \u201cDeep networks for audio event classification in soccer videos,\u201d In Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 474\u2013477, 2009.","DOI":"10.1109\/ICME.2009.5202537"},{"key":"49_CR8","doi-asserted-by":"crossref","unstructured":"K. Lopatka, P. Zwan, and A. Czy\u02d9zewski, \u201cDangerous sound event recognition using support vector machine classifiers,\u201d In Advances in Multimedia and Network Information System Technologies, pp. 49\u201357, 2010.","DOI":"10.1007\/978-3-642-14989-4_5"},{"key":"49_CR9","doi-asserted-by":"publisher","first-page":"124055","DOI":"10.1109\/ACCESS.2020.3006082","volume":"8","author":"SL Ullo","year":"2020","unstructured":"Ullo SL, Khare SK, Bajaj V, Sinha GR. Hybrid computerized method for environmental sound classification. IEEE Access. 2020;8:124055\u201365.","journal-title":"IEEE Access"},{"key":"49_CR10","doi-asserted-by":"publisher","first-page":"125714","DOI":"10.1109\/ACCESS.2020.3007906","volume":"8","author":"X Dong","year":"2020","unstructured":"Dong X, Yin B, Cong Y, Du Z, Huang X. Environment sound event classification with a two-stream convolutional neural network. IEEE Access. 2020;8:125714\u201321.","journal-title":"IEEE Access"},{"key":"49_CR11","doi-asserted-by":"crossref","unstructured":"M.K.Gourisaria, R. Agrawal, GM. Harshvardhan, M. Pandey, S.S. 
Rautaray \u201cApplication of Machine Learning in Industry 4.0,\u201d In Machine Learning: Theoretical Foundations and Practical Applications, pp 57\u201387, 2021, Machine learning: Theoretical foundations and practical applications.","DOI":"10.1007\/978-981-33-6518-6_4"},{"key":"49_CR12","first-page":"463","volume-title":"Automatic classification of carnatic music instruments Using MFCC and LPC","author":"S Shetty","year":"2020","unstructured":"Shetty S, Hegde S. Automatic classification of carnatic music instruments Using MFCC and LPC. Analytics and Innovation: In Data Management; 2020. p. 463\u201374."},{"key":"49_CR13","doi-asserted-by":"crossref","unstructured":"Vivek V S, Vidhya S, and. Madhanmohan P, \u201cAcoustic Scene Classification in Hearing aid using Deep Learning,\u201d In 2020 International Conference on Communication and Signal Processing (ICCSP), pp. 0695\u20130699, July 2020.","DOI":"10.1109\/ICCSP48568.2020.9182160"},{"issue":"8","key":"49_CR14","first-page":"3384","volume":"14","author":"CI Kim","year":"2020","unstructured":"Kim CI, Cho Y, Jung S, Rew J, Hwang E. Animal sounds classification scheme based on multi-feature network with mixed datasets. KSII Transactions on Internet and Information Systems (TIIS). 2020;14(8):3384\u201398.","journal-title":"KSII Transactions on Internet and Information Systems (TIIS)"},{"key":"49_CR15","doi-asserted-by":"crossref","unstructured":"Bansal V, Pahwa G, and. Kannan N, \u201cCough Classification for COVID-19 based on audio mfcc features using Convolutional Neural Networks,\u201d In 2020 IEEE International Conference on Computing, Power and Communication Technologies (GUCON), pp. 604\u2013608. 2020.","DOI":"10.1109\/GUCON48875.2020.9231094"},{"key":"49_CR16","doi-asserted-by":"crossref","unstructured":"Chabot P, Bouserhal R E, Cardinal P, and Voix J, \u201cDetection and classification of human-produced nonverbal audio events,\u201d Applied Acoustics, vol. 
171, 2020.","DOI":"10.1016\/j.apacoust.2020.107643"},{"issue":"5","key":"49_CR17","doi-asserted-by":"publisher","first-page":"716","DOI":"10.1109\/TCSVT.2004.826766","volume":"14","author":"HG Kim","year":"2004","unstructured":"Kim HG, Moreau N, Sikora T. Audio classification based on MPEG-7 spectral basis representations. IEEE Trans Circuits Syst Video Technol. 2004;14(5):716\u201325.","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"issue":"5","key":"49_CR18","doi-asserted-by":"publisher","first-page":"533","DOI":"10.1016\/S0167-8655(00)00119-7","volume":"22","author":"D Li","year":"2001","unstructured":"Li D, Sethi IK, Dimitrova N, McGee T. Classification of general audio data for content-based retrieval. Pattern Recogn Lett. 2001;22(5):533\u201344.","journal-title":"Pattern Recogn Lett"},{"key":"49_CR19","doi-asserted-by":"publisher","first-page":"2048","DOI":"10.1016\/j.procs.2017.08.250","volume":"112","author":"V Boddapati","year":"2017","unstructured":"Boddapati V, Petef A, Rasmusson J, Lundberg L. Classifying environmental sounds using image recognition networks. Procedia computer science. 2017;112:2048\u201356.","journal-title":"Procedia computer science"},{"issue":"15","key":"49_CR20","doi-asserted-by":"publisher","first-page":"2895","DOI":"10.1016\/S0167-8655(03)00147-8","volume":"24","author":"M Cowling","year":"2003","unstructured":"Cowling M, Sitte R. Comparison of techniques for environmental sound recognition. Pattern Recogn Lett. 2003;24(15):2895\u2013907.","journal-title":"Pattern Recogn Lett"},{"key":"49_CR21","doi-asserted-by":"crossref","unstructured":"Bountourakis V, Vrysis L, and Papanikolaou G, \u201cMachine learning algorithms for environmental sound recognition: Towards soundscape semantics,\u201d In Proceedings of the Audio Mostly 2015 on Interaction With Sound, pp. 
1\u20137, 2015.","DOI":"10.1145\/2814895.2814905"},{"issue":"2","key":"49_CR22","doi-asserted-by":"publisher","first-page":"410","DOI":"10.3390\/acoustics1020023","volume":"1","author":"V Bountourakis","year":"2019","unstructured":"Bountourakis V, Vrysis L, Konstantoudakis K, Vryzas N. An Enhanced Temporal Feature Integration Method for Environmental Sound Recognition. In Acoustics. 2019;1(2):410\u201322.","journal-title":"In Acoustics"},{"key":"49_CR23","doi-asserted-by":"crossref","unstructured":"Dieleman S, Schrauwen B. \u201cEnd-to-end learning for music audio,\u201d IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 6964\u20136968, 2014.","DOI":"10.1109\/ICASSP.2014.6854950"},{"issue":"1","key":"49_CR24","doi-asserted-by":"publisher","first-page":"1","DOI":"10.3390\/app8010150","volume":"8","author":"J Lee","year":"2018","unstructured":"Lee J, Park J, Kim KL, Nam J. End-to-end deep convolutional neural networks using very small filters for music classification. Applied Sci. 2018;8(1):1\u201314.","journal-title":"Applied Sci"},{"key":"49_CR25","doi-asserted-by":"publisher","first-page":"90","DOI":"10.1016\/j.knosys.2018.07.033","volume":"161","author":"Y Wu","year":"2018","unstructured":"Wu Y, Mao H, Yi Z. Audio classification using attention-augmented convolutional neural network. Knowl-Based Syst. 2018;161:90\u2013100.","journal-title":"Knowl-Based Syst"},{"key":"49_CR26","doi-asserted-by":"crossref","unstructured":"Pons J, and Serra X, \u201cDesigning efficient architectures for modeling temporal features with convolutional neural networks,\u201d IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 
2472\u20132476, 2017.","DOI":"10.1109\/ICASSP.2017.7952601"},{"key":"49_CR27","unstructured":"Choi K, Fazekas G, and Sandler M, \u201cAutomatic tagging using deep convolutional neural networks,\u201d Proceedings of the 17th International Society for Music Information Retrieval Conference, ISMIR 2016 pp. 805\u2013811, 2016."},{"key":"49_CR28","unstructured":"Jiang H, Bai J, Zhang S, and Xu B, \u201cSVM-based audio scene classification,\u201d Proceeding of the IEEE, pp. 131\u2013136, 2005."},{"key":"49_CR29","doi-asserted-by":"publisher","first-page":"482","DOI":"10.1007\/s00530-002-0065-0","volume":"8","author":"L Lu","year":"2003","unstructured":"Lu L, Zhang H-J, Li SZ. Content-based audio classification and segmentation by using support vector machines. Multimedia Syst. 2003;8:482\u201392.","journal-title":"Multimedia Syst"},{"key":"49_CR30","doi-asserted-by":"crossref","unstructured":"Cowling M, and Sitte R, \u201cComparison of techniques for environmental sound recognition,\u201d Pattern Recog Lett, pp. 2895\u2013907, 2003.","DOI":"10.1016\/S0167-8655(03)00147-8"},{"key":"49_CR31","unstructured":"Harma A, McKinney M F, and Skowronek J, \u201cAutomatic surveillance of the acoustic activity in our living environment,\u201d IEEE international conference on multimedia and exposition. Amsterdam (The Netherlands), July 2005."},{"key":"49_CR32","unstructured":"Clavel C, Ehrette T, and Richard G, \u201cEvent detection for an audio-based surveillance system,\u201d IEEE International Conference on Multimedia Exposition. Amsterdam (The Netherlands), July 2005."},{"key":"49_CR33","unstructured":"Dufaux A, Bezacier L, Ansorge M, and Pellandini F, \u201cAutomatic sound detection and recognition for a noisy environment,\u201d Proceedings of. European Signal Processing Conference. Finland, pp. 1033\u20136, Sep. 
2000."},{"key":"49_CR34","doi-asserted-by":"publisher","first-page":"715","DOI":"10.1109\/TSMCA.2009.2015676","volume":"39","author":"W Dargie","year":"2009","unstructured":"Dargie W. Adaptive audio-based contest recognition. IEEE Trans Syst, Man, Cybernet. 2009;39:715\u201325.","journal-title":"IEEE Trans Syst, Man, Cybernet"},{"key":"49_CR35","doi-asserted-by":"crossref","unstructured":"El-Maleh K, Samouelian A, and Kabal P, \u201cFrame-level noise classification in mobile environments,\u201d Proceedings of ICASSP. Phoenix (AZ), pp. 237\u201340, March 1999.","DOI":"10.1109\/ICASSP.1999.758106"},{"key":"49_CR36","doi-asserted-by":"crossref","unstructured":"Seker H, and Inik O. \u201cCnnSound: Convolutional Neural Networks for the Classification of Environmental Sounds,\u201d Proceedings of ICPS, International Conference on Advances in Artificial Intelligence (ICAAI), pp. 79\u201384, Oct. 2020.","DOI":"10.1145\/3441417.3441431"},{"key":"49_CR37","doi-asserted-by":"publisher","first-page":"896","DOI":"10.1016\/j.neucom.2020.08.069","volume":"453","author":"Z Zhang","year":"2021","unstructured":"Zhang Z, Xu S, Zhang S, Qiao T, Cao S. S, \u201cAttention-based convolutional recurrent neural network for environmental sound classification.\u201d Neurocomputing. 2021;453:896\u2013903.","journal-title":"Neurocomputing"},{"issue":"3","key":"49_CR38","doi-asserted-by":"publisher","first-page":"6069","DOI":"10.1016\/j.eswa.2008.06.126","volume":"36","author":"P Dhanalakshmi","year":"2009","unstructured":"Dhanalakshmi P, Palanivel S, Ramalingam V. Classification of audio signals using SVM and RBFNN. Expert Syst Appl. 2009;36(3):6069\u201375.","journal-title":"Expert Syst Appl"},{"key":"49_CR39","doi-asserted-by":"crossref","unstructured":"Chen L, Gunduz S, and Ozsu M T, \u201cMixed type audio classification with support vector machine,\u201d IEEE International Conference on Multimedia and Expo, pp. 781\u2013784. 
July 2006.","DOI":"10.1109\/ICME.2006.262954"},{"key":"49_CR40","doi-asserted-by":"crossref","unstructured":". Maccagno A, Mastropietro A, Mazziotta U, Scarpiniti M, Lee Y C, and Uncini A, \u201cA CNN approach for audio classification in construction sites,\u201d In\u00a0Progresses in Artificial Intelligence and Neural Systems,\u00a0pp. 371\u2013381. 2021.","DOI":"10.1007\/978-981-15-5093-5_33"},{"key":"49_CR41","doi-asserted-by":"crossref","unstructured":". Mehyadin AE, Abdulazeez AM, Hasan DA, and Saeed JN, \u201cBirds Sound Classification Based on Machine Learning Algorithms,\u201d\u00a0Asian Journal of Research in Computer Science, pp. 1\u201311. 2021.","DOI":"10.9734\/ajrcos\/2021\/v9i430227"},{"issue":"1","key":"49_CR42","doi-asserted-by":"publisher","first-page":"46","DOI":"10.5755\/j01.eie.26.1.25309","volume":"26","author":"M Pakyurek","year":"2020","unstructured":"Pakyurek M, Atmis M, Kulac S, Uludag U. Extraction of Novel Features Based on Histograms of MFCCs Used in Emotion Classification from Generated Original Speech Dataset. Elektronika ir Elektrotechnika. 2020;26(1):46\u201351.","journal-title":"Elektronika ir Elektrotechnika"},{"key":"49_CR43","doi-asserted-by":"publisher","first-page":"22","DOI":"10.1016\/j.neunet.2020.06.015","volume":"130","author":"M Deng","year":"2020","unstructured":"Deng M, Meng T, Cao J, Wang S, Zhang J, Fan H. Heart sound classification based on improved MFCC features and convolutional recurrent neural networks. Neural Netw. 2020;130:22\u201332.","journal-title":"Neural Netw"},{"key":"49_CR44","doi-asserted-by":"crossref","unstructured":"Salamon J, Jacoby C, and Bello J P, \u201cA dataset and taxonomy for urban sound research,\u201d Proceedings of the 22nd ACM international conference on Multimedia, pp. 1041\u20131044, Nov. 2014. Retrieved 14 December 2020 from https:\/\/urbansounddataset.weebly.com\/urbansound8k.html","DOI":"10.1145\/2647868.2655045"},{"key":"49_CR45","unstructured":"Chathuranga S (2019) [Online]. 
Sound Event Dataset. Retrieved 14 December 2020 from https:\/\/github.com\/chathuranga95\/SoundEventClassification"},{"key":"49_CR46","doi-asserted-by":"publisher","first-page":"62719","DOI":"10.1109\/ACCESS.2021.3073786","volume":"9","author":"MA Qamhan","year":"2021","unstructured":"Qamhan MA, Altaheri H, Meftah AH, Muhammad G, Alotaibi YA. Digital audio forensics: microphone and environment classification using deep learning. IEEE Access. 2021;9:62719\u201333.","journal-title":"IEEE Access"},{"key":"49_CR47","doi-asserted-by":"crossref","unstructured":"GM H, Gourisaria MK, Pandey M, and Rautaray SS, \u201cA Comprehensive Survey and Analysis of Generative Models in Machine Learning,\u201d Computer Science Review \u2013 Elsevier, vol. 38, Nov. 2020.","DOI":"10.1016\/j.cosrev.2020.100285"},{"issue":"1","key":"49_CR48","doi-asserted-by":"publisher","first-page":"13","DOI":"10.1148\/rg.301095057","volume":"30","author":"T Ayer","year":"2010","unstructured":"Ayer T, Chhatwal J, Alagoz O, Kahn CE Jr, Woods RW, Burnside ES. Comparison of logistic regression and artificial neural network models in breast cancer risk estimation. Radiographics. 2010;30(1):13\u201322.","journal-title":"Radiographics"},{"key":"49_CR49","first-page":"91","volume":"1","author":"R Singh","year":"2010","unstructured":"Singh R, Yadav CS, Verma P, Yadav V. Optical character recognition (OCR) for printed Devanagari script using artificial neural network. Int J Computer Sci Communication. 2010;1:91\u20135.","journal-title":"Int J Computer Sci Communication"},{"key":"49_CR50","first-page":"131","volume":"1","author":"S Barve","year":"2012","unstructured":"Barve S. Optical character recognition using artificial neural network. Int J Adv Res Computer Eng Technol. 2012;1:131\u20133.","journal-title":"Int J Adv Res Computer Eng Technol"},{"key":"49_CR51","doi-asserted-by":"crossref","unstructured":"Jaitly N, Nguyen P, Senior A and Vanhoucke V. 
Application of pre-trained deep neural networks to large vocabulary speech recognition. 2012.","DOI":"10.21437\/Interspeech.2012-10"},{"issue":"3","key":"49_CR52","first-page":"37","volume":"5","author":"SL Ting","year":"2011","unstructured":"Ting SL, Ip WH, Tsang AH. Is Naive Bayes a good classifier for document classification. International Journal of Software Engineering and Its Applications. 2011;5(3):37\u201346.","journal-title":"International Journal of Software Engineering and Its Applications"},{"key":"49_CR53","doi-asserted-by":"crossref","unstructured":"Chen L, Gunduz S, and Ozsu MT. Mixed type audio classification with support vector machine. IEEE International Conference on Multimedia and Expo, pp. 781\u2013784, July 2006.","DOI":"10.1109\/ICME.2006.262954"},{"key":"49_CR54","unstructured":"Palanisamy K, Singhania D, & Yao A. (2020). Rethinking CNN models for audio classification. arXiv preprint arXiv:2007.11154."},{"key":"49_CR55","unstructured":"Zeghidour N, Teboul O, Quitry FDC, & Tagliasacchi M, (2021). Leaf: A learnable frontend for audio classification. arXiv preprint arXiv:2101.08596."},{"issue":"10","key":"49_CR56","doi-asserted-by":"publisher","first-page":"e0205355","DOI":"10.1371\/journal.pone.0205355","volume":"13","author":"DT Toledano","year":"2018","unstructured":"Toledano DT, Fern\u00e1ndez-Gallego MP, Lozano-Diez A. Multi-resolution speech analysis for automatic speech recognition using deep neural networks: Experiments on TIMIT. PLoS ONE. 
2018;13(10):e0205355.","journal-title":"PLoS ONE"}],"container-title":["Discover Internet of Things"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43926-023-00049-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s43926-023-00049-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43926-023-00049-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,1,3]],"date-time":"2024-01-03T19:07:27Z","timestamp":1704308847000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s43926-023-00049-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,3]]},"references-count":56,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,12]]}},"alternative-id":["49"],"URL":"https:\/\/doi.org\/10.1007\/s43926-023-00049-y","relation":{},"ISSN":["2730-7239"],"issn-type":[{"value":"2730-7239","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,3]]},"assertion":[{"value":"28 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"8 November 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 January 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The submitted manuscript is an original piece of research work and has not been submitted elsewhere in any form prior to this submission. 
All authors declare that the manuscript is not under consideration by any other journal or conference and is free from dual submission. All authors have contributed to the manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"The research work carried out in this manuscript does not involve humans or animals in any form, nor is it related to human or animal medical data.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Research involving human participants and\/or animals informed consent"}},{"value":"There are no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing Interests"}}],"article-number":"1"}}