{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,21]],"date-time":"2026-03-21T21:09:36Z","timestamp":1774127376261,"version":"3.50.1"},"reference-count":50,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2022,3,24]],"date-time":"2022-03-24T00:00:00Z","timestamp":1648080000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,3,24]],"date-time":"2022-03-24T00:00:00Z","timestamp":1648080000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"dst, govt of india","award":["DST\/ICPS\/CLUSTER\/Data Science\/2018\/General"],"award-info":[{"award-number":["DST\/ICPS\/CLUSTER\/Data Science\/2018\/General"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2022,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The Odia language is an old Eastern Indo-Aryan language, spoken by 46.8 million people across India. We have designed an ensemble classifier using a Deep Convolutional Recurrent Neural Network for Speech Emotion Recognition (SER). This study presents a new approach for SER tasks motivated by recent research on speech emotion recognition. Initially, we extract utterance-level log Mel-spectrograms and their first and second derivatives (static, delta, and delta-delta), represented as 3-D log Mel-spectrograms. We utilize deep convolutional neural networks to extract the deep features from the 3-D log Mel-spectrograms. Then a bi-directional gated recurrent unit network is applied to capture long-term temporal dependencies across all features and produce utterance-level emotion features. 
Finally, we use an ensemble of Softmax and Support Vector Machine classifiers to improve the final recognition rate. In this way, our proposed framework is trained and tested on the Odia (seven emotional states) and RAVDESS (eight emotional states) datasets. The experimental results reveal that an ensemble classifier performs better than a single classifier. The accuracy levels reached are 85.31% and 77.54%, outperforming some state-of-the-art frameworks on the Odia and RAVDESS datasets.<\/jats:p>","DOI":"10.1007\/s40747-022-00713-w","type":"journal-article","created":{"date-parts":[[2022,3,25]],"date-time":"2022-03-25T10:34:41Z","timestamp":1648204481000},"page":"4237-4249","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":23,"title":["A DCRNN-based ensemble classifier for speech emotion recognition in Odia language"],"prefix":"10.1007","volume":"8","author":[{"given":"Monorama","family":"Swain","sequence":"first","affiliation":[]},{"given":"Bubai","family":"Maji","sequence":"additional","affiliation":[]},{"given":"P.","family":"Kabisatpathy","sequence":"additional","affiliation":[]},{"given":"Aurobinda","family":"Routray","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,3,24]]},"reference":[{"issue":"1","key":"713_CR1","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1002\/masy.201400045","volume":"347","author":"R Khokher","year":"2015","unstructured":"Khokher R, Singh RC, Kumar R (2015) Footprint recognition with principal component analysis and independent component analysis. Macromol Symp 347(1):16\u201326. https:\/\/doi.org\/10.1002\/masy.201400045","journal-title":"Macromol Symp"},{"key":"713_CR2","doi-asserted-by":"publisher","unstructured":"Mittal S, Agarwal S, Nigam MJ (2018) Real time multiple face recognition: a deep learning approach. 
In: Proceedings of the 2018 international conference on digital medicine and image processing, ACM, pp 70\u201376. https:\/\/doi.org\/10.1145\/3299852.3299853","DOI":"10.1145\/3299852.3299853"},{"key":"713_CR3","doi-asserted-by":"publisher","first-page":"101894","DOI":"10.1016\/j.bspc.2020.101894","volume":"59","author":"D Issa","year":"2020","unstructured":"Issa D, Demirci MF, Yazici A (2020) Speech emotion recognition with deep convolutional neural networks. Biomed Signal Process Control 59:101894. https:\/\/doi.org\/10.1016\/j.bspc.2020.101894","journal-title":"Biomed Signal Process Control"},{"key":"713_CR4","doi-asserted-by":"publisher","unstructured":"Le BV, Lee S (2014) Adaptive hierarchical emotion recognition from speech signal for human-robot communication. In: 2014 10th International conference on intelligent information hiding and multimedia signal processing, IEEE, pp 807\u2013810. https:\/\/doi.org\/10.1109\/IIH-MSP.2014.204","DOI":"10.1109\/IIH-MSP.2014.204"},{"issue":"2","key":"713_CR5","first-page":"20","volume":"4","author":"JG R\u00e1zuri","year":"2015","unstructured":"R\u00e1zuri JG, Sundgren D, Rahmani R, Larsson A, Cardenas AM, Bonet I (2015) Speech emotion recognition in emotional feedback for human-robot interaction. Int J Adv Res Artif Intell 4(2):20\u201327","journal-title":"Int J Adv Res Artif Intell"},{"key":"713_CR6","doi-asserted-by":"publisher","first-page":"1467","DOI":"10.1007\/s11235-011-9624-z","volume":"52","author":"S Ramakrishnan","year":"2013","unstructured":"Ramakrishnan S, El Emary IMM (2013) Speech emotion recognition approaches in human computer interaction. Telecommun Syst 52:1467\u20131478. https:\/\/doi.org\/10.1007\/s11235-011-9624-z","journal-title":"Telecommun Syst"},{"issue":"4","key":"713_CR7","first-page":"431","volume":"34","author":"X Sui","year":"2017","unstructured":"Sui X, Zhu T, Wang J (2017) Speech emotion recognition based on local feature optimization. 
J Univ Chin Acad Sci 34(4):431\u2013438","journal-title":"J Univ Chin Acad Sci"},{"issue":"1","key":"713_CR8","doi-asserted-by":"publisher","first-page":"137","DOI":"10.1007\/s10772-018-9493-x","volume":"21","author":"MB Mustafa","year":"2018","unstructured":"Mustafa MB, Yusoof MAM, Don ZM, Malekzadeh M (2018) Speech emotion recognition research: an analysis of research focus. Int J Speech Tech 21(1):137\u2013156. https:\/\/doi.org\/10.1007\/s10772-018-9493-x","journal-title":"Int J Speech Tech"},{"issue":"21","key":"713_CR9","doi-asserted-by":"publisher","first-page":"6008","DOI":"10.3390\/s20216008","volume":"20","author":"M Farooq","year":"2020","unstructured":"Farooq M, Hussain F, Baloch NK, Raja FR, Yu H, Zikria YB (2020) Impact of feature selection algorithm on speech emotion recognition using deep convolutional neural network. Sensors 20(21):6008. https:\/\/doi.org\/10.3390\/s20216008","journal-title":"Sensors"},{"key":"713_CR10","doi-asserted-by":"publisher","first-page":"643202","DOI":"10.3389\/fphys.2021.643202","volume":"12","author":"H Zhang","year":"2021","unstructured":"Zhang H, Gou R, Shang J, Shen F, Wu Y, Dai G (2021) Pre-trained deep convolution neural network model with attention for speech emotion recognition. Front Physiol 12:643202. https:\/\/doi.org\/10.3389\/fphys.2021.643202","journal-title":"Front Physiol"},{"key":"713_CR11","doi-asserted-by":"publisher","first-page":"771","DOI":"10.1007\/s12559-021-09865-2","volume":"13","author":"KA Arano","year":"2021","unstructured":"Arano KA, Gloor P, Orsenigo C, Vercellis C (2021) When old meets new: emotion recognition from speech signals. Cogn Comput 13:771\u2013783. 
https:\/\/doi.org\/10.1007\/s12559-021-09865-2","journal-title":"Cogn Comput"},{"issue":"5","key":"713_CR12","doi-asserted-by":"publisher","first-page":"63","DOI":"10.14132\/j.cnki.1673-5439.2018.05.009","volume":"38","author":"G Lu","year":"2018","unstructured":"Lu G, Yuan L, Yang W, Yan J, Li H (2018) Speech emotion recognition based on long-term and short-term memory and convolutional neural network. J Nanjing Inst Posts Telecomm 38(5):63\u201369. https:\/\/doi.org\/10.14132\/j.cnki.1673-5439.2018.05.009","journal-title":"J Nanjing Inst Posts Telecomm"},{"key":"713_CR13","doi-asserted-by":"publisher","first-page":"29","DOI":"10.1016\/j.specom.2019.10.004","volume":"115","author":"L Sun","year":"2019","unstructured":"Sun L, Zou B, Fu S, Chen J, Wang F (2019) Speech emotion recognition based on DNN-decision tree SVM model. Speech Commun 115:29\u201337","journal-title":"Speech Commun"},{"issue":"3","key":"713_CR14","doi-asserted-by":"publisher","first-page":"572","DOI":"10.1016\/j.patcog.2010.09.020","volume":"44","author":"ME Ayadi","year":"2011","unstructured":"Ayadi ME, Kamel MS, Karray F (2011) Survey on speech emotion recognition: features, classification schemes, and databases. Pattern Recogn 44(3):572\u2013587","journal-title":"Pattern Recogn"},{"issue":"1","key":"713_CR15","doi-asserted-by":"publisher","first-page":"93","DOI":"10.1007\/s10772-018-9491-z","volume":"21","author":"M Swain","year":"2018","unstructured":"Swain M, Routray A, Kabisatpathy P (2018) Databases, features and classifiers for speech emotion recognition: a review. Int J Speech Technol 21(1):93\u2013120","journal-title":"Int J Speech Technol"},{"key":"713_CR16","doi-asserted-by":"crossref","unstructured":"Wang ZQ, Tashev I (2017) Learning utterance-level representations for speech emotion and age\/gender recognition using deep neural networks. 
In: 2017 IEEE international conference on acoustics, speech, and signal processing (ICASSP), pp 5150\u20135154","DOI":"10.1109\/ICASSP.2017.7953138"},{"key":"713_CR17","doi-asserted-by":"publisher","first-page":"90368","DOI":"10.1109\/ACCESS.2019.2927384","volume":"7","author":"P Jiang","year":"2019","unstructured":"Jiang P, Fu H, Tao H, Lei P, Zhao L (2019) Parallelized convolutional recurrent neural network with spectral features for speech emotion recognition. IEEE Access 7:90368\u201390377. https:\/\/doi.org\/10.1109\/ACCESS.2019.2927384","journal-title":"IEEE Access"},{"key":"713_CR18","doi-asserted-by":"publisher","unstructured":"Hu H, Xu M, Wu W (2007) GMM supervector based SVM with spectral features for speech emotion recognition. In: 2007 IEEE international conference on acoustics, speech, and signal processing (ICASSP), pp 413\u2013416. https:\/\/doi.org\/10.1109\/ICASSP.2007.366937","DOI":"10.1109\/ICASSP.2007.366937"},{"issue":"10","key":"713_CR19","doi-asserted-by":"publisher","first-page":"1533","DOI":"10.1109\/TASLP.2014.2339736","volume":"22","author":"O Abdel-Hamid","year":"2014","unstructured":"Abdel-Hamid O, Mohamed AR, Jiang H, Deng L, Penn G, Yu D (2014) Convolutional neural networks for speech recognition. IEEE\/ACM Trans Audio Speech Lang Process 22(10):1533\u20131545","journal-title":"IEEE\/ACM Trans Audio Speech Lang Process"},{"issue":"4","key":"713_CR20","doi-asserted-by":"publisher","first-page":"235","DOI":"10.2478\/jaiscr-2019-0006","volume":"9","author":"A Shewalkar","year":"2019","unstructured":"Shewalkar A, Nyavanandi D, Ludwig SA (2019) Performance evaluation of deep neural networks applied to speech recognition: RNN, LSTM AND GRU. JAISCR 9(4):235\u2013245. 
https:\/\/doi.org\/10.2478\/jaiscr-2019-0006","journal-title":"JAISCR"},{"issue":"6","key":"713_CR21","doi-asserted-by":"publisher","first-page":"1576","DOI":"10.1109\/TMM.2017.2766843","volume":"20","author":"S Zhang","year":"2017","unstructured":"Zhang S, Zhang S, Huang T, Gao W (2017) Speech emotion recognition using deep convolutional neural network and discriminant temporal pyramid matching. IEEE Trans Multimedia 20(6):1576\u20131590. https:\/\/doi.org\/10.1109\/TMM.2017.2766843","journal-title":"IEEE Trans Multimedia"},{"key":"713_CR22","doi-asserted-by":"crossref","unstructured":"Zeng Y, Mao H, Peng D, Yi Z (2017) Spectrogram based multi-task audio classification. Multimed Tools Appl, pp 1\u201318","DOI":"10.1007\/s11042-017-5539-3"},{"issue":"5","key":"713_CR23","doi-asserted-by":"publisher","first-page":"e0196391","DOI":"10.1371\/journal.pone.0196391","volume":"13","author":"SR Livingstone","year":"2018","unstructured":"Livingstone SR, Russo FA (2018) The ryerson audio-visual database of emotional speech and song (ravdess): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE 13(5):e0196391","journal-title":"PLoS ONE"},{"key":"713_CR24","doi-asserted-by":"publisher","unstructured":"Badshah AM, Ahmad J, Rahim N, Baik SW (2017) Speech emotion recognition from spectrograms with deep convolutional neural network. In: 2017 International conference on platform technology and service (PlatCon), pp 1\u20135. https:\/\/doi.org\/10.1109\/PlatCon.2017.7883728","DOI":"10.1109\/PlatCon.2017.7883728"},{"key":"713_CR25","first-page":"1097","volume":"25","author":"A Krizhevsky","year":"2012","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. 
Adv Neural Inf Process Syst 25:1097\u20131105","journal-title":"Adv Neural Inf Process Syst"},{"key":"713_CR26","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-74171-8_101","author":"TL Pao","year":"2007","unstructured":"Pao TL, Chen YT, Yeh JH, Cheng YM, Lin YY (2007) A comparative study of different weighting schemes on KNN-based emotion recognition in mandarin speech. Int Conf Adv Intell Comput Theories App. https:\/\/doi.org\/10.1007\/978-3-540-74171-8_101","journal-title":"Int Conf Adv Intell Comput Theories App"},{"issue":"4","key":"713_CR27","doi-asserted-by":"publisher","first-page":"603","DOI":"10.1016\/S0167-6393(03)00099-2","volume":"41","author":"TL Nwe","year":"2003","unstructured":"Nwe TL, Foo SW, De Silva LC (2003) Speech emotion recognition using hidden markov models. Speech Commun 41(4):603\u2013623","journal-title":"Speech Commun"},{"key":"713_CR28","doi-asserted-by":"crossref","unstructured":"Ververidis D, Kotropoulos C (2005) Emotional speech classification using Gaussian mixture models and the sequential floating forward selection algorithm. In: 2005 IEEE International conference on multimedia and expo (ICME), Netherlands, pp 1500\u20131503","DOI":"10.1109\/ICME.2005.1521717"},{"key":"713_CR29","unstructured":"Tang Y (2015) Deep learning using linear support vector machines. arXiv:1306.0239"},{"key":"713_CR30","doi-asserted-by":"crossref","unstructured":"Schuller B, Rigoll G, Lang M (2004) Speech emotion recognition combining acoustic features and linguistic information in a hybrid support vector machine-belief network architecture. In: 2004 IEEE International conference on acoustics, speech, and signal processing (ICASSP), pp 1-577","DOI":"10.1109\/ICASSP.2004.1326051"},{"key":"713_CR31","doi-asserted-by":"publisher","unstructured":"Zhou Y, Sun Y, Zhang J, Yan Y (2009) Speech emotion recognition using both spectral and prosodic features. 
In: 2009 International conference on information engineering and computer science (ICIECS), Wuhan, China, pp 1\u20134. https:\/\/doi.org\/10.1109\/ICIECS.2009.5362730","DOI":"10.1109\/ICIECS.2009.5362730"},{"key":"713_CR32","doi-asserted-by":"publisher","first-page":"803","DOI":"10.1109\/ICPR.2014.148","volume-title":"2014 22nd international conference on pattern recognition (ICPR)","author":"M Kachele","year":"2014","unstructured":"Kachele M, Zharkov D, Meudt S, Schwenker F (2014) Prosodic, spectral and voice quality feature selection using a long-term stopping criterion for audio-based emotion recognition. 2014 22nd international conference on pattern recognition (ICPR). Stockholm, Sweden, pp 803\u2013808"},{"key":"713_CR33","unstructured":"Pan Y, Shen P, Shen L (2005) Feature extraction and selection in speech emotion recognition. In: IEEE (AVSS) conference on advanced video and signal based surveillance, Como, Italy, pp 64\u201369"},{"key":"713_CR34","doi-asserted-by":"crossref","unstructured":"Petrushin VA (2000) Emotion recognition in speech signal: experimental study, development, and application. In: 6th International Conference on Spoken Language Processing, Beijing, China, pp 222\u2013225","DOI":"10.21437\/ICSLP.2000-791"},{"issue":"1","key":"713_CR35","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1007\/s13042-013-0192-2","volume":"6","author":"MA Quiros-Ramirez","year":"2015","unstructured":"Quiros-Ramirez MA, Onisawa T (2015) Considering cross-cultural context in the automatic recognition of emotion. Int J Mach Learn Cyber 6(1):119\u2013127","journal-title":"Int J Mach Learn Cyber"},{"issue":"10","key":"713_CR36","doi-asserted-by":"publisher","first-page":"1440","DOI":"10.1109\/LSP.2018.2860246","volume":"25","author":"M Chen","year":"2018","unstructured":"Chen M, He X, Yang J, Zhang H (2018) 3-D convolutional recurrent neural networks with attention model for speech emotion recognition. 
IEEE Signal Process Lett 25(10):1440\u20131444","journal-title":"IEEE Signal Process Lett"},{"key":"713_CR37","doi-asserted-by":"crossref","unstructured":"McFee B, Raffel C, Liang D, Ellis DPW, McVicar M, Battenberg E, Nieto O (2015) librosa: audio and music signal analysis in python. In: proceedings of the 14th Python in Science Conference, pp 18\u201325","DOI":"10.25080\/Majora-7b98e3ed-003"},{"key":"713_CR38","doi-asserted-by":"publisher","first-page":"3155","DOI":"10.1007\/s00521-020-05209-7","volume":"33","author":"M Dua","year":"2021","unstructured":"Dua M, Shakshi SR et al (2021) Deep CNN models-based ensemble approach to driver drowsiness detection. Neural Comput Appl 33:3155\u20133168. https:\/\/doi.org\/10.1007\/s00521-020-05209-7","journal-title":"Neural Comput Appl"},{"key":"713_CR39","doi-asserted-by":"publisher","first-page":"358","DOI":"10.1016\/j.patrec.2020.11.009","volume":"140","author":"Z Zhu","year":"2020","unstructured":"Zhu Z, Dai W, Hu Y, Li J (2020) Speech emotion recognition based on Bi-GRU and Focal Loss. Pattern Recog Lett 140:358\u2013365","journal-title":"Pattern Recog Lett"},{"key":"713_CR40","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.knosys.2021.106934","volume":"220","author":"Z Xiao","year":"2021","unstructured":"Xiao Z, Xu X, Zhang H, Szczerbicki E (2021) A new multi-process collaborative architecture for time series classification. Knowl Based Syst 220:1\u201311","journal-title":"Knowl Based Syst"},{"key":"713_CR41","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1016\/j.ins.2021.04.053","volume":"571","author":"Z Xiao","year":"2021","unstructured":"Xiao Z, Xu X, Xing H, Luo S, Dai P, Zhan D (2021) RTFN: a robust temporal feature network for time series classification. 
Inf Sci 571:65\u201386","journal-title":"Inf Sci"},{"issue":"8","key":"713_CR42","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9(8):1735\u20131780. https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735","journal-title":"Neural Comput"},{"key":"713_CR43","doi-asserted-by":"crossref","unstructured":"Gong Y, Chung YA, Glass J (2021) AST: audio spectrogram transformer. arXiv:2104.01778","DOI":"10.21437\/Interspeech.2021-698"},{"key":"713_CR44","doi-asserted-by":"publisher","unstructured":"Duan K, Keerthi SS, Chu W, Shevade SK, Poo AN (2003) Multi-category classification by soft-max combination of binary classifiers. In: Proceedings of the 4th international conference on multiple classifier systems, MCS\u201903, Springer, Berlin, pp 125\u2013134. https:\/\/doi.org\/10.1007\/3-540-44938-8_13","DOI":"10.1007\/3-540-44938-8_13"},{"issue":"2","key":"713_CR45","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1016\/j.specom.2006.11.004","volume":"49","author":"D Morrison","year":"2007","unstructured":"Morrison D, Wang R, De Silva LC (2007) Ensemble methods for spoken emotion recognition in call-centres. Speech Commun 49(2):98\u2013112. https:\/\/doi.org\/10.1016\/j.specom.2006.11.004","journal-title":"Speech Commun"},{"key":"713_CR46","doi-asserted-by":"crossref","unstructured":"Swain M, Routray A, Kabisatpathy P, Kundu JN (2016) Study of prosodic feature extraction for multidialectal Odia speech emotion recognition. In: IEEE region 10 conference (TENCON), pp 1644\u20131649","DOI":"10.1109\/TENCON.2016.7848296"},{"key":"713_CR47","unstructured":"Kingma DP, Ba JL (2017) ADAM: A method for stochastic optimization. 
arXiv:1412.6980"},{"key":"713_CR48","volume-title":"Hands-on machine learning with Scikit-Learn and Tensor-Flow: concepts, tools, and techniques to build intelligent systems","author":"A Geron","year":"2017","unstructured":"Geron A (2017) Hands-on machine learning with Scikit-Learn and Tensor-Flow: concepts, tools, and techniques to build intelligent systems. O\u2019Reilly Media, Inc, USA"},{"key":"713_CR49","doi-asserted-by":"publisher","unstructured":"Shegokar P, Sircar P (2016) Continuous wavelet transform based speech emotion recognition. In: Proceedings of the 10th international conference on signal processing and communication systems, pp 1\u20138. https:\/\/doi.org\/10.1109\/ICSPCS.2016.7843306","DOI":"10.1109\/ICSPCS.2016.7843306"},{"key":"713_CR50","doi-asserted-by":"publisher","unstructured":"Jalal MA, Loweimi E, Moore RK, Hain T (2019) Learning temporal clusters using capsule routing for speech emotion recognition. In: Proceedings of the INTERSPEECH 2019, Graz, Austria, pp 1701\u20131705. 
https:\/\/doi.org\/10.21437\/Interspeech.2019-3068","DOI":"10.21437\/Interspeech.2019-3068"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00713-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00713-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00713-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,27]],"date-time":"2022-09-27T13:53:59Z","timestamp":1664286839000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00713-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,24]]},"references-count":50,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,10]]}},"alternative-id":["713"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00713-w","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,24]]},"assertion":[{"value":"23 June 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 March 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 March 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no conflicts of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}