{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T17:05:28Z","timestamp":1775581528598,"version":"3.50.1"},"reference-count":53,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2025,5,12]],"date-time":"2025-05-12T00:00:00Z","timestamp":1747008000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,5,12]],"date-time":"2025-05-12T00:00:00Z","timestamp":1747008000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62107032"],"award-info":[{"award-number":["62107032"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62377027"],"award-info":[{"award-number":["62377027"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2025,7]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Emotion recognition in conversations has recently emerged as a hot research topic owing to its increasingly important role in developing intelligent empathy services. Thoroughly exploring the conversational context and accurately capturing emotion-shift information are highly crucial for accurate emotion recognition in conversations. However, existing studies generally failed to fully understand the complex conversational context due to their insufficient capabilities in extracting and integrating emotional cues. Moreover, they mainly focused on the speaker\u2019s emotion inertia while paying less attention to explore multi-perspective emotion-shift patterns. To address these limitations, this study proposes a novel multimodal approach, namely, GAT-CRESA (Graph ATtention based on Contextual Reasoning and Emotion-Shift Awareness). Specifically, the multi-turn global contextual reasoning module iteratively performs contextual perception and cognitive reasoning for efficiently understanding the global conversational context. Then, GAT-CRESA explores emotion-shift information among utterances from both the speaker-dependent and the global context-based perspectives. Next, the emotion-shift awareness graphs are constructed for extracting significant local-level conversational context, where edge relations are determined by the learnt emotion-shift labels. Finally, the outputs of graphs are concatenated for final emotion recognition. The loss of emotion prediction task is combined together with those of two perspective\u2019s emotion-shift learning for guiding the training process. Experimental results show that our GAT-CRESA achieves new state-of-art records with 72.77% ACC and 72.81% wa-F1 on IEMOCAP, and 65.44% ACC and 65.04% wa-F1 on MELD, respectively. 
The ablation results also indicate the effectiveness and rationality of each component in our approach.<\/jats:p>","DOI":"10.1007\/s40747-025-01903-y","type":"journal-article","created":{"date-parts":[[2025,5,12]],"date-time":"2025-05-12T07:13:48Z","timestamp":1747034028000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Graph attention based on contextual reasoning and emotion-shift awareness for emotion recognition in conversations"],"prefix":"10.1007","volume":"11","author":[{"given":"Juan","family":"Yang","sequence":"first","affiliation":[]},{"given":"Puling","family":"Wei","sequence":"additional","affiliation":[]},{"given":"Xu","family":"Du","sequence":"additional","affiliation":[]},{"given":"Jun","family":"Shen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,5,12]]},"reference":[{"issue":"9","key":"1903_CR1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jksuci.2023.101791","volume":"35","author":"J Yang","year":"2023","unstructured":"Yang J, Dong X, Du X (2023) SMFNM: Semi-supervised multimodal fusion network with main-modal for real-time emotion recognition in conversations. J King Saud Univ-Com 35(9):101791","journal-title":"J King Saud Univ-Com"},{"key":"1903_CR2","doi-asserted-by":"publisher","first-page":"50","DOI":"10.1016\/j.inffus.2020.06.011","volume":"64","author":"YK Ma","year":"2020","unstructured":"Ma YK, Nguyen KL, Xing FZ, Cambria E (2020) A survey on empathetic dialogue systems. Inform Fusion 64:50\u201370","journal-title":"Inform Fusion"},{"issue":"11","key":"1903_CR3","doi-asserted-by":"publisher","first-page":"13710","DOI":"10.1007\/s11227-022-04416-4","volume":"78","author":"R Nimmagadda","year":"2022","unstructured":"Nimmagadda R, Arora K, Martin MV (2022) Emotion recognition models for companion robots. J Supercomput 78(11):13710\u201313727","journal-title":"J Supercomput"},{"issue":"10","key":"1903_CR4","doi-asserted-by":"publisher","first-page":"7593","DOI":"10.1007\/s00500-019-04387-4","volume":"24","author":"RZ Cabada","year":"2020","unstructured":"Cabada RZ, Rangel HR, Estrada MLB, Lopez HMC (2020) Hyper parameter optimization in CNN for learning-centered emotion recognition for intelligent tutoring systems. Soft Comput 24(10):7593\u20137602","journal-title":"Soft Comput"},{"key":"1903_CR5","doi-asserted-by":"crossref","unstructured":"Jiao W, Lyu M, King I (2020) Real-time emotion recognition via attention gated hierarchical memory network. In: Proceedings of the AAAI Conference on Artificial Intelligence. pp 8002\u20138009","DOI":"10.1609\/aaai.v34i05.6309"},{"key":"1903_CR6","doi-asserted-by":"publisher","first-page":"73","DOI":"10.1016\/j.neucom.2021.09.057","volume":"467","author":"W Li","year":"2022","unstructured":"Li W, Shao W, Ji SX, Cambria E (2022) BiERU: Bidirectional emotion recurrent unit for conversational sentiment analysis. Neurocomputing 467:73\u201382","journal-title":"Neurocomputing"},{"key":"1903_CR7","doi-asserted-by":"crossref","unstructured":"Shen W, Chen J, Quan X, Xie Z (2021) DialogXL: all-in-one xlnet for multi-party conversation emotion recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence. 
pp 13789\u201313797","DOI":"10.1609\/aaai.v35i15.17625"},{"key":"1903_CR8","doi-asserted-by":"publisher","first-page":"4471","DOI":"10.1109\/TMM.2021.3118881","volume":"24","author":"W Nie","year":"2022","unstructured":"Nie W, Chang R, Ren M, Su Y, Liu A (2022) I-GCN: Incremental graph convolution network for conversation emotion detection. IEEE Trans Multimed 24:4471\u20134481","journal-title":"IEEE Trans Multimed"},{"key":"1903_CR9","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.107751","volume":"236","author":"H Ma","year":"2022","unstructured":"Ma H, Wang J, Lin H, Pan X, Zhang Y, Yang Z (2022) A multi-view network for real-time emotion recognition in conversations. Knowl-based Syst 236:107751","journal-title":"Knowl-based Syst"},{"key":"1903_CR10","doi-asserted-by":"crossref","unstructured":"Hu D, Bao Y N, Wei L W, Zhou W, Hu SL (2023) Supervised adversarial contrastive learning for emotion recognition in conversations. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. pp 10835\u201310852","DOI":"10.18653\/v1\/2023.acl-long.606"},{"issue":"4","key":"1903_CR11","doi-asserted-by":"publisher","first-page":"3164","DOI":"10.1109\/TAFFC.2022.3221749","volume":"14","author":"S Latif","year":"2023","unstructured":"Latif S, Rana R, Khalifa S, Jurdak R, Schuller BW (2023) Multitask learning from augmented auxiliary data for improving speech emotion recognition. IEEE Trans Affect Comput 14(4):3164\u20133176","journal-title":"IEEE Trans Affect Comput"},{"key":"1903_CR12","doi-asserted-by":"crossref","unstructured":"Hu J, Liu Y, Zhao J, Jin Q (2021) MMGCN: multimodal fusion via deep graph convolution network for emotion recognition in conversation. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. pp 5666\u20135675","DOI":"10.18653\/v1\/2021.acl-long.440"},{"key":"1903_CR13","first-page":"2694","volume":"2021","author":"Y Mao","year":"2021","unstructured":"Mao Y, Liu G, Wang X, Gao W, Li X (2021) DialogueTRM: exploring multi-modal emotional dynamics in a conversation. Proc EMNLP 2021:2694\u20132704","journal-title":"Proc EMNLP"},{"key":"1903_CR14","doi-asserted-by":"crossref","unstructured":"Hu D, Hou X, Wei L, Jiang L, Mo Y (2022) MM-DFN: Multimodal dynamic fusion network for emotion recognition in conversations. In: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. pp 7037\u20137041","DOI":"10.1109\/ICASSP43922.2022.9747397"},{"key":"1903_CR15","doi-asserted-by":"crossref","unstructured":"Joshi A, Bhat A, Jain A, Singh A, Modi A (2022) COGMEN: contextualized GNN based multimodal emotion recognition. In: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics. pp 4148\u20134164","DOI":"10.18653\/v1\/2022.naacl-main.306"},{"key":"1903_CR16","doi-asserted-by":"crossref","unstructured":"Hu G, Lin TE, Zhao Y, Lu G, Wu Y, Li Y (2022) UniMSE: Towards unified multimodal sentiment analysis and emotion recognition. In: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp 7837\u20137851","DOI":"10.18653\/v1\/2022.emnlp-main.534"},{"issue":"3","key":"1903_CR17","doi-asserted-by":"publisher","first-page":"1426","DOI":"10.1109\/TAFFC.2020.3005660","volume":"13","author":"S Xing","year":"2020","unstructured":"Xing S, Mai S, Hu H (2020) Adapted dynamic memory network for emotion recognition in conversation. 
IEEE Trans Affect Comput 13(3):1426\u20131439","journal-title":"IEEE Trans Affect Comput"},{"key":"1903_CR18","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2022.108861","volume":"248","author":"Q Gao","year":"2022","unstructured":"Gao Q, Cao B, Guan X, Gu T, Bao X, Wu J et al (2022) Emotion recognition in conversations with emotion shift detection based on multi-task learning. Knowl-based Syst 248:108861","journal-title":"Knowl-based Syst"},{"issue":"4","key":"1903_CR19","doi-asserted-by":"publisher","first-page":"1919","DOI":"10.1109\/TAFFC.2024.3389453","volume":"15","author":"J Li","year":"2024","unstructured":"Li J, Liu YJ, Wang XP, Zeng ZG (2024) CFN-ESA: A cross-modal fusion network with emotion-shift awareness for dialogue emotion recognition. IEEE Trans Affect Comput 15(4):1919\u20131933","journal-title":"IEEE Trans Affect Comput"},{"key":"1903_CR20","doi-asserted-by":"crossref","unstructured":"Majumder N, Poria S, Hazarika D, Mihalcea R, Gelbukh A, Cambria E (2019) Dialoguernn: an attentive rnn for emotion detection in conversations. In: Proceedings of the AAAI Conference on Artificial Intelligence. pp 6818\u20136825","DOI":"10.1609\/aaai.v33i01.33016818"},{"key":"1903_CR21","doi-asserted-by":"crossref","unstructured":"Jiao W, Yang H, King I, Lyu MR (2019) HiGRU: hierarchical gated recurrent units for utterance-level emotion recognition. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. pp 397\u2013406","DOI":"10.18653\/v1\/N19-1037"},{"key":"1903_CR22","doi-asserted-by":"crossref","unstructured":"Ghosal D, Majumder N, Gelbukh A, Mihalcea R, Poria S (2020) COSMIC: commonsense knowledge for emotion identification in conversations. In: Proceedings of the 2020 Conference Empirical Methods in Natural Language Processing. pp 2470\u20132481","DOI":"10.18653\/v1\/2020.findings-emnlp.224"},{"key":"1903_CR23","doi-asserted-by":"crossref","unstructured":"Hazarika D, Poria S, Mihalcea R, Cambria E, Zimmermann R (2018) Icon: interactive conversational memory network for multimodal emotion detection. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. pp 2594\u20132604","DOI":"10.18653\/v1\/D18-1280"},{"key":"1903_CR24","doi-asserted-by":"publisher","first-page":"985","DOI":"10.1109\/TASLP.2021.3049898","volume":"29","author":"Z Lian","year":"2021","unstructured":"Lian Z, Liu B, Tao JH (2021) CTNet: conversational transformer network for emotion recognition. IEEE-ACM Trans Audio Speech Lang Process 29:985\u20131000","journal-title":"IEEE-ACM Trans Audio Speech Lang Process"},{"key":"1903_CR25","doi-asserted-by":"publisher","first-page":"378","DOI":"10.1037\/h0046234","volume":"69","author":"S Schachter","year":"1962","unstructured":"Schachter S, Singer J (1962) Cognitive, social and physiological determinants of emotional state. Psychol Rev 69:378\u2013399","journal-title":"Psychol Rev"},{"key":"1903_CR26","volume-title":"Appraisal processes in emotion: theory, methods, research","author":"KR Scherer","year":"2021","unstructured":"Scherer KR, Schorr A, Johnstone T (2021) Appraisal processes in emotion: theory, methods, research. Oxford University Press"},{"issue":"10","key":"1903_CR27","doi-asserted-by":"publisher","first-page":"454","DOI":"10.1016\/j.tics.2003.08.012","volume":"7","author":"JSBT Evans","year":"2003","unstructured":"Evans JSBT (2003) In two minds: dual-process accounts of reasoning. 
Trends Cog Sci 7(10):454\u2013459","journal-title":"Trends Cog Sci"},{"key":"1903_CR28","doi-asserted-by":"publisher","first-page":"255","DOI":"10.1146\/annurev.psych.59.103006.093629","volume":"59","author":"JSBT Evans","year":"2008","unstructured":"Evans JSBT (2008) Dual-processing accounts of reasoning, judgment, and social cognition. Annual Rev Psychol 59:255\u2013278","journal-title":"Annual Rev Psychol"},{"key":"1903_CR29","doi-asserted-by":"crossref","unstructured":"Ghosal D, Majumder N, Poria S, Chhaya N, Gelbukh A (2019) Dialoguegcn: a graph convolutional neural network for emotion recognition in conversation. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. pp 154\u2013164","DOI":"10.18653\/v1\/D19-1015"},{"key":"1903_CR30","doi-asserted-by":"publisher","first-page":"4422","DOI":"10.1109\/TMM.2021.3117062","volume":"24","author":"MJ Ren","year":"2022","unstructured":"Ren MJ, Huang XD, Li WH, Song D, Nie WZ (2022) LR-GCN: latent relation-aware graph convolutional network for conversational emotion. IEEE Trans Multimedia 24:4422\u20134432","journal-title":"IEEE Trans Multimedia"},{"issue":"4","key":"1903_CR31","doi-asserted-by":"publisher","first-page":"335","DOI":"10.1007\/s10579-008-9076-6","volume":"42","author":"C Busso","year":"2008","unstructured":"Busso C, Bulut M, Lee CC, Kazemzadeh A, Mower E, Kim S et al (2008) IEMOCAP: interactive emotional dyadic motion capture database. Lang Resour Eval 42(4):335\u2013359","journal-title":"Lang Resour Eval"},{"key":"1903_CR32","doi-asserted-by":"crossref","unstructured":"Poria S, Hazarika D, Majumder N, Naik G, Cambria E, Mihalcea R (2019) Meld: a multimodal multi-party dataset for emotion recognition in conversations. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. pp 527\u2013536","DOI":"10.18653\/v1\/P19-1050"},{"key":"1903_CR33","first-page":"140","volume":"21","author":"C Raffel","year":"2020","unstructured":"Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M et al (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res 21:140","journal-title":"J Mach Learn Res"},{"issue":"1","key":"1903_CR34","first-page":"1","volume":"13","author":"Y Zhou","year":"2022","unstructured":"Zhou Y, Zheng H, Huang Z, Hao S, Li D, Zhao J (2022) Graph neural networks: taxonomy, advances and trends. ACM Trans Intel Syst Tec 13(1):1\u201354","journal-title":"ACM Trans Intel Syst Tec"},{"key":"1903_CR35","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2023.126441","volume":"549","author":"X Li","year":"2023","unstructured":"Li X, Sun L, Ling MJ, Peng Y (2023) A survey of graph network based recommendation in social networks. Neurocomputing 549:126441","journal-title":"Neurocomputing"},{"key":"1903_CR36","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2023.127222","volume":"573","author":"J Yang","year":"2024","unstructured":"Yang J, Xu MY, Xiao YL, Du X (2024) AMIFN: aspect-guided multi-view interactions and fusion network for multimodal aspect-based sentiment analysis. Neurocomputing 573:127222","journal-title":"Neurocomputing"},{"key":"1903_CR37","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2020.105578","volume":"194","author":"Y Xie","year":"2020","unstructured":"Xie Y, Yao CY, Gong MG, Chen C, Qin AK (2020) Graph convolutional networks with multi-level coarsening for graph classification. 
Knowl-based Syst 194:105578","journal-title":"Knowl-based Syst"},{"key":"1903_CR38","unstructured":"Velickovic P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y (2018) Graph attention networks. In: Proceedings of the International Conference on Learning Representations. pp 1\u201312 https:\/\/arxiv.org\/pdf\/1710.10903. Accessed 12 July 2024"},{"issue":"1","key":"1903_CR39","doi-asserted-by":"publisher","first-page":"130","DOI":"10.1109\/TAFFC.2023.3261279","volume":"15","author":"J Li","year":"2024","unstructured":"Li J, Wang XP, Lv GQ, Zeng ZG (2024) GA2MIF: graph and attention based two-stage multi-source information fusion for conversational emotion detection. IEEE Trans Affect Comput 15(1):130\u2013143","journal-title":"IEEE Trans Affect Comput"},{"key":"1903_CR40","doi-asserted-by":"publisher","first-page":"77","DOI":"10.1109\/TMM.2023.3260635","volume":"26","author":"J Li","year":"2024","unstructured":"Li J, Wang XP, Lv GQ, Zeng ZG (2024) GraphCFC: a directed graph based cross-modal feature complementation approach for multimodal conversational emotion recognition. IEEE Trans Multimed 26:77\u201389","journal-title":"IEEE Trans Multimed"},{"key":"1903_CR41","doi-asserted-by":"crossref","unstructured":"Tu G, Liang B, Xu RF (2024) A persona-infused cross-task graph network for multimodal emotion recognition with emotion shift detection in conversations. In: Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. pp 2266\u20132270","DOI":"10.1145\/3626772.3657944"},{"key":"1903_CR42","unstructured":"Liu Y, Ott M, Goyal N, Du J, Joshi M, Chen D, et al (2019) Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv: 1907.11692"},{"key":"1903_CR43","unstructured":"Baevski A, Zhou Y, Mohamed A, Auli M (2020) wav2vec 2.0: a framework for self-supervised learning of speech representations. In: Proceedings of the 34th Conference on Neural Information Processing Systems. pp 12449\u201312460"},{"issue":"13","key":"1903_CR44","doi-asserted-by":"publisher","first-page":"2645","DOI":"10.3390\/electronics13132645","volume":"13","author":"PL Wei","year":"2024","unstructured":"Wei PL, Yang J, Xiao YL (2024) Hierarchical cross-modal interaction and fusion network enhanced with self-distillation for emotion recognition in conversations. Electronics 13(13):2645","journal-title":"Electronics"},{"key":"1903_CR45","unstructured":"Brody S, Alon U, Yahav E (2022) How attentive are graph attention networks? In: Proceedings of the International Conference on Learning Representations. https:\/\/arxiv.org\/pdf\/2105.14491. Accessed 12 July 2024"},{"key":"1903_CR46","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"1903_CR47","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, et al (2017) Attention is all you need. arXiv preprint arXiv: 1706.03762v5"},{"key":"1903_CR48","doi-asserted-by":"crossref","unstructured":"Poria S, Cambria E, Hazarika D, Majumder N, Zadeh A, Morency LP (2017) Context-dependent sentiment analysis in user-generated videos. In: Proceedings of the 55th annual meeting of the association for computational linguistics. 
pp 873\u2013883","DOI":"10.18653\/v1\/P17-1081"},{"key":"1903_CR49","doi-asserted-by":"crossref","unstructured":"Hazarika D, Poria S, Zadeh A, Cambria E, Morency LP, Zimmermann R (2018) Conversational memory network for emotion recognition in dyadic dialogue videos. In: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. pp 2122\u20132132","DOI":"10.18653\/v1\/N18-1193"},{"key":"1903_CR50","doi-asserted-by":"crossref","unstructured":"Hu D, Wei L, Huai X (2021) DialogueCRN: contextual reasoning networks for emotion recognition in conversations. In: Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics. pp 7042\u20137052","DOI":"10.18653\/v1\/2021.acl-long.547"},{"issue":"11","key":"1903_CR51","first-page":"2579","volume":"9","author":"L Van der Maaten","year":"2008","unstructured":"Van der Maaten L, Hinton G (2008) Visualizing data using t-sne. J Mach Learn Res 9(11):2579\u20132605","journal-title":"J Mach Learn Res"},{"issue":"10","key":"1903_CR52","doi-asserted-by":"publisher","DOI":"10.1088\/1361-6501\/ad633d","volume":"35","author":"HF Tao","year":"2024","unstructured":"Tao HF, Zheng YC, Wang Y, Qiu JE, Stojanovic V (2024) Enhanced feature extraction YOLO industrial small object detection algorithm based on receptive-field attention and multi-scale features. Meas Sci Technol 35(10):105023","journal-title":"Meas Sci Technol"},{"issue":"9","key":"1903_CR53","doi-asserted-by":"publisher","first-page":"260","DOI":"10.1007\/s10462-024-10888-y","volume":"57","author":"P Kumar","year":"2024","unstructured":"Kumar P (2024) Large language models (LLMs): survey, technical frameworks, and future challenges. Artif Intell Rev 57(9):260","journal-title":"Artif Intell Rev"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01903-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-025-01903-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01903-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T11:09:37Z","timestamp":1750331377000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-025-01903-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,5,12]]},"references-count":53,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2025,7]]}},"alternative-id":["1903"],"URL":"https:\/\/doi.org\/10.1007\/s40747-025-01903-y","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,5,12]]},"assertion":[{"value":"14 July 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 April 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 May 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article 
History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"287"}}