{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,19]],"date-time":"2026-04-19T06:37:23Z","timestamp":1776580643023,"version":"3.51.2"},"reference-count":48,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2023,12,19]],"date-time":"2023-12-19T00:00:00Z","timestamp":1702944000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,12,19]],"date-time":"2023-12-19T00:00:00Z","timestamp":1702944000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62162068"],"award-info":[{"award-number":["62162068"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62172354"],"award-info":[{"award-number":["62172354"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Yunnan Province Ten Thousand Talents Program and Yunling Scholars Special Project","award":["YNWR-YLXZ-2018-022"],"award-info":[{"award-number":["YNWR-YLXZ-2018-022"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Visual sentiment analysis is in great demand as it provides a computational method to recognize sentiment information in abundant visual contents from social media sites. 
Most existing methods use CNNs to extract varying visual attributes for image sentiment prediction, but they fail to comprehensively consider the correlation among visual components and are consequently limited by the receptive field of the convolutional layers. In this work, we propose VSCNet, a Transformer-based visual semantic correlation network for visual sentiment prediction. Specifically, global visual features are captured through an extended attention network stacked with a well-designed, Transformer-like extended attention mechanism. An off-the-shelf object query tool is used to determine local candidates of potential affective regions, by which redundant and noisy visual proposals are filtered out. All candidates considered affective are embedded into a computable semantic space. Finally, a fusion strategy integrates semantic representations and visual features for sentiment analysis. Extensive experiments reveal that our method outperforms previous studies on five annotated public image sentiment datasets without any training tricks. 
More specifically, it achieves 1.8% higher accuracy on FI benchmark compared with other state-of-the-art methods.<\/jats:p>","DOI":"10.1007\/s40747-023-01296-w","type":"journal-article","created":{"date-parts":[[2023,12,19]],"date-time":"2023-12-19T03:02:16Z","timestamp":1702954936000},"page":"2869-2881","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Visual sentiment analysis with semantic correlation enhancement"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0404-6941","authenticated-orcid":false,"given":"Hao","family":"Zhang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2320-229X","authenticated-orcid":false,"given":"Yanan","family":"Liu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2603-436X","authenticated-orcid":false,"given":"Zhaoyu","family":"Xiong","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2340-4502","authenticated-orcid":false,"given":"Zhichao","family":"Wu","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4602-3550","authenticated-orcid":false,"given":"Dan","family":"Xu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,12,19]]},"reference":[{"key":"1296_CR1","doi-asserted-by":"publisher","first-page":"162","DOI":"10.1016\/j.neucom.2020.12.092","volume":"433","author":"A Bhandari","year":"2021","unstructured":"Bhandari A, Pal NR (2021) Can edges help convolution neural networks in emotion recognition? Neurocomputing 433:162\u2013168. https:\/\/doi.org\/10.1016\/j.neucom.2020.12.092","journal-title":"Neurocomputing"},{"key":"1296_CR2","doi-asserted-by":"publisher","unstructured":"Borth D, Chen T, Ji R, Chang SF (2013) Sentibank: large-scale ontology and classifiers for detecting sentiment and emotions in visual content. 
In: Proceedings of the 21st ACM international conference on multimedia, association for computing machinery, New York, NY, USA. pp 459-460. https:\/\/doi.org\/10.1145\/2502081.2502268","DOI":"10.1145\/2502081.2502268"},{"key":"1296_CR3","unstructured":"Chen T, Borth D, Darrell T, Chang S (2014) Deepsentibank: visual sentiment concept classification with deep convolutional neural networks. CoRR abs\/1410.8586. arXiv:1410.8586"},{"key":"1296_CR4","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N (2020) An image is worth $$16\\times 16$$ words: transformers for image recognition at scale. CoRR abs\/2010.11929. arXiv:2010.11929"},{"key":"1296_CR5","doi-asserted-by":"publisher","DOI":"10.1007\/s41095-023-0364-2","author":"MH Guo","year":"2023","unstructured":"Guo MH, Lu CZ, Liu ZN, Cheng MM, Hu SM (2023) Visual attention network. Comp Visual Media. https:\/\/doi.org\/10.1007\/s41095-023-0364-2","journal-title":"Comp Visual Media"},{"key":"1296_CR6","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"1296_CR7","doi-asserted-by":"publisher","unstructured":"He X, Zhang H, Li N, Feng L, Zheng F (2019) A multi-attentive pyramidal model for visual sentiment analysis. In: 2019 international joint conference on neural networks (IJCNN). pp 1\u20138. https:\/\/doi.org\/10.1109\/IJCNN.2019.8852317","DOI":"10.1109\/IJCNN.2019.8852317"},{"key":"1296_CR8","doi-asserted-by":"publisher","first-page":"187","DOI":"10.1016\/j.neucom.2018.02.073","volume":"291","author":"X He","year":"2018","unstructured":"He X, Zhang W (2018) Emotion recognition by assisted learning with convolutional neural networks. Neurocomputing 291:187\u2013194. 
https:\/\/doi.org\/10.1016\/j.neucom.2018.02.073","journal-title":"Neurocomputing"},{"key":"1296_CR9","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp 4700\u20134708","DOI":"10.1109\/CVPR.2017.243"},{"key":"1296_CR10","doi-asserted-by":"publisher","first-page":"3303","DOI":"10.1007\/s40747-021-00526-3","volume":"7","author":"MN Kartheek","year":"2021","unstructured":"Kartheek MN, Prasad MVNK, Bhukya R (2021) Modified chess patterns: handcrafted feature descriptors for facial expression recognition. Complex Intell Syst 7:3303\u20133322. https:\/\/doi.org\/10.1007\/s40747-021-00526-3","journal-title":"Complex Intell Syst"},{"key":"1296_CR11","doi-asserted-by":"publisher","DOI":"10.1145\/3505244","author":"S Khan","year":"2022","unstructured":"Khan S, Naseer M, Hayat M, Zamir SW, Khan FS, Shah M (2022) Transformers in vision: a survey. ACM Comput Surv. https:\/\/doi.org\/10.1145\/3505244","journal-title":"ACM Comput Surv"},{"key":"1296_CR12","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1145\/3065386","volume":"60","author":"A Krizhevsky","year":"2017","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2017) Imagenet classification with deep convolutional neural networks. Commun ACM 60:84\u201390. https:\/\/doi.org\/10.1145\/3065386","journal-title":"Commun ACM"},{"key":"1296_CR13","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV). pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"1296_CR14","doi-asserted-by":"publisher","unstructured":"Machajdik J, Hanbury A (2010) Affective image classification using features inspired by psychology and art theory. 
In: Proceedings of the 18th ACM international conference on multimedia, association for computing machinery, New York, NY, USA. pp 83-92. https:\/\/doi.org\/10.1145\/1873951.1873965","DOI":"10.1145\/1873951.1873965"},{"key":"1296_CR15","doi-asserted-by":"publisher","first-page":"626","DOI":"10.3758\/BF03192732","volume":"37","author":"J Mikels","year":"2005","unstructured":"Mikels J, Fredrickson B, Samanez-Larkin G, Lindberg C, Maglio S, Reuter-Lorenz P (2005) Emotional category data on images from the international affective picture system. Behav Res Methods 37:626\u201330. https:\/\/doi.org\/10.3758\/BF03192732","journal-title":"Behav Res Methods"},{"key":"1296_CR16","doi-asserted-by":"publisher","unstructured":"Ou H, Qing C, Xu X, Jin J (2021) Multi-level context pyramid network for visual sentiment analysis. Sensors 21. https:\/\/www.mdpi.com\/1424-8220\/21\/6\/2136. https:\/\/doi.org\/10.3390\/s21062136","DOI":"10.3390\/s21062136"},{"key":"1296_CR17","doi-asserted-by":"crossref","unstructured":"Peng KC, Chen T, Sadovnik A, Gallagher AC (2015) A mixed bag of emotions: model, predict, and transfer emotion distributions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","DOI":"10.1109\/CVPR.2015.7298687"},{"key":"1296_CR18","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s11063-019-10033-9","volume":"51","author":"T Rao","year":"2020","unstructured":"Rao T, Li X, Xu M (2020) Learning multi-level deep representations for image emotion classification. Neural Process Lett 51:1\u201319. https:\/\/doi.org\/10.1007\/s11063-019-10033-9","journal-title":"Neural Process Lett"},{"key":"1296_CR19","doi-asserted-by":"publisher","first-page":"429","DOI":"10.1016\/j.neucom.2018.12.053","volume":"333","author":"T Rao","year":"2019","unstructured":"Rao T, Li X, Zhang H, Xu M (2019) Multi-level region-based convolutional neural network for image emotion classification. Neurocomputing 333:429\u2013439. 
https:\/\/doi.org\/10.1016\/j.neucom.2018.12.053","journal-title":"Neurocomputing"},{"key":"1296_CR20","doi-asserted-by":"publisher","DOI":"10.1145\/3326335","author":"D She","year":"2019","unstructured":"She D, Sun M, Yang J (2019) Learning discriminative sentiment representation from strongly- and weakly supervised CNNs. ACM Trans Multimedia Comput Commun Appl. https:\/\/doi.org\/10.1145\/3326335","journal-title":"ACM Trans Multimedia Comput Commun Appl"},{"key":"1296_CR21","doi-asserted-by":"publisher","first-page":"1358","DOI":"10.1109\/TMM.2019.2939744","volume":"22","author":"D She","year":"2020","unstructured":"She D, Yang J, Cheng MM, Lai YK, Rosin PL, Wang L (2020) Wscnet: weakly supervised coupled networks for visual sentiment classification and detection. IEEE Trans Multimedia 22:1358\u20131371. https:\/\/doi.org\/10.1109\/TMM.2019.2939744","journal-title":"IEEE Trans Multimedia"},{"key":"1296_CR22","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556"},{"key":"1296_CR23","doi-asserted-by":"publisher","unstructured":"Srinivas A, Lin TY, Parmar N, Shlens J, Abbeel P, Vaswani A (2021) Bottleneck transformers for visual recognition. In: 2021 IEEE\/CVF conference on computer vision and pattern recognition (CVPR). pp 16514\u201316524. https:\/\/doi.org\/10.1109\/CVPR46437.2021.01625","DOI":"10.1109\/CVPR46437.2021.01625"},{"key":"1296_CR24","doi-asserted-by":"crossref","unstructured":"Szegedy C, Ioffe S, Vanhoucke V, Alemi A (2016a) Inception-v4, inception-resnet and the impact of residual connections on learning. In: AAAI conference on artificial intelligence","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"1296_CR25","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016b) Rethinking the inception architecture for computer vision. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2016.308"},{"key":"1296_CR26","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2022.3202765","author":"YH Wu","year":"2022","unstructured":"Wu YH, Liu Y, Zhan X, Cheng MM (2022) P2t: pyramid pooling transformer for scene understanding. IEEE Trans Pattern Anal Mach Intell. https:\/\/doi.org\/10.1109\/TPAMI.2022.3202765","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1296_CR27","doi-asserted-by":"publisher","DOI":"10.1186\/s13640-019-0433-8","author":"H Xiong","year":"2019","unstructured":"Xiong H, Liu Q, Song S, Cai Y (2019) Region-based convolutional neural network using group sparse regularization for image sentiment classification. EURASIP J Image Video Process. https:\/\/doi.org\/10.1186\/s13640-019-0433-8","journal-title":"EURASIP J Image Video Process"},{"key":"1296_CR28","doi-asserted-by":"crossref","unstructured":"Xu L, Wang Z, Wu B, Lui S (2022a) Mdan: Multi-level dependent attention network for visual emotion analysis. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR). pp 9479\u20139488","DOI":"10.1109\/CVPR52688.2022.00926"},{"key":"1296_CR29","doi-asserted-by":"publisher","first-page":"951","DOI":"10.1007\/s40747-022-00841-3","volume":"9","author":"Y Xu","year":"2022","unstructured":"Xu Y, Su H, Ma G, Liu X (2022) A novel dual-modal emotion recognition algorithm with fusing hybrid features of audio signal and speech context. Complex Intell Syst 9:951\u2013963. https:\/\/doi.org\/10.1007\/s40747-022-00841-3","journal-title":"Complex Intell Syst"},{"key":"1296_CR30","doi-asserted-by":"publisher","first-page":"431","DOI":"10.1007\/s00530-020-00656-7","volume":"26","author":"A Yadav","year":"2020","unstructured":"Yadav A, Vishwakarma DK (2020) A deep learning architecture of RA-DLNet for visual sentiment analysis. Multimedia Syst 26:431\u2013451. 
https:\/\/doi.org\/10.1007\/s00530-020-00656-7","journal-title":"Multimedia Syst"},{"key":"1296_CR31","doi-asserted-by":"publisher","first-page":"1691","DOI":"10.1587\/transinf.2020EDP7218","volume":"104","author":"T Yamamoto","year":"2021","unstructured":"Yamamoto T, Takeuchi S, Nakazawa A (2021) Image emotion recognition using visual and semantic features reflecting emotional and similar objects. IEICE Trans Inf Syst 104:1691\u20131701. https:\/\/doi.org\/10.1587\/transinf.2020EDP7218","journal-title":"IEICE Trans Inf Syst"},{"key":"1296_CR32","doi-asserted-by":"publisher","DOI":"10.1007\/s00371-022-02472-8","author":"H Yang","year":"2022","unstructured":"Yang H, Fan Y, Lv G, Liu S, Guo Z (2022) Exploiting emotional concepts for image emotion recognition. Visual Comput. https:\/\/doi.org\/10.1007\/s00371-022-02472-8","journal-title":"Visual Comput"},{"key":"1296_CR33","doi-asserted-by":"publisher","unstructured":"Yang J, Li J, Wang X, Ding Y, Gao X (2021) Stimuli-aware visual emotion analysis. IEEE Trans Image Process 30:7432\u20137445. https:\/\/doi.org\/10.1109\/TIP.2021.3106813. arXiv:2109.01812","DOI":"10.1109\/TIP.2021.3106813"},{"key":"1296_CR34","doi-asserted-by":"crossref","unstructured":"Yang J, She D, Lai YK, Yang MH (2018) Retrieving and classifying affective images via deep metric learning. In: Proceedings of the AAAI conference on artificial intelligence, vol 32. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/11275","DOI":"10.1609\/aaai.v32i1.11275"},{"key":"1296_CR35","doi-asserted-by":"publisher","unstructured":"Yang J, She D, Sun M (2017) Joint image emotion classification and distribution learning via deep convolutional neural network. In: Proceedings of the twenty-sixth international joint conference on artificial intelligence, IJCAI-17, pp 3266\u20133272. 
https:\/\/doi.org\/10.24963\/ijcai.2017\/456","DOI":"10.24963\/ijcai.2017\/456"},{"key":"1296_CR36","doi-asserted-by":"publisher","first-page":"2513","DOI":"10.1109\/TMM.2018.2803520","volume":"20","author":"J Yang","year":"2018","unstructured":"Yang J, She D, Sun M, Cheng MM, Rosin PL, Wang L (2018) Visual sentiment prediction based on automatic discovery of affective regions. IEEE Trans Multimedia 20:2513\u20132525. https:\/\/doi.org\/10.1109\/TMM.2018.2803520","journal-title":"IEEE Trans Multimedia"},{"key":"1296_CR37","doi-asserted-by":"publisher","unstructured":"Yanulevskaya V, van Gemert J, Roth K, Herbold A, Sebe N, Geusebroek J (2008) Emotional valence categorization using holistic image features. In: 2008 15th IEEE international conference on image processing. pp 101\u2013104. https:\/\/doi.org\/10.1109\/ICIP.2008.4711701","DOI":"10.1109\/ICIP.2008.4711701"},{"key":"1296_CR38","doi-asserted-by":"crossref","unstructured":"You Q, Luo J, Jin H, Yang J (2015) Robust image sentiment analysis using progressively trained and domain transferred deep networks. In: Twenty-ninth AAAI conference on artificial intelligence","DOI":"10.1609\/aaai.v29i1.9179"},{"key":"1296_CR39","doi-asserted-by":"crossref","unstructured":"You Q, Luo J, Jin H, Yang J (2016) Building a large scale dataset for image emotion recognition: the fine print and the benchmark. In: Proceedings of the AAAI conference on artificial intelligence, vol 30. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/9987","DOI":"10.1609\/aaai.v30i1.9987"},{"key":"1296_CR40","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-022-07139-y","author":"H Zhang","year":"2022","unstructured":"Zhang H, Xu D, Luo G, He K (2022) Learning multi-level representations for affective image recognition. Neural Comput App. 
https:\/\/doi.org\/10.1007\/s00521-022-07139-y","journal-title":"Neural Comput App"},{"key":"1296_CR41","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-022-07139-y","author":"H Zhang","year":"2022","unstructured":"Zhang H, Xu D, Luo G, He K (2022) Learning multi-level representations for affective image recognition. Neural Comput Appl. https:\/\/doi.org\/10.1007\/s00521-022-07139-y","journal-title":"Neural Comput Appl"},{"key":"1296_CR42","doi-asserted-by":"publisher","unstructured":"Zhang J, Chen M, Sun H, Li D, Wang Z (2020) Object semantics sentiment correlation analysis enhanced image sentiment classification. Knowl Based Syst. https:\/\/doi.org\/10.1016\/j.knosys.2019.105245","DOI":"10.1016\/j.knosys.2019.105245"},{"key":"1296_CR43","doi-asserted-by":"publisher","first-page":"515","DOI":"10.1109\/TMM.2019.2928998","volume":"22","author":"W Zhang","year":"2020","unstructured":"Zhang W, He X, Lu W (2020) Exploring discriminative representations for image emotion recognition with CNNs. IEEE Trans Multimedia 22:515\u2013523. https:\/\/doi.org\/10.1109\/TMM.2019.2928998","journal-title":"IEEE Trans Multimedia"},{"key":"1296_CR44","doi-asserted-by":"publisher","unstructured":"Zhao S (2016) Image emotion computing. In: Proceedings of the 24th ACM international conference on multimedia, association for computing machinery, New York, NY, USA. pp 1435\u20131439. https:\/\/doi.org\/10.1145\/2964284.2971473","DOI":"10.1145\/2964284.2971473"},{"key":"1296_CR45","doi-asserted-by":"publisher","unstructured":"Zhao S, Gao Y, Jiang X, Yao H, Chua TS, Sun X (2014) Exploring principles-of-art features for image emotion recognition. In: Proceedings of the 22nd ACM international conference on multimedia, association for computing machinery, New York, NY, USA. pp 47\u201356. 
https:\/\/doi.org\/10.1145\/2647868.2654930","DOI":"10.1145\/2647868.2654930"},{"key":"1296_CR46","doi-asserted-by":"publisher","unstructured":"Zhao S, Jia Z, Chen H, Li L, Ding G, Keutzer K (2019) Pdanet: polarity-consistent deep attention network for fine-grained visual emotion regression. In: Proceedings of the 27th ACM international conference on multimedia, association for computing machinery, New York, NY, USA. pp 192\u2013201. https:\/\/doi.org\/10.1145\/3343031.3351062","DOI":"10.1145\/3343031.3351062"},{"key":"1296_CR47","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2021.3094362","author":"S Zhao","year":"2021","unstructured":"Zhao S, Yao X, Yang J, Jia G, Ding G, Chua TS, Schuller BW, Keutzer K (2021) Affective image content analysis: two decades review and new perspectives. IEEE Trans Pattern Anal Mach Intell. https:\/\/doi.org\/10.1109\/TPAMI.2021.3094362","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1296_CR48","doi-asserted-by":"crossref","unstructured":"Zhu X, Li L, Zhang W, Rao T, Xu M, Huang Q, Xu D (2017) Dependency exploitation: a unified CNN\u2013RNN approach for visual emotion recognition. In: Proceedings of the 26th international joint conference on artificial intelligence. AAAI Press, Washington, DC. 
pp 3595\u20133601","DOI":"10.24963\/ijcai.2017\/503"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01296-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01296-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01296-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,3,30]],"date-time":"2024-03-30T15:36:06Z","timestamp":1711812966000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01296-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,12,19]]},"references-count":48,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,4]]}},"alternative-id":["1296"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01296-w","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,12,19]]},"assertion":[{"value":"10 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 November 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 December 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}