{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T00:05:11Z","timestamp":1773273911058,"version":"3.50.1"},"reference-count":65,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T00:00:00Z","timestamp":1742860800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T00:00:00Z","timestamp":1742860800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Science and Technology Development Plan of Jilin Provincial Department of Science and Technology","award":["20220508145RC"],"award-info":[{"award-number":["20220508145RC"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2025,5]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>To address the degradation of perceptual quality in generated images caused by the special optical characteristics of water bodies, this paper proposes the DifSG2-CCL model, which mitigates these optical effects, and the DPL-SG2 model, which introduces a perceptual loss. Combining the ideas of cycle consistency and style transfer, this paper builds the Underwater Cycle Consistency Loss (U-CCL) module. The DifSG2-CCL model is based on image reconstruction: it converts underwater images into the style of land images to reduce the influence of water-body factors. VGG16 is introduced as a perceptual loss into DPL-SG2 to enhance the visual perception of images through feature extraction at different layers with tonal weighting. 
Furthermore, in addition to the previously released SA dataset, this paper provides a T dataset of 9,366 images at a resolution of 256\u2009\u00d7\u2009256. The experimental results show that DifSG2-CCL and DPL-SG2 effectively enhance the perceptual quality of the generated images. DifSG2-CCL produces excellent qualitative results for underwater image generation and reduces the FID to 8.97. DPL-SG2 performs even better when trained on the T dataset, reducing the FID to 5.39. Therefore, U-CCL and VGG16 can serve as an innovative approach to enhancing the visual perception of underwater images. The experimental code with pre-trained models will be published shortly at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/yff0428\/DPL-SG2\/tree\/main\" ext-link-type=\"uri\">https:\/\/github.com\/yff0428\/DPL-SG2\/tree\/main<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s40747-025-01832-w","type":"journal-article","created":{"date-parts":[[2025,3,27]],"date-time":"2025-03-27T11:13:37Z","timestamp":1743074017000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":26,"title":["A study of enhanced visual perception of marine biology images based on diffusion-GAN"],"prefix":"10.1007","volume":"11","author":[{"given":"Feifan","family":"Yao","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3902-3213","authenticated-orcid":false,"given":"Huiying","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Yifei","family":"Gong","sequence":"additional","affiliation":[]},{"given":"Qinghua","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Pan","family":"Xiao","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,25]]},"reference":[{"key":"1832_CR1","unstructured":"Sohl-Dickstein J, Weiss E, Maheswaranathan N, Ganguli S (2015) 
Deep unsupervised learning using nonequilibrium thermodynamics. In: International conference on machine learning. PMLR, pp 2256\u20132265"},{"key":"1832_CR2","doi-asserted-by":"crossref","unstructured":"Zhao W, Rao Y, Liu Z, Liu B, Zhou J, Lu J (2023) Unleashing text-to-image diffusion models for visual perception. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 5729\u20135739","DOI":"10.1109\/ICCV51070.2023.00527"},{"key":"1832_CR3","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. Adv Neural Inf Process Syst 27. https:\/\/arxiv.org\/pdf\/1406.2661"},{"key":"1832_CR4","unstructured":"Brock AJ (2018) Large scale GAN training for high fidelity natural image synthesis"},{"key":"1832_CR5","unstructured":"Wang Z, Zheng H, He P, Chen W, Zhou M (2022) Diffusion-GAN: training GANs with diffusion. https:\/\/arxiv.org\/abs\/2206.02262"},{"key":"1832_CR6","doi-asserted-by":"crossref","unstructured":"Zhang H, Yao F, Gong Y, Zhang Q (2024) Marine biology image generation based on diffusion-Stylegan2","DOI":"10.1109\/ACCESS.2024.3369234"},{"key":"1832_CR7","unstructured":"Arjovsky M, Chintala S, Bottou L (2017) Wasserstein generative adversarial networks. In: International conference on machine learning. PMLR, pp 214\u2013223"},{"key":"1832_CR8","unstructured":"Simonyan K (2014) Very deep convolutional networks for large-scale image recognition. https:\/\/arxiv.org\/pdf\/1409.1556v6"},{"key":"1832_CR9","doi-asserted-by":"publisher","first-page":"115","DOI":"10.1007\/BF02478259","volume":"5","author":"WS McCulloch","year":"1943","unstructured":"McCulloch WS, Pitts W (1943) A logical calculus of the ideas immanent in nervous activity. 
Bull Math Biophys 5:115\u2013133","journal-title":"Bull Math Biophys"},{"key":"1832_CR10","doi-asserted-by":"publisher","first-page":"2971","DOI":"10.1007\/s13369-017-3034-9","volume":"43","author":"R Kumar","year":"2018","unstructured":"Kumar R, Srivastava S, Gupta J (2018) Comparative study of neural networks for control of nonlinear dynamical systems with Lyapunov stability-based adaptive learning rates. Arab J Sci Eng 43:2971\u20132993","journal-title":"Arab J Sci Eng"},{"key":"1832_CR11","doi-asserted-by":"crossref","unstructured":"Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 4401\u20134410. https:\/\/arxiv.org\/pdf\/1812.04948v3","DOI":"10.1109\/CVPR.2019.00453"},{"key":"1832_CR12","unstructured":"Karras T (2017) Progressive growing of GANs for improved quality, stability, and variation"},{"key":"1832_CR13","unstructured":"Karras T, Laine S, Aittala M, Hellsten J, Lehtinen J, Aila T (2020) Analyzing and improving the image quality of Stylegan. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 8110\u20138119. https:\/\/arxiv.org\/pdf\/1912.04958v2"},{"key":"1832_CR14","first-page":"12104","volume":"33","author":"T Karras","year":"2020","unstructured":"Karras T, Aittala M, Hellsten J, Laine S, Lehtinen J, Aila T (2020) Training generative adversarial networks with limited data. Adv Neural Inf Process Syst 33:12104\u201312114","journal-title":"Adv Neural Inf Process Syst"},{"key":"1832_CR15","first-page":"6840","volume":"33","author":"J Ho","year":"2020","unstructured":"Ho J, Jain A, Abbeel P (2020) Denoising diffusion probabilistic models. 
Adv Neural Inf Process Syst 33:6840\u20136851","journal-title":"Adv Neural Inf Process Syst"},{"key":"1832_CR16","doi-asserted-by":"publisher","first-page":"7875","DOI":"10.1007\/s00521-020-05526-x","volume":"33","author":"R Kumar","year":"2021","unstructured":"Kumar R, Srivastava S (2021) A novel dynamic recurrent functional link neural network-based identification of nonlinear systems using Lyapunov stability analysis. Neural Comput Appl 33:7875\u20137892","journal-title":"Neural Comput Appl"},{"key":"1832_CR17","first-page":"852","volume":"34","author":"T Karras","year":"2021","unstructured":"Karras T, Aittala M, Laine S, H\u00e4rk\u00f6nen E, Hellsten J, Lehtinen J, Aila T (2021) Alias-free generative adversarial networks. Adv Neural Inf Process Syst 34:852\u2013863","journal-title":"Adv Neural Inf Process Syst"},{"key":"1832_CR18","unstructured":"Ramesh A, Pavlov M, Goh G, Gray S, Voss C, Radford A, Chen M, Sutskever I (2021) Zero-shot text-to-image generation. In: International conference on machine learning. PMLR, pp 8821\u20138831"},{"key":"1832_CR19","unstructured":"Vaswani A (2017) Attention is all you need. Adv Neural Inf Process Syst"},{"key":"1832_CR20","doi-asserted-by":"publisher","first-page":"87","DOI":"10.1109\/TPAMI.2022.3152247","volume":"45","author":"K Han","year":"2022","unstructured":"Han K, Wang Y, Chen H, Chen X, Guo J, Liu Z, Tang Y, Xiao A, Xu C, Xu Y (2022) A survey on vision transformer. IEEE Trans Pattern Anal Mach Intell 45:87\u2013110","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1832_CR21","unstructured":"Ramesh A, Dhariwal P, Nichol A, Chu C, Chen M (2022) Hierarchical text-conditional image generation with clip latents 1:3"},{"key":"1832_CR22","doi-asserted-by":"crossref","unstructured":"Sauer A, Schwarz K, Geiger A (2022) StyleGAN-xl: scaling styleGAN to large diverse datasets. 
In: ACM SIGGRAPH 2022 conference proceedings, pp 1\u201310","DOI":"10.1145\/3528233.3530738"},{"key":"1832_CR23","first-page":"17480","volume":"34","author":"A Sauer","year":"2021","unstructured":"Sauer A, Chitta K, M\u00fcller J, Geiger A (2021) Projected gans converge faster. Adv Neural Inf Process Syst 34:17480\u201317492","journal-title":"Adv Neural Inf Process Syst"},{"key":"1832_CR24","doi-asserted-by":"crossref","unstructured":"Bolya D, Hoffman J (2023) Token merging for fast stable diffusion. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 4599\u20134603","DOI":"10.1109\/CVPRW59228.2023.00484"},{"key":"1832_CR25","doi-asserted-by":"crossref","unstructured":"Rombach R, Blattmann A, Lorenz D, Esser P, Ommer B (2022) High-resolution image synthesis with latent diffusion models. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 10684\u201310695","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"1832_CR26","first-page":"36479","volume":"35","author":"C Saharia","year":"2022","unstructured":"Saharia C, Chan W, Saxena S, Li L, Whang J, Denton EL, Ghasemipour K, Gontijo Lopes R, KaragolAyan B, Salimans T (2022) Photorealistic text-to-image diffusion models with deep language understanding. Adv Neural Inf Process Syst 35:36479\u201336494","journal-title":"Adv Neural Inf Process Syst"},{"key":"1832_CR27","unstructured":"Betker J, Goh G, Jing L, Brooks T, Wang J, Li L, Ouyang L, Zhuang J, Lee J, Guo Y (2023) Improving image generation with better captions 2:8"},{"key":"1832_CR28","unstructured":"Sauer A, Karras T, Laine S, Geiger A, Aila T (2023) Stylegan-t: unlocking the power of GANs for fast large-scale text-to-image synthesis. In: International conference on machine learning. PMLR, pp 30105\u201330118"},{"key":"1832_CR29","doi-asserted-by":"crossref","unstructured":"Peebles W, Xie S (2023) Scalable diffusion models with transformers. 
In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 4195\u20134205","DOI":"10.1109\/ICCV51070.2023.00387"},{"key":"1832_CR30","unstructured":"Wu K, Liu F, Cai Z, Yan R, Wang H, Hu Y, Duan Y, Ma K (2024) Unique3D: high-quality and efficient 3D mesh generation from a single image"},{"key":"1832_CR31","unstructured":"Shi Y, Wang P, Ye J, Long M, Li K, Yang X (2023) MVdream: multi-view diffusion for 3D generation"},{"key":"1832_CR32","doi-asserted-by":"crossref","unstructured":"Huang X, Shao R, Zhang Q, Zhang H, Feng Y, Liu Y, Wang Q (2024) Humannorm: learning normal diffusion model for high-quality and realistic 3D human generation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 4568\u20134577","DOI":"10.1109\/CVPR52733.2024.00437"},{"key":"1832_CR33","doi-asserted-by":"crossref","unstructured":"Yuan Y, Wang Y, Wang L, Zhao X, Lu H, Wang Y, Su W, Zhang L (2023) Isomer: Isomerous transformer for zero-shot video object segmentation. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 966\u2013976","DOI":"10.1109\/ICCV51070.2023.00095"},{"key":"1832_CR34","doi-asserted-by":"crossref","unstructured":"Wang Z, Zhao L, Xing W (2023) Stylediffusion: controllable disentangled style transfer via diffusion models. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 7677\u20137689","DOI":"10.1109\/ICCV51070.2023.00706"},{"key":"1832_CR35","unstructured":"Radford A, Kim JW, Hallacy C, Ramesh A, Goh G, Agarwal S, Sastry G, Askell A, Mishkin P, Clark J (2021) Learning transferable visual models from natural language supervision. In: International conference on machine learning. 
PMLR, pp 8748\u20138763"},{"key":"1832_CR36","doi-asserted-by":"publisher","first-page":"646","DOI":"10.1109\/TASLP.2022.3145297","volume":"30","author":"X An","year":"2022","unstructured":"An X, Soong FK, Xie L (2022) Disentangling style and speaker attributes for TTS style transfer. IEEE\/ACM Trans Audio Speech Lang Process 30:646\u2013658","journal-title":"IEEE\/ACM Trans Audio Speech Lang Process"},{"key":"1832_CR37","unstructured":"Gatys LA (2015) A neural algorithm of artistic style. https:\/\/arxiv.org\/pdf\/1508.06576v2"},{"key":"1832_CR38","doi-asserted-by":"crossref","unstructured":"Huang X, Belongie S (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In: Proceedings of the IEEE international conference on computer vision, pp 1501\u20131510","DOI":"10.1109\/ICCV.2017.167"},{"key":"1832_CR39","doi-asserted-by":"crossref","unstructured":"Qi T, Fang S, Wu Y, Xie H, Liu J, Chen L, He Q, Zhang Y (2024) DEADiff: an efficient stylization diffusion model with disentangled representations. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 8693\u20138702","DOI":"10.1109\/CVPR52733.2024.00830"},{"key":"1832_CR40","doi-asserted-by":"crossref","unstructured":"Zhu J-Y, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision, pp 2223\u20132232","DOI":"10.1109\/ICCV.2017.244"},{"key":"1832_CR41","doi-asserted-by":"crossref","unstructured":"Yuan Y, Liu S, Zhang J, Zhang Y, Dong C, Lin L (2018) Unsupervised image super-resolution using cycle-in-cycle generative adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 701\u2013710","DOI":"10.1109\/CVPRW.2018.00113"},{"key":"1832_CR42","unstructured":"Bahdanau D (2014) Neural machine translation by jointly learning to align and translate. 
https:\/\/arxiv.org\/pdf\/1409.0473v7"},{"key":"1832_CR43","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B (2021) Swin transformer: hierarchical vision transformer using shifted windows. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"1832_CR44","unstructured":"Mirza M (2014) Conditional generative adversarial nets. https:\/\/arxiv.org\/pdf\/1411.1784v1"},{"key":"1832_CR45","first-page":"21357","volume":"33","author":"M Kang","year":"2020","unstructured":"Kang M, Park J (2020) Contragan: contrastive learning for conditional image generation. Adv Neural Inf Process Syst 33:21357\u201321369","journal-title":"Adv Neural Inf Process Syst"},{"key":"1832_CR46","doi-asserted-by":"publisher","first-page":"7864","DOI":"10.1109\/ACCESS.2023.3344666","volume":"12","author":"H Zhang","year":"2023","unstructured":"Zhang H, Gong Y, Yao F, Zhang Q (2023) Research on real-time detection algorithm for pedestrian and vehicle in foggy weather based on lightweight XM-YOLOViT. IEEE Access 12:7864\u20137883","journal-title":"IEEE Access"},{"key":"1832_CR47","doi-asserted-by":"crossref","unstructured":"Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: Computer vision\u2014ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11\u201314, 2016, Proceedings, Part II 14. Springer, pp 694\u2013711. https:\/\/arxiv.org\/pdf\/1603.08155v1","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"1832_CR48","doi-asserted-by":"publisher","first-page":"295","DOI":"10.1109\/TPAMI.2015.2439281","volume":"38","author":"C Dong","year":"2015","unstructured":"Dong C, Loy CC, He K, Tang X (2015) Image super-resolution using deep convolutional networks. 
IEEE Trans Pattern Anal Mach Intell 38:295\u2013307","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1832_CR49","doi-asserted-by":"crossref","unstructured":"Kupyn O, Budzan V, Mykhailych M, Mishkin D, Matas J (2018) Deblurgan: blind motion deblurring using conditional adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8183\u20138192","DOI":"10.1109\/CVPR.2018.00854"},{"key":"1832_CR50","doi-asserted-by":"crossref","unstructured":"Luan F, Paris S, Shechtman E, Bala K (2017) Deep photo style transfer. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4990\u20134998","DOI":"10.1109\/CVPR.2017.740"},{"key":"1832_CR51","doi-asserted-by":"crossref","unstructured":"Wang X, Yu K, Wu S, Gu J, Liu Y, Dong C, Qiao Y, Change Loy C (2018) ESRGAN: enhanced super-resolution generative adversarial networks. Proceedings of the European conference on computer vision (ECCV) workshops","DOI":"10.1007\/978-3-030-11021-5_5"},{"key":"1832_CR52","doi-asserted-by":"crossref","unstructured":"Ledig C, Theis L, Husz\u00e1r F, Caballero J, Cunningham A, Acosta A, Aitken A, Tejani A, Totz J, Wang Z (2017) Photo-realistic single image super-resolution using a generative adversarial network. Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4681\u20134690","DOI":"10.1109\/CVPR.2017.19"},{"key":"1832_CR53","doi-asserted-by":"publisher","first-page":"107249","DOI":"10.1016\/j.patcog.2020.107249","volume":"102","author":"Y Fang","year":"2020","unstructured":"Fang Y, Deng W, Du J, Hu J (2020) Identity-aware CycleGAN for face photo-sketch synthesis and recognition. 
Pattern Recognit 102:107249","journal-title":"Pattern Recognit"},{"key":"1832_CR54","doi-asserted-by":"publisher","first-page":"1747","DOI":"10.3390\/s22051747","volume":"22","author":"A Jabbar","year":"2022","unstructured":"Jabbar A, Li X, Assam M, Khan JA, Obayya M, Alkhonaini MA, Al-Wesabi FN, Assad M (2022) AFD-StackGAN: automatic mask generation network for face de-occlusion using StackGAN. Sensors 22:1747","journal-title":"Sensors"},{"key":"1832_CR55","doi-asserted-by":"publisher","first-page":"3227","DOI":"10.1109\/LRA.2020.2974710","volume":"5","author":"MJ Islam","year":"2020","unstructured":"Islam MJ, Xia Y, Sattar J (2020) Fast underwater image enhancement for improved visual perception. IEEE Robot Autom Lett 5:3227\u20133234","journal-title":"IEEE Robot Autom Lett"},{"key":"1832_CR56","doi-asserted-by":"crossref","unstructured":"Adam L, \u010cerm\u00e1k V, Papafitsoros K, Picek L (2024) SeaTurtleID2022: a long-span dataset for reliable sea turtle re-identification. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp 7146\u20137156","DOI":"10.1109\/WACV57701.2024.00699"},{"key":"1832_CR57","doi-asserted-by":"crossref","unstructured":"Ronneberger O, Fischer P, Brox T (2015) U-net: Convolutional networks for biomedical image segmentation. Medical image computing and computer-assisted intervention\u2014MICCAI 2015: 18th international conference, Munich, Germany, October 5\u20139, 2015, proceedings, part III 18. Springer, pp 234\u2013241. https:\/\/arxiv.org\/pdf\/1505.04597v1","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"1832_CR58","unstructured":"Lafferty J, McCallum A, Pereira F (2001) Conditional random fields: probabilistic models for segmenting and labeling sequence data. ICML. Williamstown, MA, p 3"},{"key":"1832_CR59","unstructured":"Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. 
Adv Neural Inf Process Syst 30. https:\/\/arxiv.org\/pdf\/1706.08500v6"},{"key":"1832_CR60","unstructured":"Bi\u0144kowski M, Sutherland DJ, Arbel M, Gretton A (2018) Demystifying MMD GANs"},{"key":"1832_CR61","unstructured":"Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X (2016) Improved techniques for training GANs. Adv Neural Inf Process Syst 29"},{"key":"1832_CR62","unstructured":"Barratt S, Sharma R (2018) A note on the inception score"},{"key":"1832_CR63","unstructured":"Sajjadi MS, Bachem O, Lucic M, Bousquet O, Gelly S (2018) Assessing generative models via precision and recall. Adv Neural Inf Process Syst 31"},{"key":"1832_CR64","unstructured":"Kynk\u00e4\u00e4nniemi T, Karras T, Laine S, Lehtinen J, Aila T (2019) Improved precision and recall metric for assessing generative models. Adv Neural Inf Process Syst 32"},{"key":"1832_CR65","unstructured":"Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville AC (2017) Improved training of Wasserstein GANs. Adv Neural Inf Process Syst 30"}],"container-title":["Complex &amp; Intelligent 
Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01832-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-025-01832-w\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-025-01832-w.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,4,29]],"date-time":"2025-04-29T10:37:28Z","timestamp":1745923048000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-025-01832-w"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,25]]},"references-count":65,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2025,5]]}},"alternative-id":["1832"],"URL":"https:\/\/doi.org\/10.1007\/s40747-025-01832-w","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,25]]},"assertion":[{"value":"26 July 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 February 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 March 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}],"article-number":"227"}}