{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,18]],"date-time":"2026-01-18T06:10:03Z","timestamp":1768716603492,"version":"3.49.0"},"reference-count":55,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,6,30]],"date-time":"2023-06-30T00:00:00Z","timestamp":1688083200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,6,30]],"date-time":"2023-06-30T00:00:00Z","timestamp":1688083200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["11673009"],"award-info":[{"award-number":["11673009"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>To address the problems of boundary blurring and information loss in multi-focus image fusion methods based on generated decision maps, this paper proposes a new gradient-intensity joint proportional constraint generative adversarial network for multi-focus image fusion, named GIPC-GAN. First, a labeled multi-focus image dataset is constructed from a public dataset using the deep region competition algorithm. This allows the network to be trained and fused images to be generated in an end-to-end manner, while avoiding the boundary errors caused by manually constructed decision maps. 
Second, the most meaningful information in the multi-focus image fusion task is defined as the target intensity and detail gradient, and a jointly constrained loss function based on proportional maintenance of intensity and gradient is proposed. This loss function forces the generated image to retain the target intensity, global texture, and local texture of the source images as much as possible, and to maintain structural consistency between the fused image and the source images. Third, GAN is introduced into the network, establishing an adversarial game between the generator and the discriminator so that the intensity structure and texture gradient retained by the fused image are kept in balance and the detailed information of the fused image is further enhanced. Finally, experiments are conducted on two public multi-focus datasets and a multi-source multi-focus image sequence dataset, and the results are compared with those of 7 other state-of-the-art algorithms. 
The experimental results show that the images fused by the GIPC-GAN model are superior to those of the other comparison algorithms in both subjective performance and objective measurement, and essentially meet the requirements of real-time image fusion in terms of running efficiency and model parameter count.<\/jats:p>","DOI":"10.1007\/s40747-023-01151-y","type":"journal-article","created":{"date-parts":[[2023,6,30]],"date-time":"2023-06-30T04:01:40Z","timestamp":1688097700000},"page":"7395-7422","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["GIPC-GAN: an end-to-end gradient and intensity joint proportional constraint generative adversarial network for multi-focus image fusion"],"prefix":"10.1007","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1996-1840","authenticated-orcid":false,"given":"Junwu","family":"Li","sequence":"first","affiliation":[]},{"given":"Binhua","family":"Li","sequence":"additional","affiliation":[]},{"given":"Yaoxi","family":"Jiang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,6,30]]},"reference":[{"key":"1151_CR1","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/j.inffus.2016.05.004","volume":"33","author":"S Li","year":"2017","unstructured":"Li S, Kang X, Fang L et al (2017) Pixel-level image fusion: a survey of the state of the art. Inform Fusion 33:100\u2013112","journal-title":"Inform Fusion"},{"key":"1151_CR2","doi-asserted-by":"crossref","first-page":"40","DOI":"10.1016\/j.inffus.2020.08.022","volume":"66","author":"H Zhang","year":"2021","unstructured":"Zhang H, Le Z, Shao Z et al (2021) MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion. 
Inform Fusion 66:40\u201353","journal-title":"Inform Fusion"},{"key":"1151_CR3","doi-asserted-by":"crossref","first-page":"323","DOI":"10.1016\/j.inffus.2021.06.008","volume":"76","author":"H Zhang","year":"2021","unstructured":"Zhang H, Xu H, Tian X et al (2021) Image fusion meets deep learning: a survey and perspective. Inform Fusion 76:323\u2013336","journal-title":"Inform Fusion"},{"key":"1151_CR4","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1016\/j.inffus.2018.02.004","volume":"45","author":"J Ma","year":"2019","unstructured":"Ma J, Ma Y, Li C (2019) Infrared and visible image fusion methods and applications: a survey. Information Fusion 45:153\u2013178","journal-title":"Information Fusion"},{"issue":"12","key":"1151_CR5","doi-asserted-by":"crossref","first-page":"2379","DOI":"10.3390\/diagnostics11122379","volume":"11","author":"Y Dai","year":"2021","unstructured":"Dai Y, Song Y, Liu W et al (2021) Multi-focus image fusion based on convolution neural network for Parkinson\u2019s disease image classification. Diagnostics 11(12):2379","journal-title":"Diagnostics"},{"key":"1151_CR6","doi-asserted-by":"crossref","DOI":"10.1016\/j.patcog.2022.108673","volume":"128","author":"H Basak","year":"2022","unstructured":"Basak H, Kundu R, Sarkar R (2022) MFSNet: a multi focus segmentation network for skin lesion segmentation. Pattern Recogn 128:108673","journal-title":"Pattern Recogn"},{"issue":"1","key":"1151_CR7","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s13721-021-00348-w","volume":"11","author":"D Liu","year":"2022","unstructured":"Liu D, Teng W (2022) Deep learning-based image target detection and recognition of fractal feature fusion for BIOmetric authentication and monitoring. 
Netw Model Anal Health Inform Bioinform 11(1):1\u201314","journal-title":"Netw Model Anal Health Inform Bioinform"},{"issue":"5","key":"1151_CR8","doi-asserted-by":"crossref","first-page":"2179","DOI":"10.1007\/s40747-021-00428-4","volume":"7","author":"AE Ilesanmi","year":"2021","unstructured":"Ilesanmi AE, Ilesanmi TO (2021) Methods for image denoising using convolutional neural network: a review. Complex Intell Syst 7(5):2179\u20132198","journal-title":"Complex Intell Syst"},{"key":"1151_CR9","doi-asserted-by":"crossref","unstructured":"Saleem S, Amin J, Sharif M, et al (2022) A deep network designed for segmentation and classification of leukemia using fusion of the transfer learning models, Complex Intell Syst 8:3105\u20133120","DOI":"10.1007\/s40747-021-00473-z"},{"key":"1151_CR10","doi-asserted-by":"crossref","unstructured":"Li D, Peng Y, Guo Y, et al (2022) TAUNet: a triple-attention-based multi-modality MRI fusion U-Net for cardiac pathology segmentation. Complex Intell Syst 8:2489\u20132505","DOI":"10.1007\/s40747-022-00660-6"},{"key":"1151_CR11","volume":"198","author":"J Wang","year":"2022","unstructured":"Wang J, Qu H, Wei Y et al (2022) Multi-focus image fusion based on quad-tree decomposition and edge-weighted focus measure. Signal Process 198:108590","journal-title":"Signal Process"},{"key":"1151_CR12","doi-asserted-by":"crossref","unstructured":"Ma L, Hu Y, Zhang B et al (2023) A new multi-focus image fusion method based on multi-classification focus learning and multi-scale decomposition. Appl Intell 53:1452\u20131468","DOI":"10.1007\/s10489-022-03658-2"},{"key":"1151_CR13","volume":"96","author":"Y Wang","year":"2021","unstructured":"Wang Y, Xu S, Liu J et al (2021) MFIF-GAN: A new generative adversarial network for multi-focus image fusion. 
Signal Process Image Commun 96:116295","journal-title":"Signal Process Image Commun"},{"key":"1151_CR14","doi-asserted-by":"crossref","first-page":"71","DOI":"10.1016\/j.inffus.2020.06.013","volume":"64","author":"Y Liu","year":"2020","unstructured":"Liu Y, Wang L, Cheng J et al (2020) Multi-focus image fusion: a survey of the state of the art. Information Fusion 64:71\u201391","journal-title":"Information Fusion"},{"issue":"4","key":"1151_CR15","doi-asserted-by":"crossref","first-page":"727","DOI":"10.1007\/s11760-018-1402-x","volume":"13","author":"Y Zhang","year":"2019","unstructured":"Zhang Y, Wei W, Yuan Y (2019) Multi-focus image fusion with alternating guided filtering. SIViP 13(4):727\u2013735","journal-title":"SIViP"},{"key":"1151_CR16","doi-asserted-by":"crossref","first-page":"35","DOI":"10.1016\/j.image.2018.12.004","volume":"72","author":"X Qiu","year":"2019","unstructured":"Qiu X, Li M, Zhang L et al (2019) Guided filter-based multi-focus image fusion through focus region detection. Signal Process Image Commun 72:35\u201346","journal-title":"Signal Process Image Commun"},{"issue":"11","key":"1151_CR17","doi-asserted-by":"crossref","first-page":"5636","DOI":"10.1109\/TIP.2019.2922097","volume":"28","author":"O Bouzos","year":"2019","unstructured":"Bouzos O, Andreadis I, Mitianoudis N (2019) Conditional random field model for robust multi-focus image fusion. IEEE Trans Image Process 28(11):5636\u20135648","journal-title":"IEEE Trans Image Process"},{"issue":"2","key":"1151_CR18","doi-asserted-by":"crossref","first-page":"2847","DOI":"10.1007\/s11042-020-09647-2","volume":"80","author":"Z Zhang","year":"2021","unstructured":"Zhang Z, Xi X, Luo X et al (2021) Multimodal image fusion based on global-regional-local rule in NSST domain. 
Multimed Tools Appl 80(2):2847\u20132873","journal-title":"Multimed Tools Appl"},{"key":"1151_CR19","volume":"184","author":"X Li","year":"2021","unstructured":"Li X, Zhou F, Tan H et al (2021) Multi-focus image fusion based on nonsubsampled contourlet transform and residual removal. Signal Process 184:108062","journal-title":"Signal Process"},{"key":"1151_CR20","doi-asserted-by":"crossref","first-page":"179857","DOI":"10.1109\/ACCESS.2020.3028088","volume":"8","author":"L Junwu","year":"2020","unstructured":"Junwu L, Li B, Jiang Y (2020) An infrared and visible image fusion algorithm based on LSWT-NSST. IEEE Access 8:179857\u2013179880","journal-title":"IEEE Access"},{"issue":"3","key":"1151_CR21","doi-asserted-by":"crossref","first-page":"4387","DOI":"10.1007\/s11042-021-11758-3","volume":"81","author":"L Yu","year":"2022","unstructured":"Yu L, Zeng Z, Wang H et al (2022) Fractional-order differentiation based sparse representation for multi-focus image fusion. Multimed Tools Appl 81(3):4387\u20134411","journal-title":"Multimed Tools Appl"},{"key":"1151_CR22","volume":"92","author":"J Tan","year":"2021","unstructured":"Tan J, Zhang T, Zhao L et al (2021) Multi-focus image fusion with geometrical sparse representation. Signal Process Image Commun 92:116130","journal-title":"Signal Process Image Commun"},{"issue":"3","key":"1151_CR23","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1504\/IJSISE.2021.117915","volume":"12","author":"S Babahenini","year":"2021","unstructured":"Babahenini S, Charif F, Cherif F et al (2021) Using saliency detection to improve multi-focus image fusion. Int J Signal Imaging Syst Eng 12(3):81\u201392","journal-title":"Int J Signal Imaging Syst Eng"},{"key":"1151_CR24","doi-asserted-by":"crossref","first-page":"733","DOI":"10.1016\/j.neucom.2015.09.092","volume":"174","author":"B Zhang","year":"2016","unstructured":"Zhang B, Lu X, Pei H et al (2016) Multi-focus image fusion algorithm based on focused region extraction. 
Neurocomputing 174:733\u2013748","journal-title":"Neurocomputing"},{"key":"1151_CR25","doi-asserted-by":"crossref","first-page":"201","DOI":"10.1016\/j.inffus.2019.02.003","volume":"51","author":"M Amin-Naji","year":"2019","unstructured":"Amin-Naji M, Aghagolzadeh A, Ezoji M (2019) Ensemble of CNN for multi-focus image fusion. Inform fusion 51:201\u2013214","journal-title":"Inform fusion"},{"issue":"33","key":"1151_CR26","doi-asserted-by":"crossref","first-page":"24303","DOI":"10.1007\/s11042-020-09154-4","volume":"79","author":"L Li","year":"2020","unstructured":"Li L, Si Y, Wang L et al (2020) A novel approach for multi-focus image fusion based on SF-PAPCNN and ISML in NSST domain. Multimed Tools Appl 79(33):24303\u201324328","journal-title":"Multimed Tools Appl"},{"key":"1151_CR27","doi-asserted-by":"crossref","first-page":"509","DOI":"10.1016\/j.neucom.2021.11.060","volume":"488","author":"W Kong","year":"2022","unstructured":"Kong W, Miao Q, Lei Y et al (2022) Guided filter random walk and improved spiking cortical model based image fusion method in NSST domain. Neurocomputing 488:509\u2013527","journal-title":"Neurocomputing"},{"key":"1151_CR28","volume":"81","author":"X Ma","year":"2021","unstructured":"Ma X, Wang Z, Hu S (2021) Multi-focus image fusion based on multi-scale sparse representation. J Vis Commun Image Represent 81:103328","journal-title":"J Vis Commun Image Represent"},{"key":"1151_CR29","doi-asserted-by":"crossref","unstructured":"Li J, Li B, Jiang Y, et al (2022) MSAt-GAN: a generative adversarial network based on multi-scale and deep attention mechanism for infrared and visible light image fusion. 
Complex Intell Syst 8:4753\u20134781","DOI":"10.1007\/s40747-022-00722-9"},{"key":"1151_CR30","doi-asserted-by":"crossref","first-page":"204","DOI":"10.1016\/j.neucom.2021.10.115","volume":"470","author":"B Ma","year":"2022","unstructured":"Ma B, Yin X, Wu D et al (2022) End-to-end learning for simultaneously generating decision map and multi-focus image fusion result. Neurocomputing 470:204\u2013216","journal-title":"Neurocomputing"},{"key":"1151_CR31","doi-asserted-by":"crossref","first-page":"309","DOI":"10.1109\/TCI.2021.3063872","volume":"7","author":"J Ma","year":"2021","unstructured":"Ma J, Le Z, Tian X et al (2021) SMFuse: multi-focus image fusion via self-supervised mask-optimization. IEEE Trans Comput Imaging 7:309\u2013320","journal-title":"IEEE Trans Comput Imaging"},{"key":"1151_CR32","doi-asserted-by":"crossref","first-page":"191","DOI":"10.1016\/j.inffus.2016.12.001","volume":"36","author":"Y Liu","year":"2017","unstructured":"Liu Y, Chen X, Peng H et al (2017) Multi-focus image fusion with a deep convolutional neural network. Inform Fusion 36:191\u2013207","journal-title":"Inform Fusion"},{"issue":"11","key":"1151_CR33","doi-asserted-by":"crossref","first-page":"5793","DOI":"10.1007\/s00521-020-05358-9","volume":"33","author":"B Ma","year":"2021","unstructured":"Ma B, Zhu Y, Yin X et al (2021) Sesf-fuse: An unsupervised deep model for multi-focus image fusion. Neural Comput Appl 33(11):5793\u20135804","journal-title":"Neural Comput Appl"},{"key":"1151_CR34","doi-asserted-by":"crossref","first-page":"4816","DOI":"10.1109\/TIP.2020.2976190","volume":"29","author":"J Li","year":"2020","unstructured":"Li J, Guo X, Lu G et al (2020) DRPL: deep regression pair learning for multi-focus image fusion. 
IEEE Trans Image Process 29:4816\u20134831","journal-title":"IEEE Trans Image Process"},{"key":"1151_CR35","doi-asserted-by":"crossref","first-page":"163","DOI":"10.1109\/TIP.2020.3033158","volume":"30","author":"B Xiao","year":"2020","unstructured":"Xiao B, Xu B, Bi X et al (2020) Global-feature encoding U-Net (GEU-Net) for multi-focus image fusion. IEEE Trans Image Process 30:163\u2013175","journal-title":"IEEE Trans Image Process"},{"key":"1151_CR36","doi-asserted-by":"crossref","first-page":"125","DOI":"10.1016\/j.ins.2017.12.043","volume":"433","author":"H Tang","year":"2018","unstructured":"Tang H, Xiao B, Li W et al (2018) Pixel convolutional neural network for multi-focus image fusion. Inf Sci 433:125\u2013141","journal-title":"Inf Sci"},{"key":"1151_CR37","doi-asserted-by":"crossref","unstructured":"Zhang H, Xu H, Xiao Y, et al (2020) Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity. In: Proceedings of the AAAI Conference on artificial intelligence (AAAI), 34(07), pp 12797\u201312804","DOI":"10.1609\/aaai.v34i07.6975"},{"key":"1151_CR38","first-page":"14264","volume":"34","author":"P Yu","year":"2021","unstructured":"Yu P, Xie S, Ma X et al (2021) Unsupervised foreground extraction via deep region competition. Adv Neural Inf Process Syst 34:14264\u201314279","journal-title":"Adv Neural Inf Process Syst"},{"key":"1151_CR39","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, et al (2014) Generative adversarial nets.  Adv Neural Inf Process Syst 27:2672\u20132680"},{"key":"1151_CR40","doi-asserted-by":"crossref","unstructured":"Mao X, Li Q, Xie H, et al (2017) Least squares generative adversarial networks. 
In: Proceedings of the IEEE International Conference on computer vision (ICCV), 2017, pp 2794\u20132802","DOI":"10.1109\/ICCV.2017.304"},{"key":"1151_CR41","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","volume":"48","author":"J Ma","year":"2019","unstructured":"Ma J, Yu W, Liang P et al (2019) FusionGAN: A generative adversarial network for infrared and visible image fusion. Inform Fusion 48:11\u201326","journal-title":"Inform Fusion"},{"key":"1151_CR42","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Van Der Maaten L, et al (2017) Densely connected convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp 4700\u20134708","DOI":"10.1109\/CVPR.2017.243"},{"key":"1151_CR43","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1016\/j.inffus.2016.09.006","volume":"35","author":"Y Zhang","year":"2017","unstructured":"Zhang Y, Bai X, Wang T (2017) Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inform fusion 35:81\u2013101","journal-title":"Inform fusion"},{"key":"1151_CR44","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1016\/j.inffus.2014.05.004","volume":"23","author":"Y Liu","year":"2015","unstructured":"Liu Y, Liu S, Wang Z (2015) Multi-focus image fusion with dense SIFT. Inform Fusion 23:139\u2013155","journal-title":"Inform Fusion"},{"key":"1151_CR45","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1016\/j.inffus.2013.11.005","volume":"20","author":"Z Zhou","year":"2014","unstructured":"Zhou Z, Li S, Wang B (2014) Multi-scale weighted gradient-based fusion for multi-focus images. 
Inform Fusion 20:60\u201372","journal-title":"Inform Fusion"},{"key":"1151_CR46","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1016\/j.inffus.2014.10.004","volume":"25","author":"M Nejati","year":"2015","unstructured":"Nejati M, Samavi S, Shirani S (2015) Multi-focus image fusion using dictionary-based sparse representation. Inform Fusion 25:72\u201384","journal-title":"Inform Fusion"},{"issue":"1","key":"1151_CR47","volume":"2","author":"JW Roberts","year":"2008","unstructured":"Roberts JW, Van Aardt JA, Ahmed FB (2008) Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J Appl Remote Sens 2(1):023522","journal-title":"J Appl Remote Sens"},{"issue":"12","key":"1151_CR48","doi-asserted-by":"crossref","first-page":"2959","DOI":"10.1109\/26.477498","volume":"43","author":"AM Eskicioglu","year":"1995","unstructured":"Eskicioglu AM, Fisher PS (1995) Image quality measures and their performance. IEEE Trans Commun 43(12):2959\u20132965","journal-title":"IEEE Trans Commun"},{"issue":"4","key":"1151_CR49","doi-asserted-by":"crossref","first-page":"355","DOI":"10.1088\/0957-0233\/8\/4\/002","volume":"8","author":"YJ Rao","year":"1997","unstructured":"Rao YJ (1997) In-fibre Bragg grating sensors. Meas Sci Technol 8(4):355","journal-title":"Meas Sci Technol"},{"issue":"5","key":"1151_CR50","first-page":"484","volume":"4","author":"M Deshmukh","year":"2010","unstructured":"Deshmukh M, Bhosale U (2010) Image fusion and image quality assessment of fused images. 
Int J Image Process (IJIP) 4(5):484","journal-title":"Int J Image Process (IJIP)"},{"key":"1151_CR51","unstructured":"Wang Z, Simoncelli EP, Bovik AC (2003) Multiscale structural similarity for image quality assessment In: The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2 (2003), pp 1398\u20131402"},{"key":"1151_CR52","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1016\/j.inffus.2019.07.011","volume":"54","author":"Y Zhang","year":"2020","unstructured":"Zhang Y, Liu Y, Sun P et al (2020) IFCNN: A general image fusion framework based on convolutional neural network. Inform Fusion 54:99\u2013118","journal-title":"Inform Fusion"},{"issue":"10","key":"1151_CR53","doi-asserted-by":"crossref","first-page":"2761","DOI":"10.1007\/s11263-021-01501-8","volume":"129","author":"H Zhang","year":"2021","unstructured":"Zhang H, Ma J (2021) SDNet: A versatile squeeze-and-decomposition network for real-time image fusion. Int J Comput Vision 129(10):2761\u20132785","journal-title":"Int J Comput Vision"},{"issue":"18","key":"1151_CR54","doi-asserted-by":"crossref","first-page":"15119","DOI":"10.1007\/s00521-020-04863-1","volume":"32","author":"J Huang","year":"2020","unstructured":"Huang J, Le Z, Ma Y et al (2020) A generative adversarial network with adaptive constraints for multi-focus image fusion. Neural Comput Appl 32(18):15119\u201315129","journal-title":"Neural Comput Appl"},{"key":"1151_CR55","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, et al (2021) Swin transformer: Hierarchical vision transformer using shifted windows. 
In: Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), 2021, pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01151-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01151-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01151-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,10,27]],"date-time":"2023-10-27T19:08:39Z","timestamp":1698433719000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01151-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,30]]},"references-count":55,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["1151"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01151-y","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,6,30]]},"assertion":[{"value":"21 June 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 June 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 June 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known 
competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}