{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T06:06:28Z","timestamp":1771049188687,"version":"3.50.1"},"reference-count":50,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2022,4,22]],"date-time":"2022-04-22T00:00:00Z","timestamp":1650585600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,4,22]],"date-time":"2022-04-22T00:00:00Z","timestamp":1650585600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["11673009"],"award-info":[{"award-number":["11673009"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>For the past few years, image fusion technology has made great progress, especially in infrared and visible light image infusion. However, the fusion methods, based on traditional or deep learning technology, have some disadvantages such as unobvious structure or texture detail loss. In this regard, a novel generative adversarial network named MSAt-GAN is proposed in this paper. It is based on multi-scale feature transfer and deep attention mechanism feature fusion, and used for infrared and visible image fusion. First, this paper employs three different receptive fields to extract the multi-scale and multi-level deep features of multi-modality images in three channels rather than artificially setting a single receptive field. In this way, the important features of the source image can be better obtained from different receptive fields and angles, and the extracted feature representation is also more flexible and diverse. Second, a multi-scale deep attention fusion mechanism is designed in this essay. It describes the important representation of multi-level receptive field extraction features through both spatial and channel attention and merges them according to the level of attention. Doing so can lay more emphasis on the attention feature map and extract significant features of multi-modality images, which eliminates noise to some extent. Third, the concatenate operation of the multi-level deep features in the encoder and the deep features in the decoder are cascaded to enhance the feature transmission while making better use of the previous features. Finally, this paper adopts a dual-discriminator generative adversarial network on the network structure, which can force the generated image to retain the intensity of the infrared image and the texture detail information of the visible image at the same time. Substantial qualitative and quantitative experimental analysis of infrared and visible image pairs on three public datasets show that compared with state-of-the-art fusion methods, the proposed MSAt-GAN network has comparable outstanding fusion performance in subjective perception and objective quantitative measurement.<\/jats:p>","DOI":"10.1007\/s40747-022-00722-9","type":"journal-article","created":{"date-parts":[[2022,4,22]],"date-time":"2022-04-22T08:03:01Z","timestamp":1650614581000},"page":"4753-4781","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":25,"title":["MSAt-GAN: a generative adversarial network based on multi-scale and deep attention mechanism for infrared and visible light image fusion"],"prefix":"10.1007","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1996-1840","authenticated-orcid":false,"given":"Junwu","family":"Li","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4179-6979","authenticated-orcid":false,"given":"Binhua","family":"Li","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4173-8311","authenticated-orcid":false,"given":"Yaoxi","family":"Jiang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6795-6152","authenticated-orcid":false,"given":"Weiwei","family":"Cai","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,4,22]]},"reference":[{"key":"722_CR1","doi-asserted-by":"publisher","first-page":"100","DOI":"10.1016\/j.inffus.2016.05.004","volume":"33","author":"S Li","year":"2017","unstructured":"Li S, Kang X, Fang L, Hu J, Yin H (2017) Pixel-level image fusion: a survey of the state of the art. Inf Fusion 33:100\u2013112","journal-title":"Inf Fusion"},{"key":"722_CR2","doi-asserted-by":"publisher","first-page":"106977","DOI":"10.1016\/j.patcog.2019.106977","volume":"96","author":"C Li","year":"2019","unstructured":"Li C, Liang X, Lu Y, Zhao N, Tang J (2019) RGB-T object tracking: Benchmark and baseline. Pattern Recognit 96:106977","journal-title":"Pattern Recognit"},{"key":"722_CR3","doi-asserted-by":"crossref","unstructured":"Kristan M et al (2019) The seventh visual object tracking vot2019 challenge results. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), IEEE, pp 1\u201336","DOI":"10.1109\/ICCVW.2019.00276"},{"key":"722_CR4","doi-asserted-by":"publisher","first-page":"166","DOI":"10.1016\/j.inffus.2020.05.002","volume":"63","author":"X Zhang","year":"2020","unstructured":"Zhang X, Ye P, Leung H, Gong K, Xiao G (2020) Object fusion tracking based on visible and infrared images: a comprehensive review. Inf Fusion 63:166\u2013187","journal-title":"Inf Fusion"},{"key":"722_CR5","doi-asserted-by":"crossref","unstructured":"Duan Z, Lan J, Xu Y, Ni B, Zhuang L, Yang X (2017) Pedestrian detection via bi-directional multi-scale analysis. In: Proceedings of the 25th ACM international conference on Multimedia. ACM, pp 1023\u20131031","DOI":"10.1145\/3123266.3123356"},{"issue":"1","key":"722_CR6","doi-asserted-by":"publisher","first-page":"198","DOI":"10.1080\/10584587.2021.1911313","volume":"217","author":"K Sun","year":"2021","unstructured":"Sun K, Zhang B, Chen Y et al (2021) The facial expression recognition method based on image fusion and CNN. Integr Ferroelectr 217(1):198\u2013213","journal-title":"Integr Ferroelectr"},{"key":"722_CR7","doi-asserted-by":"publisher","first-page":"179","DOI":"10.1016\/j.patrec.2021.03.015","volume":"146","author":"J Xu","year":"2021","unstructured":"Xu J, Lu K, Wang H (2021) Attention fusion network for multi-spectral semantic segmentation. Pattern Recognit Lett 146:179\u2013184","journal-title":"Pattern Recognit Lett"},{"key":"722_CR8","doi-asserted-by":"crossref","unstructured":"Kuanar S, Athitsos V, Mahapatra D, Rao K, Akhtar Z, Dasgupta D (2019) Low dose abdominal CT image reconstruction: an unsupervised learning based approach. In: 2019 IEEE International Conference on Image Processing (ICIP). IEEE, pp 1351\u20131355","DOI":"10.1109\/ICIP.2019.8803037"},{"key":"722_CR9","doi-asserted-by":"publisher","first-page":"7629","DOI":"10.1109\/TIP.2020.3004733","volume":"29","author":"R Liu","year":"2020","unstructured":"Liu R, Mu P, Chen J, Fan X, Luo Z (2020) Investigating task-driven latent feasibility for nonconvex image modeling. IEEE Trans Image Process 29:7629\u20137640","journal-title":"IEEE Trans Image Process"},{"key":"722_CR10","first-page":"1","volume":"70","author":"J Li","year":"2020","unstructured":"Li J, Huo H, Li C et al (2020) Multi-grained attention network for infrared and visible image fusion. IEEE Trans Instrum Meas 70:1\u201312","journal-title":"IEEE Trans Instrum Meas"},{"key":"722_CR11","doi-asserted-by":"publisher","first-page":"153","DOI":"10.1016\/j.inffus.2018.02.004","volume":"45","author":"J Ma","year":"2019","unstructured":"Ma J, Ma Y, Li C (2019) Infrared and visible image fusion methods and applications: a survey. Inf Fusion 45:153\u2013178","journal-title":"Inf Fusion"},{"key":"722_CR12","doi-asserted-by":"publisher","first-page":"158","DOI":"10.1016\/j.inffus.2017.10.007","volume":"42","author":"Y Liu","year":"2018","unstructured":"Liu Y, Chen X, Wang Z, Wang ZJ, Ward RK, Wang X (2018) Deep learning for pixel-level image fusion: recent advances and future prospects. Inf Fusion 42:158\u2013173","journal-title":"Inf Fusion"},{"key":"722_CR13","doi-asserted-by":"publisher","first-page":"179857","DOI":"10.1109\/ACCESS.2020.3028088","volume":"8","author":"L Junwu","year":"2020","unstructured":"Junwu L, Li B, Jiang Y (2020) An infrared and visible image fusion algorithm based on LSWT-NSST. IEEE Access 8:179857\u2013179880","journal-title":"IEEE Access"},{"issue":"9","key":"722_CR14","doi-asserted-by":"publisher","first-page":"3367","DOI":"10.1109\/TIM.2018.2877285","volume":"68","author":"A Vishwakarma","year":"2018","unstructured":"Vishwakarma A, Bhuyan MK (2018) Image fusion using adjustable non-subsampled shearlet transform. IEEE Trans Instrum Meas 68(9):3367\u20133378","journal-title":"IEEE Trans Instrum Meas"},{"key":"722_CR15","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1016\/j.infrared.2017.10.004","volume":"88","author":"X Jin","year":"2018","unstructured":"Jin X, Jiang Q, Yao S et al (2018) Infrared and visual image fusion method based on discrete cosine transform and local spatial frequency in discrete stationary wavelet transform domain. Infrared Phys Technol 88:1\u201312","journal-title":"Infrared Phys Technol"},{"key":"722_CR16","doi-asserted-by":"publisher","first-page":"64","DOI":"10.1016\/j.ins.2019.08.066","volume":"508","author":"J Chen","year":"2020","unstructured":"Chen J, Li X, Luo L et al (2020) Infrared and visible image fusion based on target-enhanced multiscale transform decomposition. Inf Sci 508:64\u201378","journal-title":"Inf Sci"},{"key":"722_CR17","doi-asserted-by":"publisher","first-page":"94","DOI":"10.1016\/j.infrared.2017.04.018","volume":"83","author":"CH Liu","year":"2017","unstructured":"Liu CH, Qi Y, Ding WR (2017) Infrared and visible image fusion method based on saliency detection in sparse domain. Infrared Phys Technol 83:94\u2013102","journal-title":"Infrared Phys Technol"},{"key":"722_CR18","doi-asserted-by":"publisher","first-page":"57","DOI":"10.1016\/j.inffus.2017.05.006","volume":"40","author":"Q Zhang","year":"2018","unstructured":"Zhang Q, Liu Y, Blum RS, Han J, Tao D (2018) Sparse representation based multi-sensor image fusion for multi-focus and multi-modality images: a review. Inf Fusion 40:57\u201375","journal-title":"Inf Fusion"},{"key":"722_CR19","doi-asserted-by":"publisher","first-page":"8","DOI":"10.1016\/j.infrared.2017.02.005","volume":"82","author":"J Ma","year":"2017","unstructured":"Ma J, Zhou Z, Wang B, Zong H (2017) Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Phys Technol 82:8\u201317","journal-title":"Infrared Phys Technol"},{"key":"722_CR20","doi-asserted-by":"publisher","first-page":"20676","DOI":"10.1109\/ACCESS.2019.2897320","volume":"7","author":"J Li","year":"2019","unstructured":"Li J, Huo H, Sui C, Jiang C, Li C (2019) Poisson reconstruction-based fusion of infrared and visible images via saliency detection. IEEE Access 7:20676\u201320688","journal-title":"IEEE Access"},{"key":"722_CR21","doi-asserted-by":"publisher","first-page":"121","DOI":"10.1016\/j.inffus.2020.07.002","volume":"64","author":"B Rasti","year":"2020","unstructured":"Rasti B, Ghamisi P (2020) Remote sensing image classification using subspace sensor fusion. Inf Fusion 64:121\u2013130","journal-title":"Inf Fusion"},{"issue":"2","key":"722_CR22","doi-asserted-by":"publisher","first-page":"593","DOI":"10.1109\/TIM.2019.2902808","volume":"69","author":"S Singh","year":"2019","unstructured":"Singh S, Anand RS (2019) Multimodal medical image sensor fusion model using sparse K-SVD dictionary learning in nonsubsampled shearlet domain. IEEE Trans Instrum Meas 69(2):593\u2013607","journal-title":"IEEE Trans Instrum Meas"},{"issue":"07","key":"722_CR23","first-page":"12797","volume":"34","author":"H Zhang","year":"2020","unstructured":"Zhang H, Xu H, Xiao Y et al (2020) Rethinking the image fusion: A fast unified image fusion network based on proportional maintenance of gradient and intensity. Proc AAAI Conf Artif Intell (AAAI) 34(07):12797\u201312804","journal-title":"Proc AAAI Conf Artif Intell (AAAI)"},{"issue":"03","key":"722_CR24","doi-asserted-by":"publisher","first-page":"1850018","DOI":"10.1142\/S0219691318500182","volume":"16","author":"Y Liu","year":"2018","unstructured":"Liu Y, Chen X, Cheng J et al (2018) Infrared and visible image fusion with convolutional neural networks. Int J Wavelets Multiresolut Inf Process 16(03):1850018","journal-title":"Int J Wavelets Multiresolut Inf Process"},{"key":"722_CR25","doi-asserted-by":"crossref","unstructured":"Li H, Wu X-J, Kittler J (2018) Infrared and visible image fusion using a deep learning framework. In: 2018 24th International Conference on Pattern Recognition (ICPR), IEEE, pp 2705\u20132710","DOI":"10.1109\/ICPR.2018.8546006"},{"issue":"5","key":"722_CR26","doi-asserted-by":"publisher","first-page":"2614","DOI":"10.1109\/TIP.2018.2887342","volume":"28","author":"H Li","year":"2018","unstructured":"Li H, Wu X-J (2018) DenseFuse: a fusion approach to infrared and visible images. IEEE Trans Image Process 28(5):2614\u20132623","journal-title":"IEEE Trans Image Process"},{"key":"722_CR27","doi-asserted-by":"publisher","first-page":"11","DOI":"10.1016\/j.inffus.2018.09.004","volume":"48","author":"J Ma","year":"2019","unstructured":"Ma J, Yu W, Liang P et al (2019) FusionGAN: a generative adversarial network for infrared and visible image fusion. Inf Fusion 48:11\u201326","journal-title":"Inf Fusion"},{"key":"722_CR28","doi-asserted-by":"publisher","first-page":"4980","DOI":"10.1109\/TIP.2020.2977573","volume":"29","author":"J Ma","year":"2020","unstructured":"Ma J, Xu H, Jiang J et al (2020) DDcGAN: a dual-discriminator conditional generative adversarial network for multi-resolution image fusion. IEEE Trans Image Process 29:4980\u20134995","journal-title":"IEEE Trans Image Process"},{"key":"722_CR29","first-page":"1","volume":"70","author":"J Ma","year":"2020","unstructured":"Ma J, Zhang H, Shao Z et al (2020) GANMcC: a generative adversarial network with multiclassification constraints for infrared and visible image fusion. IEEE Trans Instrum Meas 70:1\u201314","journal-title":"IEEE Trans Instrum Meas"},{"key":"722_CR30","doi-asserted-by":"publisher","first-page":"1383","DOI":"10.1109\/TMM.2020.2997127","volume":"23","author":"J Li","year":"2020","unstructured":"Li J, Huo H, Li C et al (2020) AttentionFGAN: infrared and visible image fusion using attention-based generative adversarial networks. IEEE Trans Multimed 23:1383\u20131396","journal-title":"IEEE Trans Multimed"},{"key":"722_CR31","doi-asserted-by":"crossref","unstructured":"Huang G, Liu Z, Van Der Maaten L, Weinberger KQ (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 4700\u20134708","DOI":"10.1109\/CVPR.2017.243"},{"issue":"12","key":"722_CR32","doi-asserted-by":"publisher","first-page":"9645","DOI":"10.1109\/TIM.2020.3005230","volume":"69","author":"H Li","year":"2020","unstructured":"Li H, Wu XJ, Durrani T (2020) NestFuse: an infrared and visible image fusion architecture based on nest connection and spatial\/channel attention models. IEEE Trans Instrum Meas 69(12):9645\u20139656","journal-title":"IEEE Trans Instrum Meas"},{"key":"722_CR33","doi-asserted-by":"crossref","unstructured":"Liu J, Fan X, Jiang J et al (2021) Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion. In: IEEE Transactions on Circuits and Systems for Video Technology","DOI":"10.1109\/TCSVT.2021.3056725"},{"key":"722_CR34","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S et al (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 2818\u20132826.","DOI":"10.1109\/CVPR.2016.308"},{"key":"722_CR35","unstructured":"Petzka H, Fischer A, Lukovnicov D (2017) On the Regularization of Wasserstein Gans, arXiv preprint 1709 (08894)"},{"key":"722_CR36","doi-asserted-by":"publisher","first-page":"28","DOI":"10.1016\/j.ins.2020.04.035","volume":"529","author":"J Li","year":"2020","unstructured":"Li J, Huo H, Liu K et al (2020) Infrared and visible image fusion using dual discriminators generative adversarial networks with Wasserstein distance. Inf Sci 529:28\u201341","journal-title":"Inf Sci"},{"key":"722_CR37","unstructured":"Vaswani A, Shazeer N, Parmar N et al (2017) Attention is all you need. In: Advances in neural information processing systems, pp 5998\u20136008"},{"key":"722_CR38","doi-asserted-by":"crossref","unstructured":"Zhong Z et al (2020) Squeeze-and-Attention Networks for Semantic Segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 13062\u201313071","DOI":"10.1109\/CVPR42600.2020.01308"},{"key":"722_CR39","doi-asserted-by":"crossref","unstructured":"Ulutan O, Iftekhar ASM, Manjunath BS (2020) VSGNet: Spatial Attention Network for Detecting Human Object Interactions Using Graph Convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp 13614\u201313623","DOI":"10.1109\/CVPR42600.2020.01363"},{"key":"722_CR40","doi-asserted-by":"crossref","unstructured":"Yang Q, Xu Y, Wu Z et al (2019) Hyperspectral and multispectral image fusion based on deep attention network. In: 2019 10th Workshop on Hyperspectral Imaging and Signal Processing: Evolution in Remote Sensing (WHISPERS). IEEE, pp 1\u20135","DOI":"10.1109\/WHISPERS.2019.8920825"},{"key":"722_CR41","doi-asserted-by":"publisher","first-page":"116","DOI":"10.1016\/j.inffus.2019.12.013","volume":"58","author":"H Zhu","year":"2020","unstructured":"Zhu H, Ma W, Li L et al (2020) A Dual-Branch Attention fusion deep network for multiresolution remote-Sensing image classification. Inform Fusion 58:116\u2013131","journal-title":"Inform Fusion"},{"key":"722_CR42","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S et al (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR). IEEE, pp 770\u2013778.","DOI":"10.1109\/CVPR.2016.90"},{"key":"722_CR43","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2015) Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE international conference on computer vision (ICCV). IEEE, pp 1026\u20131034","DOI":"10.1109\/ICCV.2015.123"},{"issue":"5","key":"722_CR44","doi-asserted-by":"publisher","first-page":"1185","DOI":"10.1109\/TIP.2010.2092435","volume":"20","author":"Z Wang","year":"2010","unstructured":"Wang Z, Li Q (2010) Information content weighting for perceptual image quality assessment. IEEE Trans Image Process 20(5):1185\u20131198","journal-title":"IEEE Trans Image Process"},{"issue":"2","key":"722_CR45","doi-asserted-by":"publisher","first-page":"127","DOI":"10.1016\/j.inffus.2011.08.002","volume":"14","author":"Y Han","year":"2013","unstructured":"Han Y, Cai Y, Cao Y, Xu X (2013) A new image fusion performance metric based on visual information fidelity. Inf Fusion 14(2):127\u2013135","journal-title":"Inf Fusion"},{"issue":"12","key":"722_CR46","doi-asserted-by":"publisher","first-page":"1890","DOI":"10.1016\/j.aeue.2015.09.004","volume":"69","author":"V Aslantas","year":"2015","unstructured":"Aslantas V, Bendes E (2015) A new image quality metric for image fusion: the sum of the correlations of differences. Aeu-Int J Electron Commun 69(12):1890\u20131896","journal-title":"Aeu-Int J Electron Commun"},{"key":"722_CR47","doi-asserted-by":"publisher","first-page":"100","DOI":"10.1016\/j.inffus.2016.02.001","volume":"31","author":"J Ma","year":"2016","unstructured":"Ma J, Chen C, Li C, Huang J (2016) Infrared and visible image fusion via gradient transfer and total variation minimization. Inf Fusion 31:100\u2013109","journal-title":"Inf Fusion"},{"issue":"5","key":"722_CR48","doi-asserted-by":"publisher","first-page":"1193","DOI":"10.1007\/s11760-013-0556-9","volume":"9","author":"BK Shreyamsha","year":"2015","unstructured":"Shreyamsha BK (2015) Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process 9(5):1193\u20131204","journal-title":"Signal Image Video Process"},{"issue":"7","key":"722_CR49","doi-asserted-by":"publisher","first-page":"2864","DOI":"10.1109\/TIP.2013.2244222","volume":"22","author":"S Li","year":"2013","unstructured":"Li S, Kang X, Hu J (2013) Image fusion with guided filtering. IEEE Trans Image Process 22(7):2864\u20132875","journal-title":"IEEE Trans Image Process"},{"issue":"5","key":"722_CR50","doi-asserted-by":"publisher","first-page":"479","DOI":"10.14429\/dsj.61.705","volume":"61","author":"VPS Naidu","year":"2011","unstructured":"Naidu VPS (2011) Image fusion technique using multi-resolution singular value decomposition. Def Sci J 61(5):479\u2013484","journal-title":"Def Sci J"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00722-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00722-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00722-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,10,27]],"date-time":"2022-10-27T12:04:21Z","timestamp":1666872261000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00722-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,22]]},"references-count":50,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["722"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00722-9","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,4,22]]},"assertion":[{"value":"27 October 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 March 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 April 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}