{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,30]],"date-time":"2025-07-30T13:24:11Z","timestamp":1753881851950,"version":"3.41.2"},"reference-count":57,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2023,8,31]],"date-time":"2023-08-31T00:00:00Z","timestamp":1693440000000},"content-version":"vor","delay-in-days":242,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62003065"],"award-info":[{"award-number":["62003065"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100007957","name":"Chongqing Municipal Education Commission","doi-asserted-by":"publisher","award":["KJQN202200564","KJZD202200504"],"award-info":[{"award-number":["KJQN202200564","KJZD202200504"]}],"id":[{"id":"10.13039\/501100007957","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100010338","name":"Chongqing Normal University","doi-asserted-by":"publisher","award":["21XLB032"],"award-info":[{"award-number":["21XLB032"]}],"id":[{"id":"10.13039\/100010338","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["International Journal of Intelligent Systems"],"published-print":{"date-parts":[[2023,1]]},"abstract":"<jats:p>Focus measurement, one of the key tasks in multifocus image fusion (MFIF) frameworks, identifies the clearer parts of multifocus image pairs. Most of the existing methods aim to achieve disposable pixel\u2010level focus measurement. However, the lack of sufficient accuracy often gives rise to misjudgments in the results. To this end, a novel two\u2010stage focus measurement with joint boundary refinement network is proposed for MFIF. 
In this work, we adopt a coarse\u2010to\u2010fine strategy to gradually achieve block\u2010level and pixel\u2010level focus measurement for producing more fine\u2010grained focus probability maps, instead of directly predicting at the pixel level. In addition, the joint boundary refinement optimizes the performance on the focused\/defocused boundary component (FDB) during the focus measurement. To improve feature extraction capability, both CNN and transformer are employed to, respectively, encode local patterns and capture long\u2010range dependencies. Then, the features from two input branches are legitimately aggregated by modeling the spatial complementary relationship in each pair of multifocus images. Extensive experiments demonstrate that the proposed model achieves state\u2010of\u2010the\u2010art performance in both subjective perception and objective assessment.<\/jats:p>","DOI":"10.1155\/2023\/4155948","type":"journal-article","created":{"date-parts":[[2023,8,31]],"date-time":"2023-08-31T17:35:15Z","timestamp":1693503315000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Two\u2010Stage Focus Measurement Network with Joint Boundary Refinement for Multifocus Image 
Fusion"],"prefix":"10.1155","volume":"2023","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4747-7979","authenticated-orcid":false,"given":"Hao","family":"Zhai","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0003-4020-3451","authenticated-orcid":false,"given":"Xin","family":"Pan","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8786-0120","authenticated-orcid":false,"given":"You","family":"Yang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0008-5383-9615","authenticated-orcid":false,"given":"Jinyuan","family":"Jiang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0003-8548-0858","authenticated-orcid":false,"given":"Qing","family":"Li","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2023,8,31]]},"reference":[{"key":"e_1_2_9_1_2","doi-asserted-by":"publisher","DOI":"10.1016\/s1566-2535(01)00038-0"},{"key":"e_1_2_9_2_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00500-017-2694-4"},{"key":"e_1_2_9_3_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-015-2061-2"},{"key":"e_1_2_9_4_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.08.024"},{"key":"e_1_2_9_5_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.optlastec.2018.07.045"},{"key":"e_1_2_9_6_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2014.05.004"},{"key":"e_1_2_9_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/tmm.2019.2928516"},{"key":"e_1_2_9_8_2","doi-asserted-by":"publisher","DOI":"10.1006\/gmip.1995.1022"},{"key":"e_1_2_9_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/access.2019.2924033"},{"key":"e_1_2_9_10_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.04.066"},{"key":"e_1_2_9_11_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.asej.2016.06.011"},{"key":"e_1_2_9_12_2","doi-asserted-by":"publisher","DOI":"10.1007\/s11760-018-1402-x"},{"key":"e_1_2_9_13_2","doi-asserted-by":"publi
sher","DOI":"10.1109\/access.2019.2909591"},{"key":"e_1_2_9_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2019.06.002"},{"key":"e_1_2_9_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/tim.2018.2877285"},{"key":"e_1_2_9_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.sigpro.2019.107252"},{"key":"e_1_2_9_17_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2016.12.001"},{"key":"e_1_2_9_18_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco_a_01098"},{"key":"e_1_2_9_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/tim.2021.3072124"},{"key":"e_1_2_9_20_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-022-03194-z"},{"key":"e_1_2_9_21_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2020.08.022"},{"key":"e_1_2_9_22_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-022-04406-2"},{"key":"e_1_2_9_23_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2021.116554"},{"key":"e_1_2_9_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/jas.2022.105686"},{"key":"e_1_2_9_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2021.3078906"},{"key":"e_1_2_9_26_2","unstructured":"DosovitskiyA. BeyerL. andKolesnikovA. An image is worth 16x16 words: transformers for image recognition at scale 2020 https:\/\/arxiv.org\/abs\/2010.11929."},{"key":"e_1_2_9_27_2","doi-asserted-by":"crossref","unstructured":"YuanL. ChenY. WangT. andWeihaoY. Tokens-to-token vit: training vision transformers from scratch on imagenet Proceedings of the IEEE\/CVF International Conference on Computer Vision October 2021 Montreal BC Canada 558\u2013567.","DOI":"10.1109\/ICCV48922.2021.00060"},{"key":"e_1_2_9_28_2","doi-asserted-by":"crossref","unstructured":"WangW. XieE. LiX. andFanD. P. 
Pyramid vision transformer: a versatile backbone for dense prediction without convolutions Proceedings of the IEEE\/CVF International Conference on Computer Vision October 2021 Montreal BC Canada 568\u2013578.","DOI":"10.1109\/ICCV48922.2021.00061"},{"key":"e_1_2_9_29_2","unstructured":"TouvronH. CordM. DouzeM. MassaF. SablayrollesA. andJ\u00e9gouH. Training data-efficient image transformers & distillation through attention Proceedings of the International Conference on Machine Learning July 2021 10347\u201310357."},{"key":"e_1_2_9_30_2","doi-asserted-by":"crossref","unstructured":"LiuZ. LinY. andCaoY. Swin transformer: hierarchical vision transformer using shifted windows Proceedings of the IEEE\/CVF International Conference on Computer Vision October 2021 Montreal BC Canada 10012\u201310022.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"e_1_2_9_31_2","article-title":"Fast vision transformers with hilo attention","volume":"35","author":"Pan Z.","year":"2022","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_9_32_2","unstructured":"CordonnierJ. B. LoukasA. andJaggiM. On the relationship between self-attention and convolutional layers 2019 https:\/\/arxiv.org\/abs\/1911.03584."},{"key":"e_1_2_9_33_2","doi-asserted-by":"crossref","unstructured":"HeK. ZhangX. RenS. andSunJ. Deep residual learning for image recognition Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition July 2016 Las Vegas NV USA 770\u2013778.","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_9_34_2","doi-asserted-by":"crossref","unstructured":"LiuZ. MaoH. WuC. Y. FeichtenhoferC. DarrellT. andXieS. A ConvNet for the 2020s Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition June 2022 New Orleans LA USA.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"e_1_2_9_35_2","doi-asserted-by":"crossref","unstructured":"WangW. XieE. LiX. FanD. P. andShaoL. 
Pyramid vision transformer: a versatile backbone for dense prediction without convolutions Proceedings of the IEEE International Conference on Computer Vision October 2021 Montreal BC Canada.","DOI":"10.1109\/ICCV48922.2021.00061"},{"key":"e_1_2_9_36_2","doi-asserted-by":"crossref","unstructured":"WuH. XiaoB. andCodellaN. CvT: introducing convolutions to vision transformers Proceedings of the IEEE International Conference on Computer Vision October 2021 Montreal BC Canada.","DOI":"10.1109\/ICCV48922.2021.00009"},{"key":"e_1_2_9_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/tci.2020.3039564"},{"key":"e_1_2_9_38_2","doi-asserted-by":"publisher","DOI":"10.1002\/int.22804"},{"key":"e_1_2_9_39_2","doi-asserted-by":"publisher","DOI":"10.1002\/int.22687"},{"key":"e_1_2_9_40_2","unstructured":"IslamM. A. JiaS. andBruceN. D. How much position information do convolutional neural networks encode? 2020 https:\/\/arxiv.org\/abs\/2001.08248."},{"key":"e_1_2_9_41_2","doi-asserted-by":"crossref","unstructured":"MilletariF. NavabN. andAhmadiS. A. V. N. Fully convolutional neural networks for volumetric medical image segmentation Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV) October 2016 Stanford CL USA 565\u2013571.","DOI":"10.1109\/3DV.2016.79"},{"key":"e_1_2_9_42_2","doi-asserted-by":"crossref","unstructured":"PiaoY. JiW. LiJ. ZhangM. andLuH. Depth-induced multi-scale recurrent attention network for saliency detection Proceedings of the IEEE\/CVF International Conference on Computer Vision October 2019 Seoul Korea (South) 7254\u20137263.","DOI":"10.1109\/ICCV.2019.00735"},{"key":"e_1_2_9_43_2","doi-asserted-by":"crossref","unstructured":"LiG.andYuY. 
Visual saliency based on multiscale deep features Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition June 2015 Boston MA USA 5455\u20135463.","DOI":"10.1109\/CVPR.2015.7299184"},{"key":"e_1_2_9_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/tip.2020.3018261"},{"key":"e_1_2_9_45_2","doi-asserted-by":"crossref","unstructured":"HaghighatM.andRazianM. A. Fast-FMI: non-reference image fusion metric Proceedings of the 2014 IEEE 8th International Conference on Application of Information and Communication Technologies (AICT) October 2014 Astana Kazakhstan 1\u20133.","DOI":"10.1109\/ICAICT.2014.7036000"},{"key":"e_1_2_9_46_2","first-page":"1433","article-title":"Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement","volume":"3","author":"Zhao J.","year":"2007","journal-title":"Int. J. Innov. Comput. Inf. Control."},{"key":"e_1_2_9_47_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.optcom.2014.12.032"},{"key":"e_1_2_9_48_2","first-page":"89","article-title":"Objective pixel-level image fusion performance measure","volume":"4051","author":"Xydeas C. 
S.","year":"2000","journal-title":"Sensor Fusion: Architectures, Algorithms, and Applications IV"},{"key":"e_1_2_9_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/tip.2003.819861"},{"key":"e_1_2_9_50_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2011.08.002"},{"key":"e_1_2_9_51_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00034-019-01131-z"},{"key":"e_1_2_9_52_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.image.2018.12.004"},{"key":"e_1_2_9_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/access.2020.2971137"},{"key":"e_1_2_9_54_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.07.011"},{"key":"e_1_2_9_55_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i07.6975"},{"key":"e_1_2_9_56_2","doi-asserted-by":"publisher","DOI":"10.1109\/tim.2021.3124058"},{"key":"e_1_2_9_57_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2021.10.115"}],"container-title":["International Journal of Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/ijis\/2023\/4155948.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/ijis\/2023\/4155948.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2023\/4155948","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,31]],"date-time":"2024-12-31T05:12:13Z","timestamp":1735621933000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/2023\/4155948"}},"subtitle":[],"editor":[{"given":"Vittorio","family":"Memmolo","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2023,1]]},"references-count":57,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,1]]}},"alte
rnative-id":["10.1155\/2023\/4155948"],"URL":"https:\/\/doi.org\/10.1155\/2023\/4155948","archive":["Portico"],"relation":{},"ISSN":["0884-8173","1098-111X"],"issn-type":[{"type":"print","value":"0884-8173"},{"type":"electronic","value":"1098-111X"}],"subject":[],"published":{"date-parts":[[2023,1]]},"assertion":[{"value":"2023-04-25","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-08-17","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-08-31","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"4155948"}}