{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T22:09:13Z","timestamp":1740175753625,"version":"3.37.3"},"reference-count":45,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,5,23]],"date-time":"2023-05-23T00:00:00Z","timestamp":1684800000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,5,23]],"date-time":"2023-05-23T00:00:00Z","timestamp":1684800000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["52102467","62003227"],"award-info":[{"award-number":["52102467","62003227"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003787","name":"Natural Science Foundation of Hebei Province","doi-asserted-by":"publisher","award":["F2021210016","F2022210024"],"award-info":[{"award-number":["F2021210016","F2022210024"]}],"id":[{"id":"10.13039\/501100003787","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Science Research Project of the Education Department of Hebei Province","award":["QN2021135"],"award-info":[{"award-number":["QN2021135"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Blur detection aims to recognize the blurry pixels in a given image and is increasingly valued in vision-centered applications. Despite the great improvements achieved by recent deep learning-based methods, overweight models and rough boundaries still pose challenges to blur detection. 
In this paper, we propose a Hierarchical Edge-guided Region-complemented Network (HER-Net) to tackle these issues in pursuit of a favorable accuracy\u2013complexity trade-off. First, we propose novel olive-shaped and pear-shaped inverted bottleneck structures based on large-kernel depth-wise convolutions to build a very compact architecture. Second, we introduce and exploit region-concerned and edge-concerned morphological priors to refine the boundary. To this end, we propose a reverse-region spatial attention that mines the complementary affinities between blurry and sharp regions so as to enrich the residual details around the boundary. In addition, we propose an edge spatial attention that guides the edge-concerned cues to emphasize features related to the boundary. Both attentions are embedded into the model in a hierarchical manner. Extensive experiments on three benchmark datasets demonstrate that the proposed method achieves better detection performance with fewer parameters and fewer floating-point operations than competitive methods. 
These results demonstrate the efficiency and effectiveness of our method for the blur detection task.<\/jats:p>","DOI":"10.1007\/s40747-023-01093-5","type":"journal-article","created":{"date-parts":[[2023,5,23]],"date-time":"2023-05-23T06:02:06Z","timestamp":1684821726000},"page":"6523-6540","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Efficient image blur detection via hierarchical edge guidance and region complementation"],"prefix":"10.1007","volume":"9","author":[{"given":"Xuewei","family":"Wang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3326-3723","authenticated-orcid":false,"given":"Xiao","family":"Liang","sequence":"additional","affiliation":[]},{"given":"Shaohua","family":"Li","sequence":"additional","affiliation":[]},{"given":"Jinjin","family":"Zheng","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,5,23]]},"reference":[{"key":"1093_CR1","doi-asserted-by":"publisher","first-page":"257","DOI":"10.1016\/j.jvcir.2016.01.002","volume":"35","author":"X Zhang","year":"2016","unstructured":"Zhang X, Wang R, Jiang X et al (2016) Spatially variant defocus blur map estimation and deblurring from a single image. J Vis Commun Image Represent 35:257\u2013264","journal-title":"J Vis Commun Image Represent"},{"key":"1093_CR2","doi-asserted-by":"crossref","unstructured":"Abuolaim A, Brown MS (2020) Defocus deblurring using dual-pixel data. In: European conference on computer vision. Springer, Cham, pp 111\u2013126","DOI":"10.1007\/978-3-030-58607-2_7"},{"issue":"9","key":"1093_CR3","doi-asserted-by":"publisher","first-page":"4510","DOI":"10.1109\/TIP.2019.2906582","volume":"28","author":"MS Hosseini","year":"2019","unstructured":"Hosseini MS, Zhang Y, Plataniotis KN (2019) Encoding visual sensitivity by maxpol convolution filters for image sharpness assessment. 
IEEE Trans Image Process 28(9):4510\u20134525","journal-title":"IEEE Trans Image Process"},{"key":"1093_CR4","doi-asserted-by":"crossref","unstructured":"Li D, Jiang T, Jiang M (2017) Exploiting high-level semantics for no-reference image quality assessment of realistic blur images. In: Proceedings of the 25th ACM international conference on multimedia, pp 378\u2013386","DOI":"10.1145\/3123266.3123322"},{"issue":"10","key":"1093_CR5","doi-asserted-by":"publisher","first-page":"2941","DOI":"10.1109\/TCSVT.2018.2870832","volume":"29","author":"R Cong","year":"2018","unstructured":"Cong R, Lei J, Fu H et al (2018) Review of visual saliency detection with comprehensive information. IEEE Trans Circuits Syst Video Technol 29(10):2941\u20132959","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"issue":"11","key":"1093_CR6","doi-asserted-by":"publisher","first-page":"4545","DOI":"10.1109\/TIP.2013.2274389","volume":"22","author":"J Lin","year":"2013","unstructured":"Lin J, Ji X, Xu W et al (2013) Absolute depth estimation from a single defocused image. IEEE Trans Image Process 22(11):4545\u20134550","journal-title":"IEEE Trans Image Process"},{"key":"1093_CR7","doi-asserted-by":"crossref","unstructured":"Gur S, Wolf L (2019) Single image depth estimation trained via depth from defocus cues. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 7683\u20137692","DOI":"10.1109\/CVPR.2019.00787"},{"key":"1093_CR8","doi-asserted-by":"crossref","unstructured":"Lee H, Kim C (2014) Blurred image region detection and segmentation. In: 2014 IEEE international conference on image processing (ICIP). IEEE, pp 4427\u20134431","DOI":"10.1109\/ICIP.2014.7025898"},{"key":"1093_CR9","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2020.107485","volume":"107","author":"S Liu","year":"2020","unstructured":"Liu S, Liao Q, Xue JH et al (2020) Defocus map estimation from a single image using improved likelihood feature and edge-based basis. 
Pattern Recogn 107:107485","journal-title":"Pattern Recogn"},{"issue":"4","key":"1093_CR10","doi-asserted-by":"publisher","first-page":"1626","DOI":"10.1109\/TIP.2016.2528042","volume":"25","author":"X Yi","year":"2016","unstructured":"Yi X, Eramian M (2016) LBP-based segmentation of defocus blur. IEEE Trans Image Process 25(4):1626\u20131638","journal-title":"IEEE Trans Image Process"},{"issue":"2","key":"1093_CR11","doi-asserted-by":"publisher","first-page":"1323","DOI":"10.1007\/s40747-021-00592-7","volume":"8","author":"X Liang","year":"2022","unstructured":"Liang X, Wang X, Lyu L et al (2022) Noise-immune image blur detection via sequency spectrum truncation. Complex Intell Syst 8(2):1323\u20131337","journal-title":"Complex Intell Syst"},{"key":"1093_CR12","doi-asserted-by":"publisher","DOI":"10.1016\/j.mlwa.2021.100134","volume":"6","author":"J Chai","year":"2021","unstructured":"Chai J, Zeng H, Li A et al (2021) Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach Learn Applications 6:100134","journal-title":"Mach Learn Applications"},{"key":"1093_CR13","doi-asserted-by":"crossref","unstructured":"Park J, Tai YW, Cho D et al (2017) A unified approach of multi-scale deep and hand-crafted features for defocus estimation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1736\u20131745","DOI":"10.1109\/CVPR.2017.295"},{"key":"1093_CR14","doi-asserted-by":"publisher","first-page":"154","DOI":"10.1016\/j.neucom.2018.01.041","volume":"285","author":"R Huang","year":"2018","unstructured":"Huang R, Feng W, Fan M et al (2018) Multiscale blur detection by learning discriminative deep features. 
Neurocomputing 285:154\u2013166","journal-title":"Neurocomputing"},{"issue":"5","key":"1093_CR15","doi-asserted-by":"publisher","first-page":"2107","DOI":"10.1109\/TIP.2018.2881830","volume":"28","author":"K Zeng","year":"2018","unstructured":"Zeng K, Wang Y, Mao J et al (2018) A local metric for defocus blur detection based on CNN feature learning. IEEE Trans Image Process 28(5):2107\u20132115","journal-title":"IEEE Trans Image Process"},{"key":"1093_CR16","doi-asserted-by":"crossref","unstructured":"Zhao W, Zhao F, Wang D et al (2018) Defocus blur detection via multi-stream bottom-top-bottom fully convolutional network. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 3080\u20133088","DOI":"10.1109\/CVPR.2018.00325"},{"issue":"8","key":"1093_CR17","doi-asserted-by":"publisher","first-page":"1884","DOI":"10.1109\/TPAMI.2019.2906588","volume":"42","author":"W Zhao","year":"2020","unstructured":"Zhao W, Zhao F, Wang D et al (2020) Defocus blur detection via multi-stream bottom-top-bottom network. IEEE Trans Pattern Anal Mach Intell 42(8):1884\u20131897","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1093_CR18","doi-asserted-by":"publisher","first-page":"5426","DOI":"10.1109\/TIP.2021.3084101","volume":"30","author":"W Zhao","year":"2021","unstructured":"Zhao W, Hou X, He Y et al (2021) Defocus blur detection via boosting diversity of deep ensemble networks. IEEE Trans Image Process 30:5426\u20135438","journal-title":"IEEE Trans Image Process"},{"issue":"5","key":"1093_CR19","doi-asserted-by":"publisher","first-page":"2719","DOI":"10.1109\/TCSVT.2021.3095347","volume":"32","author":"F Zhao","year":"2022","unstructured":"Zhao F, Lu H, Zhao W et al (2022) Image-scale-symmetric cooperative network for defocus blur detection. 
IEEE Trans Circuits Syst Video Technol 32(5):2719\u20132731","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"1093_CR20","doi-asserted-by":"crossref","unstructured":"Tang C, Zhu X, Liu X et al (2019) DefusionNET: defocus blur detection via recurrently fusing and refining multi-scale deep features. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 2700\u20132709","DOI":"10.1109\/CVPR.2019.00281"},{"issue":"2","key":"1093_CR21","doi-asserted-by":"publisher","first-page":"955","DOI":"10.1109\/TPAMI.2020.3014629","volume":"44","author":"C Tang","year":"2022","unstructured":"Tang C, Xinwang LIU, Zheng X et al (2022) DeFusionNET: Defocus blur detection via recurrently fusing and refining discriminative multi-scale deep features. IEEE Trans Pattern Anal Mach Intell 44(2):955\u2013968","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1093_CR22","doi-asserted-by":"publisher","first-page":"624","DOI":"10.1109\/TMM.2020.2985541","volume":"23","author":"C Tang","year":"2021","unstructured":"Tang C, Liu X, An S et al (2021) BR2Net: Defocus blur detection via a bidirectional channel attention residual refining network. IEEE Trans Multimedia 23:624\u2013635","journal-title":"IEEE Trans Multimedia"},{"key":"1093_CR23","doi-asserted-by":"publisher","first-page":"1097","DOI":"10.1109\/TIP.2021.3139243","volume":"31","author":"A Karaali","year":"2022","unstructured":"Karaali A, Harte N, Jung CR (2022) Deep multi-scale feature learning for defocus blur estimation. IEEE Trans Image Process 31:1097\u20131106","journal-title":"IEEE Trans Image Process"},{"key":"1093_CR24","doi-asserted-by":"publisher","first-page":"3494","DOI":"10.1109\/TIP.2022.3171424","volume":"31","author":"Z Jiang","year":"2022","unstructured":"Jiang Z, Xu X, Zhang L et al (2022) MA-GANet: A Multi-attention generative adversarial network for defocus blur detection. 
IEEE Trans Image Process 31:3494\u20133508","journal-title":"IEEE Trans Image Process"},{"key":"1093_CR25","doi-asserted-by":"publisher","first-page":"140","DOI":"10.1109\/LSP.2021.3128375","volume":"29","author":"W Guo","year":"2021","unstructured":"Guo W, Xiao X, Hui Y et al (2021) Heterogeneous attention nested U-shaped network for blur detection. IEEE Signal Process Lett 29:140\u2013144","journal-title":"IEEE Signal Process Lett"},{"key":"1093_CR26","doi-asserted-by":"publisher","DOI":"10.1016\/j.sigpro.2021.107996","volume":"183","author":"Y Zhai","year":"2021","unstructured":"Zhai Y, Wang J, Deng J et al (2021) Global context guided hierarchically residual feature refinement network for defocus blur detection. Signal Process 183:107996","journal-title":"Signal Process"},{"key":"1093_CR27","unstructured":"Howard AG, Zhu M, Chen B et al (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861"},{"key":"1093_CR28","doi-asserted-by":"crossref","unstructured":"Chen S, Tan X, Wang B et al (2018) Reverse attention for salient object detection. In: Proceedings of the European conference on computer vision (ECCV), pp 234\u2013250","DOI":"10.1007\/978-3-030-01240-3_15"},{"key":"1093_CR29","unstructured":"Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980"},{"issue":"11","key":"1093_CR30","doi-asserted-by":"publisher","first-page":"4500","DOI":"10.1109\/TNNLS.2019.2955777","volume":"31","author":"SR Dubey","year":"2019","unstructured":"Dubey SR, Chakraborty S, Roy SK et al (2019) DiffGrad: An optimization method for convolutional neural networks. IEEE Trans Neur Netw Learn Syst 31(11):4500\u20134511","journal-title":"IEEE Trans Neur Netw Learn Syst"},{"key":"1093_CR31","doi-asserted-by":"crossref","unstructured":"Su B, Lu S, Tan CL (2011) Blurred image region detection and classification. 
In: Proceedings of the 19th ACM international conference on multimedia, pp 1397\u20131400","DOI":"10.1145\/2072298.2072024"},{"key":"1093_CR32","doi-asserted-by":"crossref","unstructured":"Shi J, Xu L, Jia J (2014) Discriminative blur detection features. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2965\u20132972","DOI":"10.1109\/CVPR.2014.379"},{"issue":"11","key":"1093_CR33","doi-asserted-by":"publisher","first-page":"1652","DOI":"10.1109\/LSP.2016.2611608","volume":"23","author":"C Tang","year":"2016","unstructured":"Tang C, Wu J, Hou Y et al (2016) A spectral and spatial approach of coarse-to-fine blurred image region detection. IEEE Signal Process Lett 23(11):1652\u20131656","journal-title":"IEEE Signal Process Lett"},{"key":"1093_CR34","doi-asserted-by":"crossref","unstructured":"Xu G, Quan Y, Ji H (2017) Estimating defocus blur via rank of local patches. In: Proceedings of the IEEE international conference on computer vision, pp 5371\u20135379","DOI":"10.1109\/ICCV.2017.574"},{"key":"1093_CR35","doi-asserted-by":"crossref","unstructured":"Alireza Golestaneh S, Karam LJ (2017) Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5800\u20135809","DOI":"10.1109\/CVPR.2017.71"},{"key":"1093_CR36","doi-asserted-by":"publisher","first-page":"47","DOI":"10.1016\/j.image.2018.09.007","volume":"70","author":"X Wang","year":"2019","unstructured":"Wang X, Liang X, Zheng J et al (2019) Fast detection and segmentation of partial image blur based on discrete Walsh-Hadamard transform. 
Signal Proc Image Commun 70:47\u201356","journal-title":"Signal Proc Image Commun"},{"key":"1093_CR37","doi-asserted-by":"publisher","first-page":"3748","DOI":"10.1109\/TIP.2021.3065171","volume":"30","author":"J Li","year":"2021","unstructured":"Li J, Fan D, Yang L et al (2021) Layer-output guided complementary attention learning for image defocus blur detection. IEEE Trans Image Process 30:3748\u20133763","journal-title":"IEEE Trans Image Process"},{"key":"1093_CR38","doi-asserted-by":"publisher","first-page":"88","DOI":"10.1016\/j.neucom.2022.06.023","volume":"501","author":"X Lin","year":"2022","unstructured":"Lin X, Li H, Cai Q (2022) Hierarchical complementary residual attention learning for defocus blur detection. Neurocomputing 501:88\u2013101","journal-title":"Neurocomputing"},{"key":"1093_CR39","doi-asserted-by":"crossref","unstructured":"Liu Z, Mao H, Wu CY et al (2022) A convnet for the 2020s. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 11976\u201311986","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"1093_CR40","unstructured":"Tan M, Le QV (2019) Mixconv: mixed depth wise convolutional kernels. arXiv preprint arXiv:1907.09595"},{"key":"1093_CR41","doi-asserted-by":"crossref","unstructured":"Lee J, Lee S, Cho S et al (2019) Deep defocus map estimation using domain adaptation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 12222\u201312230","DOI":"10.1109\/CVPR.2019.01250"},{"key":"1093_CR42","doi-asserted-by":"crossref","unstructured":"Sandler M, Howard A, Zhu M et al (2018) Mobilenetv2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4510\u20134520","DOI":"10.1109\/CVPR.2018.00474"},{"key":"1093_CR43","doi-asserted-by":"crossref","unstructured":"Liu Z, Lin Y, Cao Y, et al (2021) Swin transformer: hierarchical vision transformer using shifted windows. 
In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 10012\u201310022","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"1093_CR44","doi-asserted-by":"crossref","unstructured":"Zhao W, Zheng B, Lin Q et al (2019) Enhancing diversity of defocus blur detectors via cross-ensemble network. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 8905\u20138913","DOI":"10.1109\/CVPR.2019.00911"},{"key":"1093_CR45","doi-asserted-by":"publisher","first-page":"217","DOI":"10.1016\/j.eswa.2019.06.034","volume":"136","author":"S Mukherjee","year":"2019","unstructured":"Mukherjee S, Ahmed SA, Dogra DP et al (2019) Fingertip detection and tracking for recognition of air-writing in videos. Expert Syst Appl 136:217\u2013229","journal-title":"Expert Syst Appl"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01093-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01093-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01093-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,10,27]],"date-time":"2023-10-27T19:17:43Z","timestamp":1698434263000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01093-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,23]]},"references-count":45,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["1093"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01093-5","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":
"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2023,5,23]]},"assertion":[{"value":"1 November 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 April 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 May 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}