{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T22:09:45Z","timestamp":1740175785600,"version":"3.37.3"},"reference-count":45,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2022,3,29]],"date-time":"2022-03-29T00:00:00Z","timestamp":1648512000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,3,29]],"date-time":"2022-03-29T00:00:00Z","timestamp":1648512000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62175086"],"award-info":[{"award-number":["62175086"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Chinese Academy of Sciences-Youth Innovation Promotion Association","award":["2020220"],"award-info":[{"award-number":["2020220"]}]},{"DOI":"10.13039\/501100011789","name":"Department of Science and Technology of Jilin Province","doi-asserted-by":"publisher","award":["20210201132GX"],"award-info":[{"award-number":["20210201132GX"]}],"id":[{"id":"10.13039\/501100011789","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2022,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Defocus blur detection (DBD) aims to separate blurred and unblurred regions for a given image. Due to its potential and practical applications, this task has attracted much attention. Most of the existing DBD models have achieved competitive performance by aggregating multi-level features extracted from fully convolutional networks. 
However, they still face several challenges, such as coarse object boundaries of the defocus blur regions, background clutter, and the detection of low-contrast focal regions. In this paper, we develop a hierarchical edge-aware network to address these problems; to the best of our knowledge, it is the first attempt to build an end-to-end network with edge awareness for DBD. We design an edge feature extraction network to capture boundary information, and a hierarchical interior perception network to generate local and global context information, which helps detect low-contrast focal regions. Moreover, a hierarchical edge-aware fusion network is proposed to hierarchically fuse edge information and semantic features. Benefiting from the rich edge information, the fused features yield more accurate boundaries. Finally, we propose a progressive feature refinement network to refine the output features. Experimental results on two widely used DBD datasets demonstrate that the proposed model outperforms state-of-the-art approaches.<\/jats:p>","DOI":"10.1007\/s40747-022-00711-y","type":"journal-article","created":{"date-parts":[[2022,3,29]],"date-time":"2022-03-29T12:05:49Z","timestamp":1648555549000},"page":"4265-4276","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Hierarchical edge-aware network for defocus blur detection"],"prefix":"10.1007","volume":"8","author":[{"given":"Zijian","family":"Zhao","sequence":"first","affiliation":[]},{"given":"Hang","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Huiyuan","family":"Luo","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,3,29]]},"reference":[{"key":"711_CR1","doi-asserted-by":"publisher","first-page":"2977","DOI":"10.1007\/s10489-020-01691-7","volume":"50","author":"C Xia","year":"2020","unstructured":"Xia C, Gao X, Li KC et al (2020) 
Salient object detection based on distribution-edge guidance and iterative Bayesian optimization. Appl Intell 50:2977\u20132990. https:\/\/doi.org\/10.1007\/s10489-020-01691-7","journal-title":"Appl Intell"},{"issue":"10","key":"711_CR2","doi-asserted-by":"publisher","first-page":"1706","DOI":"10.1364\/OL.38.001706","volume":"38","author":"C Tang","year":"2013","unstructured":"Tang C, Hou C, Song Z (2013) Defocus map estimation from a single image via spectrum contrast. Opt Lett 38(10):1706\u20131708. https:\/\/doi.org\/10.1364\/OL.38.001706","journal-title":"Opt Lett"},{"issue":"1","key":"711_CR3","doi-asserted-by":"publisher","first-page":"257","DOI":"10.1016\/j.jvcir.2016.01.002","volume":"35","author":"X Zhang","year":"2016","unstructured":"Zhang X, Wang R, Jiang X et al (2016) Spatially variant defocus blur map estimation and deblurring from a single image. J Vis Commun Image Represent 35(1):257\u2013264. https:\/\/doi.org\/10.1016\/j.jvcir.2016.01.002","journal-title":"J Vis Commun Image Represent"},{"issue":"10","key":"711_CR4","doi-asserted-by":"publisher","first-page":"1699","DOI":"10.1109\/TPAMI.2008.168","volume":"30","author":"A Levin","year":"2008","unstructured":"Levin A, Rav-Acha A, Lischinski D (2008) Spectral matting. IEEE Trans Pattern Anal Mach Intell 30(10):1699\u20131712. https:\/\/doi.org\/10.1109\/TPAMI.2008.168","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"12","key":"711_CR5","doi-asserted-by":"publisher","first-page":"4879","DOI":"10.1109\/TIP.2013.2279316","volume":"22","author":"X Zhu","year":"2013","unstructured":"Zhu X, Cohen S, Schiller S et al (2013) Estimating spatially varying defocus blur from a single image. IEEE Trans Image Process 22(12):4879\u20134891. 
https:\/\/doi.org\/10.1109\/TIP.2013.2279316","journal-title":"IEEE Trans Image Process"},{"key":"711_CR6","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2011.2169974","author":"CT Vu","year":"2012","unstructured":"Vu CT, Phan TD, Chandler DM (2012) $${ S}_{3}$$: a spectral and spatial measure of local perceived sharpness in natural images. IEEE Trans Image Process. https:\/\/doi.org\/10.1109\/TIP.2011.2169974","journal-title":"IEEE Trans Image Process"},{"key":"711_CR7","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2013.145","author":"Y Zhang","year":"2013","unstructured":"Zhang Y, Hirakawa K (2013) Blur processing using double discrete wavelet transform. IEEE Conf Comput Vis Pattern Recognit. https:\/\/doi.org\/10.1109\/CVPR.2013.145","journal-title":"IEEE Conf Comput Vis Pattern Recognit"},{"key":"711_CR8","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.379","author":"J Shi","year":"2014","unstructured":"Shi J, Xu L, Jia J (2014) Discriminative blur detection features. IEEE Conf Comput Vis Pattern Recognit. https:\/\/doi.org\/10.1109\/CVPR.2014.379","journal-title":"IEEE Conf Comput Vis Pattern Recognit"},{"key":"711_CR9","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2016.2611608","author":"C Tang","year":"2016","unstructured":"Tang C, Wu J, Hou Y, Wang P, Li W (2016) A spectral and spatial approach of coarse-to-fine blurred image region detection. IEEE Signal Process Lett. https:\/\/doi.org\/10.1109\/LSP.2016.2611608","journal-title":"IEEE Signal Process Lett"},{"issue":"9","key":"711_CR10","doi-asserted-by":"publisher","first-page":"1852","DOI":"10.1016\/j.patcog.2011.03.009","volume":"44","author":"S Zhuo","year":"2011","unstructured":"Zhuo S, Sim T (2011) Defocus map estimation from a single image. Pattern Recognit 44(9):1852\u20131858. 
https:\/\/doi.org\/10.1016\/j.patcog.2011.03.009","journal-title":"Pattern Recognit"},{"issue":"6","key":"711_CR11","doi-asserted-by":"publisher","first-page":"1173","DOI":"10.1007\/s11760-012-0381-6","volume":"7","author":"J Zhao","year":"2013","unstructured":"Zhao J, Feng H, Xu Z et al (2013) Automatic blur region segmentation approach using image matting. Signal Image Video Process 7(6):1173\u20131181. https:\/\/doi.org\/10.1007\/s11760-012-0381-6","journal-title":"Signal Image Video Process"},{"key":"711_CR12","doi-asserted-by":"crossref","unstructured":"Su B, Lu S, Tan CL (2011) Blurred image region detection and classification. ACM international conference on multimedia, pp 1397\u20131400","DOI":"10.1145\/2072298.2072024"},{"issue":"7","key":"711_CR13","doi-asserted-by":"publisher","first-page":"3141","DOI":"10.1109\/TIP.2016.2555702","volume":"25","author":"E Saad","year":"2016","unstructured":"Saad E, Hirakawa K (2016) Defocus blur-invariant scale-space feature extractions. IEEE Trans Image Process 25(7):3141\u20133156. https:\/\/doi.org\/10.1109\/TIP.2016.2555702","journal-title":"IEEE Trans Image Process"},{"issue":"10","key":"711_CR14","doi-asserted-by":"publisher","first-page":"2220","DOI":"10.1109\/TCYB.2015.2472478","volume":"46","author":"Y Pang","year":"2017","unstructured":"Pang Y, Zhu H, Li X et al (2017) classifying discriminative features for blur detection. IEEE Trans Cybern 46(10):2220\u20132227. https:\/\/doi.org\/10.1109\/TCYB.2015.2472478","journal-title":"IEEE Trans Cybern"},{"key":"711_CR15","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2008.4587465","author":"R Liu","year":"2008","unstructured":"Liu R, Li Z, Jia J (2008) Image partial blur detection and classification. IEEE Conf Comput Vis Pattern Recognit. 
https:\/\/doi.org\/10.1109\/CVPR.2008.4587465","journal-title":"IEEE Conf Comput Vis Pattern Recognit"},{"key":"711_CR16","doi-asserted-by":"publisher","unstructured":"Zhao W, Zhao F, Wang D et al (2018) Defocus blur detection via multi-stream bottom-top-bottom fully convolutional network. IEEE conference on computer vision and pattern recognition, pp 3080\u20133088. https:\/\/doi.org\/10.1109\/CVPR.2018.00325","DOI":"10.1109\/CVPR.2018.00325"},{"key":"711_CR17","doi-asserted-by":"publisher","unstructured":"Tang C, Zhu X, Liu X et al (2019) DeFusionNET: defocus blur detection via recurrently fusing and refining multi-scale deep features. IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 2695\u20132704. https:\/\/doi.org\/10.1109\/CVPR.2019.00281","DOI":"10.1109\/CVPR.2019.00281"},{"issue":"4","key":"711_CR18","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/TIP.2016.2528042","volume":"25","author":"Y Xin","year":"2016","unstructured":"Xin Y, Eramian M (2016) LBP-based segmentation of defocus blur. IEEE Trans Image Process 25(4):1\u20131. https:\/\/doi.org\/10.1109\/TIP.2016.2528042","journal-title":"IEEE Trans Image Process"},{"key":"711_CR19","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2017.2662206","author":"K Zhang","year":"2017","unstructured":"Zhang K, Zuo W, Chen Y et al (2017) Beyond a gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Trans Image Process. https:\/\/doi.org\/10.1109\/TIP.2017.2662206","journal-title":"IEEE Trans Image Process"},{"issue":"9","key":"711_CR20","doi-asserted-by":"publisher","first-page":"1901","DOI":"10.1109\/TPAMI.2015.2491929","volume":"38","author":"Y Wei","year":"2016","unstructured":"Wei Y, Wei X, Min L et al (2016) HCP: a flexible CNN framework for multi-label image classification. IEEE Trans Softw Eng 38(9):1901\u20131907. 
https:\/\/doi.org\/10.1109\/TPAMI.2015.2491929","journal-title":"IEEE Trans Softw Eng"},{"issue":"2","key":"711_CR21","doi-asserted-by":"publisher","first-page":"295","DOI":"10.1109\/TPAMI.2015.2439281","volume":"38","author":"C Dong","year":"2016","unstructured":"Dong C, Loy CC, He K et al (2016) Image super-resolution using deep convolutional networks. IEEE Trans Pattern Anal Mach Intell 38(2):295\u2013307. https:\/\/doi.org\/10.1109\/TPAMI.2015.2439281","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"711_CR22","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-020-02147-8","author":"J Jiao","year":"2021","unstructured":"Jiao J, Xue H, Ding J (2021) Non-local duplicate pooling network for salient object detection. Appl Intell. https:\/\/doi.org\/10.1007\/s10489-020-02147-8","journal-title":"Appl Intell"},{"key":"711_CR23","doi-asserted-by":"publisher","first-page":"323","DOI":"10.1016\/j.patcog.2017.11.007","volume":"76","author":"P Li","year":"2018","unstructured":"Li P, Wang D, Wang L et al (2018) Deep visual tracking: review and experimental comparison. Pattern Recognit 76:323\u2013338","journal-title":"Pattern Recognit"},{"key":"711_CR24","unstructured":"Karaali A, Harte N, Jung CR (2020) Deep multi-scale feature learning for defocus blur estimation. arXiv:2009.11939"},{"key":"711_CR25","doi-asserted-by":"crossref","unstructured":"Park J, Tai Y W, Cho D et al (2017) A unified approach of multi-scale deep and hand-crafted features for defocus estimation. IEEE Computer Society, pp 2760\u20132769","DOI":"10.1109\/CVPR.2017.295"},{"key":"711_CR26","doi-asserted-by":"publisher","unstructured":"Zhao W, Zheng B, Lin Q et al (2019) Enhancing diversity of defocus blur detectors via cross-ensemble network. IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 8897\u20138905. 
https:\/\/doi.org\/10.1109\/CVPR.2019.00911","DOI":"10.1109\/CVPR.2019.00911"},{"key":"711_CR27","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2847421","author":"K Ma","year":"2016","unstructured":"Ma K, Fu H, Liu T et al (2016) Deep blur mapping: exploiting high-level semantics by deep neural networks. IEEE Trans Image Process. https:\/\/doi.org\/10.1109\/TIP.2018.2847421","journal-title":"IEEE Trans Image Process"},{"key":"711_CR28","doi-asserted-by":"publisher","unstructured":"Lee J, Lee S, Cho S et al (2019) Deep defocus map estimation using domain adaptation. IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp 12214\u201312222. https:\/\/doi.org\/10.1109\/CVPR.2019.01250","DOI":"10.1109\/CVPR.2019.01250"},{"key":"711_CR29","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2020.3014629","author":"C Tang","year":"2020","unstructured":"Tang C, Liu X, Zheng X et al (2020) DeFusionNET: defocus blur detection via recurrently fusing and refining discriminative multi-scale deep features. IEEE Trans Pattern Anal Mach Intell. https:\/\/doi.org\/10.1109\/TPAMI.2020.3014629","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"711_CR30","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2020.2985541","author":"C Tang","year":"2021","unstructured":"Tang C, Liu X, An S et al (2021) BR$$^{2}$$Net: defocus blur detection via a bidirectional channel attention residual refining network. IEEE Trans Multimed. https:\/\/doi.org\/10.1109\/TMM.2020.2985541","journal-title":"IEEE Trans Multimed"},{"issue":"7","key":"711_CR31","doi-asserted-by":"publisher","first-page":"12063","DOI":"10.1609\/aaai.v34i07.6884","volume":"34","author":"C Tang","year":"2020","unstructured":"Tang C, Liu X, Zhu X et al (2020) R$$^{2}$$MRF: defocus blur detection via recurrently refining multi-scale residual features. Proc AAAI Conf Artif Intell 34(7):12063\u201312070. 
https:\/\/doi.org\/10.1609\/aaai.v34i07.6884","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"711_CR32","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3065171","author":"J Li","year":"2021","unstructured":"Li J, Fan D, Yang L et al (2021) Layer-output guided complementary attention learning for image defocus blur detection. IEEE Trans Image Process. https:\/\/doi.org\/10.1109\/TIP.2021.3065171","journal-title":"IEEE Trans Image Process"},{"key":"711_CR33","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2019.2906588","author":"W Zhao","year":"2020","unstructured":"Zhao W, Zhao F, Wang D et al (2020) Defocus blur detection via multi-stream bottom-top-bottom network. IEEE Trans Pattern Anal Mach Intell. https:\/\/doi.org\/10.1109\/TPAMI.2019.2906588","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"711_CR34","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3084101","author":"W Zhao","year":"2021","unstructured":"Zhao W, Hou X, He Y et al (2021) Defocus blur detection via boosting diversity of deep ensemble networks. IEEE Trans Image Process. https:\/\/doi.org\/10.1109\/TIP.2021.3084101","journal-title":"IEEE Trans Image Process"},{"key":"711_CR35","doi-asserted-by":"publisher","unstructured":"Zhao W, Shang C, Lu H (2021) Self-generated defocus blur detection via dual adversarial discriminators. IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 6929\u20136938. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00686","DOI":"10.1109\/CVPR46437.2021.00686"},{"issue":"11","key":"711_CR36","doi-asserted-by":"publisher","first-page":"1652","DOI":"10.1109\/LSP.2016.2611608","volume":"23","author":"C Tang","year":"2016","unstructured":"Tang C, Wu J, Hou Y et al (2016) A spectral and spatial approach of coarse-to-fine blurred image region detection. IEEE Signal Process Lett 23(11):1652\u20131656. 
https:\/\/doi.org\/10.1109\/LSP.2016.2611608","journal-title":"IEEE Signal Process Lett"},{"key":"711_CR37","doi-asserted-by":"publisher","unstructured":"Golestaneh SA, Karam LJ (2017) Spatially-varying blur detection based on multiscale fused and sorted transform coefficients of gradient magnitudes. IEEE conference on computer vision and pattern recognition, pp 5800\u20135809. https:\/\/doi.org\/10.1109\/CVPR.2017.71","DOI":"10.1109\/CVPR.2017.71"},{"key":"711_CR38","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2019.2913372","author":"J Hu","year":"2017","unstructured":"Hu J, Shen L, Albanie S et al (2017) Squeeze-and-excitation networks. IEEE Trans Pattern Anal Mach Intell. https:\/\/doi.org\/10.1109\/TPAMI.2019.2913372","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"711_CR39","doi-asserted-by":"publisher","unstructured":"Peng C, Zhang X, Yu G, et al. (2017) Large kernel matters-improve semantic segmentation by global convolutional network. IEEE conference on computer vision and pattern recognition (CVPR), pp 1743\u20131751. https:\/\/doi.org\/10.1109\/CVPR.2017.189","DOI":"10.1109\/CVPR.2017.189"},{"key":"711_CR40","doi-asserted-by":"publisher","unstructured":"Zhao J, Liu J, Fan D et al (2020) EGNet: edge guidance network for salient object detection. IEEE\/CVF international conference on computer vision (ICCV), pp 8778\u20138787. https:\/\/doi.org\/10.1109\/ICCV.2019.00887","DOI":"10.1109\/ICCV.2019.00887"},{"issue":"7","key":"711_CR41","doi-asserted-by":"publisher","first-page":"10599","DOI":"10.1609\/aaai.v34i07.6633","volume":"34","author":"Z Chen","year":"2020","unstructured":"Chen Z, Xu Q, Cong R et al (2020) Global context-aware progressive aggregation network for salient object detection. Proc AAAI Conf Artif Intell 34(7):10599\u201310606. 
https:\/\/doi.org\/10.1609\/aaai.v34i07.6633","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"711_CR42","unstructured":"Wei J, Wang S, Huang Q (2019) F3Net: fusion, feedback and focus for salient object detection. arXiv:1911.11445"},{"key":"711_CR43","doi-asserted-by":"publisher","unstructured":"Fan D, Gong C, Yang C et al (2018) Enhanced-alignment measure for binary foreground map evaluation, pp 698\u2013704. https:\/\/doi.org\/10.24963\/ijcai.2018\/97","DOI":"10.24963\/ijcai.2018\/97"},{"key":"711_CR44","doi-asserted-by":"publisher","unstructured":"Fan D, Cheng M, Liu Y et al (2017) Structure-measure: a new way to evaluate foreground maps. IEEE international conference on computer vision (ICCV), pp 4558\u20134567. https:\/\/doi.org\/10.1109\/ICCV.2017.487","DOI":"10.1109\/ICCV.2017.487"},{"key":"711_CR45","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90","author":"K He","year":"2016","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. Proc IEEE Conf Comput Vis Pattern Recognit. 
https:\/\/doi.org\/10.1109\/CVPR.2016.90","journal-title":"Proc IEEE Conf Comput Vis Pattern Recognit"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00711-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00711-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00711-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,27]],"date-time":"2022-09-27T13:57:27Z","timestamp":1664287047000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00711-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,29]]},"references-count":45,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,10]]}},"alternative-id":["711"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00711-y","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2022,3,29]]},"assertion":[{"value":"6 July 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 March 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 March 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The corresponding authors declare, on behalf of all authors, that there is no conflict of interest. We declare that we have no commercial or associative interest that represents a conflict of interest in connection with the submitted work.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}