{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,22]],"date-time":"2026-04-22T20:24:51Z","timestamp":1776889491755,"version":"3.51.2"},"reference-count":43,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,3,9]],"date-time":"2024-03-09T00:00:00Z","timestamp":1709942400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,3,9]],"date-time":"2024-03-09T00:00:00Z","timestamp":1709942400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"the national level Frontier Artificial Intelligence Technology Research Project","award":["672020109"],"award-info":[{"award-number":["672020109"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The intrinsic similarity between camouflaged objects and their background environment impedes the automatic detection\/segmentation of camouflaged objects, and novel deep learning network architectures show promise for overcoming this challenge and improving detection accuracy. However, existing network architectures for distinguishing camouflaged objects from their backgrounds do not account for the constraint of detection speed, which results in high computational complexity and an inability to meet the requirements of rapid detection. Therefore, based on the human visual system, this study proposes a single-stage lightweight camouflaged object detection network built on multilevel feature fusion, which integrates features across multiple feature layers and receptive field sizes. 
Using three benchmark datasets for normal camouflaged objects, the lightweight network (LINet) model demonstrated accuracy superior to that of six existing mainstream camouflaged object detection methods. Its detection speed of 126.3 frames per second is significantly higher than those of the existing mainstream methods, enabling rapid detection with an increase of up to 187.62%. The accuracy of LINet is lowest with the ResNet101 backbone and highest with ResNet152. These findings pave the way for diverse applications of camouflaged target detection algorithms.<\/jats:p>","DOI":"10.1007\/s40747-024-01386-3","type":"journal-article","created":{"date-parts":[[2024,3,9]],"date-time":"2024-03-09T10:01:37Z","timestamp":1709978497000},"page":"4409-4419","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Lightweight camouflaged object detection model based on multilevel feature fusion"],"prefix":"10.1007","volume":"10","author":[{"given":"Qiaoyi","family":"Li","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0002-6810-2786","authenticated-orcid":false,"given":"Zhengjie","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Xiaoning","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Hongbao","family":"Du","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,3,9]]},"reference":[{"key":"1386_CR1","doi-asserted-by":"publisher","first-page":"351","DOI":"10.1016\/j.ieri.2013.11.050","volume":"4","author":"SK Singh","year":"2013","unstructured":"Singh SK, Dhawale CA, Misra S (2013) Survey of object detection methods in camouflaged image. IERI Procedia 4:351\u2013357. 
https:\/\/doi.org\/10.1016\/j.ieri.2013.11.050","journal-title":"IERI Procedia"},{"key":"1386_CR2","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1016\/j.cviu.2019.04.006","volume":"184","author":"TN Le","year":"2019","unstructured":"Le TN, Nguyen TV, Nie Z et al (2019) Anabranch network for camouflaged object segmentation. Comput Vis Image Underst 184:45\u201356. https:\/\/doi.org\/10.1016\/j.cviu.2019.04.006","journal-title":"Comput Vis Image Underst"},{"key":"1386_CR3","doi-asserted-by":"publisher","unstructured":"Fan DP, Ji GP, Sun G, et al. (2020) Camouflaged object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. p 2774\u20132784. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00285.","DOI":"10.1109\/CVPR42600.2020.00285"},{"key":"1386_CR4","doi-asserted-by":"crossref","unstructured":"Fan DP, Ji GP, Zhou T, et al. (2020) Pranet: parallel reverse attention network for polyp segmentation. In: Medical image computing and computer-assisted intervention\u2014MICCAI. Proceedings of the part VI: 23rd International Conference, Lima, Peru, October 4\u20138, 2020 23. Springer International Publishing. p 263\u2013273.","DOI":"10.1007\/978-3-030-59725-2_26"},{"key":"1386_CR5","doi-asserted-by":"publisher","first-page":"21414","DOI":"10.1073\/pnas.1213775110","volume":"109","author":"R la P\u00e9rez-de Fuente","year":"2012","unstructured":"la P\u00e9rez-de Fuente R, Delcl\u00f2s X, Pe\u00f1alver E et al (2012) Early evolution and ecology of camouflage in insects. Proc Natl Acad Sci U S A 109:21414\u201321419. https:\/\/doi.org\/10.1073\/pnas.1213775110","journal-title":"Proc Natl Acad Sci U S A"},{"key":"1386_CR6","doi-asserted-by":"publisher","first-page":"2075","DOI":"10.1109\/TNNLS.2020.2996406","volume":"32","author":"DP Fan","year":"2021","unstructured":"Fan DP, Lin Z, Zhang Z et al (2021) Rethinking RGB-D salient object detection: Models, data sets, and large-scale benchmarks. 
IEEE Trans Neural Netw Learn Syst 32:2075\u20132089. https:\/\/doi.org\/10.1109\/TNNLS.2020.2996406","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1386_CR7","doi-asserted-by":"publisher","unstructured":"Li G, Xie Y, Lin L, et al. (2017) Instance-level salient object segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p 247\u2013256. https:\/\/doi.org\/10.1109\/CVPR.2017.34.","DOI":"10.1109\/CVPR.2017.34"},{"key":"1386_CR8","doi-asserted-by":"publisher","first-page":"3239","DOI":"10.1109\/TPAMI.2021.3051099","volume":"44","author":"W Wang","year":"2022","unstructured":"Wang W, Lai Q, Fu H et al (2022) Salient object detection in the deep learning era: an in-depth survey. IEEE Trans Pattern Anal Mach Intell 44:3239\u20133259. https:\/\/doi.org\/10.1109\/TPAMI.2021.3051099","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1386_CR9","doi-asserted-by":"publisher","unstructured":"Zhao JX, Cao Y, Fan DP, et al. (2019) Contrast prior and fluid pyramid integration for RGBD salient object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. p 3922\u20133931. https:\/\/doi.org\/10.1109\/CVPR.2019.00405.","DOI":"10.1109\/CVPR.2019.00405"},{"key":"1386_CR10","doi-asserted-by":"publisher","unstructured":"Zhao JX, Liu JJ, Fan DP, et al. (2019) EGNet: Edge guidance network for salient object detection. In: Proceedings of the IEEE\/CVF international conference on computer vision. p 8778\u20138787. https:\/\/doi.org\/10.1109\/ICCV.2019.00887.","DOI":"10.1109\/ICCV.2019.00887"},{"key":"1386_CR11","doi-asserted-by":"publisher","unstructured":"Kirillov A, He K, Girshick R, et al. (2019) Panoptic segmentation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. p 9396\u20139405. 
https:\/\/doi.org\/10.1109\/CVPR.2019.00963.","DOI":"10.1109\/CVPR.2019.00963"},{"key":"1386_CR12","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1007\/s11263-019-01247-4","volume":"128","author":"L Liu","year":"2020","unstructured":"Liu L, Ouyang W, Wang X et al (2020) Deep learning for generic object detection: a survey. Int J Comput Vis 128:261\u2013318. https:\/\/doi.org\/10.1007\/s11263-019-01247-4","journal-title":"Int J Comput Vis"},{"key":"1386_CR13","first-page":"1","volume":"87","author":"G Medioni","year":"2009","unstructured":"Medioni G (2009) Generic object recognition by inference of 3-d volumetric. Object Categorization 87:1","journal-title":"Object Categorization"},{"key":"1386_CR14","doi-asserted-by":"publisher","unstructured":"Sun Y, Chen G, Zhou T, et al. 2021. Context-aware cross-level fusion network for camouflaged object detection. arXiv preprint arXiv:2105.12555. https:\/\/doi.org\/10.24963\/ijcai.2021\/142.","DOI":"10.24963\/ijcai.2021\/142"},{"key":"1386_CR15","doi-asserted-by":"publisher","unstructured":"Yang F, Zhai Q, Li X, et al. (2021) Uncertainty-guided transformer reasoning for camouflaged object detection. In: Proceedings of the IEEE\/CVF international conference on computer vision. p 4126\u20134135. https:\/\/doi.org\/10.1109\/ICCV48922.2021.00411.","DOI":"10.1109\/ICCV48922.2021.00411"},{"key":"1386_CR16","doi-asserted-by":"publisher","unstructured":"Zhai Q, Li X, Yang F, et al. (2021) Mutual graph learning for camouflaged object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. p 12992\u201313002. https:\/\/doi.org\/10.1109\/CVPR46437.2021.01280.","DOI":"10.1109\/CVPR46437.2021.01280"},{"key":"1386_CR17","doi-asserted-by":"publisher","unstructured":"Li A, Zhang J, Lv Y, et al. (2021) Uncertainty-aware joint salient object and camouflaged object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. p 10066\u201310076. 
https:\/\/doi.org\/10.1109\/CVPR46437.2021.00994.","DOI":"10.1109\/CVPR46437.2021.00994"},{"key":"1386_CR18","doi-asserted-by":"publisher","unstructured":"Lv Y, Zhang J, Dai Y, et al. (2021) Simultaneously localize, segment and rank the camouflaged objects. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. p 11586\u201311596. https:\/\/doi.org\/10.1109\/CVPR46437.2021.01142.","DOI":"10.1109\/CVPR46437.2021.01142"},{"key":"1386_CR19","doi-asserted-by":"publisher","unstructured":"Mei H, Ji GP, Wei Z, et al. (2021) Camouflaged object segmentation with distraction mining. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. pp. 8768\u20138777. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00866.","DOI":"10.1109\/CVPR46437.2021.00866"},{"key":"1386_CR20","doi-asserted-by":"publisher","unstructured":"Sun Y, Wang S, Chen C, et al. (2022) Boundary-guided camouflaged object detection. arXiv preprint arXiv:2207.00794. https:\/\/doi.org\/10.24963\/ijcai.2022\/186.","DOI":"10.24963\/ijcai.2022\/186"},{"key":"1386_CR21","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, et al. (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p 770\u2013778. https:\/\/doi.org\/10.1109\/CVPR.2016.90.","DOI":"10.1109\/CVPR.2016.90"},{"key":"1386_CR22","doi-asserted-by":"publisher","unstructured":"Fan DP, Gong C, Cao Y, et al. 2018. Enhanced-alignment measure for binary foreground map evaluation. arXiv preprint arXiv:1805.10421. https:\/\/doi.org\/10.24963\/ijcai.2018\/97.","DOI":"10.24963\/ijcai.2018\/97"},{"key":"1386_CR23","doi-asserted-by":"publisher","unstructured":"Fan DP, Cheng MM, Liu Y, et al. (2017) Structure-measure: a new way to evaluate foreground maps. In: Proceedings of the IEEE international conference on computer vision. p 4558\u20134567. 
https:\/\/doi.org\/10.1109\/ICCV.2017.487.","DOI":"10.1109\/ICCV.2017.487"},{"key":"1386_CR24","doi-asserted-by":"publisher","unstructured":"Margolin R, Zelnik-Manor L, and Tal A. (2014) How to evaluate foreground maps? In: Proceedings of the IEEE conference on computer vision and pattern recognition. p 248\u2013255. https:\/\/doi.org\/10.1109\/CVPR.2014.39.","DOI":"10.1109\/CVPR.2014.39"},{"key":"1386_CR25","doi-asserted-by":"publisher","unstructured":"Perazzi F, Kr\u00e4henb\u00fchl P, Pritch Y, et al. (2012) Saliency filters: Contrast based filtering for salient region detection. In: IEEE conference on computer vision and pattern recognition. p 733\u2013740. https:\/\/doi.org\/10.1109\/CVPR.2012.6247743.","DOI":"10.1109\/CVPR.2012.6247743"},{"key":"1386_CR26","doi-asserted-by":"publisher","unstructured":"Huang G, Liu Z, Van Der Maaten L, et al. (2017) Densely connected convolutional networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p 2261\u20132269. https:\/\/doi.org\/10.1109\/CVPR.2017.243.","DOI":"10.1109\/CVPR.2017.243"},{"key":"1386_CR27","doi-asserted-by":"publisher","unstructured":"Liu S, Huang D, and Wang Y. Receptive field block net for accurate and fast object detection, Computer Vision\u2014ECCV 2018; 2018: 404\u2013419. https:\/\/doi.org\/10.1007\/978-3-030-01252-6_24.","DOI":"10.1007\/978-3-030-01252-6_24"},{"key":"1386_CR28","unstructured":"Skurowski P, Abdulameer H, B\u0142aszczyk J, et al. (2018) Animal camouflage analysis: Chameleon database. Unpublished manuscript. 2: 7."},{"key":"1386_CR29","unstructured":"Kingma DP and Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980."},{"key":"1386_CR30","doi-asserted-by":"publisher","unstructured":"Wang L, Lu H, Wang Y, et al. (2017) Learning to detect salient objects with image-level supervision. In: Proceedings of the IEEE conference on computer vision and pattern recognition. p 3796\u20133805. 
https:\/\/doi.org\/10.1109\/CVPR.2017.404.","DOI":"10.1109\/CVPR.2017.404"},{"key":"1386_CR31","doi-asserted-by":"publisher","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","volume":"88","author":"M Everingham","year":"2010","unstructured":"Everingham M, Van Gool L, Williams CKI et al (2010) The Pascal visual object classes (voc) challenge. Int J Comput Vis 88:303\u2013338. https:\/\/doi.org\/10.1007\/s11263-009-0275-4","journal-title":"Int J Comput Vis"},{"key":"1386_CR32","doi-asserted-by":"publisher","first-page":"43290","DOI":"10.1109\/ACCESS.2021.3064443","volume":"9","author":"J Yan","year":"2021","unstructured":"Yan J, Le TN, Nguyen KD et al (2021) Mirrornet: bio-inspired camouflaged object segmentation. IEEE Access 9:43290\u201343300. https:\/\/doi.org\/10.1109\/ACCESS.2021.3064443","journal-title":"IEEE Access"},{"key":"1386_CR33","doi-asserted-by":"publisher","unstructured":"Lv Y, Zhang J, Dai Y, et al. (2021) Simultaneously localize, segment and rank the camouflaged objects. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. p 11591\u201311601. https:\/\/doi.org\/10.1109\/CVPR46437.2021.01142.","DOI":"10.1109\/CVPR46437.2021.01142"},{"key":"1386_CR34","doi-asserted-by":"publisher","unstructured":"Jia Q, Yao S, Liu Y, et al. (2022) Segment, magnify and reiterate: detecting camouflaged objects the hard way. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition. p 4713\u20134722. https:\/\/doi.org\/10.1109\/CVPR52688.2022.00467","DOI":"10.1109\/CVPR52688.2022.00467"},{"key":"1386_CR35","doi-asserted-by":"publisher","unstructured":"Pang Y, Zhao X, Xiang TZ, et al. (2022) Zoom in and out: a mixed-scale triplet network for camouflaged object detection. Proceedings of the IEEE\/CVF Conference on computer vision and pattern recognition. p 2160\u20132170. 
https:\/\/doi.org\/10.1109\/CVPR52688.2022.00220","DOI":"10.1109\/CVPR52688.2022.00220"},{"key":"1386_CR36","doi-asserted-by":"publisher","unstructured":"Bhajantri NU, Nagabhushan P (2006) Camouflage defect identification: a novel approach. 9th International Conference on Information Technology (ICIT'06). IEEE. p 145\u2013148. https:\/\/doi.org\/10.1109\/ICIT.2006.34","DOI":"10.1109\/ICIT.2006.34"},{"key":"1386_CR37","doi-asserted-by":"publisher","unstructured":"Feng X, Guoying C, Wei S (2013) Camouflage texture evaluation using saliency map. Proceedings of the Fifth International Conference on Internet Multimedia Computing and Service. p 93\u201396. https:\/\/doi.org\/10.1007\/s00530-014-0368-y","DOI":"10.1007\/s00530-014-0368-y"},{"issue":"3","key":"1386_CR38","doi-asserted-by":"publisher","first-page":"208","DOI":"10.1006\/cviu.2001.0912","volume":"82","author":"A Tankus","year":"2001","unstructured":"Tankus A, Yeshurun Y (2001) Convexity-based visual camouflage breaking. Comput Vision Image Underst. 82(3):208\u2013237. https:\/\/doi.org\/10.1006\/cviu.2001.0912","journal-title":"Comput Vision Image Underst."},{"key":"1386_CR39","doi-asserted-by":"publisher","first-page":"4065","DOI":"10.1007\/s11042-015-2946-1","volume":"75","author":"F Xue","year":"2016","unstructured":"Xue F, Yong C, Xu S et al (2016) Camouflage performance analysis and evaluation framework based on features fusion. Multimed Tools Appl 75:4065\u20134082. https:\/\/doi.org\/10.1007\/s11042-015-2946-1","journal-title":"Multimed Tools Appl"},{"key":"1386_CR40","doi-asserted-by":"publisher","unstructured":"Li S, Florencio D, Zhao Y, et al. (2017) Foreground detection in camouflaged scenes. 2017 IEEE International Conference on Image Processing (ICIP). IEEE. p 4247\u20134251. 
https:\/\/doi.org\/10.1109\/ICIP.2017.8297083","DOI":"10.1109\/ICIP.2017.8297083"},{"issue":"8","key":"1386_CR41","doi-asserted-by":"publisher","first-page":"1883","DOI":"10.1111\/2041-210X.13019","volume":"9","author":"TW Pike","year":"2018","unstructured":"Pike TW (2018) Quantifying camouflage and conspicuousness using visual salience. Methods Ecol Evol 9(8):1883\u20131895. https:\/\/doi.org\/10.1111\/2041-210X.13019","journal-title":"Methods Ecol Evol"},{"key":"1386_CR42","doi-asserted-by":"publisher","unstructured":"Zhao T, Wu X (2019) Pyramid feature attention network for saliency detection. Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition. p 3085\u20133094. https:\/\/doi.org\/10.1109\/CVPR.2019.00320","DOI":"10.1109\/CVPR.2019.00320"},{"key":"1386_CR43","first-page":"40","volume":"7","author":"AK Aggarwal","year":"2022","unstructured":"Aggarwal AK, Jaidka P (2022) Segmentation of crop images for crop yield prediction. Int J Biol Biomed 7:40\u201344","journal-title":"Int J Biol Biomed"}],"container-title":["Complex &amp; Intelligent 
Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01386-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01386-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01386-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,16]],"date-time":"2024-05-16T18:27:15Z","timestamp":1715884035000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01386-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,9]]},"references-count":43,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["1386"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01386-3","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,3,9]]},"assertion":[{"value":"11 October 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 February 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 March 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this 
paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"All the authors agreed to participate in this paper.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}}]}}