{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,1]],"date-time":"2025-11-01T09:36:14Z","timestamp":1761989774564,"version":"3.37.3"},"reference-count":59,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2024,5,7]],"date-time":"2024-05-07T00:00:00Z","timestamp":1715040000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,5,7]],"date-time":"2024-05-07T00:00:00Z","timestamp":1715040000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"the national level Frontier Artificial Intelligence Technology Research Project","award":["672020109"],"award-info":[{"award-number":["672020109"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The camouflaged object segmentation model (COSM) has recently gained substantial attention due to its remarkable ability to detect camouflaged objects. Nevertheless, deep vision models are widely acknowledged to be susceptible to adversarial examples, which can mislead models, causing them to make incorrect predictions through imperceptible perturbations. The vulnerability to adversarial attacks raises significant concerns when deploying COSM in security-sensitive applications. Consequently, it is crucial to determine whether the foundational vision model COSM is also susceptible to such attacks. To our knowledge, our work represents the first exploration of strategies for targeting COSM with adversarial examples in the digital world. 
With the primary objective of reversing the predictions for both masked objects and backgrounds, we explore the adversarial robustness of COSM in full white-box and black-box settings. Beyond this primary objective, our investigation reveals the potential to generate any desired mask through adversarial attacks. The experimental results indicate that COSM demonstrates weak robustness, rendering it vulnerable to adversarial example attacks. In the realm of camouflaged object segmentation (COS), the projected gradient descent (PGD) attack method exhibits superior attack capabilities compared to the fast gradient sign method (FGSM) in both white-box and black-box settings. These findings highlight the security risks in the application of COSM and pave the way for multiple applications of COSM.<\/jats:p>","DOI":"10.1007\/s40747-024-01455-7","type":"journal-article","created":{"date-parts":[[2024,5,7]],"date-time":"2024-05-07T08:01:26Z","timestamp":1715068886000},"page":"5445-5457","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["ATTACK-COSM: attacking the camouflaged object segmentation model through digital world adversarial examples"],"prefix":"10.1007","volume":"10","author":[{"given":"Qiaoyi","family":"Li","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0009-0002-6810-2786","authenticated-orcid":false,"given":"Zhengjie","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Xiaoning","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Yang","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,5,7]]},"reference":[{"issue":"6","key":"1455_CR1","doi-asserted-by":"publisher","first-page":"531","DOI":"10.1007\/s11633-022-1371-y","volume":"19","author":"GP Ji","year":"2022","unstructured":"Ji GP, Xiao G, Chou YC, Fan DP, Zhao K, Chen G, Van Gool L 
(2022) Video polyp segmentation: a deep learning perspective. Mach Intell Res 19(6):531\u2013549. https:\/\/doi.org\/10.1007\/s11633-022-1371-y","journal-title":"Mach Intell Res"},{"issue":"8","key":"1455_CR2","doi-asserted-by":"publisher","first-page":"2626","DOI":"10.1109\/tmi.2020.2996645","volume":"39","author":"DP Fan","year":"2020","unstructured":"Fan DP, Zhou T, Ji GP, Zhou Y, Chen G, Fu H, Shen J, Shao L (2020) Inf-Net: automatic COVID-19 lung infection segmentation from CT images. IEEE Trans Med Imaging 39(8):2626\u20132637. https:\/\/doi.org\/10.1109\/tmi.2020.2996645","journal-title":"IEEE Trans Med Imaging"},{"key":"1455_CR3","first-page":"1","volume":"6","author":"A Srivastava","year":"2017","unstructured":"Srivastava A, Singhal V, Aggarawal AK (2017) Comparative analysis of multimodal medical image fusion using PCA and wavelet transforms. Int J Latest Technol Eng Manag Appl Sci (IJLTEMAS) 6:1","journal-title":"Int J Latest Technol Eng Manag Appl Sci (IJLTEMAS)"},{"key":"1455_CR4","doi-asserted-by":"publisher","first-page":"45301","DOI":"10.1109\/ACCESS.2019.2909522","volume":"7","author":"L Liu","year":"2019","unstructured":"Liu L, Wang R, Xie C, Yang P, Wang F, Sudirman S, Liu W (2019) PestNet: an end-to-end deep learning approach for large-scale multi-class pest detection and classification. IEEE Access 7:45301\u201345312. https:\/\/doi.org\/10.1109\/ACCESS.2019.2909522","journal-title":"IEEE Access"},{"key":"1455_CR5","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1016\/j.aiia.2023.02.004","volume":"7","author":"M Rizzo","year":"2023","unstructured":"Rizzo M, Marcuzzo M, Zangari A, Gasparetto A, Albarelli A (2023) Fruit ripeness classification: a survey. Artif Intell Agric 7:44\u201357. 
https:\/\/doi.org\/10.1016\/j.aiia.2023.02.004","journal-title":"Artif Intell Agric"},{"issue":"1","key":"1455_CR6","doi-asserted-by":"publisher","first-page":"241","DOI":"10.46300\/91011.2022.16.30","volume":"16","author":"AK Aggarwal","year":"2022","unstructured":"Aggarwal AK (2022) Biological Tomato Leaf disease classification using deep learning framework. Int J Biol Biomed Eng 16(1):241\u2013244","journal-title":"Int J Biol Biomed Eng"},{"issue":"4","key":"1455_CR7","doi-asserted-by":"publisher","first-page":"51:51","DOI":"10.1145\/1778765.1778788","volume":"29","author":"HK Chu","year":"2010","unstructured":"Chu HK, Hsu WH, Mitra NJ, Cohen-Or D, Wong TT, Lee TY (2010) Camouflage images. ACM Trans Graph 29(4):51:51-51:58. https:\/\/doi.org\/10.1145\/1778765.1778788","journal-title":"ACM Trans Graph"},{"issue":"1","key":"1455_CR8","doi-asserted-by":"publisher","first-page":"16","DOI":"10.48550\/arXiv.2304.11234","volume":"1","author":"DP Fan","year":"2023","unstructured":"Fan DP, Ji GP, Xu P et al (2023) Advances in deep concealed scene understanding. Vis Intell 1(1):16. https:\/\/doi.org\/10.48550\/arXiv.2304.11234","journal-title":"Vis Intell"},{"key":"1455_CR9","doi-asserted-by":"publisher","unstructured":"Fan DP, Ji GP, Sun G et al (2020) Camouflaged object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 2777\u20132787. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00285","DOI":"10.1109\/CVPR42600.2020.00285"},{"issue":"10","key":"1455_CR10","doi-asserted-by":"publisher","first-page":"6024","DOI":"10.1109\/TPAMI.2021.3085766","volume":"44","author":"DP Fan","year":"2021","unstructured":"Fan DP, Ji GP, Cheng MM, Shao L (2021) Concealed object detection. IEEE Trans Pattern Anal Mach Intell 44(10):6024\u20136042. 
https:\/\/doi.org\/10.1109\/TPAMI.2021.3085766","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1455_CR11","doi-asserted-by":"publisher","unstructured":"Lv Y, Zhang J, Dai Y et al (2021) Simultaneously localize, segment and rank the camouflaged objects. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 11591\u201311601. https:\/\/doi.org\/10.1109\/CVPR46437.2021.01142","DOI":"10.1109\/CVPR46437.2021.01142"},{"key":"1455_CR12","doi-asserted-by":"publisher","first-page":"108975","DOI":"10.1016\/j.epsr.2022.108975","volume":"215","author":"M Ghiasi","year":"2023","unstructured":"Ghiasi M, Niknam T, Wang Z et al (2023) A comprehensive review of cyber-attacks and defense mechanisms for improving security in smart grid energy systems: past, present and future. Electric Power Syst Res 215:108975. https:\/\/doi.org\/10.1016\/j.epsr.2022.108975","journal-title":"Electric Power Syst Res"},{"issue":"1","key":"1455_CR13","doi-asserted-by":"publisher","first-page":"44","DOI":"10.1007\/s42452-018-0049-0","volume":"1","author":"M Ghiasi","year":"2019","unstructured":"Ghiasi M, Ghadimi N, Ahmadinia E (2019) An analytical methodology for reliability assessment and failure analysis in distributed power system. SN Appl Sci 1(1):44. https:\/\/doi.org\/10.1007\/s42452-018-0049-0","journal-title":"SN Appl Sci"},{"issue":"5","key":"1455_CR14","doi-asserted-by":"publisher","first-page":"4837","DOI":"10.1007\/s40747-022-00909-0","volume":"9","author":"R Zhang","year":"2023","unstructured":"Zhang R, Du Y, Shi P et al (2023) ST-MAE: robust lane detection in continuous multi-frame driving scenes based on a deep hybrid network. Complex Intell Syst 9(5):4837\u20134855. https:\/\/doi.org\/10.1007\/s40747-022-00909-0","journal-title":"Complex Intell Syst"},{"key":"1455_CR15","doi-asserted-by":"publisher","unstructured":"Mei H, Ji GP, Wei Z et al (2021) Camouflaged object segmentation with distraction mining. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 8772\u20138781. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00866","DOI":"10.1109\/CVPR46437.2021.00866"},{"issue":"1","key":"1455_CR16","doi-asserted-by":"publisher","first-page":"92","DOI":"10.1007\/s11633-022-1365-9","volume":"20","author":"GP Ji","year":"2023","unstructured":"Ji GP, Fan DP, Chou YC, Dai D, Liniger A, Van Gool L (2023) Deep gradient learning for efficient camouflaged object detection. Mach Intell Res 20(1):92\u2013108. https:\/\/doi.org\/10.1007\/s11633-022-1365-9","journal-title":"Mach Intell Res"},{"key":"1455_CR17","doi-asserted-by":"publisher","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples. Preprint arXiv:1412.6572. https:\/\/doi.org\/10.48550\/arXiv.1412.6572","DOI":"10.48550\/arXiv.1412.6572"},{"key":"1455_CR18","doi-asserted-by":"publisher","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. Preprint arXiv:1706.06083. https:\/\/doi.org\/10.48550\/arXiv.1706.06083","DOI":"10.48550\/arXiv.1706.06083"},{"issue":"10","key":"1455_CR19","doi-asserted-by":"publisher","first-page":"6981","DOI":"10.1109\/TCSVT.2022.3178173","volume":"32","author":"G Chen","year":"2022","unstructured":"Chen G, Liu SJ, Sun YJ, Ji GP, Wu YF, Zhou T (2022) Camouflaged object detection via context-aware cross-level fusion. IEEE Trans Circuits Syst Video Technol 32(10):6981\u20136993. https:\/\/doi.org\/10.1109\/TCSVT.2022.3178173","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"1455_CR20","doi-asserted-by":"publisher","unstructured":"Li A, Zhang J, Lv Y, Liu B, Zhang T, Dai Y (2021) Uncertainty-aware joint salient object and camouflaged object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 10071\u201310081. 
https:\/\/doi.org\/10.1109\/CVPR46437.2021.00994","DOI":"10.1109\/CVPR46437.2021.00994"},{"key":"1455_CR21","doi-asserted-by":"publisher","unstructured":"Yang F, Zhai Q, Li X, Huang R, Luo A, Cheng H, Fan DP (2021) Uncertainty-guided transformer reasoning for camouflaged object detection. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 4146\u20134155. https:\/\/doi.org\/10.1109\/ICCV48922.2021.00411","DOI":"10.1109\/ICCV48922.2021.00411"},{"key":"1455_CR22","doi-asserted-by":"publisher","unstructured":"Liu J, Zhang J, Barnes N (2022) Modeling aleatoric uncertainty for camouflaged object detection. In: Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp 1445\u20131454. https:\/\/doi.org\/10.1109\/WACV51458.2022.00267","DOI":"10.1109\/WACV51458.2022.00267"},{"key":"1455_CR23","doi-asserted-by":"publisher","unstructured":"Zhang M, Xu S, Piao Y, Shi D, Lin S, Lu H (2022) PreyNet: preying on camouflaged objects. In: Proceedings of the 30th ACM international conference on multimedia, pp 5323\u20135332. https:\/\/doi.org\/10.1145\/3503161.3548178","DOI":"10.1145\/3503161.3548178"},{"key":"1455_CR24","doi-asserted-by":"publisher","first-page":"3462","DOI":"10.1109\/TCSVT.2023.3234578","volume":"33","author":"Y Lv","year":"2023","unstructured":"Lv Y, Zhang J, Dai Y, Li A, Barnes N, Fan DP (2023) Towards deeper understanding of camouflaged object detection. IEEE Trans Circuits Syst Video Technol 33:3462\u20133476. https:\/\/doi.org\/10.1109\/TCSVT.2023.3234578","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"1455_CR25","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1016\/j.cviu.2019.04.006","volume":"184","author":"TN Le","year":"2019","unstructured":"Le TN, Nguyen TV, Nie Z, Tran MT, Sugimoto A (2019) Anabranch network for camouflaged object segmentation. Comput Vis Image Underst 184:45\u201356. 
https:\/\/doi.org\/10.1016\/j.cviu.2019.04.006","journal-title":"Comput Vis Image Underst"},{"key":"1455_CR26","doi-asserted-by":"publisher","unstructured":"Xiang M, Zhang J, Lv Y, Li A, Zhong Y, Dai Y (2021) Exploring depth contribution for camouflaged object detection. Preprint arXiv:2106.13217. https:\/\/doi.org\/10.48550\/arXiv.2106.13217","DOI":"10.48550\/arXiv.2106.13217"},{"key":"1455_CR27","doi-asserted-by":"publisher","unstructured":"Wu Z, Paudel DP, Fan DP, Wang J, Wang S, Demonceaux C, Timofte R, Van Gool L (2023) Source-free depth for object pop-out. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 1032\u20131042. https:\/\/doi.org\/10.48550\/arXiv.2212.05370","DOI":"10.48550\/arXiv.2212.05370"},{"key":"1455_CR28","doi-asserted-by":"publisher","unstructured":"Zhai Q, Li X, Yang F, Chen C, Cheng H, Fan DP (2021) Mutual graph learning for camouflaged object detection. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 12997\u201313007. https:\/\/doi.org\/10.1109\/CVPR46437.2021.01280","DOI":"10.1109\/CVPR46437.2021.01280"},{"key":"1455_CR29","doi-asserted-by":"publisher","first-page":"108644","DOI":"10.1016\/j.patcog.2022.108644","volume":"127","author":"M Zhuge","year":"2022","unstructured":"Zhuge M, Lu X, Guo Y, Cai Z, Chen S (2022) CubeNet: X-shape connection for camouflaged object detection. Pattern Recogn 127:108644. https:\/\/doi.org\/10.1016\/j.patcog.2022.108644","journal-title":"Pattern Recogn"},{"key":"1455_CR30","doi-asserted-by":"publisher","first-page":"108414","DOI":"10.1016\/j.patcog.2021.108414","volume":"123","author":"GP Ji","year":"2022","unstructured":"Ji GP, Zhu L, Zhuge M, Fu K (2022) Fast camouflaged object detection via edge-based reversible re-calibration network. Pattern Recogn 123:108414. 
https:\/\/doi.org\/10.1016\/j.patcog.2021.108414","journal-title":"Pattern Recogn"},{"key":"1455_CR31","doi-asserted-by":"publisher","unstructured":"Zhu H, Li P, Xie H, Yan X, Liang D, Chen D, Wei M, Qin J (2022) I can find you! boundary-guided separated attention network for camouflaged object detection. In: Proceedings of the AAAI conference on artificial intelligence, vol 36, pp 3608\u20133616. https:\/\/doi.org\/10.1609\/aaai.v36i3.20273","DOI":"10.1609\/aaai.v36i3.20273"},{"key":"1455_CR32","doi-asserted-by":"publisher","first-page":"7036","DOI":"10.1109\/TIP.2022.3217695","volume":"31","author":"T Zhou","year":"2022","unstructured":"Zhou T, Zhou Y, Gong C, Yang J, Zhang Y (2022) Feature aggregation and propagation network for camouflaged object detection. IEEE Trans Image Process 31:7036\u20137047. https:\/\/doi.org\/10.1109\/TIP.2022.3217695","journal-title":"IEEE Trans Image Process"},{"key":"1455_CR33","doi-asserted-by":"publisher","unstructured":"Sun Y, Wang S, Chen C, Xiang TZ (2022) Boundary-guided camouflaged object detection. In: Proceedings of the 31st international joint conference on artificial intelligence, pp 1335\u20131341. https:\/\/doi.org\/10.24963\/ijcai.2022\/186","DOI":"10.24963\/ijcai.2022\/186"},{"key":"1455_CR34","doi-asserted-by":"publisher","unstructured":"Zhu J, Zhang X, Zhang S, Liu J (2021) Inferring camouflaged objects by texture-aware interactive guidance network. In: Proceedings of the AAAI conference on artificial intelligence, vol 35, pp 3599\u20133607. https:\/\/doi.org\/10.1609\/aaai.v35i4.16475","DOI":"10.1609\/aaai.v35i4.16475"},{"key":"1455_CR35","doi-asserted-by":"publisher","unstructured":"Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2013) Intriguing properties of neural networks. Preprint arXiv:1312.6199. 
https:\/\/doi.org\/10.48550\/arXiv.1312.6199","DOI":"10.48550\/arXiv.1312.6199"},{"key":"1455_CR36","doi-asserted-by":"publisher","unstructured":"Liu S, Zeng Z, Ren T, Li F, Zhang H, Yang J, Li C, Yang J, Su H, Zhu J (2023) Grounding DINO: marrying DINO with grounded pre-training for open-set object detection. Preprint arXiv:2303.05499. https:\/\/doi.org\/10.48550\/arXiv.2303.05499","DOI":"10.48550\/arXiv.2303.05499"},{"key":"1455_CR37","doi-asserted-by":"publisher","unstructured":"Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S (2020) An image is worth 16\u2009\u00d7\u200916 words: transformers for image recognition at scale. Preprint arXiv:2010.11929. https:\/\/doi.org\/10.48550\/arXiv.2010.11929","DOI":"10.48550\/arXiv.2010.11929"},{"key":"1455_CR38","doi-asserted-by":"publisher","unstructured":"Benz P, Ham S, Zhang C, Karjauv A, Kweon IS (2021) Adversarial robustness comparison of vision transformer and MLP-mixer to CNNs. Preprint arXiv:2110.02797. https:\/\/doi.org\/10.48550\/arXiv.2110.02797","DOI":"10.48550\/arXiv.2110.02797"},{"key":"1455_CR39","doi-asserted-by":"publisher","unstructured":"Bhojanapalli S, Chakrabarti A, Glasner D, Li D, Unterthiner T, Veit A (2021) Understanding robustness of transformers for image classification. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 10231\u201310241. https:\/\/doi.org\/10.48550\/arXiv.2103.14586","DOI":"10.48550\/arXiv.2103.14586"},{"key":"1455_CR40","doi-asserted-by":"publisher","unstructured":"Mahmood K, Mahmood R, Van Dijk M (2021) On the robustness of vision transformers to adversarial examples. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 7838\u20137847. 
https:\/\/doi.org\/10.1109\/ICCV48922.2021.00774","DOI":"10.1109\/ICCV48922.2021.00774"},{"key":"1455_CR41","doi-asserted-by":"publisher","unstructured":"Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In: 2017 IEEE symposium on security and privacy (SP), pp 39\u201357. https:\/\/doi.org\/10.1109\/SP.2017.49","DOI":"10.1109\/SP.2017.49"},{"key":"1455_CR42","doi-asserted-by":"publisher","unstructured":"Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J (2018) Boosting adversarial attacks with momentum. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9185\u20139193. https:\/\/doi.org\/10.1109\/CVPR.2018.00957","DOI":"10.1109\/CVPR.2018.00957"},{"key":"1455_CR43","doi-asserted-by":"publisher","unstructured":"Xie C, Zhang Z, Zhou Y, Bai S, Wang J, Ren Z, Yuille AL (2019) Improving transferability of adversarial examples with input diversity. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 2730\u20132739. https:\/\/doi.org\/10.1109\/CVPR.2019.00284","DOI":"10.1109\/CVPR.2019.00284"},{"key":"1455_CR44","doi-asserted-by":"publisher","unstructured":"Liu Y, Chen X, Liu C, Song D (2016) Delving into transferable adversarial examples and black-box attacks. Preprint arXiv:1611.02770. https:\/\/doi.org\/10.48550\/arXiv.1611.02770","DOI":"10.48550\/arXiv.1611.02770"},{"key":"1455_CR45","doi-asserted-by":"publisher","unstructured":"Tram\u00e8r F, Kurakin A, Papernot N, Goodfellow I, Boneh D, McDaniel P (2017) Ensemble adversarial training: attacks and defenses. Preprint arXiv:1705.07204. https:\/\/doi.org\/10.48550\/arXiv.1705.07204","DOI":"10.48550\/arXiv.1705.07204"},{"key":"1455_CR46","doi-asserted-by":"publisher","unstructured":"Wu D, Wang Y, Xia ST, Bailey J, Ma X (2020) Skip connections matter: on the transferability of adversarial examples generated with resnets. Preprint arXiv:2002.05990. 
https:\/\/doi.org\/10.48550\/arXiv.2002.05990","DOI":"10.48550\/arXiv.2002.05990"},{"key":"1455_CR47","first-page":"85","volume":"33","author":"Y Guo","year":"2020","unstructured":"Guo Y, Li Q, Chen H (2020) Backpropagating linearly improves transferability of adversarial examples. Adv Neural Inf Process Syst 33:85\u201395","journal-title":"Adv Neural Inf Process Syst"},{"key":"1455_CR48","doi-asserted-by":"publisher","unstructured":"Zhang C, Benz P, Karjauv A, Cho JW, Zhang K, Kweon IS (2022) Investigating top-k white-box and transferable black-box attack. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 15085\u201315094. https:\/\/doi.org\/10.1109\/CVPR52688.2022.01466","DOI":"10.1109\/CVPR52688.2022.01466"},{"key":"1455_CR49","doi-asserted-by":"publisher","unstructured":"Fan DP, Cheng MM, Liu Y, Li T, Borji A (2017) Structure-measure: a new way to evaluate foreground maps. In: Proceedings of the IEEE international conference on computer vision, pp 4548\u20134557. https:\/\/doi.org\/10.1109\/ICCV.2017.487","DOI":"10.1109\/ICCV.2017.487"},{"issue":"6","key":"1455_CR50","doi-asserted-by":"publisher","first-page":"1475","DOI":"10.1360\/SSI-2020-0370","volume":"51","author":"DP Fan","year":"2021","unstructured":"Fan DP, Ji GP, Qin X, Cheng MM (2021) Cognitive vision inspired object segmentation metric and loss function. Sci Sin Inform 51(6):1475. https:\/\/doi.org\/10.1360\/SSI-2020-0370","journal-title":"Sci Sin Inform"},{"key":"1455_CR51","doi-asserted-by":"publisher","unstructured":"Margolin R, Zelnik-Manor L, Tal A (2014) How to evaluate foreground maps? In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 248\u2013255. https:\/\/doi.org\/10.1109\/CVPR.2014.39","DOI":"10.1109\/CVPR.2014.39"},{"key":"1455_CR52","doi-asserted-by":"publisher","unstructured":"Dong Y, Liao F, Pang T et al (2018) Boosting adversarial attacks with momentum. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 9185\u20139193. https:\/\/doi.org\/10.1109\/CVPR.2018.00957","DOI":"10.1109\/CVPR.2018.00957"},{"key":"1455_CR53","doi-asserted-by":"publisher","unstructured":"Moosavi-Dezfooli SM, Fawzi A, Frossard P (2016) Deepfool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2574\u20132582. https:\/\/doi.org\/10.1109\/CVPR.2016.282","DOI":"10.1109\/CVPR.2016.282"},{"key":"1455_CR54","doi-asserted-by":"publisher","unstructured":"Moosavi-Dezfooli SM, Fawzi A, Fawzi O et al (2017) Universal adversarial perturbations. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1765\u20131773. https:\/\/doi.org\/10.1109\/CVPR.2017.17","DOI":"10.1109\/CVPR.2017.17"},{"key":"1455_CR55","unstructured":"Athalye A, Carlini N, Wagner D (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In: International conference on machine learning. PMLR, pp 274\u2013283"},{"key":"1455_CR56","doi-asserted-by":"publisher","unstructured":"Xie C, Zhang Z, Zhou Y et al (2019) Improving transferability of adversarial examples with input diversity. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 2730\u20132739. https:\/\/doi.org\/10.1109\/CVPR.2019.00284","DOI":"10.1109\/CVPR.2019.00284"},{"key":"1455_CR57","unstructured":"Guo C, Gardner J, You Y et al (2019) Simple black-box adversarial attacks. In: Proceedings of the 36th international conference on machine learning, PMLR, pp 2484\u20132493"},{"issue":"5","key":"1455_CR58","doi-asserted-by":"publisher","first-page":"828","DOI":"10.1109\/TEVC.2019.2890858","volume":"23","author":"J Su","year":"2019","unstructured":"Su J, Vargas DV, Sakurai K (2019) One pixel attack for fooling deep neural networks. IEEE Trans Evol Comput 23(5):828\u2013841. 
https:\/\/doi.org\/10.1109\/TEVC.2019.2890858","journal-title":"IEEE Trans Evol Comput"},{"key":"1455_CR59","doi-asserted-by":"publisher","unstructured":"Xiao C, Li B, Zhu JY et al (2018) Generating adversarial examples with adversarial networks. Preprint arXiv:1801.02610. https:\/\/doi.org\/10.48550\/arXiv.1801.02610","DOI":"10.48550\/arXiv.1801.02610"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01455-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01455-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01455-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,17]],"date-time":"2024-07-17T17:24:29Z","timestamp":1721237069000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01455-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,5,7]]},"references-count":59,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,8]]}},"alternative-id":["1455"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01455-7","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2024,5,7]]},"assertion":[{"value":"2 November 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"17 April 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 May 2024","order":3,"name":"first_online","label":"First 
Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}