{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T16:48:10Z","timestamp":1772902090320,"version":"3.50.1"},"reference-count":32,"publisher":"Springer Science and Business Media LLC","issue":"7","license":[{"start":{"date-parts":[[2022,7,29]],"date-time":"2022-07-29T00:00:00Z","timestamp":1659052800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,7,29]],"date-time":"2022-07-29T00:00:00Z","timestamp":1659052800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2023,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>An adversarial attack aims to cause a deep neural network to fail by adding a small perturbation to the input image, where the attack success rate and the resulting image quality are maximized under the <jats:italic>l<\/jats:italic><jats:sub><jats:italic>p<\/jats:italic><\/jats:sub> norm perturbation constraint. However, the <jats:italic>l<\/jats:italic><jats:sub><jats:italic>p<\/jats:italic><\/jats:sub> norm does not correlate accurately with human perception of image quality. Attack methods based on the <jats:italic>l<\/jats:italic><jats:sub>0<\/jats:sub> norm constraint usually suffer from high computational cost due to the iterative search for candidate pixels to modify. In this work, we explore how perceptual quality optimization can be incorporated into adversarial attack design and propose a two-stage attack method that reshapes the adversarial noise produced by an initial attack and optimizes the visual quality of the attacked images without sacrificing the attack success rate. 
Specifically, we construct a visual attention network to generate a perceptual attention map that modulates the adversarial noise generated by a base attack method. The network is trained to maximize the visual quality, measured by the Structural Similarity Index Metric (SSIM), while achieving the same attack success rate. To further improve the perceptual quality of the image, we propose a fast search algorithm that performs iterative block-wise pruning of the adversarial noise. We evaluate our method on the mini-ImageNet dataset against three different defense schemes. The results demonstrate that our method achieves better attack performance in image quality, attack success rate, and efficiency than state-of-the-art attack methods.<\/jats:p>","DOI":"10.1007\/s10489-022-03838-0","type":"journal-article","created":{"date-parts":[[2022,7,29]],"date-time":"2022-07-29T10:40:23Z","timestamp":1659091223000},"page":"7408-7422","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Multi-layer noise reshaping and perceptual optimization for effective adversarial attack of images"],"prefix":"10.1007","volume":"53","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2255-4293","authenticated-orcid":false,"given":"Zhiquan","family":"He","sequence":"first","affiliation":[]},{"given":"Xujia","family":"Lan","sequence":"additional","affiliation":[]},{"given":"Jianhe","family":"Yuan","sequence":"additional","affiliation":[]},{"given":"Wenming","family":"Cao","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,7,29]]},"reference":[{"key":"3838_CR1","unstructured":"Athalye A, Carlini N, Wagner D (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. CoRR arXiv:1802.00420"},{"key":"3838_CR2","unstructured":"Athalye A, Engstrom L, Ilyas A, Kwok K (2017) Synthesizing robust adversarial examples. 
CoRR arXiv:1707.07397"},{"key":"3838_CR3","doi-asserted-by":"crossref","unstructured":"Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp 39\u201357","DOI":"10.1109\/SP.2017.49"},{"key":"3838_CR4","doi-asserted-by":"publisher","unstructured":"Croce F, Hein M (2019) Sparse and imperceivable adversarial attacks. In: 2019 IEEE\/CVF international conference on computer vision, ICCV 2019, Seoul, Korea (South), 27 October - 2 November, 2019. IEEE, pp 4723\u20134731 https:\/\/doi.org\/10.1109\/ICCV.2019.00482","DOI":"10.1109\/ICCV.2019.00482"},{"key":"3838_CR5","doi-asserted-by":"crossref","unstructured":"Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) Imagenet: A large-scale hierarchical image database. In: Computer vision and pattern recognition, 2009. CVPR 2009. IEEE conference on. IEEE, pp 248\u2013255","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"3838_CR6","unstructured":"Dhillon GS, Azizzadenesheli K, Lipton ZC, Bernstein J, Kossaifi J, Khanna A, Anandkumar A (2018) Stochastic activation pruning for robust adversarial defense. CoRR arXiv:1803.01442"},{"key":"3838_CR7","unstructured":"Dong X, Chen D, Bao J, Qin C, Yuan L, Zhang W, Yu N, Chen D (2020) Greedyfool: Distortion-aware sparse adversarial attack. In: Larochelle H, Ranzato M, Hadsell R, Balcan M, Lin H (eds) Advances in neural information processing systems 33: annual conference on neural information processing systems 2020, NeurIPS 2020, 6-12 December, 2020, virtual"},{"key":"3838_CR8","doi-asserted-by":"crossref","unstructured":"Dong Y, Liao F, Pang T, Su H, Zhu J, Hu X, Li J (2018) Boosting adversarial attacks with momentum. In: 2018 IEEE\/CVF Conf Comput Vis Pattern Recognit:9185\u20139193","DOI":"10.1109\/CVPR.2018.00957"},{"key":"3838_CR9","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples. 
In: Bengio Y, LeCun Y (eds) 3rd International conference on learning representations, ICLR 2015, San Diego, CA, USA, 7\u20139 May, 2015, conference track proceedings. arXiv:1412.6572"},{"key":"3838_CR10","unstructured":"Guo C, Rana M, Ciss\u00e9 M, Maaten LVD (2018) Countering adversarial images using input transformations. arXiv:1711.00117"},{"key":"3838_CR11","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"3838_CR12","unstructured":"He Z, Wang W, Dong J, Tan T (2021) Transferable sparse adversarial attack. CoRR arXiv:2105.14727"},{"key":"3838_CR13","unstructured":"Jordan M, Manoj N, Goel S, Dimakis AG (2019) Quantifying perceptual distortion of adversarial examples. CoRR arXiv:1902.08265"},{"key":"3838_CR14","unstructured":"Kurakin A, Goodfellow IJ, Bengio S (2016) Adversarial machine learning at scale. CoRR arXiv:1611.01236"},{"key":"3838_CR15","unstructured":"Kurakin A, Goodfellow IJ, Bengio S (2017) Adversarial examples in the physical world. In: 5th International conference on learning representations, ICLR 2017, Toulon, France, 24-26 April, 2017, workshop track proceedings. Openreview.net"},{"issue":"5","key":"3838_CR16","doi-asserted-by":"publisher","first-page":"340","DOI":"10.1002\/col.1049","volume":"26","author":"MR Luo","year":"2001","unstructured":"Luo MR, Cui G, Rigg B (2001) The development of the cie 2000 color difference formula: Ciede2000. Color Res Appl 26(5):340\u2013350","journal-title":"Color Res Appl"},{"key":"3838_CR17","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: 6th International conference on learning representations, ICLR 2018, Vancouver, BC, Canada, 30 April - 3 May, 2018, Conference track proceedings. 
OpenReview.net"},{"key":"3838_CR18","doi-asserted-by":"crossref","unstructured":"Modas A, Moosavi-Dezfooli S, Frossard P (2019) Sparsefool: a few pixels make a big difference. In: IEEE conference on computer vision and pattern recognition, CVPR 2019, long beach, CA, USA, 16-20 June, 2019, pp 9087\u20139096. Computer Vision Foundation\/IEEE","DOI":"10.1109\/CVPR.2019.00930"},{"key":"3838_CR19","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli SM, Fawzi A, Frossard P (2016) Deepfool: a simple and accurate method to fool deep neural networks. In: Computer vision & pattern recognition","DOI":"10.1109\/CVPR.2016.282"},{"key":"3838_CR20","unstructured":"Nagalla S, Inampudi MRB (2014) Perceptual weights based on local energy for image quality assessment. Int J Image Process, vol 8"},{"key":"3838_CR21","doi-asserted-by":"publisher","unstructured":"Papernot N, McDaniel PD, Jha S, Fredrikson M, Celik ZB, Swami A (2016) The limitations of deep learning in adversarial settings. In: IEEE european symposium on security and privacy, EuroS&P 2016, Saarbr\u00fccken, Germany, 21-24 March, 2016. IEEE, pp 372\u2013387. https:\/\/doi.org\/10.1109\/EuroSP.2016.36","DOI":"10.1109\/EuroSP.2016.36"},{"key":"3838_CR22","doi-asserted-by":"publisher","unstructured":"Papernot N, McDaniel PD, Jha S, Fredrikson M, Celik ZB, Swami A (2016) The limitations of deep learning in adversarial settings. In: IEEE european symposium on security and privacy, EuroS&P 2016, Saarbr\u00fccken, Germany, 21-24 March, 2016. IEEE , pp 372\u2013387. https:\/\/doi.org\/10.1109\/EuroSP.2016.36","DOI":"10.1109\/EuroSP.2016.36"},{"key":"3838_CR23","doi-asserted-by":"crossref","unstructured":"Rozsa A, Rudd EM, Boult TE (2016) Adversarial diversity and hard positive generation. In: IEEE, pp 410\u2013417","DOI":"10.1109\/CVPRW.2016.58"},{"key":"3838_CR24","unstructured":"Su J, Vargas DV, Kouichi S (2017) One pixel attack for fooling deep neural networks. 
IEEE Trans Evol Comput"},{"key":"3838_CR25","unstructured":"Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow IJ, Fergus R (2014) Intriguing properties of neural networks. In: Bengio Y, LeCun Y (eds) 2nd International conference on learning representations, ICLR 2014, Banff, AB, Canada, 14-16 April, 2014, conference track proceedings. arXiv:1312.6199"},{"key":"3838_CR26","doi-asserted-by":"crossref","unstructured":"Thomas S, Tabrizi N (2018) Adversarial machine learning: a literature review. In: International conference on machine learning & data mining in pattern recognition","DOI":"10.1007\/978-3-319-96136-1_26"},{"key":"3838_CR27","unstructured":"Tram\u00e8r F, Kurakin A, Papernot N, Goodfellow I, Boneh D, Mcdaniel P (2017) Ensemble adversarial training: attacks and defenses. CoRR arXiv:1705.07204"},{"key":"3838_CR28","doi-asserted-by":"crossref","unstructured":"Wang Z (2004) Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process","DOI":"10.1109\/TIP.2003.819861"},{"key":"3838_CR29","doi-asserted-by":"crossref","unstructured":"Xie C, Wu Y, van der Maaten L, Yuille AL, He K (2018) Feature denoising for improving adversarial robustness. CoRR arXiv:1812.03411","DOI":"10.1109\/CVPR.2019.00059"},{"key":"3838_CR30","doi-asserted-by":"crossref","unstructured":"Xie C, Zhang Z, Zhou Y, Bai S, Wang J, Ren Z, Yuille A (2019) Improving transferability of adversarial examples with input diversity. In: Computer vision and pattern recognition. IEEE","DOI":"10.1109\/CVPR.2019.00284"},{"issue":"7","key":"3838_CR31","doi-asserted-by":"publisher","first-page":"1170","DOI":"10.1109\/TCSVT.2013.2240918","volume":"23","author":"C Yeo","year":"2013","unstructured":"Yeo C, Tan HL, Tan YH (2013) On rate distortion optimization using ssim. IEEE Trans Circuits Syst Video Technol 23(7):1170\u20131181. 
https:\/\/doi.org\/10.1109\/TCSVT.2013.2240918","journal-title":"IEEE Trans Circuits Syst Video Technol"},{"key":"3838_CR32","doi-asserted-by":"crossref","unstructured":"Zhao Z, Liu Z, Larson M (2020) Towards large yet imperceptible adversarial image perturbations with perceptual color distance. In: 2020 IEEE\/CVF Conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR42600.2020.00112"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03838-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-022-03838-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03838-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,3,16]],"date-time":"2023-03-16T02:41:40Z","timestamp":1678934500000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-022-03838-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,7,29]]},"references-count":32,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2023,4]]}},"alternative-id":["3838"],"URL":"https:\/\/doi.org\/10.1007\/s10489-022-03838-0","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"value":"0924-669X","type":"print"},{"value":"1573-7497","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,7,29]]},"assertion":[{"value":"21 May 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 July 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}