{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T20:34:05Z","timestamp":1772829245661,"version":"3.50.1"},"reference-count":28,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2019,2,22]],"date-time":"2019-02-22T00:00:00Z","timestamp":1550793600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2019,2,22]],"date-time":"2019-02-22T00:00:00Z","timestamp":1550793600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["IPSJ T Comput Vis Appl"],"published-print":{"date-parts":[[2019,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>The output of convolutional neural networks (CNNs) has been shown to be discontinuous, which can make a CNN image classifier vulnerable to small, well-tuned artificial perturbations. That is, images modified by such alterations (i.e., adversarial perturbations), which make little difference to the human eye, can completely change the CNN classification results. In this paper, we propose a practical attack that uses differential evolution (DE) to generate effective adversarial perturbations. We comprehensively evaluate the effectiveness of different types of DE for conducting the attack on different network structures. The proposed method modifies only five pixels (i.e., a few-pixel attack), and it is a black-box attack that requires only the oracle feedback of the target CNN systems. 
The results show that under strict constraints that simultaneously control the number of pixels changed and the overall perturbation strength, the attack can achieve 72.29<jats:italic>%<\/jats:italic>, 72.30<jats:italic>%<\/jats:italic>, and 61.28<jats:italic>%<\/jats:italic> non-targeted attack success rates, with 88.68<jats:italic>%<\/jats:italic>, 83.63<jats:italic>%<\/jats:italic>, and 73.07<jats:italic>%<\/jats:italic> confidence on average, on three common types of CNNs. The attack requires modifying only five pixels, with pixel value distortions of 20.44, 14.28, and 22.98. Thus, we show that current deep neural networks are also vulnerable to such simple black-box attacks even under very limited attack conditions.<\/jats:p>","DOI":"10.1186\/s41074-019-0053-3","type":"journal-article","created":{"date-parts":[[2019,2,22]],"date-time":"2019-02-22T09:03:46Z","timestamp":1550826226000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":56,"title":["Attacking convolutional neural network using differential evolution"],"prefix":"10.1186","volume":"11","author":[{"given":"Jiawei","family":"Su","sequence":"first","affiliation":[]},{"given":"Danilo Vasconcellos","family":"Vargas","sequence":"additional","affiliation":[]},{"given":"Kouichi","family":"Sakurai","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2019,2,22]]},"reference":[{"issue":"2","key":"53_CR1","doi-asserted-by":"publisher","first-page":"121","DOI":"10.1007\/s10994-010-5188-5","volume":"81","author":"M Barreno","year":"2010","unstructured":"Barreno M, Nelson B, Joseph AD, Tygar JD (2010) The security of machine learning. 
Mach Learn 81(2):121\u2013148.","journal-title":"Mach Learn"},{"key":"53_CR2","doi-asserted-by":"crossref","first-page":"16","DOI":"10.1145\/1128817.1128824","volume-title":"Proceedings of the 2006 ACM Symposium on Information, computer and communications security","author":"M Barreno","year":"2006","unstructured":"Barreno M, Nelson B, Sears R, Joseph AD, Tygar JD (2006) Can machine learning be secure? In: Proceedings of the 2006 ACM Symposium on Information, computer and communications security, 16\u201325. ACM, Taiwan."},{"issue":"6","key":"53_CR3","doi-asserted-by":"publisher","first-page":"646","DOI":"10.1109\/TEVC.2006.872133","volume":"10","author":"J Brest","year":"2006","unstructured":"Brest J, Greiner S, Boskovic B, Mernik M, Zumer V (2006) Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Trans Evol Comput 10(6):646\u2013657.","journal-title":"IEEE Trans Evol Comput"},{"key":"53_CR4","doi-asserted-by":"crossref","unstructured":"Carlini N, Wagner D (2018) Audio adversarial examples: targeted attacks on speech-to-text. arXiv preprint arXiv:1801.01944.","DOI":"10.1109\/SPW.2018.00009"},{"issue":"4","key":"53_CR5","doi-asserted-by":"publisher","first-page":"315","DOI":"10.1007\/s10462-011-9276-0","volume":"39","author":"P Civicioglu","year":"2013","unstructured":"Civicioglu P, Besdok E (2013) A conceptual comparison of the cuckoo-search, particle swarm optimization, differential evolution and artificial bee colony algorithms. Artif Intell Rev 39(4):315\u2013346.","journal-title":"Artif Intell Rev"},{"key":"53_CR6","doi-asserted-by":"crossref","unstructured":"Dang H, Huang Y, Chang E-C (2017) Evading classifiers by morphing in the dark. 
ACM.","DOI":"10.1145\/3133956.3133978"},{"issue":"1","key":"53_CR7","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1109\/TEVC.2010.2059031","volume":"15","author":"S Das","year":"2011","unstructured":"Das S, Suganthan PN (2011) Differential evolution: A survey of the state-of-the-art. IEEE Trans Evol Comput 15(1):4\u201331.","journal-title":"IEEE Trans Evol Comput"},{"key":"53_CR8","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572."},{"key":"53_CR9","unstructured":"Krizhevsky A, Hinton G (2009) Learning multiple layers of features from tiny images. Technical report, Vol. 1, Issue. 4, pp. 7. University of Toronto."},{"key":"53_CR10","doi-asserted-by":"crossref","unstructured":"Liang B, Li H, Su M, Bian P, Li X, Shi W (2017) Deep text classification can be fooled. arXiv preprint arXiv:1704.08006.","DOI":"10.24963\/ijcai.2018\/585"},{"key":"53_CR11","unstructured":"Lin M, Chen Q, Yan S (2013) Network in network. arXiv preprint arXiv:1312.4400."},{"key":"53_CR12","unstructured":"Moosavi-Dezfooli S-M, et al (2016) DeepFool: a simple and accurate method to fool deep neural networks."},{"key":"53_CR13","unstructured":"Moosavi-Dezfooli S-M, et al (2017) Analysis of universal adversarial perturbations. arXiv preprint arXiv:1705.09554."},{"key":"53_CR14","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli S-M, Fawzi A, Fawzi O, Frossard P (2017) Universal adversarial perturbations.","DOI":"10.1109\/CVPR.2017.17"},{"key":"53_CR15","doi-asserted-by":"crossref","unstructured":"Narodytska N, Kasiviswanathan S (2016) Simple black-box adversarial attacks on deep neural networks. 
arXiv preprint arXiv:1612.06299.","DOI":"10.1109\/CVPRW.2017.172"},{"key":"53_CR16","doi-asserted-by":"crossref","unstructured":"Nguyen A, Yosinski J, Clune J (2015) Deep neural networks are easily fooled: High confidence predictions for unrecognizable images.","DOI":"10.1109\/CVPR.2015.7298640"},{"key":"53_CR17","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A (2016) Practical black-box attacks against machine learning. arXiv preprint arXiv:1602.02697.","DOI":"10.1145\/3052973.3053009"},{"key":"53_CR18","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A (2015) The limitations of deep learning in adversarial settings. arXiv preprint arXiv:1511.07528.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"53_CR19","unstructured":"Simonyan K, Vedaldi A, Zisserman A (2013) Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034."},{"key":"53_CR20","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556."},{"key":"53_CR21","unstructured":"Springenberg J, Dosovitskiy A, Brox T, Riedmiller M (2014) Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806."},{"issue":"4","key":"53_CR22","doi-asserted-by":"publisher","first-page":"341","DOI":"10.1023\/A:1008202821328","volume":"11","author":"R Storn","year":"1997","unstructured":"Storn R, Price K (1997) Differential evolution\u2013a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341\u2013359.","journal-title":"J Glob Optim"},{"key":"53_CR23","unstructured":"Su J, Vargas D, Sakurai K (2017) One pixel attack for fooling deep neural networks. arXiv preprint arXiv:1710.08864."},{"key":"53_CR24","unstructured":"Szegedy C, et al (2013) Intriguing properties of neural networks. 
arXiv preprint arXiv:1312.6199."},{"key":"53_CR25","doi-asserted-by":"crossref","unstructured":"Taigman Y, Yang M, Ranzato M, Wolf L (2014) DeepFace: Closing the gap to human-level performance in face verification.","DOI":"10.1109\/CVPR.2014.220"},{"issue":"8","key":"53_CR26","doi-asserted-by":"publisher","first-page":"1759","DOI":"10.1109\/TNNLS.2016.2551748","volume":"28","author":"DV Vargas","year":"2017","unstructured":"Vargas DV, Murata J (2017) Spectrum-diverse neuroevolution with unified neural models. IEEE Trans Neural Netw Learn Syst 28(8):1759\u20131773.","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"issue":"1","key":"53_CR27","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1162\/EVCO_a_00118","volume":"23","author":"DV Vargas","year":"2015","unstructured":"Vargas DV, Murata J, Takano H, Delbem ACB (2015) General subpopulation framework and taming the conflict inside populations. Evol Comput 23(1):1\u201336.","journal-title":"Evol Comput"},{"key":"53_CR28","unstructured":"Wei D, Zhou B, Torralba A, Freeman W (2015) Understanding intra-class knowledge inside CNN. 
arXiv preprint arXiv:1507.02379."}],"container-title":["IPSJ Transactions on Computer Vision and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s41074-019-0053-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s41074-019-0053-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s41074-019-0053-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,7,30]],"date-time":"2021-07-30T12:13:42Z","timestamp":1627647222000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1186\/s41074-019-0053-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,2,22]]},"references-count":28,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2019,12]]}},"alternative-id":["53"],"URL":"https:\/\/doi.org\/10.1186\/s41074-019-0053-3","relation":{},"ISSN":["1882-6695"],"issn-type":[{"value":"1882-6695","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,2,22]]},"assertion":[{"value":"4 May 2018","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 January 2019","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 February 2019","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"All authors declare that they have no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}},{"value":"Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional 
affiliations.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Publisher\u2019s Note"}}],"article-number":"1"}}