{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,30]],"date-time":"2025-10-30T07:13:26Z","timestamp":1761808406672,"version":"3.37.3"},"reference-count":45,"publisher":"Wiley","license":[{"start":{"date-parts":[[2020,11,12]],"date-time":"2020-11-12T00:00:00Z","timestamp":1605139200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100004543","name":"China Scholarship Council","doi-asserted-by":"publisher","award":["201806960079"],"award-info":[{"award-number":["201806960079"]}],"id":[{"id":"10.13039\/501100004543","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Security and Communication Networks"],"published-print":{"date-parts":[[2020,11,12]]},"abstract":"<jats:p>The existence of adversarial examples and the easiness with which they can be generated raise several security concerns with regard to deep learning systems, pushing researchers to develop suitable defence mechanisms. The use of networks adopting error-correcting output codes (ECOC) has recently been proposed to counter the creation of adversarial examples in a white-box setting. In this paper, we carry out an in-depth investigation of the adversarial robustness achieved by the ECOC approach. We do so by proposing a new adversarial attack specifically designed for multilabel classification architectures, like the ECOC-based one, and by applying two existing attacks. In contrast to previous findings, our analysis reveals that ECOC-based networks can be attacked quite easily by introducing a small adversarial perturbation. Moreover, the adversarial examples can be generated in such a way to achieve high probabilities for the predicted target class, hence making it difficult to use the prediction confidence to detect them. Our findings are proven by means of experimental results obtained on MNIST, CIFAR-10, and GTSRB classification tasks.<\/jats:p>","DOI":"10.1155\/2020\/8882494","type":"journal-article","created":{"date-parts":[[2020,11,16]],"date-time":"2020-11-16T22:06:54Z","timestamp":1605564414000},"page":"1-11","source":"Crossref","is-referenced-by-count":5,"title":["Challenging the Adversarial Robustness of DNNs Based on Error-Correcting Output Codes"],"prefix":"10.1155","volume":"2020","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3180-3124","authenticated-orcid":true,"given":"Bowen","family":"Zhang","sequence":"first","affiliation":[{"name":"School of Cyber Engineering, Xidian University, Xi\u2019an 710126, China"}]},{"given":"Benedetta","family":"Tondi","sequence":"additional","affiliation":[{"name":"Department of Information Engineering and Mathematics, University of Siena, Siena 53100, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2879-241X","authenticated-orcid":true,"given":"Xixiang","family":"Lv","sequence":"additional","affiliation":[{"name":"School of Cyber Engineering, Xidian University, Xi\u2019an 710126, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7368-0866","authenticated-orcid":true,"given":"Mauro","family":"Barni","sequence":"additional","affiliation":[{"name":"Department of Information Engineering and Mathematics, University of Siena, Siena 53100, Italy"}]}],"member":"311","reference":[{"article-title":"Intriguing properties of neural networks","year":"2013","author":"C. Szegedy","key":"1"},{"article-title":"Explaining and harnessing adversarial examples","year":"2014","author":"I. J. Goodfellow","key":"2"},{"key":"3","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2807385"},{"first-page":"1765","article-title":"Universal adversarial perturbations","author":"S. M. Moosavi-Dezfooli","key":"4"},{"key":"5","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"6","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.49"},{"key":"7","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2016.41"},{"article-title":"Towards deep neural network architectures robust to adversarial examples","year":"2014","author":"S. Gu","key":"8"},{"key":"9","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2018.04.027"},{"article-title":"Ensemble adversarial training: attacks and defenses","year":"2017","author":"F. Tram\u00e8r","key":"10"},{"article-title":"Adversarial training methods for semi-supervised text classification","year":"2016","author":"T. Miyato","key":"11"},{"first-page":"4480","article-title":"Improving the robustness of deep neural networks via stability training","author":"S. Zheng","key":"12"},{"first-page":"3","article-title":"Adversarial examples are not easily detected: bypassing ten detection methods","author":"N. Carlini","key":"13"},{"key":"14","doi-asserted-by":"crossref","first-page":"263","DOI":"10.1613\/jair.105","article-title":"Solving multiclass learning problems via error-correcting output codes","volume":"2","author":"T. G. Dietterich","year":"1994","journal-title":"Journal of Artificial Intelligence Research"},{"key":"15","first-page":"8643","article-title":"Error correcting output codes improve probability estimation and adversarial robustness of deep neural networks","author":"G. Verma","year":"2019","journal-title":"Advances in Neural Information Processing Systems"},{"article-title":"Learning multiple layers of features from tiny images","year":"2009","author":"A. Krizhevsky","key":"16"},{"article-title":"On adaptive attacks to adversarial example defenses","year":"2020","author":"F. Tramer","key":"17"},{"key":"18","doi-asserted-by":"publisher","DOI":"10.1109\/BTAS.2017.8272695"},{"key":"19","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2012.02.016"},{"article-title":"The MNIST database of handwritten digits","year":"1998","author":"Y. LeCun","key":"20"},{"key":"21","doi-asserted-by":"publisher","DOI":"10.1109\/EUSIPCO.2015.7362845"},{"issue":"10","key":"22","doi-asserted-by":"crossref","first-page":"1338","DOI":"10.1109\/TKDE.2006.162","article-title":"Multilabel neural networks with applications to functional genomics and text categorization","volume":"18","author":"M. L. Zhang","year":"2006","journal-title":"IEEE Transactions on Knowledge and Data Engineering"},{"article-title":"Very deep convolutional networks for large-scale image recognition","year":"2014","author":"K. Simonyan","key":"23"},{"key":"24","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298640"},{"article-title":"Adversarial examples in the physical world","year":"2016","author":"A. Kurakin","key":"25"},{"key":"26","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP.2016.36"},{"key":"27","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2019.2890858"},{"first-page":"2730","article-title":"Improving transferability of adversarial examples with input diversity","author":"C. Xie","key":"28"},{"key":"29","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.07.023"},{"key":"30","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2018.2886017"},{"article-title":"Stochastic activation pruning for robust adversarial defense","year":"2017","author":"G. S. Dhillon","key":"31"},{"article-title":"Block switching: a stochastic approach for deep learning security","year":"2020","author":"X. Wang","key":"32"},{"article-title":"The robust manifold defense: adversarial training using generative models","year":"2017","author":"A. Ilyas","key":"33"},{"article-title":"Feature squeezing: detecting adversarial examples in deep neural networks","year":"2017","author":"W. Xu","key":"34"},{"key":"35","first-page":"7717","article-title":"Attacks meet interpretability: attribute-steered detection of adversarial samples","author":"G. Tao","year":"2018","journal-title":"Advances in Neural Information Processing Systems"},{"author":"H. Yu","key":"36","article-title":"PDA: Progressive data augmentation for general robustness of deep neural networks"},{"key":"37","first-page":"854","article-title":"Parseval networks: improving robustness to adversarial examples","volume":"70","author":"M. Cisse","year":"2017","journal-title":"Proceedings of the 34th International Conference on Machine Learning"},{"author":"A. Liu","key":"38","article-title":"Training robust deep neural networks via adversarial noise propagation"},{"article-title":"Interpreting and improving adversarial robustness of deep neural networks with neuron sensitivity","author":"C. Zhang","key":"39","doi-asserted-by":"crossref","DOI":"10.1109\/TIP.2020.3042083"},{"key":"40","doi-asserted-by":"publisher","DOI":"10.1016\/j.patrec.2017.09.037"},{"key":"41","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2018.04.019"},{"article-title":"Towards deep learning models resistant to adversarial attacks","year":"2017","author":"A. Madry","key":"42"},{"article-title":"Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples","year":"2018","author":"A. Athalye","key":"43"},{"article-title":"Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples","year":"2018","author":"A. Athalye","key":"44"},{"article-title":"On adaptive attacks to adversarial example defenses","year":"2020","author":"F. Tramer","key":"45"}],"container-title":["Security and Communication Networks"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/scn\/2020\/8882494.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/scn\/2020\/8882494.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/scn\/2020\/8882494.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,11,28]],"date-time":"2022-11-28T09:39:51Z","timestamp":1669628391000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/scn\/2020\/8882494\/"}},"subtitle":[],"editor":[{"given":"Zhihua","family":"Xia","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2020,11,12]]},"references-count":45,"alternative-id":["8882494","8882494"],"URL":"https:\/\/doi.org\/10.1155\/2020\/8882494","relation":{},"ISSN":["1939-0122","1939-0114"],"issn-type":[{"type":"electronic","value":"1939-0122"},{"type":"print","value":"1939-0114"}],"subject":[],"published":{"date-parts":[[2020,11,12]]}}}