{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,31]],"date-time":"2025-03-31T21:45:47Z","timestamp":1743457547035,"version":"3.37.3"},"reference-count":54,"publisher":"Springer Science and Business Media LLC","issue":"11","license":[{"start":{"date-parts":[[2022,10,7]],"date-time":"2022-10-07T00:00:00Z","timestamp":1665100800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,10,7]],"date-time":"2022-10-07T00:00:00Z","timestamp":1665100800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100009567","name":"Budapest University of Technology and Economics","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100009567","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Multimed Tools Appl"],"published-print":{"date-parts":[[2023,5]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The fast improvement of deep learning methods resulted in breakthroughs in image classification, however, these models are sensitive to adversarial perturbations, which can cause serious problems. Adversarial attacks try to change the model output by adding noise to the input, in our research we propose a combined defense method against it. Two defense approaches have been evolved in the literature, one robustizes the attacked model for higher accuracy, and the other approach detects the adversarial examples. Only very few papers discuss both approaches, thus our aim was to combine them to obtain a more robust model and to examine the combination, in particular the filtering capability of the detector. Our contribution was that the filtering based on the decision of the detector is able to enhance the accuracy, which was theoretically proved. 
Besides that, we developed a novel defense method called 2N labeling, which extends the idea of the NULL labeling method. While NULL labeling introduces only one new class for all adversarial examples, the 2N labeling method introduces twice as many. The novelty of our idea is that a new extended class is assigned to each original class as its adversarial version, thus assisting both the detector and the robust classifier. The 2N labeling method was compared to competitor methods on two test datasets. The results showed that our method surpassed the others and that it operates with constant classification performance regardless of the presence or amplitude of adversarial attacks.<\/jats:p>","DOI":"10.1007\/s11042-022-14021-5","type":"journal-article","created":{"date-parts":[[2022,10,7]],"date-time":"2022-10-07T06:18:51Z","timestamp":1665123531000},"page":"16717-16740","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["2N labeling defense method against adversarial attacks by filtering and extended class label set"],"prefix":"10.1007","volume":"82","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5781-1088","authenticated-orcid":false,"given":"G\u00e1bor","family":"Sz\u0171cs","sequence":"first","affiliation":[]},{"given":"Rich\u00e1rd","family":"Kiss","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,10,7]]},"reference":[{"key":"14021_CR1","doi-asserted-by":"crossref","unstructured":"Abdu-Aguye MG, Gomaa W, Makihara Y, Yagi Y (2020) Detecting adversarial attacks in time-series data. In ICASSP 2020-2020 IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. 3092-3096). 
IEEE","DOI":"10.1109\/ICASSP40776.2020.9053311"},{"issue":"7","key":"14021_CR2","doi-asserted-by":"publisher","first-page":"10985","DOI":"10.1007\/s11042-020-10261-5","volume":"80","author":"MA Ahmadi","year":"2021","unstructured":"Ahmadi MA, Dianat R, Amirkhani H (2021) An adversarial attack detection method in deep neural networks based on re-attacking approach. Multimed Tools Appl 80(7):10985\u201311014","journal-title":"Multimed Tools Appl"},{"key":"14021_CR3","unstructured":"Alparslan Y, Alparslan K, Keim-Shenk J, Khade S, Greenstadt R (2020) Adversarial attacks on convolutional neural networks in facial recognition domain. arXiv preprint arXiv:2001.11137"},{"key":"14021_CR4","unstructured":"Brendel W, Rauber J, Bethge M (2017) Decision-based adversarial attacks: reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248"},{"key":"14021_CR5","unstructured":"Breve B, Caruccio L, Cirillo S, Desiato D, Deufemia V, Polese G (2020) Enhancing user awareness during internet browsing. In ITASEC (pp. 71-81)"},{"key":"14021_CR6","doi-asserted-by":"crossref","unstructured":"Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy (SP) pp. 39-57. IEEE","DOI":"10.1109\/SP.2017.49"},{"key":"14021_CR7","doi-asserted-by":"publisher","first-page":"205034","DOI":"10.1109\/ACCESS.2020.3036916","volume":"8","author":"L Caruccio","year":"2020","unstructured":"Caruccio L, Desiato D, Polese G, Tortora G (2020) GDPR compliant information confidentiality preservation in big data processing. IEEE Access 8:205034\u2013205050. 
https:\/\/doi.org\/10.1109\/ACCESS.2020.3036916","journal-title":"IEEE Access"},{"issue":"1","key":"14021_CR8","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-022-00566-7","volume":"9","author":"F Cerruto","year":"2022","unstructured":"Cerruto F, Cirillo S, Desiato D, Gambardella SM, Polese G (2022) Social network data analysis to highlight privacy threats in sharing data. J Big Data 9(1):1\u201326","journal-title":"J Big Data"},{"issue":"1","key":"14021_CR9","doi-asserted-by":"publisher","first-page":"25","DOI":"10.1049\/cit2.12028","volume":"6","author":"A Chakraborty","year":"2021","unstructured":"Chakraborty A, Alam M, Dey V, Chattopadhyay A, Mukhopadhyay D (2021) A survey on adversarial attacks and defences. CAAI Transac Intel Technol 6(1):25\u201345. https:\/\/doi.org\/10.1049\/cit2.12028","journal-title":"CAAI Transac Intel Technol"},{"key":"14021_CR10","unstructured":"Chen Y, Wainwright MJ (2015) Fast low-rank estimation by projected gradient descent: general statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025."},{"key":"14021_CR11","doi-asserted-by":"crossref","unstructured":"Chen J, Jordan MI, Wainwright MJ (2020) HopSkipJumpAttack: a query-efficient decision-based attack. In 2020 IEEE symposium on security and privacy (pp. 1277-1294). IEEE","DOI":"10.1109\/SP40000.2020.00045"},{"key":"14021_CR12","doi-asserted-by":"crossref","unstructured":"Dong Y, Su H, Wu B, Li Z, Liu W, Zhang T, Zhu J (2019) Efficient decision-based black-box adversarial attacks on face recognition. In proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 
7714-7722)","DOI":"10.1109\/CVPR.2019.00790"},{"key":"14021_CR13","doi-asserted-by":"publisher","first-page":"20409","DOI":"10.1007\/s11042-019-7353-6","volume":"78","author":"W Fan","year":"2019","unstructured":"Fan W, Sun G, Su Y, Liu Z, Lu X (2019) Integration of statistical detector and Gaussian noise injection detector for adversarial example detection in deep neural networks. Multimed Tools Appl 78:20409\u201320429. https:\/\/doi.org\/10.1007\/s11042-019-7353-6","journal-title":"Multimed Tools Appl"},{"key":"14021_CR14","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572"},{"key":"14021_CR15","unstructured":"Gotmare A, Keskar NS, Xiong C, Socher R (2018) A closer look at deep learning heuristics: learning rate restarts, warmup and distillation. arXiv preprint arXiv:1810.13243"},{"key":"14021_CR16","doi-asserted-by":"crossref","unstructured":"Harder P, Pfreundt FJ, Keuper M, Keupe, J (2021) SpectralDefense: detecting adversarial attacks on CNNs in the Fourier domain. arXiv preprint arXiv:2103.03000.","DOI":"10.1109\/IJCNN52387.2021.9533442"},{"issue":"14","key":"14021_CR17","doi-asserted-by":"publisher","first-page":"22077","DOI":"10.1007\/s11042-020-10379-6","volume":"80","author":"AS Hashemi","year":"2021","unstructured":"Hashemi AS, Mozaffari S (2021) CNN adversarial attack mitigation using perturbed samples training. Multimed Tools Appl 80(14):22077\u201322095","journal-title":"Multimed Tools Appl"},{"key":"14021_CR18","doi-asserted-by":"crossref","unstructured":"He Z, Rakin AS, Fan D (2019) Parametric noise injection: trainable randomness to improve deep neural network robustness against adversarial attack. In proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 
588-597)","DOI":"10.1109\/CVPR.2019.00068"},{"key":"14021_CR19","unstructured":"Hosseini H, Chen Y, Kannan S, Zhang B, Poovendran R (2017) Blocking transferability of adversarial examples in black-box learning systems. arXiv preprint arXiv:1703.04318"},{"key":"14021_CR20","doi-asserted-by":"crossref","unstructured":"Iyyer M, Wieting J, Gimpel K, Zettlemoyer L (2018) Adversarial example generation with syntactically controlled paraphrase networks. arXiv preprint arXiv:1804.06059","DOI":"10.18653\/v1\/N18-1170"},{"key":"14021_CR21","doi-asserted-by":"crossref","unstructured":"Jia S, Ma C, Song Y, Yang X (2020) Robust tracking against adversarial attacks. In European conference on computer vision (pp. 69\u201384). Springer, Cham.","DOI":"10.1007\/978-3-030-58529-7_5"},{"key":"14021_CR22","doi-asserted-by":"crossref","unstructured":"Ketkarz N (2017) Stochastic gradient descent. In deep learning with Python (pp. 113\u2013132). Apress, Berkeley, CA","DOI":"10.1007\/978-1-4842-2766-4_8"},{"key":"14021_CR23","unstructured":"Krizhevsky A, Hinton G (2009) Learning multiple layers of features from tiny images"},{"key":"14021_CR24","unstructured":"Kurakin A, Goodfellow I, Bengio S (2017) Adversarial machine learning at scale. ICLR 2017, arXiv preprint arXiv:1611.01236"},{"issue":"7","key":"14021_CR25","doi-asserted-by":"publisher","first-page":"10339","DOI":"10.1007\/s11042-020-09167-z","volume":"80","author":"H Kwon","year":"2021","unstructured":"Kwon H, Kim Y, Yoon H, Choi D (2021) Classification score approach for detecting adversarial example in deep neural network. Multimed Tools Appl 80(7):10339\u201310360","journal-title":"Multimed Tools Appl"},{"key":"14021_CR26","doi-asserted-by":"publisher","unstructured":"Li F, Du X, Zhang L (2022) Adversarial attacks defense method based on multiple filtering and image rotation. 
Discrete dynamics in nature and society, article ID 6124895, 11 pages, https:\/\/doi.org\/10.1155\/2022\/6124895","DOI":"10.1155\/2022\/6124895"},{"issue":"3","key":"14021_CR27","doi-asserted-by":"publisher","first-page":"2447","DOI":"10.1007\/s11071-021-07139-y","volume":"107","author":"MW Li","year":"2022","unstructured":"Li MW, Xu DY, Geng J, Hong WC (2022) A ship motion forecasting approach based on empirical mode decomposition method hybrid deep learning network and quantum butterfly optimization algorithm. Nonlinear Dyna 107(3):2447\u20132467","journal-title":"Nonlinear Dyna"},{"key":"14021_CR28","doi-asserted-by":"crossref","unstructured":"Liu Z, Liu Q, Liu T, Xu N, Lin X, Wang Y, Wen W (2019) Feature distillation: DNN-oriented jpeg compression against adversarial examples. In 2019 IEEE\/CVF conference on computer vision and pattern recognition (CVPR) pp. 860-868. IEEE","DOI":"10.1109\/CVPR.2019.00095"},{"key":"14021_CR29","doi-asserted-by":"crossref","unstructured":"Ma P, Petridis S, Pantic M (2021) Detecting adversarial attacks on audiovisual speech recognition. In ICASSP 2021-2021 IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. 6403-6407). IEEE","DOI":"10.1109\/ICASSP39728.2021.9413661"},{"key":"14021_CR30","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083"},{"key":"14021_CR31","doi-asserted-by":"publisher","first-page":"158","DOI":"10.1007\/978-3-030-58536-5_10","volume-title":"European conference on computer vision, 16th European conference, Glasgow, UK, august 23\u201328, in lecture notes in computer science","author":"C Mao","year":"2020","unstructured":"Mao C, Gupta A, Nitin V, Ray B, Song S, Yang J, Vondrick C (2020) Multitask learning strengthens adversarial robustness. 
In: European conference on computer vision, 16th European conference, Glasgow, UK, august 23\u201328, in lecture notes in computer science, vol 12347. Springer, Cham, pp 158\u2013174. https:\/\/doi.org\/10.1007\/978-3-030-58536-5_10"},{"key":"14021_CR32","doi-asserted-by":"crossref","unstructured":"Mekala RR, Porter A, Lindvall M (2020) Metamorphic filtering of black-box adversarial attacks on multi-network face recognition models. In proceedings of the IEEE\/ACM 42nd international conference on software engineering workshops (pp. 410-417)","DOI":"10.1145\/3387940.3391483"},{"key":"14021_CR33","doi-asserted-by":"crossref","unstructured":"Meng L, Lin CT, Jung TP, Wu D (2019) White-box target attack for EEG-based BCI regression problems. In international conference on neural information processing (pp. 476\u2013488). Springer, Cham.","DOI":"10.1007\/978-3-030-36708-4_39"},{"issue":"3","key":"14021_CR34","doi-asserted-by":"publisher","first-page":"402","DOI":"10.1109\/JPROC.2020.2970615","volume":"108","author":"DJ Miller","year":"2020","unstructured":"Miller DJ, Xiang Z, Kesidis G (2020) Adversarial learning targeting deep neural network classification: a comprehensive review of defenses against attacks. Proc IEEE 108(3):402\u2013433","journal-title":"Proc IEEE"},{"key":"14021_CR35","unstructured":"M\u00fcller R, Kornblith S, Hinton G (2019) When does label smoothing help? arXiv preprint arXiv:1906.02629"},{"key":"14021_CR36","doi-asserted-by":"publisher","first-page":"21919","DOI":"10.1007\/s11042-022-12007-x","volume":"81","author":"H Naderi","year":"2022","unstructured":"Naderi H, Goli L, Kasaei S (2022) Generating unrestricted adversarial examples via three parameters. Multimed Tools Appl 81:21919\u201321938. 
https:\/\/doi.org\/10.1007\/s11042-022-12007-x","journal-title":"Multimed Tools Appl"},{"key":"14021_CR37","unstructured":"Papernot N, McDaniel P, Goodfellow I (2016) Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277"},{"key":"14021_CR38","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Wu X, Jha S, Swami A (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE symposium on security and privacy (SP) (pp. 582-597). IEEE","DOI":"10.1109\/SP.2016.41"},{"key":"14021_CR39","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A (2017) Practical black-box attacks against machine learning. In proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506-519","DOI":"10.1145\/3052973.3053009"},{"key":"14021_CR40","unstructured":"Pereyra G, Tucker G, Chorowski J, Kaiser \u0141, Hinton G (2017) Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548"},{"key":"14021_CR41","doi-asserted-by":"publisher","first-page":"156","DOI":"10.1109\/RBME.2020.3013489","volume":"14","author":"A Qayyum","year":"2020","unstructured":"Qayyum A, Qadir J, Bilal M, Al-Fuqaha A (2020) Secure and robust machine learning for healthcare: a survey. IEEE Rev Biomed Eng 14:156\u2013180","journal-title":"IEEE Rev Biomed Eng"},{"issue":"5","key":"14021_CR42","doi-asserted-by":"publisher","first-page":"909","DOI":"10.3390\/app9050909","volume":"9","author":"S Qiu","year":"2019","unstructured":"Qiu S, Liu Q, Zhou S, Wu C (2019) Review of artificial intelligence adversarial attack and defense technologies. 
Appl Sci 9(5):909","journal-title":"Appl Sci"},{"key":"14021_CR43","unstructured":"Shaham U, Garritano J, Yamada Y, Weinberger E, Cloninger A, Cheng X, Stanton K, Kluger Y (2018) Defending against adversarial images using basis functions transformations. http:\/\/arxiv.org\/abs\/1803.10840"},{"issue":"3","key":"14021_CR44","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3317611","volume":"22","author":"M Sharif","year":"2019","unstructured":"Sharif M, Bhagavatula S, Bauer L, Reiter MK (2019) A general framework for adversarial examples with objectives. ACM Transact Pri Sec (TOPS) 22(3):1\u201330","journal-title":"ACM Transact Pri Sec (TOPS)"},{"key":"14021_CR45","unstructured":"Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556"},{"key":"14021_CR46","doi-asserted-by":"crossref","unstructured":"Stallkamp J, Schlipsing M, Salmen J, Igel C (2011) The German Traffic Sign Recognition Benchmark: A multi-class classification competition. In Proceedings of the IEEE International Joint Conference on Neural Networks (IJCNN 2011), pp. 1453\u20131460","DOI":"10.1109\/IJCNN.2011.6033395"},{"key":"14021_CR47","unstructured":"Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199"},{"key":"14021_CR48","unstructured":"Tram\u00e8r F, Kurakin A, Papernot N, Goodfellow I, Boneh D, McDaniel P (2017) Ensemble adversarial training: Attacks and defenses. http:\/\/arxiv.org\/abs\/1705.07204"},{"key":"14021_CR49","first-page":"841","volume":"31","author":"S Wachter","year":"2018","unstructured":"Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the black box: automated decisions and the GDPR. 
Harv JL Tech 31:841","journal-title":"Harv JL Tech"},{"key":"14021_CR50","doi-asserted-by":"crossref","unstructured":"Xu W, Evans D, Qi Y (2017) Feature squeezing: Detecting adversarial examples in deep neural networks. http:\/\/arxiv.org\/abs\/1704.01155","DOI":"10.14722\/ndss.2018.23198"},{"key":"14021_CR51","doi-asserted-by":"publisher","unstructured":"Yang L, Song Q, Wu Y (2021) Attacks on state-of-the-art face recognition using attentional adversarial attack generative network. Multimedia Tools and Applications 80:855\u2013875. https:\/\/doi.org\/10.1007\/s11042-020-09604-z","DOI":"10.1007\/s11042-020-09604-z"},{"issue":"2","key":"14021_CR52","doi-asserted-by":"publisher","first-page":"289","DOI":"10.1007\/s00365-006-0663-2","volume":"26","author":"Y Yao","year":"2007","unstructured":"Yao Y, Rosasco L, Caponnetto A (2007) On early stopping in gradient descent learning. Constr Approx 26(2):289\u2013315","journal-title":"Constr Approx"},{"key":"14021_CR53","doi-asserted-by":"crossref","unstructured":"Yin M, Li S, Cai Z, Song C, Asif MS, Roy-Chowdhury AK, Krishnamurthy SV (2021) Exploiting multi-object relationships for detecting adversarial attacks in complex scenes. In proceedings of the IEEE\/CVF international conference on computer vision (pp. 7858-7867)","DOI":"10.1109\/ICCV48922.2021.00776"},{"key":"14021_CR54","doi-asserted-by":"crossref","unstructured":"Zheng Y, Velipasalar S (2021) Part-based feature squeezing to detect adversarial examples in person re-identification networks. In 2021 IEEE international conference on image processing (ICIP) (pp. 844-848). 
IEEE","DOI":"10.1109\/ICIP42928.2021.9506511"}],"container-title":["Multimedia Tools and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-022-14021-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11042-022-14021-5\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11042-022-14021-5.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,4,15]],"date-time":"2023-04-15T09:16:59Z","timestamp":1681550219000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11042-022-14021-5"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,7]]},"references-count":54,"journal-issue":{"issue":"11","published-print":{"date-parts":[[2023,5]]}},"alternative-id":["14021"],"URL":"https:\/\/doi.org\/10.1007\/s11042-022-14021-5","relation":{},"ISSN":["1380-7501","1573-7721"],"issn-type":[{"type":"print","value":"1380-7501"},{"type":"electronic","value":"1573-7721"}],"subject":[],"published":{"date-parts":[[2022,10,7]]},"assertion":[{"value":"14 December 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 July 2022","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 September 2022","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 October 2022","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The 
authors, G\u00e1bor Sz\u0171cs and Rich\u00e1rd Kiss, declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interests"}}]}}