{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,23]],"date-time":"2026-02-23T12:15:55Z","timestamp":1771848955113,"version":"3.50.1"},"reference-count":102,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2022,1,21]],"date-time":"2022-01-21T00:00:00Z","timestamp":1642723200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,1,21]],"date-time":"2022-01-21T00:00:00Z","timestamp":1642723200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Defenses against adversarial attacks are essential to ensure the reliability of machine-learning models as their applications are expanding in different domains. Existing ML defense techniques have several limitations in practical use. We proposed a trustworthy framework that employs an adaptive strategy to inspect both inputs and decisions. In particular, data streams are examined by a series of diverse filters before sending to the learning system and then crossed checked its output through anomaly (outlier) detectors before making the final decision. Experimental results (using benchmark data-sets) demonstrated that our dual-filtering strategy could mitigate adaptive or advanced adversarial manipulations for wide-range of ML attacks with higher accuracy. Moreover, the output decision boundary inspection with a classification technique automatically affirms the reliability and increases the trustworthiness of any ML-based decision support system. Unlike other defense techniques, our dual-filtering strategy does not require adversarial sample generation and updating the decision boundary for detection, makes the ML defense robust to adaptive attacks.<\/jats:p>","DOI":"10.1007\/s40747-022-00649-1","type":"journal-article","created":{"date-parts":[[2022,1,21]],"date-time":"2022-01-21T05:31:54Z","timestamp":1642743114000},"page":"3717-3738","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Dual-filtering (DF) schemes for learning systems to prevent adversarial attacks"],"prefix":"10.1007","volume":"9","author":[{"given":"Dipankar","family":"Dasgupta","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6867-2653","authenticated-orcid":false,"given":"Kishor Datta","family":"Gupta","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,1,21]]},"reference":[{"key":"649_CR1","unstructured":"Adam GA, Smirnov P, Goldenberg A, Duvenaud D, Haibe-Kains B (2018) Stochastic combinatorial ensembles for defending against adversarial examples, pp 1\u201315. arXiv preprint arXiv:1808.06645"},{"key":"649_CR2","doi-asserted-by":"crossref","unstructured":"Aggarwal CC (2015) Outlier analysis. Aggarwal CC (ed) Data mining. Springer , Cham, pp 237\u2013263","DOI":"10.1007\/978-3-319-14142-8_8"},{"key":"649_CR3","unstructured":"Aigrain J, Detyniecki M (2019) Detecting adversarial examples and other misclassifications in neural networks by introspection. 
arXiv preprint arXiv:1905.09186"},{"key":"649_CR4","doi-asserted-by":"crossref","unstructured":"Akhtar Z, Monteiro J, Falk TH (2018) Adversarial examples detection using no-reference image quality features. In: 2018 international Carnahan conference on security technology (ICCST), pp 1\u20135","DOI":"10.1109\/CCST.2018.8585591"},{"key":"649_CR5","doi-asserted-by":"crossref","unstructured":"Akhtar Z, Monteiro J, Falk TH (2018) Adversarial examples detection using no-reference image quality features. In: 2018 international Carnahan conference on security technology (ICCST). IEEE, pp 1\u20135","DOI":"10.1109\/CCST.2018.8585591"},{"key":"649_CR6","doi-asserted-by":"crossref","unstructured":"Angelova A, Abu-Mostafam Y, Perona P (2005) Pruning training sets for learning of object categories. In: 2005 IEEE Computer Society conference on computer vision and pattern recognition (CVPR\u201905), vol 1. IEEE, pp 494\u2013501","DOI":"10.1109\/CVPR.2005.283"},{"key":"649_CR7","first-page":"972","volume":"1141","author":"A Arning","year":"1996","unstructured":"Arning A, Agrawal R, Raghavan P (1996) A linear method for deviation detection in large databases. KDD 1141:972\u2013981","journal-title":"KDD"},{"key":"649_CR8","unstructured":"Athalye A, Carlini N (2018) On the robustness of the cvpr 2018 white-box adversarial example defenses. arXiv preprint arXiv:1804.03286"},{"key":"649_CR9","unstructured":"Athalye A, Carlini N, Wagner D (2018) Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420"},{"key":"649_CR10","unstructured":"Bagnall A, Bunescu R, Stewart G (2017) Training ensembles to detect adversarial examples. arXiv preprint arXiv:1712.04006"},{"issue":"4","key":"649_CR11","doi-asserted-by":"publisher","first-page":"739","DOI":"10.1080\/00401706.1973.10489108","volume":"15","author":"DR Barr","year":"1973","unstructured":"Barr DR, Davidson T (1973) A Kolmogorov\u2013Smirnov test for censored samples. Technometrics 15(4):739\u2013757","journal-title":"Technometrics"},{"key":"649_CR12","unstructured":"Bradshaw J, Matthews AGdG, Ghahramani Z (2017) Adversarial examples, uncertainty, and transfer testing robustness in gaussian process hybrid deep networks, pp 1\u201333. arXiv preprint arXiv:1707.02476"},{"key":"649_CR13","unstructured":"Brendel W, Rauber J, Bethge M (2017) Decision-based adversarial attacks: reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248"},{"key":"649_CR14","doi-asserted-by":"crossref","unstructured":"Breunig MM, Kriegel HP, Ng RT, Sander J (2000) Lof: identifying density-based local outliers. In: Proceedings of the 2000 ACM SIGMOD international conference on Management of data, pp 93\u2013104","DOI":"10.1145\/342009.335388"},{"key":"649_CR15","doi-asserted-by":"crossref","unstructured":"Br\u00fcckner M, Scheffer T (2011) Stackelberg games for adversarial prediction problems. In: Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, pp 547\u2013555","DOI":"10.1145\/2020408.2020495"},{"key":"649_CR16","unstructured":"Carlini N (2019) Lessons learned from evaluating the robustness of defenses to adversarial examples. USENIX Association, Santa Clara"},{"key":"649_CR17","unstructured":"Carlini N, Athalye A, Papernot N, Brendel W, Rauber J, Tsipras D, Goodfellow I, Madry A, Kurakin A (2019) On evaluating adversarial robustness. 
arXiv preprint arXiv:1902.06705"},{"key":"649_CR18","unstructured":"Carlini N, Wagner D (2016) Defensive distillation is not robust to adversarial examples. arXiv preprint arXiv:1607.04311"},{"key":"649_CR19","doi-asserted-by":"crossref","unstructured":"Carlini N, Wagner D (2017) Adversarial examples are not easily detected: bypassing ten detection methods. In: Proceedings of the 10th ACM workshop on artificial intelligence and security, pp 3\u201314","DOI":"10.1145\/3128572.3140444"},{"key":"649_CR20","unstructured":"Carlini N, Wagner D (2017) Magnet and efficient defenses against adversarial attacks\u201d are not robust to adversarial examples. arXiv preprint arXiv:1711.08478"},{"key":"649_CR21","doi-asserted-by":"crossref","unstructured":"Carlini N, Wagner D (2018) Audio adversarial examples: targeted attacks on speech-to-text. In: 2018 IEEE security and privacy workshops (SPW). IEEE, pp 1\u20137","DOI":"10.1109\/SPW.2018.00009"},{"key":"649_CR22","unstructured":"Chakraborty A, Alam M, Dey V, Chattopadhyay A, Mukhopadhyay D (2018) Adversarial attacks and defences: a survey. arXiv preprint arXiv:1810.00069"},{"key":"649_CR23","doi-asserted-by":"crossref","unstructured":"Chen J, Jordan MI, Wainwright MJ (2019) Hopskipjumpattack: a query- decision-based attack. arXiv preprint arXiv:1904.021443","DOI":"10.1109\/SP40000.2020.00045"},{"key":"649_CR24","unstructured":"Chen J, Meng Z, Sun C, Tang W, Zhu Y (2017) Reabsnet: detecting and revising adversarial examples. arXiv preprint arXiv:1712.08250"},{"key":"649_CR25","doi-asserted-by":"crossref","unstructured":"Chen S, Carlini N, Wagner D (2019) Stateful detection of black-box adversarial attacks. arXiv preprint arXiv:1907.05587","DOI":"10.1145\/3385003.3410925"},{"key":"649_CR26","doi-asserted-by":"crossref","unstructured":"Chen Y, Zhou XS, Huang TS (2001) One-class svm for learning in image retrieval. In: Proceedings 2001 international conference on image processing (Cat. No. 01CH37205), vol 1. IEEE, pp 34\u201337","DOI":"10.1109\/ICIP.2001.958946"},{"key":"649_CR27","unstructured":"Crecchi F, Bacciu D, Biggio B (2019) Detecting adversarial examples through nonlinear dimensionality reduction. arXiv preprint arXiv:1904.13094"},{"key":"649_CR28","doi-asserted-by":"crossref","unstructured":"Dasgupta D (1994) Handling deceptive problems using a different genetic search. In: Proceedings of the first IEEE conference on evolutionary computation. IEEE World congress on computational intelligence. IEEE, pp 807\u2013811","DOI":"10.1109\/ICEC.1994.349952"},{"key":"649_CR29","doi-asserted-by":"publisher","DOI":"10.1177\/1548512920951275","author":"D Dasgupta","year":"2020","unstructured":"Dasgupta D, Akhtar Z, Sajib S (2020) Machine learning in cybersecurity: a comprehensive survey. J Defense Model Simul Appl Methodol Technol. https:\/\/doi.org\/10.1177\/1548512920951275","journal-title":"J Defense Model Simul Appl Methodol Technol"},{"key":"649_CR30","doi-asserted-by":"crossref","unstructured":"Dasgupta D, KrishnaKumar K, Wong D, Berry M (2004) Negative selection algorithm for aircraft fault detection. In: International conference on artificial immune systems. Springer, pp 1\u201313","DOI":"10.1007\/978-3-540-30220-9_1"},{"key":"649_CR31","unstructured":"Dathathri S, Zheng S, Murray RM, Yue Y (2018) Detecting adversarial examples via neural fingerprinting. arXiv preprint arXiv:1803.03870"},{"key":"649_CR32","unstructured":"Goldstein M, Dengel A (2012) Histogram-based outlier score (hbos): a fast unsupervised anomaly detection algorithm. 
KI-2012: poster and demo track, pp 59\u201363"},{"key":"649_CR33","doi-asserted-by":"publisher","unstructured":"Gong Z (2018) Adversarial algorithms in tensorflow, v0.2.0. Zenodo. https:\/\/doi.org\/10.5281\/zenodo.1154272","DOI":"10.5281\/zenodo.1154272"},{"key":"649_CR34","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2014) Explaining and harnessing adversarial examples, pp 1\u201311. arXiv preprint arXiv:1412.6572"},{"key":"649_CR35","unstructured":"Grosse K, Manoharan P, Papernot N, Backes M, McDaniel P (2017) On the (statistical) detection of adversarial examples. arXiv preprint arXiv:1702.06280"},{"key":"649_CR36","doi-asserted-by":"crossref","unstructured":"Gupta KD, Dasgupta D, Akhtar Z (2020) Applicability issues of evasion-based adversarial attacks and mitigation techniques. In: 2020 IEEE symposium series on computational intelligence (SSCI). IEEE, pp 1506\u20131515","DOI":"10.1109\/SSCI47803.2020.9308589"},{"key":"649_CR37","doi-asserted-by":"crossref","unstructured":"Gupta KD, Dasgupta D, Akhtar Z (2020) Determining sequence of image processing technique (ipt) to detect adversarial attacks. arXiv preprint arXiv:2007.00337","DOI":"10.1007\/s42979-021-00773-8"},{"key":"649_CR38","unstructured":"Hayes J, Danezis G (2017) Machine learning as an adversarial service: learning black-box adversarial examples. arXiv preprint arXiv:1708.052072"},{"key":"649_CR39","unstructured":"He W, Wei J, Chen X, Carlini N, Song D (2017) Adversarial example defense: ensembles of weak defenses are not strong. In: 11th USENIX workshop on offensive technologies (WOOT 17)"},{"issue":"9\u201310","key":"649_CR40","doi-asserted-by":"publisher","first-page":"1641","DOI":"10.1016\/S0167-8655(03)00003-5","volume":"24","author":"Z He","year":"2003","unstructured":"He Z, Xu X, Deng S (2003) Discovering cluster-based local outliers. Pattern Recogn Lett 24(9\u201310):1641\u20131650","journal-title":"Pattern Recogn Lett"},{"key":"649_CR41","unstructured":"Ilyas A, Santurkar S, Tsipras D, Engstrom L, Tran B, Madry A (2019) Adversarial examples are not bugs, they are features. In: Advances in neural information processing systems, pp 125\u2013136"},{"key":"649_CR42","unstructured":"Janssens J, Husz\u00e1r F, Postma E, van\u00a0den Herik H (2012) Stochastic outlier selection. Technical report"},{"key":"649_CR43","doi-asserted-by":"crossref","unstructured":"Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ (2017) Reluplex: an efficient smt solver for verifying deep neural networks. In: Computer aided verification, pp 97\u2013117","DOI":"10.1007\/978-3-319-63387-9_5"},{"key":"649_CR44","doi-asserted-by":"crossref","unstructured":"Katzir Z, Elovici Y (2018) Detecting adversarial perturbations through spatial behavior in activation spaces. arXiv preprint arXiv:1811.09043","DOI":"10.1109\/IJCNN.2019.8852285"},{"key":"649_CR45","unstructured":"Kingma DP, Welling M (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114"},{"key":"649_CR46","doi-asserted-by":"crossref","unstructured":"Kriegel HP, Schubert M, Zimek A (2008) Angle-based outlier detection in high-dimensional data. In: Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pp 444\u2013452","DOI":"10.1145\/1401890.1401946"},{"key":"649_CR47","unstructured":"Kurakin A, Goodfellow I, Bengio S (2016) Adversarial examples in the physical world. 
arXiv preprint arXiv:1607.02533"},{"key":"649_CR48","doi-asserted-by":"crossref","unstructured":"Lazarevic A, Kumar V (2005) Feature bagging for outlier detection. In: Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, pp 157\u2013166","DOI":"10.1145\/1081870.1081891"},{"key":"649_CR49","doi-asserted-by":"crossref","unstructured":"Li Z, Zhao Y, Botta N, Ionescu C, Hu X (2020) Copod: copula-based outlier detection. arXiv preprint arXiv:2009.09463","DOI":"10.1109\/ICDM50108.2020.00135"},{"key":"649_CR50","doi-asserted-by":"crossref","unstructured":"Liu J, Zhang W, Zhang Y, Hou D, Liu Y, Zha H, Yu N (2018) Detection based defense against adversarial examples from the steganalysis point of view. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 4825\u20134834","DOI":"10.1109\/CVPR.2019.00496"},{"key":"649_CR51","doi-asserted-by":"crossref","unstructured":"Liu Y, Li Z, Zhou C, Jiang Y, Sun J, Wang M, He X (2019) Generative adversarial active learning for unsupervised outlier detection. IEEE Trans Knowl Data Eng 32:1517\u20131528","DOI":"10.1109\/TKDE.2019.2905606"},{"key":"649_CR52","doi-asserted-by":"crossref","unstructured":"Lu J, Issaranon T, Forsyth D (2017) Safetynet: detecting and rejecting adversarial examples robustly. In: Proceedings of the IEEE international conference on computer vision, pp 446\u2013454","DOI":"10.1109\/ICCV.2017.56"},{"key":"649_CR53","unstructured":"Lu J, Sibai H, Fabry E, Forsyth D (2017) No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501"},{"key":"649_CR54","doi-asserted-by":"crossref","unstructured":"Ma C, Zhao C, Shi H, Chen L, Yong J, Zeng D (2019) Metaadvdet: towards robust detection of evolving adversarial attacks. arXiv preprint arXiv:1908.02199","DOI":"10.1145\/3343031.3350887"},{"key":"649_CR55","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083"},{"key":"649_CR56","unstructured":"Metzen JH, Genewein T, Fischer V, Bischoff B (2017) On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267"},{"key":"649_CR57","unstructured":"Miyato T, Dai AM, Goodfellow I (2016) Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725"},{"issue":"8","key":"649_CR58","doi-asserted-by":"publisher","first-page":"1979","DOI":"10.1109\/TPAMI.2018.2858821","volume":"41","author":"T Miyato","year":"2019","unstructured":"Miyato T, Maeda S, Koyama M, Ishii S (2019) Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Trans Pattern Anal Mach Intell 41(8):1979\u20131993","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"649_CR59","doi-asserted-by":"crossref","unstructured":"Monteiro J, Akhtar Z, Falk TH (2018) Generalizable adversarial examples detection based on bi-model decision mismatch. arXiv preprint arXiv:1802.07770","DOI":"10.1109\/SMC.2019.8913861"},{"key":"649_CR60","doi-asserted-by":"crossref","unstructured":"Narodytska N, Kasiviswanathan SP (2016) Simple black-box adversarial perturbations for deep networks. arXiv preprint arXiv:1612.06299","DOI":"10.1109\/CVPRW.2017.172"},{"key":"649_CR61","unstructured":"Nguyen HH, Kuribayashi M, Yamagishi J, Echizen I (2019) Detecting and correcting adversarial images using image processing operations and convolutional neural networks. 
arXiv preprint arXiv:1912.05391"},{"key":"649_CR62","unstructured":"Nicolae MI, Sinn M, Tran MN, Buesser B, Rawat A, Wistuba M, Zantedeschi V, Baracaldo N, Chen B, Ludwig H, Molloy I, Edwards B (2018) Adversarial robustness toolbox v1.1.1. CoRR arXiv:1807.01069"},{"key":"649_CR63","unstructured":"Pang T, Du C, Dong Y, Zhu J (2018) Towards robust detection of adversarial examples. In: Advances in neural information processing systems, pp 4579\u20134589"},{"key":"649_CR64","unstructured":"Papernot N, Faghri F, Carlini N, Goodfellow I, Feinman R, Kurakin A, Xie C, Sharma Y, Brown T, Roy A, Matyasko A, Behzadan V, Hambardzumyan K, Zhang Z, Juang YL, Li Z, Sheatsley R, Garg A, Uesato J, Gierke W, Dong Y, Berthelot D, Hendricks P, Rauber J, Long R (2018) Technical report on the cleverhans v2.1.0 adversarial examples library. arXiv preprint arXiv:1610.00768"},{"key":"649_CR65","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Wu X, Jha S, Swami A (2016) Distillation as a defense to adversarial perturbations against deep neural networks. IEEE symposium on security and privacy (SP), pp 582\u2013597","DOI":"10.1109\/SP.2016.41"},{"key":"649_CR66","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Wu X, Jha S, Swami A (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE symposium on security and privacy (SP). IEEE, pp 582\u2013597","DOI":"10.1109\/SP.2016.41"},{"key":"649_CR67","unstructured":"Paudice A, Mu\u00f1oz-Gonz\u00e1lez L, Gyorgy A, Lupu EC (2018) Detection of adversarial training examples in poisoning attacks through anomaly detection. arXiv preprint arXiv:1802.03041"},{"key":"649_CR68","doi-asserted-by":"crossref","unstructured":"Prakash A, Moran N, Garber S, DiLillo A, Storer J (2018) Deflecting adversarial attacks with pixel deflection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 8571\u20138580","DOI":"10.1109\/CVPR.2018.00894"},{"key":"649_CR69","unstructured":"Raghunathan A, Xie SM, Yang F, Duchi JC, Liang P (2019) Adversarial training can hurt generalization. arXiv preprint arXiv:1906.06032"},{"key":"649_CR70","doi-asserted-by":"crossref","unstructured":"Ramaswamy S, Rastogi R, Shim K (2000) Efficient algorithms for mining outliers from large data sets. In: Proceedings of the 2000 ACM SIGMOD international conference on Management of data, pp 427\u2013438","DOI":"10.1145\/342009.335437"},{"key":"649_CR71","unstructured":"Ruff L, Vandermeulen R, Goernitz N, Deecke L, Siddiqui SA, Binder A, M\u00fcller E, Kloft M (2018) Deep one-class classification. In: International conference on machine learning, pp 4393\u20134402"},{"key":"649_CR72","unstructured":"Samangouei P, Kabkab M, Chellappa R (2018) Defense-gan: protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605"},{"key":"649_CR73","unstructured":"Settles B (2009) Active learning literature survey. CS Technical Reports, Department of Computer Sciences, University of Wisconsin-Madison"},{"key":"649_CR74","unstructured":"Shaham U, Garritano J, Yamada Y, Weinberger E, Cloninger A, Cheng X, Stanton K, Kluger Y (2018) Defending against adversarial images using basis functions transformations, pp 1\u201312. arXiv preprint arXiv:1803.10840"},{"key":"649_CR75","doi-asserted-by":"crossref","unstructured":"Soll M, Hinz T, Magg S, Wermter S (2019) Evaluating defensive distillation for defending text processing neural networks against adversarial examples. 
In: International conference on artificial neural networks. Springer, pp 685\u2013696","DOI":"10.1007\/978-3-030-30508-6_54"},{"key":"649_CR76","unstructured":"Strauss T, Hanselmann M, Junginger A, Ulmer H (2017) Ensemble methods as a defense to adversarial perturbations against deep neural networks, pp 1\u201310. arXiv preprint arXiv:1709.03423"},{"key":"649_CR77","doi-asserted-by":"crossref","unstructured":"Su J, Vargas DV, Sakurai K (2019) One pixel attack for fooling deep neural networks. IEEE Trans Evol Comput 23:828\u2013841","DOI":"10.1109\/TEVC.2019.2890858"},{"key":"649_CR78","unstructured":"Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. CoRR. arXiv:1312.6199"},{"key":"649_CR79","doi-asserted-by":"crossref","unstructured":"Tabassi E, Burns K, Hadjimichael M, Molina-Markham A, Sexton J (2019) A taxonomy and terminology of adversarial machine learning. https:\/\/nvlpubs.nist.gov\/nistpubs\/ir\/2019\/NIST.IR.8269-draft.pdf","DOI":"10.6028\/NIST.IR.8269-draft"},{"key":"649_CR80","unstructured":"Tanay T, Griffin L (2016) A boundary tilting perspective on the phenomenon of adversarial examples. arXiv preprint arXiv:1608.07690"},{"key":"649_CR81","doi-asserted-by":"crossref","unstructured":"Tang J, Chen Z, Fu AWC, Cheung DW (2002) Enhancing effectiveness of outlier detections for low density patterns. In: Pacific-Asia conference on knowledge discovery and data mining. Springer, pp 535\u2013548","DOI":"10.1007\/3-540-47887-6_53"},{"key":"649_CR82","doi-asserted-by":"crossref","unstructured":"Tian S, Yang G, Cai Y (2018) Detecting adversarial examples through image transformation. In: Thirty-second AAAI conference on artificial intelligence","DOI":"10.1609\/aaai.v32i1.11828"},{"key":"649_CR83","doi-asserted-by":"crossref","unstructured":"Togbe MU, Barry M, Boly A, Chabchoub Y, Chiky R, Montiel J, Tran VT (2020) Anomaly detection for data streams based on isolation forest using scikit-multiflow. In: International conference on computational science and its applications. Springer, pp 15\u201330","DOI":"10.1007\/978-3-030-58811-3_2"},{"key":"649_CR84","doi-asserted-by":"crossref","unstructured":"Tram\u00e8r F, Boneh D (2019) Adversarial training and robustness for multiple perturbations. In: Advances in neural information processing systems, pp 5858\u20135868","DOI":"10.1145\/3319535.3354222"},{"key":"649_CR85","unstructured":"Tramer F, Carlini N, Brendel W, Madry A (2020) On adaptive attacks to adversarial example defenses. arXiv preprint arXiv:2002.08347"},{"key":"649_CR86","unstructured":"Tram\u00e8r F, Kurakin A, Papernot N, Goodfellow I, Boneh D, McDaniel P (2017) Ensemble adversarial training: attacks and defenses, pp 1\u201320. arXiv preprint arXiv:1705.07204"},{"key":"649_CR87","unstructured":"Uesato J, O\u2019Donoghue B, Oord Avd, Kohli P (2018) Adversarial risk and the dangers of evaluating against weak attacks. arXiv preprint arXiv:1802.05666"},{"key":"649_CR88","doi-asserted-by":"crossref","unstructured":"Umbarkar AJ, Sheth PD (2015) Crossover operators in genetic algorithms: a review. ICTACT J Soft Comput 6(1):1083\u20131092","DOI":"10.21917\/ijsc.2015.0150"},{"key":"649_CR89","doi-asserted-by":"crossref","unstructured":"Wang J, Sun J, Zhang P, Wang X (2018) Detecting adversarial samples for deep neural networks through mutation testing. 
arXiv preprint arXiv:1805.05010","DOI":"10.1109\/ICSE.2019.00126"},{"key":"649_CR90","unstructured":"Wang X, Jin H, He K (2019) Natural language adversarial attacks and defenses in word level. arXiv preprint arXiv:1909.06723 pp. 1\u201315"},{"key":"649_CR91","unstructured":"Wiyatno R, Xu A (2018) Maximal jacobian-based saliency map attack. arXiv preprint arXiv:1808.07945"},{"key":"649_CR92","unstructured":"Wong E, Kolter JZ (2017) Provable defenses against adversarial examples via the convex outer adversarial polytope. arXiv preprint arXiv:1711.00851"},{"key":"649_CR93","unstructured":"Wu X, Jang U, Chen J, Chen L, Jha S (2018) Reinforcing adversarial robustness using model confidence induced by adversarial training. In: International conference on machine learning, pp 5330\u20135338"},{"key":"649_CR94","unstructured":"Xie C, Wang J, Zhang Z, Ren Z, Yuille A (2017) Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991 pp. 1\u201316"},{"key":"649_CR95","doi-asserted-by":"crossref","unstructured":"Xu W, Evans D, Qi Y (2017) Feature squeezing: detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155","DOI":"10.14722\/ndss.2018.23198"},{"issue":"9","key":"649_CR96","doi-asserted-by":"publisher","first-page":"2805","DOI":"10.1109\/TNNLS.2018.2886017","volume":"30","author":"X Yuan","year":"2019","unstructured":"Yuan X, He P, Zhu Q, Li X (2019) Adversarial examples: attacks and defenses for deep learning. IEEE Trans Neural Netw Learn Syst 30(9):2805\u20132824","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"649_CR97","doi-asserted-by":"crossref","unstructured":"Zeng Q, Su J, Fu C, Kayas G, Luo L, Du X, Tan CC, Wu J (2019) A multiversion programming inspired approach to detecting audio adversarial examples. In: 2019 49th annual IEEE\/IFIP international conference on dependable systems and networks (DSN), pp 39\u201351","DOI":"10.1109\/DSN.2019.00019"},{"key":"649_CR98","doi-asserted-by":"crossref","unstructured":"Zhang C, Ye Z, Wang Y, Yang Z (2018) Detecting adversarial perturbations with saliency. In: 2018 IEEE 3rd international conference on signal and image processing (ICSIP). IEEE, pp 271\u2013275","DOI":"10.1109\/SIPROCESS.2018.8600516"},{"key":"649_CR99","unstructured":"Zhao P, Fu Z, Hu Q, Wang J et\u00a0al (2018) Detecting adversarial examples via key-based network. arXiv preprint arXiv:1806.00580"},{"key":"649_CR100","doi-asserted-by":"crossref","unstructured":"Zhao Y, Hryniewicki MK (2018) Xgbod: improving supervised outlier detection with unsupervised representation learning. In: 2018 international joint conference on neural networks (IJCNN). IEEE, pp 1\u20138","DOI":"10.1109\/IJCNN.2018.8489605"},{"key":"649_CR101","unstructured":"Zhao Y, Nasrullah Z, Li Z (2019) Pyod: a python toolbox for scalable outlier detection. J Mach Learn Res 20(96), 1\u20137. http:\/\/www.jmlr.org\/papers\/v20\/19-011.html"},{"key":"649_CR102","volume-title":"Body sensor networks","author":"G Zhou","year":"2008","unstructured":"Zhou G, Lu J, Wan CY, Yarvis MD, Stankovic JA (2008) Body sensor networks. 
MIT Press, Cambridge"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00649-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00649-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00649-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,27]],"date-time":"2023-07-27T13:10:15Z","timestamp":1690463415000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00649-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,1,21]]},"references-count":102,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,8]]}},"alternative-id":["649"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00649-1","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,1,21]]},"assertion":[{"value":"2 April 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 December 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 January 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}