{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,8]],"date-time":"2026-03-08T09:43:50Z","timestamp":1772963030091,"version":"3.50.1"},"reference-count":45,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2022,3,29]],"date-time":"2022-03-29T00:00:00Z","timestamp":1648512000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100000780","name":"European Union","doi-asserted-by":"publisher","award":["871967"],"award-info":[{"award-number":["871967"]}],"id":[{"id":"10.13039\/501100000780","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia","doi-asserted-by":"publisher","award":["UIDP\/00760\/2020"],"award-info":[{"award-number":["UIDP\/00760\/2020"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia","doi-asserted-by":"publisher","award":["NORTE-01-0247-FEDER-40124"],"award-info":[{"award-number":["NORTE-01-0247-FEDER-40124"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Future Internet"],"abstract":"<jats:p>Adversarial attacks pose a major threat to machine learning and to the systems that rely on it. In the cybersecurity domain, adversarial cyber-attack examples capable of evading detection are especially concerning. Nonetheless, an example generated for a domain with tabular data must be realistic within that domain. This work establishes the fundamental constraint levels required to achieve realism and introduces the adaptative perturbation pattern method (A2PM) to fulfill these constraints in a gray-box setting. 
A2PM relies on pattern sequences that are independently adapted to the characteristics of each class to create valid and coherent data perturbations. The proposed method was evaluated in a cybersecurity case study with two scenarios: Enterprise and Internet of Things (IoT) networks. Multilayer perceptron (MLP) and random forest (RF) classifiers were created with regular and adversarial training, using the CIC-IDS2017 and IoT-23 datasets. In each scenario, targeted and untargeted attacks were performed against the classifiers, and the generated examples were compared with the original network traffic flows to assess their realism. The obtained results demonstrate that A2PM provides a scalable generation of realistic adversarial examples, which can be advantageous for both adversarial training and attacks.<\/jats:p>","DOI":"10.3390\/fi14040108","type":"journal-article","created":{"date-parts":[[2022,3,29]],"date-time":"2022-03-29T21:44:52Z","timestamp":1648590292000},"page":"108","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":39,"title":["Adaptative Perturbation Patterns: Realistic Adversarial Learning for Robust Intrusion Detection"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4968-3653","authenticated-orcid":false,"given":"Jo\u00e3o","family":"Vitorino","sequence":"first","affiliation":[{"name":"Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development (GECAD), School of Engineering, Polytechnic of Porto (ISEP\/IPP), 4249-015 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5030-7751","authenticated-orcid":false,"given":"Nuno","family":"Oliveira","sequence":"additional","affiliation":[{"name":"Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development (GECAD), School of Engineering, Polytechnic of Porto (ISEP\/IPP), 4249-015 Porto, 
Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2519-9859","authenticated-orcid":false,"given":"Isabel","family":"Pra\u00e7a","sequence":"additional","affiliation":[{"name":"Research Group on Intelligent Engineering and Computing for Advanced Innovation and Development (GECAD), School of Engineering, Polytechnic of Porto (ISEP\/IPP), 4249-015 Porto, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2022,3,29]]},"reference":[{"key":"ref_1","unstructured":"Szegedy, C. (2014, January 14\u201316). Intriguing properties of neural networks. Proceedings of the 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada. Conference Track Proceedings."},{"key":"ref_2","unstructured":"European Union Agency for Cybersecurity, Malatras, A., and Dede, G. (2022, March 07). AI Cybersecurity Challenges: Threat Landscape for Artificial Intelligence. Available online: https:\/\/op.europa.eu\/en\/publication-detail\/-\/publication\/e52bf2d7-4017-11eb-b27b-01aa75ed71a1\/language-en."},{"key":"ref_3","unstructured":"Goodfellow, I.J., Shlens, J., and Szegedy, C. (2015, January 7\u20139). Explaining and harnessing adversarial examples. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA. Conference Track Proceedings."},{"key":"ref_4","unstructured":"European Union Agency for Cybersecurity, Malatras, A., Agrafiotis, I., and Adamczyk, M. (2022, March 07). Securing Machine Learning Algorithms. Available online: https:\/\/op.europa.eu\/en\/publication-detail\/-\/publication\/c7c844fd-7f1e-11ec-8c40-01aa75ed71a1\/language-en."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"2805","DOI":"10.1109\/TNNLS.2018.2886017","article-title":"Adversarial Examples: Attacks and Defenses for Deep Learning","volume":"30","author":"Yuan","year":"2019","journal-title":"IEEE Trans. Neural Netw. Learn. 
Syst."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"100199","DOI":"10.1016\/j.cosrev.2019.100199","article-title":"A taxonomy and survey of attacks against machine learning","volume":"34","author":"Pitropakis","year":"2019","journal-title":"Comput. Sci. Rev."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Qiu, S., Liu, Q., Zhou, S., and Wu, C. (2019). Review of Artificial Intelligence Adversarial Attack and Defense Technologies. Appl. Sci., 9.","DOI":"10.3390\/app9050909"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Apruzzese, G., Andreolini, M., Ferretti, L., Marchetti, M., and Colajanni, M. (2021). Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems. Digit. Threat. Res. Pract., 1.","DOI":"10.1145\/3469659"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"201","DOI":"10.1016\/j.ins.2013.03.022","article-title":"Adversarial attacks against intrusion detection systems: Taxonomy, solutions and open issues","volume":"239","author":"Corona","year":"2013","journal-title":"Inf. Sci."},{"key":"ref_10","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (May, January April). Towards deep learning models resistant to adversarial attacks. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada. Conference Track Proceedings."},{"key":"ref_11","first-page":"5014","article-title":"Adversarially robust generalization requires more data","volume":"31","author":"Schmidt","year":"2018","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_12","unstructured":"Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. (2017). 
Domain-Adversarial Training of Neural Networks, Available online: https:\/\/www.jmlr.org\/papers\/volume17\/15-239\/15-239.pdf."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Ullah, S., Khan, M.A., Ahmad, J., Jamal, S.S., e Huma, Z., Hassan, M.T., Pitropakis, N., and Buchanan, W.J. (2022). HDL-IDS: A Hybrid Deep Learning Architecture for Intrusion Detection in the Internet of Vehicles. Sensors, 22.","DOI":"10.3390\/s22041340"},{"key":"ref_14","unstructured":"Tram\u00e8r, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., and McDaniel, P. (May, January 30). Ensemble adversarial training: Attacks and defenses. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada. Conference Track Proceedings."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"117","DOI":"10.1016\/j.procs.2016.06.016","article-title":"Performance Evaluation of Supervised Machine Learning Algorithms for Intrusion Detection","volume":"89","author":"Belavagi","year":"2016","journal-title":"Procedia Comput. Sci."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Primartha, R., and Tama, B.A. (2017, January 1\u20132). Anomaly detection using random forest: A performance revisited. Proceedings of the 2017 International Conference Data Software Engineering, Palembang, Indonesia.","DOI":"10.1109\/ICODSE.2017.8285847"},{"key":"ref_17","unstructured":"Kantchelian, A., Tygar, J.D., and Joseph, A.D. (2016, January 20\u201322). Evasion and hardening of tree ensemble classifiers. Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA."},{"key":"ref_18","unstructured":"Chen, H., Zhang, H., Boning, D., and Hsieh, C.J. (2019, January 9\u201315). Robust decision trees against adversarial examples. Proceedings of the 36th International Conference on Machine Learning, ICML 2019, Long Beach, CA, USA."},{"key":"ref_19","unstructured":"Vos, D., and Verwer, S. (2021, January 18\u201324). 
Efficient Training of Robust Decision Trees Against Adversarial Examples. Proceedings of the 38th International Conference on Machine Learning, Online. Available online: https:\/\/proceedings.mlr.press\/v139\/vos21a.html."},{"key":"ref_20","unstructured":"Chen, Y., Wang, S., Jiang, W., Cidon, A., and Jana, S. (2021, January 11\u201313). Cost-aware robust tree ensembles for security applications. Proceedings of the 30th USENIX Security Symposium, Online. Available online: https:\/\/www.usenix.org\/conference\/usenixsecurity21\/presentation\/chen-yizheng."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"35403","DOI":"10.1109\/ACCESS.2020.2974752","article-title":"Adversarial Machine Learning Applied to Intrusion and Malware Scenarios: A Systematic Review","volume":"8","author":"Martins","year":"2020","journal-title":"IEEE Access"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 22\u201326). Towards Evaluating the Robustness of Neural Networks. Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA.","DOI":"10.1109\/SP.2017.49"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli, S.M., Fawzi, A., and Frossard, P. (2016, January 27\u201330). DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.282"},{"key":"ref_24","unstructured":"Cisse, M., Adi, Y., Neverova, N., and Keshet, J. (2022, March 07). Houdini: Fooling Deep Structured Prediction Models. Available online: http:\/\/arxiv.org\/abs\/1707.05373."},{"key":"ref_25","unstructured":"Xu, K. (2019, January 6\u20139). Structured adversarial attack: Towards general implementation and better interpretability. 
Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Chen, P.Y., Zhang, H., Sharma, Y., Yi, J., and Hsieh, C.J. (2017, January 3). ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. Proceedings of the 10th International Workshop on Artificial Intelligence and Security (AISec 2017), Dallas, TX, USA.","DOI":"10.1145\/3128572.3140448"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., and Swami, A. (2016, January 21\u201324). The Limitations of Deep Learning in Adversarial Settings. Proceedings of the 2016 IEEE European Symposium on Security and Privacy, Saarbruecken, Germany.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"767","DOI":"10.3390\/jcp1040037","article-title":"Polymorphic Adversarial Cyberattacks Using WGAN","volume":"1","author":"Chauhan","year":"2021","journal-title":"J. Cybersecur. Priv."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Xu, Y., Zhong, X., Yepes, A.J., and Lau, J.H. (2021, January 6\u201311). Grey-box Adversarial Attack And Defence For Sentiment Classification. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Online.","DOI":"10.18653\/v1\/2021.naacl-main.321"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"828","DOI":"10.1109\/TEVC.2019.2890858","article-title":"One Pixel Attack for Fooling Deep Neural Networks","volume":"23","author":"Su","year":"2019","journal-title":"IEEE Trans. Evol. Comput."},{"key":"ref_31","unstructured":"Dai, H., Li, H., Tian, T., Huang, X., Wang, L., Zhu, J., and Song, L. (2018, January 10\u201315). Adversarial Attack on Graph Structured Data. 
Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden. Available online: https:\/\/proceedings.mlr.press\/v80\/dai18b.html."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"110767","DOI":"10.1016\/j.jss.2020.110767","article-title":"Black-box adversarial sample generation based on differential evolution","volume":"170","author":"Lin","year":"2020","journal-title":"J. Syst. Softw."},{"key":"ref_33","unstructured":"Goodfellow, I. (2014, January 8\u201313). Generative Adversarial Nets. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_34","unstructured":"Arjovsky, M., Chintala, S., and Bottou, L. (2017, January 6\u201311). Wasserstein Generative Adversarial Networks. Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia."},{"key":"ref_35","unstructured":"Brendel, W., Rauber, J., and Bethge, M. (May, January 30). Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada. Conference Track Proceedings."},{"key":"ref_36","unstructured":"Cheng, M., Zhang, H., Hsieh, C.J., Le, T., Chen, P.Y., and Yi, J. (2019, January 6\u20139). Query-efficient hard-label black-box attack: An optimization-based approach. Proceedings of the 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Sharafaldin, I., Lashkari, A.H., and Ghorbani, A.A. (2018, January 22\u201324). Toward generating a new intrusion detection dataset and intrusion traffic characterization. Proceedings of the 4th International Conference on Information Systems Security and Privacy, Funchal, Portugal.","DOI":"10.5220\/0006639801080116"},{"key":"ref_38","unstructured":"Garcia, S., Parmisano, A., and Erquiaga, M.J. (2020). 
IoT-23: A labeled dataset with malicious and benign IoT network traffic. Zenodo."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"183","DOI":"10.1016\/0925-2312(91)90023-5","article-title":"Multilayer perceptrons for classification and regression","volume":"2","author":"Murtagh","year":"1991","journal-title":"Neurocomputing"},{"key":"ref_40","unstructured":"Snoek, J., Larochelle, H., and Adams, R.P. (2012, January 3\u20136). Practical Bayesian Optimization of Machine Learning Algorithms. Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1023\/A:1010933404324","article-title":"Random forests","volume":"5","author":"Breiman","year":"2001","journal-title":"Mach. Learn."},{"key":"ref_42","unstructured":"Powers, D.M.W. (2020). Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation. arXiv, Available online: http:\/\/arxiv.org\/abs\/2010.16061."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"1","DOI":"10.5121\/ijdkp.2015.5201","article-title":"A review on evaluation metrics for data classification evaluations","volume":"5","author":"Hossin","year":"2015","journal-title":"Int. J. Data Min. Knowl. Manag. Process"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Oliveira, N., Pra\u00e7a, I., Maia, E., and Sousa, O. (2021). Intelligent cyber attack detection and classification for network-based intrusion detection systems. Appl. Sci., 11.","DOI":"10.3390\/app11041674"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Shorey, T., Subbaiah, D., Goyal, A., Sakxena, A., and Mishra, A.K. (2018, January 19\u201322). Performance comparison and analysis of Slowloris, GoldenEye and Xerxes DDoS attack tools. 
Proceedings of the 2018 International Conference on Advances in Computing, Communications and Informatics, ICACCI 2018, Bangalore, India.","DOI":"10.1109\/ICACCI.2018.8554590"}],"container-title":["Future Internet"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-5903\/14\/4\/108\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:45:30Z","timestamp":1760136330000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-5903\/14\/4\/108"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,29]]},"references-count":45,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2022,4]]}},"alternative-id":["fi14040108"],"URL":"https:\/\/doi.org\/10.3390\/fi14040108","relation":{},"ISSN":["1999-5903"],"issn-type":[{"value":"1999-5903","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,29]]}}}