{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T01:56:12Z","timestamp":1771898172508,"version":"3.50.1"},"reference-count":61,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2023,6,9]],"date-time":"2023-06-09T00:00:00Z","timestamp":1686268800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"FCT\u2014Foundation for Science and Technology","award":["UIDB\/50022\/2020"],"award-info":[{"award-number":["UIDB\/50022\/2020"]}]},{"name":"IDMEC","award":["UIDB\/50022\/2020"],"award-info":[{"award-number":["UIDB\/50022\/2020"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Cyber-Physical Systems (CPS) are exposed to many security exploits because their cyber component enlarges the attack surface through remote accessibility and a lack of isolation. At the same time, attacks are growing in complexity, becoming more powerful and better at evading detection, which calls the real-world deployability of CPS into question. Researchers have therefore been developing robust techniques to secure these systems, covering attack prevention, attack detection, and attack mitigation while accounting for key security properties such as confidentiality, integrity, and availability. In this paper, we propose machine learning-based intelligent attack detection strategies, motivated by the failure of traditional signature-based techniques to detect zero-day and other complex attacks. 
Many researchers have evaluated the feasibility of learning models in the security domain and shown that they can detect both known and unknown (zero-day) attacks. However, these learning models are themselves vulnerable to adversarial attacks such as poisoning, evasion, and exploration attacks. To obtain a security mechanism that is both robust and intelligent, we propose an adversarial learning-based defense strategy that secures CPS and provides resilience against adversarial attacks. We evaluate the proposed strategy by implementing Random Forest (RF), Artificial Neural Network (ANN), and Long Short-Term Memory (LSTM) models on the ToN_IoT Network dataset and on an adversarial dataset generated with a Generative Adversarial Network (GAN) model.<\/jats:p>","DOI":"10.3390\/s23125459","type":"journal-article","created":{"date-parts":[[2023,6,9]],"date-time":"2023-06-09T08:37:33Z","timestamp":1686299853000},"page":"5459","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Defending the Defender: Adversarial Learning Based Defending Strategy for Learning Based Security Methods in Cyber-Physical Systems (CPS)"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5268-4430","authenticated-orcid":false,"given":"Zakir Ahmad","family":"Sheikh","sequence":"first","affiliation":[{"name":"Department of Computer Science and Information Technology, Central University of Jammu, Rahya Suchani, Bagla, Jammu 181143, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2833-2093","authenticated-orcid":false,"given":"Yashwant","family":"Singh","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Information Technology, Central University of Jammu, Rahya Suchani, Bagla, Jammu 181143, 
India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7676-9014","authenticated-orcid":false,"given":"Pradeep Kumar","family":"Singh","sequence":"additional","affiliation":[{"name":"STME, Narsee Monjee Institute of Management Studies (NMIMS) Deemed to be University, Maharashtra 400056, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8692-7338","authenticated-orcid":false,"given":"Paulo J. Sequeira","family":"Gon\u00e7alves","sequence":"additional","affiliation":[{"name":"IDMEC, Polytechnic Institute of Castelo Branco, 6000-084 Castelo Branco, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2023,6,9]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"313","DOI":"10.1016\/j.icte.2022.04.007","article-title":"Uniting cyber security and machine learning: Advantages, challenges and future research","volume":"8","author":"Wazid","year":"2022","journal-title":"ICT Express"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"302","DOI":"10.1016\/j.comcom.2022.07.007","article-title":"Intelligent and secure framework for critical infrastructure (CPS): Current trends, challenges, and future scope","volume":"193","author":"Ahmad","year":"2022","journal-title":"Comput. Commun."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"5103","DOI":"10.1109\/JIOT.2020.2975654","article-title":"Adversarial attacks and defenses on cyber-physical systems: A survey","volume":"7","author":"Li","year":"2020","journal-title":"IEEE Internet Things J."},{"key":"ref_4","unstructured":"Wang, Y., Mianjy, P., and Arora, R. (2021, January 18\u201324). Robust Learning for Data Poisoning Attacks. 
Proceedings of the 38th International Conference on Machine Learning, Virtual."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"8597","DOI":"10.1007\/s00500-019-03968-7","article-title":"A taxonomy on impact of label noise and feature noise using machine learning techniques","volume":"23","author":"Shanthini","year":"2019","journal-title":"Soft Comput."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"118347","DOI":"10.1016\/j.apenergy.2021.118347","article-title":"A Deep-Learning intelligent system incorporating data augmentation for Short-Term voltage stability assessment of power systems","volume":"308","author":"Li","year":"2022","journal-title":"Appl. Energy"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Stouffer, K., Stouffer, K., and Abrams, M. (2015). Guide to Industrial Control Systems (ICS), Security NIST Special Publication 800-82 Guide to Industrial Control Systems (ICS) Security.","DOI":"10.6028\/NIST.SP.800-82r2"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"6247","DOI":"10.1109\/JIOT.2020.3024800","article-title":"Intrusion Detection for Cyber-Physical Systems Using Generative Adversarial Networks in Fog Environment","volume":"8","author":"Kaddoum","year":"2021","journal-title":"IEEE Internet Things J."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"4862","DOI":"10.1109\/TSG.2022.3204796","article-title":"Detection of False Data Injection Attacks in Smart Grid: A Secure Federated Deep Learning Approach","volume":"13","author":"Li","year":"2022","journal-title":"IEEE Trans. Smart Grid"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Sarker, I.H., Abushark, Y.B., Alsolami, F., and Khan, A.I. (2020). IntruDTree: A machine learning based cyber security intrusion detection model. Symmetry, 12.","DOI":"10.20944\/preprints202004.0481.v1"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Sheikh, Z.A., Singh, Y., Tanwar, S., Sharma, R., and Turcanu, F. (2023). 
EISM-CPS: An Enhanced Intelligent Security Methodology for Cyber-Physical Systems through Hyper-Parameter Optimization. Mathematics, 11.","DOI":"10.3390\/math11010189"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3453158","article-title":"Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain","volume":"54","author":"Rosenberg","year":"2021","journal-title":"ACM Comput. Surv."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Jadidi, Z., Pal, S., Nayak, N., Selvakkumar, A., Chang, C.-C., Beheshti, M., and Jolfaei, A. (2022, January 25\u201328). Security of Machine Learning-Based Anomaly Detection in Cyber Physical Systems. Proceedings of the International Conference on Computer Communications and Networks (ICCCN), Honolulu, HI, USA.","DOI":"10.1109\/ICCCN54977.2022.9868845"},{"key":"ref_14","unstructured":"Boesch, G. (2023, January 03). What Is Adversarial Machine Learning? Attack Methods in 2023. [Online]. Available online: https:\/\/viso.ai\/deep-learning\/adversarial-machine-learning\/."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"2805","DOI":"10.1109\/TNNLS.2018.2886017","article-title":"Adversarial Examples: Attacks and Defenses for Deep Learning","volume":"30","author":"Yuan","year":"2019","journal-title":"IEEE Trans. Neural Networks Learn. Syst."},{"key":"ref_16","unstructured":"Fawzi, O., and Frossard, P. (2016). Universal adversarial perturbations. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Adate, A., and Saxena, R. (2017, January 20\u201322). Understanding How Adversarial Noise Affects Single Image Classification. Proceedings of the International Conference on Intelligent Information Technologies, Chennai, India.","DOI":"10.1007\/978-981-10-7635-0_22"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Pengcheng, L., Yi, J., and Zhang, L. (2018, January 17\u201320). Query-Efficient Black-Box Attack by Active Learning. 
Proceedings of the IEEE International Conference on Data Mining (ICDM), Singapore.","DOI":"10.1109\/ICDM.2018.00159"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Clements, J., Yang, Y., Sharma, A.A., Hu, H., and Lao, Y. (2021, January 5\u20137). Rallying Adversarial Techniques against Deep Learning for Network Security. Proceedings of the 2021 IEEE Symposium Series on Computational Intelligence (SSCI), Orlando, FL, USA.","DOI":"10.1109\/SSCI50451.2021.9660011"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"10327","DOI":"10.1109\/JIOT.2020.3048038","article-title":"Adversarial Attacks against Network Intrusion Detection in IoT Systems","volume":"8","author":"Qiu","year":"2021","journal-title":"IEEE Internet Things J."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., and Li, J. (2018, January 18\u201323). Boosting Adversarial Attacks with Momentum. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00957"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"953","DOI":"10.1109\/TDSC.2020.3014390","article-title":"Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-Task Training","volume":"19","author":"Wang","year":"2020","journal-title":"IEEE Trans. Dependable Secur. Comput."},{"key":"ref_23","unstructured":"Cisse, M., Adi, Y., Neverova, N., and Keshet, J. (2017, January 4\u20139). Houdini: Fooling deep structured visual and speech recognition models with adversarial examples. Proceedings of the 31st International Conference on Neural Information Processing Systems: NIPS\u201917, Long Beach, CA, USA."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Papernot, N., Mcdaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., and Swami, A. (2016, January 21\u201324). The Limitations of Deep Learning in Adversarial Settings. 
Proceedings of the 1st IEEE European Symposium on Security and Privacy, Saarbruecken, Germany.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"ref_25","unstructured":"Ilyas, A., Engstrom, L., Athalye, A., and Lin, J. (2018, January 10\u201315). Black-box Adversarial Attacks with Limited Queries and Information. Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Baluja, S., and Fischer, I. (2017). Adversarial Transformation Networks: Learning to Generate Adversarial Examples. arXiv.","DOI":"10.1609\/aaai.v32i1.11672"},{"key":"ref_27","unstructured":"Fawzi, A., and Frossard, P. (2016, January 27\u201330). DeepFool: A simple and accurate method to fool deep neural networks. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Chen, P. (2017, January 3). ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA.","DOI":"10.1145\/3128572.3140448"},{"key":"ref_29","unstructured":"Kurakin, A., Goodfellow, I.J., and Bengio, S. (2017, January 24\u201326). Adversarial machine learning at scale. Proceedings of the 5th International Conference on Learning Representations, ICLR 2017, Toulon, France."},{"key":"ref_30","unstructured":"Sarkar, S., and Mahbub, U. (2017). UPSET and ANGRI: Breaking High Performance Image Classifiers. arXiv."},{"key":"ref_31","unstructured":"Zhu, C., Ronny Huang, W., Shafahi, A., Li, H., Taylor, G., Studer, C., and Goldstein, T. (2019, January 10\u201315). Transferable clean-label poisoning attacks on deep neural nets. 
Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_32","first-page":"97","article-title":"Support vector machines under adversarial label noise","volume":"20","author":"Biggio","year":"2011","journal-title":"J. Mach. Learn. Res."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"4503","DOI":"10.1007\/s10489-020-02086-4","article-title":"Label flipping attacks against Naive Bayes on spam filtering systems","volume":"51","author":"Zhang","year":"2021","journal-title":"Appl. Intell."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"5019","DOI":"10.1007\/s10462-020-09814-9","article-title":"Applicability of Machine Learning in Spam and Phishing Email Filtering: Review and Approaches","volume":"53","author":"Gangavarapu","year":"2020","journal-title":"Artif. Intell. Rev."},{"key":"ref_35","first-page":"5","article-title":"Label Sanitization Against Label Flipping Poisoning Attacks","volume":"Volume 11329","author":"Paudice","year":"2019","journal-title":"ECML PKDD 2018 Workshops\u2014ECML PKDD 2018"},{"key":"ref_36","first-page":"870","article-title":"Adversarial label flips attack on support vector machines","volume":"242","author":"Xiao","year":"2012","journal-title":"Front. Artif. Intell. Appl."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1016\/j.neucom.2014.08.081","article-title":"Support vector machines under adversarial label contamination","volume":"160","author":"Xiao","year":"2015","journal-title":"Neurocomputing"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"14781","DOI":"10.1007\/s00521-020-04831-9","article-title":"On defending against label flipping attacks on malware detection systems","volume":"32","author":"Taheri","year":"2020","journal-title":"Neural Comput. Appl."},{"key":"ref_39","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2014, January 14\u201316). 
Intriguing properties of neural networks. Proceedings of the 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada."},{"key":"ref_40","unstructured":"Goodfellow, I.J., Shlens, J., and Szegedy, C. (2015, January 7\u20139). Explaining and harnessing adversarial examples. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA."},{"key":"ref_41","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (May, January 30). Towards deep learning models resistant to adversarial attacks. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 22\u201326). Towards Evaluating the Robustness of Neural Networks. Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA.","DOI":"10.1109\/SP.2017.49"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Lin, Z., Shi, Y., and Xue, Z. (2022, January 16\u201319). IDSGAN: Generative Adversarial Networks for Attack Generation Against Intrusion Detection. Proceedings of the PAKDD 2022: Advances in Knowledge Discovery and Data Mining, Chengdu, China.","DOI":"10.1007\/978-3-031-05981-0_7"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"54371","DOI":"10.1109\/ACCESS.2020.2981415","article-title":"A Survey on Decentralized Consensus Mechanisms for Cyber Physical Systems","volume":"8","author":"Bodkhe","year":"2020","journal-title":"IEEE Access"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Papernot, N., Mcdaniel, P., and Goodfellow, I. (2017, January 2\u20136). Practical Black-Box Attacks against Machine Learning. 
Proceedings of the ASIA CCS \u201917: 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab Emirates.","DOI":"10.1145\/3052973.3053009"},{"key":"ref_46","unstructured":"Dziugaite, G.K., and Roy, D.M. (2016). A study of the effect of JPG compression on adversarial images. arXiv."},{"key":"ref_47","unstructured":"Hosseini, H., Chen, Y., Kannan, S., Zhang, B., and Poovendran, R. (2017). Blocking Transferability of Adversarial Examples in Black-Box Learning Systems. arXiv."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., and Yuille, A. (2017, January 22-29). Adversarial Examples for Semantic Segmentation and Object Detection. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.153"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Soll, M., Hinz, T., Magg, S., and Wermter, S. (2019, January 17\u201319). Evaluating Defensive Distillation for Defending Text Processing Neural Networks Against Adversarial Examples. Proceedings of the 28th International Conference on Artificial Neural Networks, Munich, Germany.","DOI":"10.1007\/978-3-030-30508-6_54"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Lyu, C. (2015, January 14\u201317). A Unified Gradient Regularization Family for Adversarial Examples. Proceedings of the 2015 IEEE International Conference on Data Mining, Atlantic City, NJ, USA.","DOI":"10.1109\/ICDM.2015.84"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Xu, W., Evans, D., and Qi, Y. (2018). Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. 
arXiv.","DOI":"10.14722\/ndss.2018.23198"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"72","DOI":"10.1109\/MCE.2020.3033270","article-title":"A Smart Mask for Active Defense Against Coronaviruses and Other Airborne Pathogens","volume":"10","author":"Kalavakonda","year":"2020","journal-title":"IEEE Consum. Electron. Mag."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"332","DOI":"10.1109\/TCYB.2018.2886012","article-title":"Rotated Sphere Haar Wavelet and Deep Contractive Auto-Encoder Network with Fuzzy Gaussian SVM for Pilot\u2019s Pupil Center Detection","volume":"51","author":"Wu","year":"2019","journal-title":"IEEE Trans. Cybern."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"149168","DOI":"10.1109\/ACCESS.2019.2947047","article-title":"A Comprehensive Review of Flux Barriers in Interior Permanent Magnet Synchronous Machines","volume":"7","author":"Sayed","year":"2019","journal-title":"IEEE Access"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"2044","DOI":"10.1109\/TIFS.2022.3175603","article-title":"Multi-Discriminator Sobolev Defense-GAN Against Adversarial Attacks for End-to-End Speech Systems","volume":"17","author":"Esmaeilpour","year":"2022","journal-title":"IEEE Trans. Inf. Forensics Secur."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Liao, F., Liang, M., Dong, Y., Pang, T., and Hu, X. (2018, January 18\u201323). Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00191"},{"key":"ref_57","unstructured":"Grosse, K., Manoharan, P., Papernot, N., Backes, M., and McDaniel, P. (2017). On the (Statistical) Detection of Adversarial Examples. 
arXiv."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"165130","DOI":"10.1109\/ACCESS.2020.3022862","article-title":"TON-IoT telemetry dataset: A new generation dataset of IoT and IIoT for data-driven intrusion detection systems","volume":"8","author":"Alsaedi","year":"2020","journal-title":"IEEE Access"},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"102994","DOI":"10.1016\/j.scs.2021.102994","article-title":"A new distributed architecture for evaluating AI-based security systems at the edge: Network TON_IoT datasets","volume":"72","author":"Moustafa","year":"2021","journal-title":"Sustain. Cities Soc."},{"key":"ref_60","unstructured":"Zantedeschi, V., Nicolae, M.I., and Rawat, A. (2017). AISec\u201917: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Association for Computing Machinery."},{"key":"ref_61","unstructured":"(2023, May 04). Unreal Person, This Person Does Not Exist. Available online: https:\/\/www.unrealperson.com\/."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/12\/5459\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:51:38Z","timestamp":1760125898000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/12\/5459"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,9]]},"references-count":61,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2023,6]]}},"alternative-id":["s23125459"],"URL":"https:\/\/doi.org\/10.3390\/s23125459","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,6,9]]}}}