{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,25]],"date-time":"2026-02-25T15:25:13Z","timestamp":1772033113987,"version":"3.50.1"},"reference-count":35,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2025,4,27]],"date-time":"2025-04-27T00:00:00Z","timestamp":1745712000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>As AI becomes indispensable in healthcare, its vulnerability to adversarial attacks demands serious attention. Even minimal changes to the input data can mislead Deep Learning (DL) models, leading to critical errors in diagnosis and endangering patient safety. In this study, we developed an optimized Multi-layer Perceptron (MLP) model for breast cancer classification and exposed its cybersecurity vulnerabilities through a real-world-inspired adversarial attack. Unlike prior studies, we conducted a quantitative evaluation of the impact of a Fast Gradient Sign Method (FGSM) attack on an optimized DL model designed for breast cancer detection, demonstrating how minor perturbations reduced the model\u2019s accuracy from 98% to 53% and led to a substantial increase in classification errors, as revealed by the confusion matrix. Our findings demonstrate how an adversarial attack can significantly compromise the performance of a healthcare AI model, underscoring the importance of aligning AI development with cybersecurity readiness. This research highlights the need to design resilient AI by integrating rigorous cybersecurity practices at every stage of the AI development lifecycle, i.e., before, during, and after model engineering, to prioritize the effectiveness, accuracy, and safety of AI in real-world healthcare environments.<\/jats:p>","DOI":"10.3390\/bdcc9050114","type":"journal-article","created":{"date-parts":[[2025,4,28]],"date-time":"2025-04-28T04:25:50Z","timestamp":1745814350000},"page":"114","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["From Accuracy to Vulnerability: Quantifying the Impact of Adversarial Perturbations on Healthcare AI Models"],"prefix":"10.3390","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3463-0484","authenticated-orcid":false,"given":"Sarfraz","family":"Brohi","sequence":"first","affiliation":[{"name":"School of Computing and Creative Technologies, University of the West of England, Bristol BS16 1QY, UK"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8009-6580","authenticated-orcid":false,"given":"Qurat-ul-ain","family":"Mastoi","sequence":"additional","affiliation":[{"name":"School of Computing and Creative Technologies, University of the West of England, Bristol BS16 1QY, UK"}]}],"member":"1968","published-online":{"date-parts":[[2025,4,27]]},"reference":[{"key":"ref_1","unstructured":"WHO (2024, June 21). Breast Cancer. Available online: https:\/\/www.who.int\/news-room\/fact-sheets\/detail\/breast-cancer."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Hamid, R., and Brohi, S. (2024). A Review of Large Language Models in Healthcare: Taxonomy, Threats, Vulnerabilities, and Framework. Big Data Cogn. Comput., 8.","DOI":"10.3390\/bdcc8110161"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Mastoi, Q., Latif, S., Brohi, S., Ahmad, J., Alqhatani, A., Alshehri, M.S., Al Mazroa, A., and Ullah, R. (2025). 
Explainable AI in medical imaging: An interpretable and collaborative federated learning model for brain tumor classification. Front. Oncol., 15.","DOI":"10.3389\/fonc.2025.1535478"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"100602","DOI":"10.1016\/j.eij.2024.100602","article-title":"Enhanced breast cancer detection and classification via CAMR-Gabor filters and LSTM: A deep Learning-Based method","volume":"29","author":"Kumar","year":"2025","journal-title":"Egypt. Inform. J."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"e1413","DOI":"10.1016\/j.crad.2024.08.002","article-title":"Deep learning-based computer-aided detection of ultrasound in breast cancer diagnosis: A systematic review and meta-analysis","volume":"79","author":"Li","year":"2024","journal-title":"Clin. Radiol."},{"key":"ref_6","unstructured":"Ernawan, F., Fakhreldin, M., and Saryoko, A. (2023, January 16). Deep Learning Method Based for Breast Cancer Classification. Proceedings of the 2023 International Conference on Information Technology Research and Innovation (ICITRI), Jakarta, Indonesia."},{"key":"ref_7","unstructured":"Goodfellow, I., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"14410","DOI":"10.1109\/ACCESS.2018.2807385","article-title":"Threat of adversarial attacks on deep learning in computer vision: A survey","volume":"6","author":"Akhtar","year":"2018","journal-title":"IEEE Access"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"317","DOI":"10.1016\/j.patcog.2018.07.023","article-title":"Wild patterns: Ten years after the rise of adversarial machine learning","volume":"84","author":"Biggio","year":"2018","journal-title":"Pattern Recognit."},{"key":"ref_10","first-page":"20180083","article-title":"Algorithms that remember: Model inversion attacks and data protection law","volume":"376","author":"Veale","year":"2018","journal-title":"Philos. 
Trans. A Math. Phys. Eng. Sci."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Tsai, M.J., Lin, P.Y., and Lee, M.E. (2023). Adversarial Attacks on Medical Image Classification. Cancers, 15.","DOI":"10.3390\/cancers15174228"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Muoka, G.W., Yi, D., Ukwuoma, C.C., Mutale, A., Ejiyi, C.J., Mzee, A.K., Gyarteng, E.S.A., Alqahtani, A., and Al-antari, M.A. (2023). A Comprehensive Review and Analysis of Deep Learning-Based Medical Image Adversarial Attack and Defense. Mathematics, 11.","DOI":"10.3390\/math11204272"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"6433","DOI":"10.1126\/science.aaw4399","article-title":"Adversarial attacks on medical machine learning","volume":"363","author":"Finlayson","year":"2019","journal-title":"Science"},{"key":"ref_14","unstructured":"Bishop, C.M. (2006). Pattern Recognition and Machine Learning, Springer."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Kurakin, A., Goodfellow, I., and Bengio, S. (2017). Adversarial examples in the physical world. arXiv.","DOI":"10.1201\/9781351251389-8"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli, S.M., Fawzi, A., and Frossard, P. (2016, January 27\u201330). DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.282"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 22\u201326). Towards Evaluating the Robustness of Neural Networks. Proceedings of the IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA.","DOI":"10.1109\/SP.2017.49"},{"key":"ref_18","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2019). Towards deep learning models resistant to adversarial attacks. 
arXiv."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"828","DOI":"10.1109\/TEVC.2019.2890858","article-title":"One pixel attack for fooling deep neural networks","volume":"23","author":"Su","year":"2019","journal-title":"IEEE Trans. Evol. Comput."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Xu, W., Evans, D., and Qi, Y. (2017). Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv.","DOI":"10.14722\/ndss.2018.23198"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"2805","DOI":"10.1109\/TNNLS.2018.2886017","article-title":"Adversarial examples: Attacks and defenses for deep learning","volume":"30","author":"Yuan","year":"2019","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_22","unstructured":"Koh, P.W., and Liang, P. (2020). Understanding black-box predictions via influence functions. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Fredrikson, M., Jha, S., and Ristenpart, T. (2015, January 12\u201316). Model Inversion Attacks That Exploit Confidence Information and Basic Countermeasures. Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS \u201915), Denver, CO, USA.","DOI":"10.1145\/2810103.2813677"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Jagielski, M., Oprea, A., Biggio, B., Liu, C., Nita-Rotaru, C., and Li, B. (2018, January 20\u201324). Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. Proceedings of the IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA.","DOI":"10.1109\/SP.2018.00057"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1561\/0400000042","article-title":"The algorithmic foundations of differential privacy","volume":"9","author":"Dwork","year":"2014","journal-title":"Found. Trends Theor. Comput. 
Sci."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Song, S., Chaudhuri, K., and Sarwate, A.D. (2013, January 3\u20135). Stochastic Gradient Descent with Differentially Private Updates. Proceedings of the 2013 IEEE Global Conference on Signal and Information Processing, Austin, TX, USA.","DOI":"10.1109\/GlobalSIP.2013.6736861"},{"key":"ref_27","unstructured":"Tramer, F., Zhang, F., Juels, A., Reiter, M.K., and Ristenpart, T. (2016, January 10\u201312). Stealing Machine Learning Models via Prediction APIs. Proceedings of the USENIX Security Symposium, Austin, TX, USA."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Orekondy, T., Schiele, B., and Fritz, M. (2019, January 15\u201320). Knockoff Nets: Stealing Functionality of Black-Box Models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00509"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Wu, X., Jha, S., and Swami, A. (2016, January 22\u201326). Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA.","DOI":"10.1109\/SP.2016.41"},{"key":"ref_30","unstructured":"Zhang, H., Yu, Y., Jiao, J., Xing, E., Ghaoui, L.E., and Jordan, M. (2019). Theoretically principled trade-off between robustness and accuracy. arXiv."},{"key":"ref_31","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2014). Intriguing properties of neural networks. arXiv."},{"key":"ref_32","unstructured":"Wolberg, W., Mangasarian, O., and Street, W. (2024, March 08). Breast Cancer Wisconsin (Diagnostic). 
Available online: https:\/\/archive.ics.uci.edu\/dataset\/17\/breast+cancer+wisconsin+diagnostic."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"861","DOI":"10.1117\/12.148698","article-title":"Nuclear feature extraction for breast tumor diagnosis","volume":"1905","author":"Street","year":"1993","journal-title":"Biomed. Image Process. Biomed. Vis."},{"key":"ref_34","unstructured":"Athalye, A., Carlini, N., and Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv."},{"key":"ref_35","unstructured":"Shafahi, A., Najibi, M., and Goldstein, T. (2019). Adversarial training for free!. arXiv."}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/5\/114\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T17:22:35Z","timestamp":1760030555000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/5\/114"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,27]]},"references-count":35,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2025,5]]}},"alternative-id":["bdcc9050114"],"URL":"https:\/\/doi.org\/10.3390\/bdcc9050114","relation":{},"ISSN":["2504-2289"],"issn-type":[{"value":"2504-2289","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,4,27]]}}}