{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T22:24:25Z","timestamp":1776378265849,"version":"3.51.2"},"reference-count":43,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2024,1,16]],"date-time":"2024-01-16T00:00:00Z","timestamp":1705363200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],
"abstract":"<jats:p>This study evaluated the generation of adversarial examples and the subsequent robustness of an image classification model. The attacks were performed using the Fast Gradient Sign Method, the Projected Gradient Descent method, and the Carlini and Wagner attack to perturb the original images and analyze their impact on the model\u2019s classification accuracy. Additionally, image manipulation techniques were investigated as defensive measures against adversarial attacks. The results highlighted the model\u2019s vulnerability to adversarial examples: the Fast Gradient Sign Method effectively altered the original classifications, while the Carlini and Wagner method proved less effective. Promising approaches such as noise reduction, image compression, and Gaussian blurring were presented as effective countermeasures. These findings underscore the importance of addressing the vulnerability of machine learning models and the need to develop robust defenses against adversarial examples. This article emphasizes the urgency of addressing the threat posed by adversarial examples in machine learning models, highlighting the relevance of implementing effective countermeasures and image manipulation techniques to mitigate the effects of adversarial attacks. These efforts are crucial to safeguarding model integrity and trust in an environment marked by constantly evolving hostile threats. An average 25% decrease in accuracy was observed for the VGG16 model when exposed to the Fast Gradient Sign Method and Projected Gradient Descent attacks, and an even more significant 35% decrease with the Carlini and Wagner method.<\/jats:p>",
"DOI":"10.3390\/bdcc8010008","type":"journal-article","created":{"date-parts":[[2024,1,16]],"date-time":"2024-01-16T04:56:59Z","timestamp":1705381019000},"page":"8","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":42,"title":["Evaluating the Robustness of Deep Learning Models against Adversarial Attacks: An Analysis with FGSM, PGD and CW"],"prefix":"10.3390","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5421-7710","authenticated-orcid":false,"given":"William","family":"Villegas-Ch","sequence":"first","affiliation":[{"name":"Escuela de Ingenier\u00eda en Ciberseguridad, Facultad de Ingenier\u00edas Ciencias Aplicadas, Universidad de Las Am\u00e9ricas, Quito 170125, Ecuador"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4143-2515","authenticated-orcid":false,"given":"Angel","family":"Jaramillo-Alc\u00e1zar","sequence":"additional","affiliation":[{"name":"Escuela de Ingenier\u00eda en Ciberseguridad, Facultad de Ingenier\u00edas Ciencias Aplicadas, Universidad de Las Am\u00e9ricas, Quito 170125, Ecuador"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5000-864X","authenticated-orcid":false,"given":"Sergio","family":"Luj\u00e1n-Mora","sequence":"additional","affiliation":[{"name":"Departamento de Lenguajes y Sistemas Inform\u00e1ticos, Universidad de Alicante, 03690 Alicante, Spain"}]}],"member":"1968","published-online":{"date-parts":[[2024,1,16]]},
"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"1040","DOI":"10.1016\/j.dcan.2021.11.001","article-title":"DroidEnemy: Battling Adversarial Example Attacks for Android Malware Detection","volume":"8","author":"Bala","year":"2022","journal-title":"Digit. Commun. Netw."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"653","DOI":"10.1109\/JSYST.2019.2906120","article-title":"Adversarial-Example Attacks Toward Android Malware Detection System","volume":"14","author":"Li","year":"2020","journal-title":"IEEE Syst. J."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Park, S., and So, J. (2020). On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification. Appl. Sci., 10.","DOI":"10.3390\/app10228079"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"107141","DOI":"10.1016\/j.knosys.2021.107141","article-title":"Improving Adversarial Robustness of Deep Neural Networks by Using Semantic Information","volume":"226","author":"Wang","year":"2021","journal-title":"Knowl. Based Syst."},{"key":"ref_5","first-page":"8319249","article-title":"Complete Defense Framework to Protect Deep Neural Networks against Adversarial Examples","volume":"2020","author":"Sun","year":"2020","journal-title":"Math. Probl. Eng."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"103987","DOI":"10.1109\/ACCESS.2022.3210179","article-title":"NSL-MHA-CNN: A Novel CNN Architecture for Robust Diabetic Retinopathy Prediction Against Adversarial Attacks","volume":"10","author":"Daanouni","year":"2022","journal-title":"IEEE Access"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Xu, J. (2020, January 16\u201318). Generate Adversarial Examples by Nesterov-Momentum Iterative Fast Gradient Sign Method. Proceedings of the IEEE International Conference on Software Engineering and Service Sciences, ICSESS, Beijing, China.","DOI":"10.1109\/ICSESS49938.2020.9237700"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Muncsan, T., and Kiss, A. (2021, January 2\u20133). Transferability of Fast Gradient Sign Method. Proceedings of the Advances in Intelligent Systems and Computing (AISC), Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-030-55187-2_3"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"109289","DOI":"10.1109\/ACCESS.2022.3213667","article-title":"Boosting Out-of-Distribution Image Detection with Epistemic Uncertainty","volume":"10","author":"Oh","year":"2022","journal-title":"IEEE Access"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Jethanandani, M., and Tang, D. (2020, January 21). Adversarial Attacks against LipNet: End-to-End Sentence Level Lipreading. Proceedings of the 2020 IEEE Symposium on Security and Privacy Workshops, SPW 2020, San Francisco, CA, USA.","DOI":"10.1109\/SPW50608.2020.00020"},
{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Devitt, D.A., Apodaca, L., Bird, B., Dawyot, J.P., Fenstermaker, L., and Petrie, M.D. (2022). Assessing the Impact of a Utility Scale Solar Photovoltaic Facility on a Down Gradient Mojave Desert Ecosystem. Land, 11.","DOI":"10.3390\/land11081315"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"092301","DOI":"10.1063\/5.0101434","article-title":"Near-Cancellation of up- and down-Gradient Momentum Transport in Forced Magnetized Shear-Flow Turbulence","volume":"29","author":"Tripathi","year":"2022","journal-title":"Phys. Plasmas"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"2569","DOI":"10.1109\/TNNLS.2021.3106961","article-title":"Exploring Adversarial Attack in Spiking Neural Networks with Spike-Compatible Gradient","volume":"34","author":"Liang","year":"2021","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_14","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (May, January 30). Towards Deep Learning Models Resistant to Adversarial Attacks. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018\u2014Conference Track Proceedings, Vancouver, BC, Canada."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"346","DOI":"10.1016\/j.eng.2019.12.012","article-title":"Adversarial Attacks and Defenses in Deep Learning","volume":"6","author":"Ren","year":"2020","journal-title":"Engineering"},{"key":"ref_16","unstructured":"Buckman, J., Roy, A., Raffel, C., and Goodfellow, I. (May, January 30). Thermometer Encoding: One Hot Way to Resist Adversarial Examples. Proceedings of the 6th International Conference on Learning Representations, ICLR 2018\u2014Conference Track Proceedings, Vancouver, BC, Canada."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Sharif, M., Bauer, L., and Reiter, M.K. (2018, January 18\u201322). On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00211"},
{"key":"ref_18","first-page":"103227","article-title":"AB-FGSM: AdaBelief Optimizer and FGSM-Based Approach to Generate Adversarial Examples","volume":"68","author":"Wang","year":"2022","journal-title":"J. Inf. Secur. Appl."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Cheng, M., Chen, P.Y., Liu, S., Chang, S., Hsieh, C.J., and Das, P. (2021, January 2\u20139). Self-Progressing Robust Training. Proceedings of the 35th AAAI Conference on Artificial Intelligence, AAAI 2021, Virtual.","DOI":"10.1609\/aaai.v35i8.16874"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"259","DOI":"10.1111\/jerd.12844","article-title":"Applications of Artificial Intelligence in Dentistry: A Comprehensive Review","volume":"34","author":"Pecho","year":"2022","journal-title":"J. Esthet. Restor. Dent."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Vardhan, K.V., Sarada, M., and Srinivasulu, A. (2021, January 1\u20133). Novel Modular Adder Based on Thermometer Coding for Residue Number Systems Applications. Proceedings of the 13th International Conference on Electronics, Computers and Artificial Intelligence, ECAI 2021, Pitesti, Romania.","DOI":"10.1109\/ECAI52376.2021.9515085"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Gupta, S., Hanson, C., Gunter, C.A., Frank, M., Liebovitz, D., and Malin, B. (2013, January 4\u20137). Modeling and Detecting Anomalous Topic Access. Proceedings of the IEEE ISI 2013\u20142013 IEEE International Conference on Intelligence and Security Informatics: Big Data, Emergent Threats, and Decision-Making in Security Informatics, Seattle, WA, USA.","DOI":"10.1109\/ISI.2013.6578795"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"108101","DOI":"10.1103\/PhysRevLett.110.108101","article-title":"Lift and Down-Gradient Shear-Induced Diffusion in Red Blood Cell Suspensions","volume":"110","author":"Grandchamp","year":"2013","journal-title":"Phys. Rev. Lett."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"108249","DOI":"10.1016\/j.patcog.2021.108249","article-title":"Deep Image Prior Based Defense against Adversarial Examples","volume":"122","author":"Dai","year":"2022","journal-title":"Pattern Recognit."},
{"key":"ref_25","doi-asserted-by":"crossref","first-page":"157161","DOI":"10.1109\/ACCESS.2020.3014692","article-title":"Image Recognition Technology Based on Neural Network","volume":"8","author":"Chen","year":"2020","journal-title":"IEEE Access"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1346","DOI":"10.1080\/08839514.2021.1978149","article-title":"Attack Analysis of Face Recognition Authentication Systems Using Fast Gradient Sign Method","volume":"35","author":"Musa","year":"2021","journal-title":"Appl. Artif. Intell."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"79561","DOI":"10.1109\/ACCESS.2020.2988786","article-title":"WordChange: Adversarial Examples Generation Approach for Chinese Text Classification","volume":"8","author":"Nuo","year":"2020","journal-title":"IEEE Access"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"50","DOI":"10.1016\/j.ins.2022.08.031","article-title":"Compound Adversarial Examples in Deep Neural Networks","volume":"613","author":"Li","year":"2022","journal-title":"Inf. Sci."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"509","DOI":"10.1007\/s10489-022-03373-y","article-title":"Revisiting Model\u2019s Uncertainty and Confidences for Adversarial Example Detection","volume":"53","author":"Aldahdooh","year":"2023","journal-title":"Appl. Intell."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"2332","DOI":"10.1007\/s10489-022-03469-5","article-title":"Adversarial Example Generation with Adabelief Optimizer and Crop Invariance","volume":"53","author":"Yang","year":"2023","journal-title":"Appl. Intell."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"271","DOI":"10.32604\/iasc.2022.021296","article-title":"Restoration of Adversarial Examples Using Image Arithmetic Operations","volume":"32","author":"Ali","year":"2022","journal-title":"Intell. Autom. Soft Comput."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"152766","DOI":"10.1109\/ACCESS.2019.2948658","article-title":"Assessing Optimizer Impact on DNN Model Sensitivity to Adversarial Examples","volume":"7","author":"Wang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Kokalj-Filipovic, S., Miller, R., and Morman, J. (2019, January 15\u201317). Targeted Adversarial Examples against RF Deep Classifiers. Proceedings of the WiseML 2019\u2014Proceedings of the 2019 ACM Workshop on Wireless Security and Machine Learning, Miami, FL, USA.","DOI":"10.1145\/3324921.3328792"},
{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Pujari, M., Cherukuri, B.P., Javaid, A.Y., and Sun, W. (2022, January 27\u201329). An Approach to Improve the Robustness of Machine Learning Based Intrusion Detection System Models Against the Carlini-Wagner Attack. Proceedings of the 2022 IEEE International Conference on Cyber Security and Resilience, CSR 2022, Rhodes, Greece.","DOI":"10.1109\/CSR54599.2022.9850306"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/j.procbio.2022.11.006","article-title":"Predicting the Influence of Combined Oxygen and Glucose Gradients Based on Scale-down and Modelling Approaches for the Scale-up of Penicillin Fermentations","volume":"124","author":"Janoska","year":"2023","journal-title":"Process Biochem."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"1101","DOI":"10.1007\/s10489-022-03437-z","article-title":"Generate Adversarial Examples by Adaptive Moment Iterative Fast Gradient Sign Method","volume":"53","author":"Zhang","year":"2023","journal-title":"Appl. Intell."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"151103","DOI":"10.1109\/ACCESS.2019.2946461","article-title":"Generating Adversarial Examples in One Shot with Image-To-Image Translation GAN","volume":"7","author":"Zhang","year":"2019","journal-title":"IEEE Access"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"4403","DOI":"10.1007\/s10462-021-10125-w","article-title":"Adversarial Example Detection for DNN Models: A Review and Experimental Comparison","volume":"55","author":"Aldahdooh","year":"2022","journal-title":"Artif. Intell. Rev."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"63368","DOI":"10.1109\/ACCESS.2020.2985231","article-title":"MultiPAD: A Multivariant Partition-Based Method for Audio Adversarial Examples Detection","volume":"8","author":"Guo","year":"2020","journal-title":"IEEE Access"},{"key":"ref_40","first-page":"102694","article-title":"NaturalAE: Natural and Robust Physical Adversarial Examples for Object Detectors","volume":"57","author":"Xue","year":"2021","journal-title":"J. Inf. Secur. Appl."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"4601","DOI":"10.1007\/s10586-022-03702-3","article-title":"Performance Evaluation of Deep Neural Network on Malware Detection: Visual Feature Approach","volume":"25","author":"Anandhi","year":"2022","journal-title":"Clust. Comput."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Hlihor, P., Volpi, R., and Malag\u00f2, L. (2020, January 19\u201321). Evaluating the Robustness of Defense Mechanisms Based on AutoEncoder Reconstructions against Carlini-Wagner Adversarial Attacks. Proceedings of the Northern Lights Deep Learning Workshop 2020, Troms\u00f8, Norway.","DOI":"10.7557\/18.5173"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"225","DOI":"10.1016\/j.future.2021.08.009","article-title":"STPD: Defending against \u21130-Norm Attacks with Space Transformation","volume":"126","author":"Chen","year":"2022","journal-title":"Future Gener. Comput. Syst."}],
"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/8\/1\/8\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T13:47:42Z","timestamp":1760104062000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/8\/1\/8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,16]]},"references-count":43,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,1]]}},"alternative-id":["bdcc8010008"],"URL":"https:\/\/doi.org\/10.3390\/bdcc8010008","relation":{},"ISSN":["2504-2289"],"issn-type":[{"value":"2504-2289","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,16]]}}}