{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,29]],"date-time":"2025-10-29T06:25:42Z","timestamp":1761719142215,"version":"build-2065373602"},"reference-count":38,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2022,4,22]],"date-time":"2022-04-22T00:00:00Z","timestamp":1650585600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001691","name":"Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["JP21H03545"],"award-info":[{"award-number":["JP21H03545"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Algorithms"],"abstract":"<jats:p>Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called universal adversarial perturbation (UAP), are a realistic security threat to the practical application of a DNN for medical imaging. Given that computer-based systems are generally operated under a black-box condition in which only input queries are allowed and outputs are accessible, the impact of UAPs seems to be limited because well-used algorithms for generating UAPs are limited to white-box conditions in which adversaries can access model parameters. Nevertheless, we propose a method for generating UAPs using a simple hill-climbing search based only on DNN outputs to demonstrate that UAPs are easily generatable using a relatively small dataset under black-box conditions with representative DNN-based medical image classifications. Black-box UAPs can be used to conduct both nontargeted and targeted attacks. Overall, the black-box UAPs showed high attack success rates (40\u201390%). The vulnerability of the black-box UAPs was observed in several model architectures. The results indicate that adversaries can also generate UAPs through a simple procedure under the black-box condition to foil or control diagnostic medical imaging systems based on DNNs, and that UAPs are a more serious security threat.<\/jats:p>","DOI":"10.3390\/a15050144","type":"journal-article","created":{"date-parts":[[2022,4,23]],"date-time":"2022-04-23T08:14:06Z","timestamp":1650701646000},"page":"144","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["Simple Black-Box Universal Adversarial Attacks on Deep Neural Networks for Medical Image Classification"],"prefix":"10.3390","volume":"15","author":[{"given":"Kazuki","family":"Koga","sequence":"first","affiliation":[{"name":"Department of Bioscience and Bioinformatics, Kyushu Institute of Technology, Fukuoka 820-8502, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6355-1366","authenticated-orcid":false,"given":"Kazuhiro","family":"Takemoto","sequence":"additional","affiliation":[{"name":"Department of Bioscience and Bioinformatics, Kyushu Institute of Technology, Fukuoka 820-8502, Japan"}]}],"member":"1968","published-online":{"date-parts":[[2022,4,22]]},"reference":[{"key":"ref_1","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., and Fergus, R. (2014, January 14\u201316). Intriguing properties of neural networks. 
Proceedings of the 2nd International Conference on Learning Representations, {ICLR} 2014, Banff, AB, Canada."},{"key":"ref_2","unstructured":"Goodfellow, I.J., Shlens, J., and Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"2805","DOI":"10.1109\/TNNLS.2018.2886017","article-title":"Adversarial examples: Attacks and defenses for deep learning","volume":"30","author":"Yuan","year":"2019","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"635","DOI":"10.1109\/JPROC.2021.3050042","article-title":"Optimism in the face of adversity: Understanding and improving deep learning through adversarial robustness","volume":"109","author":"Modas","year":"2021","journal-title":"Proc. IEEE"},{"key":"ref_5","unstructured":"Matyasko, A., and Chau, L.-P. (2018, January 3\u20138). Improved network robustness with adversary critic. Proceedings of the 32nd International Conference on Neural Information Processing Systems 2018, Montreal, QC, Canada."},{"key":"ref_6","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (May, January 30). Towards deep learning models resistant to adversarial attacks. Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 22\u201326). Towards evaluating the robustness of neural networks. Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA.","DOI":"10.1109\/SP.2017.49"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1016\/j.media.2017.07.005","article-title":"A survey on deep learning in medical image analysis","volume":"42","author":"Litjens","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"e271","DOI":"10.1016\/S2589-7500(19)30123-2","article-title":"A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis","volume":"1","author":"Liu","year":"2019","journal-title":"Lancet Digit. Health"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"305","DOI":"10.1038\/s42256-020-0186-1","article-title":"Secure, privacy-preserving and federated machine learning in medical imaging","volume":"2","author":"Kaissis","year":"2020","journal-title":"Nat. Mach. Intell."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1287","DOI":"10.1126\/science.aaw4399","article-title":"Adversarial attacks on medical machine learning","volume":"363","author":"Finlayson","year":"2019","journal-title":"Science"},{"key":"ref_12","unstructured":"Moosavi-Dezfooli, S.M., Fawzi, A., Fawzi, O., and Frossard, P. (July, January 21). Universal adversarial perturbations. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Hirano, H., and Takemoto, K. (2020). Simple iterative method for generating targeted universal adversarial perturbations. Algorithms, 13.","DOI":"10.3390\/a13110268"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Hirano, H., Minagi, A., and Takemoto, K. (2021). Universal adversarial attacks on deep neural networks for medical image classification. BMC Med. 
Imaging, 21.","DOI":"10.1186\/s12880-020-00530-y"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Chen, P.-Y., Zhang, H., Sharma, Y., Yi, J., and Hsieh, C.-J. (2017). ZOO: Zeroth Order Optimization based black-box attacks to deep neural networks without training substitute models. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, ACM.","DOI":"10.1145\/3128572.3140448"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"89","DOI":"10.1016\/j.cose.2019.04.014","article-title":"POBA-GA: Perturbation optimized black-box adversarial attacks via genetic algorithm","volume":"85","author":"Chen","year":"2019","journal-title":"Comput. Secur."},{"key":"ref_17","unstructured":"Guo, C., Frank, J.S., and Weinberger, K.Q. (2019, January 22\u201325). Low frequency adversarial perturbation. Proceedings of the Thirty-Fifth Conference on Uncertainty in Artificial Intelligence, {UAI} 2019, Tel Aviv-Yafo, Israel."},{"key":"ref_18","unstructured":"Guo, C., Gardner, J.R., You, Y., Wilson, A.G., and Weinberger, K.Q. (2019, January 9\u201315). Simple black-box adversarial attacks. Proceedings of the 36th International Conference on Machine Learning, Beach, CA, USA."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Poursaeed, O., Katsman, I., Gao, B., and Belongie, S. (2018, January 18\u201323). Generative Adversarial Perturbations. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognitio, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00465"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Tsuzuku, Y., and Sato, I. (2019, January 16\u201320). On the structural sensitivity of deep convolutional networks to the directions of Fourier basis functions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00014"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Minagi, A., Hirano, H., and Takemoto, K. (2022). Natural Images Allow Universal Adversarial Attacks on Medical Image Classification Using Deep Neural Networks with Transfer Learning. J. Imaging, 8.","DOI":"10.3390\/jimaging8020038"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1038\/nature21056","article-title":"Dermatologist-level classification of skin cancer with deep neural networks","volume":"542","author":"Esteva","year":"2017","journal-title":"Nature"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1122","DOI":"10.1016\/j.cell.2018.02.010","article-title":"Identifying medical diagnoses and treatable diseases by image-based deep learning","volume":"172","author":"Kermany","year":"2018","journal-title":"Cell"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Khrulkov, V., and Oseledets, I. (2018, January 18\u201322). Art of singular vectors and universal adversarial perturbations. Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00893"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. (2016, January 27\u201330). Rethinking the Inception architecture for computer vision. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.308"},{"key":"ref_26","unstructured":"Simonyan, K., and Zisserman, A. (2015, January 7\u20139). 
Very deep convolutional networks for large-scale image recognition. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015\u2014Conference Track Proceedings, San Diego, CA, USA."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","article-title":"ImageNet large scale visual recognition challenge","volume":"115","author":"Russakovsky","year":"2015","journal-title":"Int. J. Comput. Vis."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli, S.-M., Fawzi, A., and Frossard, P. (2016, January 27\u201330). DeepFool: A simple and accurate method to fool deep neural networks. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.282"},{"key":"ref_30","unstructured":"Xiao, C., Zhong, P., and Zheng, C. (2020, January 26\u201330). Enhancing adversarial defense by k-winners-take-all. Proceedings of the 8th International Conference on Learning Representations, Addis Ababa, Ethiopia."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Mahmood, K., Gurevin, D., van Dijk, M., and Nguyen, P.H. (2021). Beware the black-box: On the robustness of recent defenses to adversarial examples. Entropy, 23.","DOI":"10.3390\/e23101359"},{"key":"ref_32","unstructured":"Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F.A., and Brendel, W. (2019, January 6\u20139). ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Proceedings of the International Conference on Learning Representations, New Orleans, LA, USA."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Tursynbek, N., Vilkoviskiy, I., Sindeeva, M., and Oseledets, I. (2021, January 2\u20139). Adversarial Turing patterns from cellular automata. Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada.","DOI":"10.1609\/aaai.v35i3.16372"},{"key":"ref_34","unstructured":"Brendel, W., Rauber, J., and Bethge, M. (May, January 30). Decision-based adversarial attacks: Reliable attacks against black-Box machine learning models. Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Chen, J., Jordan, M.I., and Wainwright, M.J. (2020, January 18\u201321). HopSkipJumpAttack: A query-efficient decision-based attack. Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA.","DOI":"10.1109\/SP40000.2020.00045"},{"key":"ref_36","unstructured":"Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (2018). To trust or not to trust a classifier. Proceedings of the Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Amann, J., Blasimme, A., Vayena, E., Frey, D., and Madai, V.I. (2020). Explainability for artificial intelligence in healthcare: A multidisciplinary perspective. BMC Med. Inform. Decis. 
Mak., 20.","DOI":"10.1186\/s12911-020-01332-6"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"3852","DOI":"10.1038\/s41467-020-17431-x","article-title":"Explainable artificial intelligence model to predict acute critical illness from electronic health records","volume":"11","author":"Lauritsen","year":"2020","journal-title":"Nat. Commun."}],"container-title":["Algorithms"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/5\/144\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:59:11Z","timestamp":1760137151000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-4893\/15\/5\/144"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,4,22]]},"references-count":38,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2022,5]]}},"alternative-id":["a15050144"],"URL":"https:\/\/doi.org\/10.3390\/a15050144","relation":{},"ISSN":["1999-4893"],"issn-type":[{"type":"electronic","value":"1999-4893"}],"subject":[],"published":{"date-parts":[[2022,4,22]]}}}