{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,4]],"date-time":"2025-11-04T11:05:40Z","timestamp":1762254340479,"version":"build-2065373602"},"reference-count":27,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2022,11,26]],"date-time":"2022-11-26T00:00:00Z","timestamp":1669420800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Tuscany POR FSE 2014-2020 AI-MAP","award":["CUP B15J19001040004","825619","951911"],"award-info":[{"award-number":["CUP B15J19001040004","825619","951911"]}]},{"name":"AI4EU project","award":["CUP B15J19001040004","825619","951911"],"award-info":[{"award-number":["CUP B15J19001040004","825619","951911"]}]},{"name":"AI4Media Project","award":["CUP B15J19001040004","825619","951911"],"award-info":[{"award-number":["CUP B15J19001040004","825619","951911"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Deep learning-based solutions now pervade nearly every area of everyday life, showing improved performance over classical systems. Since many applications handle sensitive data and procedures, there is a strong demand to assess the actual reliability of these technologies. This work analyzes the robustness of a specific kind of deep neural network, the neural ordinary differential equation (N-ODE) network. N-ODE networks are appealing both for their effectiveness and for a peculiar property: a test-time tunable tolerance parameter that allows trading accuracy for efficiency. In addition, adjusting this tolerance parameter improves robustness against adversarial attacks. Notably, decoupling the tolerance values used at training and test time can strongly reduce the attack success rate. 
On this basis, we show how the tolerance can be exploited during the prediction phase to improve the robustness of N-ODE networks to adversarial attacks. In particular, we demonstrate how this property can be used to build an effective detection strategy that increases the chance of identifying adversarial examples in a non-zero-knowledge attack scenario. Our experimental evaluation on two standard image classification benchmarks showed that the proposed detection technique rejects most adversarial examples while retaining most pristine samples.<\/jats:p>","DOI":"10.3390\/info13120555","type":"journal-article","created":{"date-parts":[[2022,11,28]],"date-time":"2022-11-28T07:01:30Z","timestamp":1669618890000},"page":"555","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter"],"prefix":"10.3390","volume":"13","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5014-5089","authenticated-orcid":false,"given":"Fabio","family":"Carrara","sequence":"first","affiliation":[{"name":"Istituto di Scienza e Tecnologie dell\u2019Informazione, 56124 Pisa, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3471-1196","authenticated-orcid":false,"given":"Roberto","family":"Caldelli","sequence":"additional","affiliation":[{"name":"Media Integration and Communication Center, National Inter-University Consortium for Telecommunications (CNIT), 50134 Florence, Italy"},{"name":"Faculty of Economics, Mercatorum University, 00186 Rome, Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6258-5313","authenticated-orcid":false,"given":"Fabrizio","family":"Falchi","sequence":"additional","affiliation":[{"name":"Istituto di Scienza e Tecnologie dell\u2019Informazione, 56124 Pisa, 
Italy"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0171-4315","authenticated-orcid":false,"given":"Giuseppe","family":"Amato","sequence":"additional","affiliation":[{"name":"Istituto di Scienza e Tecnologie dell\u2019Informazione, 56124 Pisa, Italy"}]}],"member":"1968","published-online":{"date-parts":[[2022,11,26]]},"reference":[{"key":"ref_1","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I.J., and Fergus, R. (2014). Intriguing properties of neural networks. arXiv."},{"key":"ref_2","unstructured":"Goodfellow, I.J., Shlens, J., and Szegedy, C. (2015). Explaining and Harnessing Adversarial Examples. arXiv."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli, S.M., Fawzi, A., and Frossard, P. (2016, January 27\u201330). Deepfool: A simple and accurate method to fool deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.282"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 22\u201326). Towards evaluating the robustness of neural networks. Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA.","DOI":"10.1109\/SP.2017.49"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Kurakin, A., Goodfellow, I.J., and Bengio, S. (2017). Adversarial Examples in the Physical World, Chapman and Hall.","DOI":"10.1201\/9781351251389-8"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P.D., Wu, X., Jha, S., and Swami, A. (2016, January 22\u201326). Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA.","DOI":"10.1109\/SP.2016.41"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Wu, J., Xia, Z., and Feng, X. (2022). 
Improving Adversarial Robustness of CNNs via Maximum Margin. Appl. Sci., 12.","DOI":"10.3390\/app12157927"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Carrara, F., Caldelli, R., Falchi, F., and Amato, G. (2019, January 9\u201312). On the robustness to adversarial examples of neural ODE image classifiers. Proceedings of the 2019 IEEE International Workshop on Information Forensics and Security (WIFS), Delft, The Netherlands.","DOI":"10.1109\/WIFS47025.2019.9035109"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Carrara, F., Caldelli, R., Falchi, F., and Amato, G. (2021, January 15\u201320). Defending Neural ODE Image Classifiers from Adversarial Attacks with Tolerance Randomization. Proceedings of the Pattern Recognition\u2014ICPR International Workshops and Challenges, Virtual Event.","DOI":"10.1007\/978-3-030-68780-9_35"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., and Swami, A. (2016, January 21\u201324). The limitations of deep learning in adversarial settings. Proceedings of the 2016 IEEE European Symposium on Security and Privacy (EuroS&P), Saarbruecken, Germany.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Kurakin, A., Goodfellow, I.J., Bengio, S., Dong, Y., Liao, F., Liang, M., Pang, T., Zhu, J., Hu, X., and Xie, C. (2018). Adversarial Attacks and Defences Competition. The NIPS \u201917 Competition: Building Intelligent Systems, Springer.","DOI":"10.1007\/978-3-319-94042-7_11"},{"key":"ref_12","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. (2018). Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv."},{"key":"ref_13","unstructured":"Grosse, K., Manoharan, P., Papernot, N., Backes, M., and McDaniel, P.D. (2017). On the (Statistical) Detection of Adversarial Examples. 
arXiv."},{"key":"ref_14","unstructured":"Metzen, J.H., Genewein, T., Fischer, V., and Bischoff, B. (2017). On Detecting Adversarial Perturbations. arXiv."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Carrara, F., Becarelli, R., Caldelli, R., Falchi, F., and Amato, G. (2018, January 8\u201314). Adversarial examples detection in features distance spaces. Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany.","DOI":"10.1007\/978-3-030-11012-3_26"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Taran, O., Rezaeifar, S., Holotyak, T., and Voloshynovskiy, S. (2019, January 15\u201320). Defending against adversarial attacks by randomized diversification. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01148"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Barni, M., Nowroozi, E., Tondi, B., and Zhang, B. (2020, January 4\u20138). Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples. Proceedings of the ICASSP 2020\u20132020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053318"},{"key":"ref_18","unstructured":"Feinman, R., Curtin, R.R., Shintre, S., and Gardner, A.B. (2017). Detecting Adversarial Samples from Artifacts. arXiv."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 3). Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA.","DOI":"10.1145\/3128572.3140444"},{"key":"ref_20","unstructured":"Hanshu, Y., Jiawei, D., Vincent, T., and Jiashi, F. (2019). On robustness of neural ordinary differential equations. 
arXiv."},{"key":"ref_21","unstructured":"Liu, X., Xiao, T., Si, S., Cao, Q., Kumar, S., and Hsieh, C.J. (2022, August 01). Stabilizing Neural ODE Networks with Stochasticity. Available online: https:\/\/openreview.net\/forum?id=Skx2iCNFwB."},{"key":"ref_22","unstructured":"Chen, T.Q., Rubanova, Y., Bettencourt, J., and Duvenaud, D.K. (2018). Neural ordinary differential equations. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 11\u201314). Identity mappings in deep residual networks. Proceedings of the Computer Vision\u2014ECCV 2016, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Wu, Y., and He, K. (2018, January 8\u201314). Group normalization. Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany.","DOI":"10.1007\/978-3-030-01261-8_1"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based learning applied to document recognition","volume":"86","author":"LeCun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_26","unstructured":"Krizhevsky, A., and Hinton, G. (2009). Learning Multiple Layers of Features from Tiny Images, University of Toronto. Technical Report."},{"key":"ref_27","unstructured":"Rauber, J., Brendel, W., and Bethge, M. (2017). Foolbox: A Python toolbox to benchmark the robustness of machine learning models. 
arXiv."}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/13\/12\/555\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:27:15Z","timestamp":1760146035000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/13\/12\/555"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,26]]},"references-count":27,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["info13120555"],"URL":"https:\/\/doi.org\/10.3390\/info13120555","relation":{},"ISSN":["2078-2489"],"issn-type":[{"type":"electronic","value":"2078-2489"}],"subject":[],"published":{"date-parts":[[2022,11,26]]}}}