{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,10]],"date-time":"2026-03-10T03:30:42Z","timestamp":1773113442178,"version":"3.50.1"},"reference-count":63,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,8,17]],"date-time":"2023-08-17T00:00:00Z","timestamp":1692230400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,8,17]],"date-time":"2023-08-17T00:00:00Z","timestamp":1692230400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"the National Research Foundation of Korea(NRF) grant funded by the Korea government","award":["2021R1A4A1029650"],"award-info":[{"award-number":["2021R1A4A1029650"]}]},{"name":"the National Research Foundation of Korea(NRF) grant funded by the Korea government","award":["2021R1A4A1029650"],"award-info":[{"award-number":["2021R1A4A1029650"]}]},{"name":"Institute of Information &amp; communications Technology Planning &amp; Evaluation (IITP) grant funded by the Korea governmen","award":["2021-0-00111"],"award-info":[{"award-number":["2021-0-00111"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int. J. Inf. Secur."],"published-print":{"date-parts":[[2024,2]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Although deep neural networks (DNNs) have achieved high performance across various applications, they are often deceived by adversarial examples generated by adding small perturbations. To combat adversarial attacks, many detection methods have been proposed, including feature squeezing and trapdoor. However, these methods rely on the output of DNNs or involve training a separate network to detect adversarial examples, which leads to high computational costs and low efficiency. In this study, we propose a simple and effective approach called the entropy-based detector (EBD) to protect DNNs from various adversarial attacks. EBD detects adversarial examples by comparing the difference in entropy between the input sample before and after bit depth reduction. We show that EBD can detect over 98% of the adversarial examples generated by attacks using fast-gradient sign method, basic iterative method, momentum iterative method, DeepFool and CW attacks when the false positive rate is 2.5% for CIFAR-10 and ImageNet datasets.<\/jats:p>","DOI":"10.1007\/s10207-023-00735-6","type":"journal-article","created":{"date-parts":[[2023,8,17]],"date-time":"2023-08-17T06:02:55Z","timestamp":1692252175000},"page":"299-314","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["Detection of adversarial attacks based on differences in image entropy"],"prefix":"10.1007","volume":"23","author":[{"given":"Gwonsang","family":"Ryu","sequence":"first","affiliation":[]},{"given":"Daeseon","family":"Choi","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,8,17]]},"reference":[{"key":"735_CR1","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S.: Deep residual learning for image recognition, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 
770\u2013778, (2016)","DOI":"10.1109\/CVPR.2016.90"},{"key":"735_CR2","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R.B., He, K., Hariharan, B., Belongie, S.J.: Feature pyramid networks for object detection, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 2117\u20132125, (2017)","DOI":"10.1109\/CVPR.2017.106"},{"key":"735_CR3","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., Polosukhin, I.: Attention is all you need, Adv. Neural Info. Process. Syst. 30, (2017)"},{"key":"735_CR4","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks, In: Proceedings of International Conference on Learning Representations (ICLR), (2014)"},{"key":"735_CR5","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., Swami, A.: The limitations of deep learning in adversarial settings, In: Proceedings of the IEEE European Symposium on Security and Privacy (SP), pp. 372\u2013387, (2016)","DOI":"10.1109\/EuroSP.2016.36"},{"key":"735_CR6","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: DeepFool: A simple and accurate method to fool deep neural networks, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 2574\u20132582, (2016)","DOI":"10.1109\/CVPR.2016.282"},{"key":"735_CR7","doi-asserted-by":"crossref","unstructured":"Xie, C., Zhang, Z., Zhou, Y., Bai, S., Wang, J., Ren, Z., Yuille, A.: Improving transferability of adversarial examples with input diversity, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 2730\u20132739, (2019)","DOI":"10.1109\/CVPR.2019.00284"},{"key":"735_CR8","doi-asserted-by":"crossref","unstructured":"Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., Song, D.: Robust physical-world attacks on deep learning visual classification, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 1625\u20131634, (2018)","DOI":"10.1109\/CVPR.2018.00175"},{"key":"735_CR9","doi-asserted-by":"crossref","unstructured":"Zhao, Y., Zhu, H., Liang, R., Shen, Q., Zhang, S., Chen, K.: Seeing isn\u2019t believing: Towards more robust adversarial attack against real world object detectors, In: Proceedings of ACM Conference on Computer and Communications Security (CCS), pp. 1989\u20132004, (2019)","DOI":"10.1145\/3319535.3354259"},{"key":"735_CR10","doi-asserted-by":"crossref","unstructured":"Sharif, M., Bhagavatula, S., Bauer, L., Reiter, M.K.: Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition, In: Proceedings of ACM Conference on Computer and Communications Security (CCS), pp. 1528\u20131540, (2016)","DOI":"10.1145\/2976749.2978392"},{"key":"735_CR11","volume":"60","author":"G Ryu","year":"2021","unstructured":"Ryu, G., Park, H., Choi, D.: Adversarial attacks by attaching noise markers on the face against deep face recognition. J. Inf. Secur. Appl. 60, 102874 (2021)","journal-title":"J. Inf. Secur. Appl."},{"key":"735_CR12","doi-asserted-by":"crossref","unstructured":"Xie, C., Wang, J., Zhang, Z., Zhou, Y., Xie, L., Yuille, A.: Adversarial examples for semantic segmentation and object detection, In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 
1369\u20131378, (2017)","DOI":"10.1109\/ICCV.2017.153"},{"key":"735_CR13","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks, In: Proceedings of International Conference on Learning Representations (ICLR), (2018)"},{"key":"735_CR14","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks, In: Proceedings of the IEEE Symposium on Security and Privacy (SP), pp. 582\u2013597, (2016)","DOI":"10.1109\/SP.2016.41"},{"key":"735_CR15","doi-asserted-by":"crossref","unstructured":"Xu, W., Evans, D., Qi, Y.: Feature squeezing: Detecting adversarial examples in deep neural networks, In: Proceedings of Network and Distributed System Security Symposium (NDSS), (2018)","DOI":"10.14722\/ndss.2018.23198"},{"issue":"12","key":"735_CR16","doi-asserted-by":"publisher","first-page":"10193","DOI":"10.1002\/int.22458","volume":"37","author":"D Ye","year":"2021","unstructured":"Ye, D., Chen, C., Liu, C., Wang, H., Jiang, S.: Detection defense against adversarial attacks with saliency map. Int. J. Intell. Syst. 37(12), 10193\u201310210 (2021)","journal-title":"Int. J. Intell. Syst."},{"key":"735_CR17","doi-asserted-by":"crossref","unstructured":"Prakash, A., Moran, N., Garber, S., Dilillo, A., Storer, J.: Deflecting adversarial attacks with pixel deflection, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 8571\u20138580, (2018)","DOI":"10.1109\/CVPR.2018.00894"},{"key":"735_CR18","doi-asserted-by":"crossref","unstructured":"Naseer, M., Khan, S., Hayat, M., Khan, F.S., Porikli, F.: A self-supervised approach for adversarial robustness, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 262\u2013271, (2020)","DOI":"10.1109\/CVPR42600.2020.00034"},{"key":"735_CR19","unstructured":"Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples, In: Proceedings of International Conference on Learning Representations (ICLR), (2015)"},{"key":"735_CR20","doi-asserted-by":"crossref","unstructured":"Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world, In: Proceedings of International Conference on Learning Representations (ICLR), (2017)","DOI":"10.1201\/9781351251389-8"},{"key":"735_CR21","doi-asserted-by":"crossref","unstructured":"Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks, In: Proceedings of the IEEE Symposium on Security and Privacy (SP), pp. 39\u201357, (2017)","DOI":"10.1109\/SP.2017.49"},{"key":"735_CR22","doi-asserted-by":"crossref","unstructured":"Zheng, H., Zhang, Z., Gu, J., Lee, H., Prakash, A.: Efficient adversarial training with transferable adversarial examples, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 1181\u20131190, (2020)","DOI":"10.1109\/CVPR42600.2020.00126"},{"key":"735_CR23","doi-asserted-by":"crossref","unstructured":"Xiao, C., Li, B., Zhu, L.Y., He, W., Liu, M., Song, D.: Generating adversarial examples with adversarial networks, arXiv preprint arXiv:1801.02610, (2018)","DOI":"10.24963\/ijcai.2018\/543"},{"key":"735_CR24","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets, Adv. Neural Info. Process. Syst. 
27, (2014)"},{"key":"735_CR25","unstructured":"Athalye, A., Carlini, N., Wagner, D.: Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples, In: Proceedings of International Conference on Machine Learning (ICML), pp. 274\u2013283, (2018)"},{"key":"735_CR26","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., Swami, A.: Practical black-box attacks against machine learning, In: Proceedings of ACM Asia Conference on Computer and Communications Security (AsiaCCS), pp. 506\u2013519, (2017)","DOI":"10.1145\/3052973.3053009"},{"key":"735_CR27","doi-asserted-by":"publisher","first-page":"7168","DOI":"10.3390\/app10207168","volume":"10","author":"H Park","year":"2020","unstructured":"Park, H., Ryu, G., Choi, D.: Partial retraining substitute model for query-limited black-box attacks. Appl. Sci. 10, 7168 (2020)","journal-title":"Appl. Sci."},{"key":"735_CR28","doi-asserted-by":"crossref","unstructured":"Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., Li, J.: Boosting adversarial attacks with momentum, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 9185\u20139193, (2018)","DOI":"10.1109\/CVPR.2018.00957"},{"key":"735_CR29","doi-asserted-by":"crossref","unstructured":"Inkawhich, N., Wen, W., Li, H.H., Chen, Y.: Feature space perturbations yield more transferable adversarial examples, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 7066\u20137074, (2019)","DOI":"10.1109\/CVPR.2019.00723"},{"key":"735_CR30","doi-asserted-by":"crossref","unstructured":"Meng, D., Chen, H.: MagNet: A two-pronged defense against adversarial examples, In: Proceedings of ACM Conference on Computer and Communications Security (CCS), pp. 135\u2013147, (2017)","DOI":"10.1145\/3133956.3134057"},{"key":"735_CR31","unstructured":"Samangouei, P., Kabkab, M., Chellappa, R.: Defense-GAN: Protecting classifiers against adversarial attacks using generative models, In: Proceedings of International Conference on Learning Representations (ICLR), (2018)"},{"key":"735_CR32","unstructured":"Song, Y., Kim, T., Nowozin, S., Ermon, S., Kushman, N.: PixelDefend: Leveraging generative models to understand and defend against adversarial examples, In: Proceedings of International Conference on Learning Representations (ICLR), (2018)"},{"key":"735_CR33","unstructured":"Oord, A.V.D., Kalchbrenner, N., Kavukcuoglu, K.: Pixel recurrent neural networks, In: Proceedings of International Conference on Machine Learning (ICML), pp. 1747\u20131756, (2016)"},{"key":"735_CR34","unstructured":"Guo, C., Rana, M., Cisse, M., Maaten, L.V.D.: Countering adversarial images using input transformations, In: Proceedings of International Conference on Learning Representations (ICLR), (2018)"},{"key":"735_CR35","unstructured":"Xie, C., Wang, J., Zhang, Z., Ren, Z., Yuille, A.: Mitigating adversarial effects through randomization, In: Proceedings of International Conference on Learning Representations (ICLR), (2018)"},{"key":"735_CR36","doi-asserted-by":"crossref","unstructured":"Jeddi, A., Shafiee, M.J., Karg, M., Scharfenberger, C., Wong, A.: Learn2Perturb: An end-to-end feature perturbation learning to improve adversarial robustness, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 
1241\u20131250, (2020)","DOI":"10.1109\/CVPR42600.2020.00132"},{"key":"735_CR37","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 248\u2013255, (2009)","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"735_CR38","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks, In: Proceedings of European Conference on Computer Vision (ECCV), pp. 630\u2013645, (2016)","DOI":"10.1007\/978-3-319-46493-0_38"},{"key":"735_CR39","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V.: Inception-v4, Inception-ResNet and the impact of residual connections on learning, In: Proceedings of the AAAI Conference on Artificial Intelligence, (2017)","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"735_CR40","unstructured":"Miyato, T., Dai, A.M., Goodfellow, I.: Adversarial training methods for semi-supervised text classification, arXiv preprint arXiv:1605.07725, (2016)"},{"key":"735_CR41","unstructured":"Papernot, N., McDaniel, P.: Extending defensive distillation, arXiv preprint arXiv:1705.05264, (2017)"},{"key":"735_CR42","unstructured":"Metzen, J.H., Genewein, T., Fischer, V., Bischoff, B.: On detecting adversarial perturbations, In: Proceedings of International Conference on Learning Representations (ICLR), (2017)"},{"key":"735_CR43","doi-asserted-by":"crossref","unstructured":"Lu, J., Issaranon, T., Forsyth, D.: SafetyNet: Detecting and rejecting adversarial examples robustly, In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1\u20139, (2017)","DOI":"10.1109\/ICCV.2017.56"},{"key":"735_CR44","unstructured":"Bradshaw, J., Matthews, A.G.G., Ghahramani, Z.: Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks, arXiv preprint arXiv:1707.02476, (2017)"},{"key":"735_CR45","unstructured":"Buckman, J., Roy, A., Raffel, C., Goodfellow, I.: Thermometer encoding: One hot way to resist adversarial examples, In: Proceedings of International Conference on Learning Representations (ICLR), (2018)"},{"key":"735_CR46","doi-asserted-by":"publisher","first-page":"1453","DOI":"10.1002\/int.22258","volume":"35","author":"Z Yin","year":"2020","unstructured":"Yin, Z., Wang, H., Wang, J., Tang, J., Wang, W.: Defense against adversarial attacks by low-level image transformations. Int. J. Intell. Syst. 35, 1453\u20131466 (2020)","journal-title":"Int. J. Intell. Syst."},{"key":"735_CR47","doi-asserted-by":"crossref","unstructured":"Zakharov, E., Shysheya, A., Burkov, E., Lempitsky, V.: Few-shot adversarial learning of realistic neural talking head models, In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 
9459\u20139468, (2019)","DOI":"10.1109\/ICCV.2019.00955"},{"key":"735_CR48","unstructured":"Pang, T., Xu, K.N., Dong, Y., Du, C., Chen, N., Zhu, J.: Rethinking softmax cross-entropy loss for adversarial robustness, In: Proceedings of International Conference on Learning Representations (ICLR), (2020)"},{"key":"735_CR49","unstructured":"Xiao, C., Zhong, P., Zheng, C.: Enhancing adversarial defense by k-Winners-Take-All, In: Proceedings of International Conference on Learning Representations (ICLR), (2020)"},{"key":"735_CR50","doi-asserted-by":"crossref","unstructured":"Kim, Y.J., Ganbold, B., Kim, K.G.: Web-based spine segmentation using deep learning in computed tomography images. Healthc. Inform. Res. 26, 61\u201367 (2020)","DOI":"10.4258\/hir.2020.26.1.61"},{"key":"735_CR51","doi-asserted-by":"publisher","first-page":"201","DOI":"10.4258\/hir.2019.25.3.201","volume":"25","author":"D Yoon","year":"2019","unstructured":"Yoon, D., Lim, H.S., Jung, K., Kim, T.Y., Lee, S.: Deep learning-based electrocardiogram signal noise detection and screening model. Healthc. Inform. Res. 25, 201\u2013211 (2019)","journal-title":"Healthc. Inform. Res."},{"key":"735_CR52","doi-asserted-by":"crossref","unstructured":"Ma, S., Liu, Y., Tao, G., Lee, W.C., Zhang, X.: NIC: Detecting adversarial samples with neural network invariant checking, In: Proceedings of Network and Distributed System Security Symposium (NDSS), (2019)","DOI":"10.14722\/ndss.2019.23415"},{"key":"735_CR53","unstructured":"Ma, X., Li, B., Wang, Y., Erfani, S.M., Wijewickrema, S., Schoenebeck, G., Song, D., Houle, M.E., Bailey, J.: Characterizing adversarial subspaces using local intrinsic dimensionality, In: Proceedings of International Conference on Learning Representations (ICLR), (2018)"},{"key":"735_CR54","doi-asserted-by":"crossref","unstructured":"Cohen, G., Sapiro, G., Giryes, R.: Detecting adversarial samples using influence functions and nearest neighbors, In: Proceedings of the IEEE\/CVF Computer Vision Pattern Recognition (CVPR), pp. 14453\u201314462, (2020)","DOI":"10.1109\/CVPR42600.2020.01446"},{"key":"735_CR55","unstructured":"Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition, In: Proceedings of International Conference on Learning Representations (ICLR), (2015)"},{"issue":"8","key":"735_CR56","doi-asserted-by":"publisher","first-page":"9174","DOI":"10.1007\/s10489-022-03991-6","volume":"53","author":"G Ryu","year":"2023","unstructured":"Ryu, G., Choi, D.: A hybrid adversarial training for deep learning model and denoising network resistant to adversarial examples. Appl. Intell. 53(8), 9174\u20139187 (2023)","journal-title":"Appl. Intell."},{"key":"735_CR57","doi-asserted-by":"crossref","unstructured":"Ryu, G., Choi, D.: Feature-based adversarial training for deep learning models resistant to transferable adversarial examples, IEICE Trans. Inf. Syst. E105-D(5), 1039\u20131049 (2022)","DOI":"10.1587\/transinf.2021EDP7198"},{"key":"735_CR58","doi-asserted-by":"crossref","unstructured":"Shan, S., Wenger, E., Wang, B., Li, B., Zheng, H., Zhao, B.Y.: Gotta catch \u2019em all: using honeypots to catch adversarial attacks on neural networks, In: Proceedings of ACM Conference on Computer and Communications Security, pp. 
67\u201383, (2020)","DOI":"10.1145\/3372297.3417231"},{"key":"735_CR59","doi-asserted-by":"publisher","first-page":"122308","DOI":"10.1109\/ACCESS.2021.3109602","volume":"9","author":"H Na","year":"2021","unstructured":"Na, H., Ryu, G., Choi, D.: Adversarial attack based on perturbation of contour region to evade steganalysis-based detection. IEEE Access 9, 122308\u2013122321 (2021)","journal-title":"IEEE Access"},{"key":"735_CR60","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1109\/MSP.2012.2211477","volume":"29","author":"D Li","year":"2012","unstructured":"Li, D.: The MNIST database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 29, 141\u2013142 (2012)","journal-title":"IEEE Signal Process. Mag."},{"key":"735_CR61","unstructured":"Krizhevsky, A., Hinton, G.: Learning multiple layers of features from tiny images, Citeseer, (2009)"},{"key":"735_CR62","unstructured":"Keras Applications GitHub Website. https:\/\/github.com\/keras-team\/keras-applications"},{"key":"735_CR63","unstructured":"CleverHans GitHub Website. https:\/\/github.com\/tensorflow\/cleverhans"}],"container-title":["International Journal of Information Security"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10207-023-00735-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10207-023-00735-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10207-023-00735-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,1,23]],"date-time":"2024-01-23T01:07:48Z","timestamp":1705972068000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10207-023-00735-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,17]]},"references-count":63,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["735"],"URL":"https:\/\/doi.org\/10.1007\/s10207-023-00735-6","relation":{},"ISSN":["1615-5262","1615-5270"],"issn-type":[{"value":"1615-5262","type":"print"},{"value":"1615-5270","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,17]]},"assertion":[{"value":"17 August 2023","order":1,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"This article does not contain any studies with animals performed by any of the authors.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical approval"}}]}}
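
Note: the abstract in the record above describes the entropy-based detector (EBD) only at a high level: an input is flagged as adversarial when its entropy changes too much after bit depth reduction. The Python sketch below illustrates that idea; it is not the authors' implementation. The global 256-bin histogram entropy, the 4-bit squeeze depth, and the threshold value are all illustrative assumptions, and in practice the threshold would be calibrated on clean validation data to reach a target false positive rate (the paper reports detection rates at a 2.5% FPR).

import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of the pixel-intensity histogram of a uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def reduce_bit_depth(img: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize a uint8 image to `bits` bits per channel and rescale to [0, 255]."""
    levels = 2 ** bits - 1
    return (np.round(img / 255.0 * levels) / levels * 255.0).astype(np.uint8)

def is_adversarial(img: np.ndarray, bits: int = 4, threshold: float = 0.5) -> bool:
    """Flag the input if entropy shifts by more than `threshold` bits after squeezing.

    `threshold` is a placeholder here; it would be chosen from clean data so that
    only the desired fraction of benign inputs (e.g., 2.5%) is flagged.
    """
    squeezed = reduce_bit_depth(img, bits)
    return abs(shannon_entropy(img) - shannon_entropy(squeezed)) > threshold

The design intuition, per the abstract, is that clean and adversarially perturbed images respond differently in entropy to bit depth reduction, so a single scalar comparison suffices and no auxiliary detection network needs to be trained. Whether entropy is computed globally, per channel, or locally is not specified in this record, so the global-histogram choice above is one plausible reading.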