{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,6]],"date-time":"2025-11-06T20:14:55Z","timestamp":1762460095145,"version":"3.37.3"},"reference-count":67,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2022,7,9]],"date-time":"2022-07-09T00:00:00Z","timestamp":1657324800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,7,9]],"date-time":"2022-07-09T00:00:00Z","timestamp":1657324800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/R026173\/1","EP\/T026995\/1"],"award-info":[{"award-number":["EP\/R026173\/1","EP\/T026995\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"name":"orca partnership resource fund","award":["EP\/R026173\/1"],"award-info":[{"award-number":["EP\/R026173\/1"]}]},{"DOI":"10.13039\/501100007601","name":"Horizon 2020","doi-asserted-by":"publisher","award":["956123"],"award-info":[{"award-number":["956123"]}],"id":[{"id":"10.13039\/501100007601","id-type":"DOI","asserted-by":"publisher"}]},{"name":"UK DSTL"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Safety concerns on the deep neural networks (DNNs) have been raised when they are applied to critical sectors. In this paper, we define safety risks by requesting the alignment of network\u2019s decision with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. 
For the quantification of risks, we take the maximum radius of safe norm balls, in which no safety risk exists. The computation of the maximum safe radius is reduced to the computation of their respective Lipschitz metrics\u2014the quantities to be computed. In addition to the known adversarial example, reachability example, and invariant example, in this paper, we identify a new class of risk\u2014uncertainty example\u2014on which humans can tell easily, but the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support an efficient computation of the metrics. We perform evaluations on several benchmark neural networks, including ACAS-Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method can achieve competitive performance on safety quantification in terms of the tightness and the efficiency of computation. Importantly, as a generic approach, our method can work with a broad class of safety risks and without restrictions on the structure of neural networks.<\/jats:p>","DOI":"10.1007\/s40747-022-00790-x","type":"journal-article","created":{"date-parts":[[2022,7,9]],"date-time":"2022-07-09T08:03:07Z","timestamp":1657353787000},"page":"3801-3818","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Quantifying safety risks of deep neural networks"],"prefix":"10.1007","volume":"9","author":[{"given":"Peipei","family":"Xu","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8311-8738","authenticated-orcid":false,"given":"Wenjie","family":"Ruan","sequence":"additional","affiliation":[]},{"given":"Xiaowei","family":"Huang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,7,9]]},"reference":[{"key":"790_CR1","doi-asserted-by":"crossref","unstructured":"Anderson G, Pailoor S, Dillig I, Chaudhuri
S (2019) Optimization and abstraction: a synergistic approach for analyzing neural network robustness. In: Proceedings of the 40th ACM SIGPLAN conference on programming language design and implementation, pp 731\u2013744","DOI":"10.1145\/3314221.3314614"},{"key":"790_CR2","unstructured":"Athalye A, Sutskever I (2018) Synthesizing robust adversarial examples. In: The 35th international conference on machine learning (ICML), pp 284\u2013293"},{"key":"790_CR3","doi-asserted-by":"publisher","first-page":"889","DOI":"10.1137\/S1052623400378742","volume":"13","author":"C Audet","year":"2000","unstructured":"Audet C, Dennis JE (2000) Analysis of generalized pattern searches. SIAM J Optim 13:889\u2013903","journal-title":"SIAM J Optim"},{"issue":"1","key":"790_CR4","doi-asserted-by":"publisher","first-page":"188","DOI":"10.1137\/040603371","volume":"17","author":"C Audet","year":"2006","unstructured":"Audet C, Dennis JE Jr (2006) Mesh adaptive direct search algorithms for constrained optimization. SIAM J Optim 17(1):188\u2013217","journal-title":"SIAM J Optim"},{"key":"790_CR5","doi-asserted-by":"crossref","unstructured":"Audet C, Hare W (2017) Mesh adaptive direct search. In: Derivative-free and blackbox optimization. Springer, pp 135\u2013156","DOI":"10.1007\/978-3-319-68913-5_8"},{"key":"790_CR6","doi-asserted-by":"crossref","unstructured":"Balan R, Singh M, Zou D (2017) Lipschitz properties for deep convolutional networks. arXiv:1701.05217","DOI":"10.1090\/conm\/706\/14205"},{"key":"790_CR7","volume-title":"Pattern recognition and machine learning","author":"C Bishop","year":"2006","unstructured":"Bishop C (2006) Pattern recognition and machine learning. Springer, New York"},{"key":"790_CR8","first-page":"3240","volume":"33","author":"A Boopathy","year":"2019","unstructured":"Boopathy A, Weng TW, Chen PY, Liu S, Daniel L (2019) Cnn-cert: an efficient framework for certifying robustness of convolutional neural networks. 
Proc AAAI Conf Artif Intell 33:3240\u20133247","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"790_CR9","unstructured":"Bunel R, Turkaslan I, Torr PH, Kohli P, Kumar MP (2018) A unified view of piecewise linear neural network verification. In: Neural information processing systems (NIPS\u201918), pp 4790\u20134799"},{"key":"790_CR10","doi-asserted-by":"crossref","unstructured":"Carlini N, Wagner D (2017) Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on security and privacy (SP). IEEE, pp 39\u201357","DOI":"10.1109\/SP.2017.49"},{"key":"790_CR11","doi-asserted-by":"crossref","unstructured":"Dutta S, Jha S, Sanakaranarayanan S, Tiwari A (2017) Output range analysis for deep neural networks. arXiv:1709.09130","DOI":"10.1007\/978-3-319-77935-5_9"},{"issue":"1","key":"790_CR12","doi-asserted-by":"publisher","first-page":"24","DOI":"10.1038\/s41591-018-0316-z","volume":"25","author":"A Esteva","year":"2019","unstructured":"Esteva A, Robicquet A, Ramsundar B, Kuleshov V, DePristo M, Chou K, Cui C, Corrado G, Thrun S, Dean J (2019) A guide to deep learning in healthcare. Nat Med 25(1):24\u201329","journal-title":"Nat Med"},{"key":"790_CR13","unstructured":"Galloway A, Taylor GW, Moussa M (2018) Attacking binarized neural networks. In: International conference on learning representations (ICLR)"},{"key":"790_CR14","doi-asserted-by":"crossref","unstructured":"Gehr T, Mirman M, Drachsler-Cohen D, Tsankov P, Chaudhuri S, Vechev M (2018) Ai2: safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE symposium on security and privacy (SP), pp 3\u201318","DOI":"10.1109\/SP.2018.00058"},{"key":"790_CR15","volume-title":"Deep learning","author":"I Goodfellow","year":"2016","unstructured":"Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge"},{"key":"790_CR16","unstructured":"Goodfellow IJ, Shlens J, Szegedy C (2015) Explaining and harnessing adversarial examples. 
In: International conference for learning representations (ICLR)"},{"key":"790_CR17","unstructured":"Hein M, Andriushchenko M (2017) Formal guarantees on the robustness of a classifier against adversarial manipulation. In: Neural information processing systems (NIPS), pp 2266\u20132276"},{"key":"790_CR18","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1007\/978-3-030-86362-3_7","volume-title":"Artificial neural networks and machine learning\u2014ICANN 2021","author":"C Huang","year":"2021","unstructured":"Huang C, Hu Z, Huang X, Pei K (2021) Statistical certification of acceptable robustness for neural networks. In: Farka\u0161 I, Masulli P, Otte S, Wermter S (eds) Artificial neural networks and machine learning\u2014ICANN 2021. Springer International Publishing, Cham, pp 79\u201390"},{"key":"790_CR19","unstructured":"Huang W, Sun Y, Sharp J, Ruan W, Meng J, Huang X (2021) Coverage guided testing for recurrent neural networks. IEEE Trans Reliab 1\u201316"},{"key":"790_CR20","volume-title":"Machine learning safety","author":"X Huang","year":"2022","unstructured":"Huang X, Jin G, Ruan W (2022) Machine learning safety. Springer, Berlin"},{"key":"790_CR21","doi-asserted-by":"publisher","first-page":"100270","DOI":"10.1016\/j.cosrev.2020.100270","volume":"37","author":"X Huang","year":"2020","unstructured":"Huang X, Kroening D, Ruan W, Sharp J, Sun Y, Thamo E, Wu M, Yi X (2020) A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability. Comput Sci Rev 37:100270","journal-title":"Comput Sci Rev"},{"key":"790_CR22","doi-asserted-by":"crossref","unstructured":"Huang X, Kwiatkowska M, Wang S, Wu M (2017) Safety verification of deep neural networks. 
In: International conference on computer aided verification (CAV), pp 3\u201329","DOI":"10.1007\/978-3-319-63387-9_1"},{"key":"790_CR23","unstructured":"Ilyas A, Engstrom L, Athalye A, Lin J (2018) Black-box adversarial attacks with limited queries and information. In: The 35th international conference on machine learning (ICML), pp 2137\u20132146"},{"key":"790_CR24","unstructured":"Jacobsen JH, Behrmann J, Carlini N, Tram\u00e8r F, Papernot N (2020) Exploiting excessive invariance caused by norm-bounded adversarial robustness"},{"key":"790_CR25","doi-asserted-by":"crossref","unstructured":"Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ (2017) Reluplex: an efficient smt solver for verifying deep neural networks. In: International conference on computer aided verification, pp 97\u2013117","DOI":"10.1007\/978-3-319-63387-9_5"},{"issue":"7553","key":"790_CR26","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436\u2013444","journal-title":"Nature"},{"key":"790_CR27","doi-asserted-by":"crossref","unstructured":"Lewis RM, Torczon VJ, Kolda TG (2006) A generating set direct search augmented lagrangian algorithm for optimization with a combination of general and linear constraints. Tech. rep., Sandia National Laboratories","DOI":"10.2172\/893121"},{"key":"790_CR28","doi-asserted-by":"publisher","first-page":"296","DOI":"10.1007\/978-3-030-32304-2_15","volume-title":"Static analysis","author":"J Li","year":"2019","unstructured":"Li J, Liu J, Yang P, Chen L, Huang X, Zhang L (2019) Analyzing deep neural networks with symbolic propagation: towards higher precision and faster verification. In: Chang BYE (ed) Static analysis. 
Springer International Publishing, Cham, pp 296\u2013319"},{"key":"790_CR29","unstructured":"Lomuscio A, Maganti L (2017) An approach to reachability analysis for feed-forward ReLU neural networks. arXiv:1706.07351"},{"key":"790_CR30","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2018) Towards deep learning models resistant to adversarial attacks. In: International conference on learning representations. https:\/\/openreview.net\/forum?id=rJzIBfZAb"},{"key":"790_CR31","doi-asserted-by":"crossref","unstructured":"Maqueda AI, Loquercio A, Gallego G, Garc\u00eda N, Scaramuzza D (2018) Event-based vision meets deep learning on steering prediction for self-driving cars. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5419\u20135427","DOI":"10.1109\/CVPR.2018.00568"},{"key":"790_CR32","unstructured":"Mirman M, Gehr T, Vechev M (2018) Differentiable abstract interpretation for provably robust neural networks. In: International conference on machine learning (ICML), pp 3578\u20133586"},{"key":"790_CR33","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli SM, Fawzi A, Fawzi O, Frossard P (2017) Universal adversarial perturbations. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1765\u20131773","DOI":"10.1109\/CVPR.2017.17"},{"key":"790_CR34","doi-asserted-by":"crossref","unstructured":"Mopuri KR, Ojha U, Garg U, Babu RV (2018) Nag: network for adversary generation. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 742\u2013751","DOI":"10.1109\/CVPR.2018.00084"},{"key":"790_CR35","unstructured":"Mu R, Soriano Marcolino L, Ruan W, Ni Q (2021) Sparse adversarial video attacks with spatial transformations. 
In: 32nd British machine vision conference 2021, BMVC 2021"},{"key":"790_CR36","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A (2016) The limitations of deep learning in adversarial settings. In: IEEE European symposium on security and privacy, pp 372\u2013387","DOI":"10.1109\/EuroSP.2016.36"},{"key":"790_CR37","unstructured":"Peck J, Roels J, Goossens B, Saeys Y (2017) Lower bounds on the robustness to adversarial perturbations. In: Advances in neural information processing systems (NIPS), pp 804\u2013813"},{"key":"790_CR38","unstructured":"P\u00e9rez-Cruz F (2009) Estimation of information theoretic measures for continuous random variables. In: Advances in neural information processing systems (NIPS), pp 1257\u20131264"},{"key":"790_CR39","doi-asserted-by":"crossref","unstructured":"Pulina L, Tacchella A (2010) An abstraction-refinement approach to verification of artificial neural networks. In: International conference on computer aided verification (CAV), pp 243\u2013257","DOI":"10.1007\/978-3-642-14295-6_24"},{"key":"790_CR40","unstructured":"Raghunathan A, Steinhardt J, Liang PS (2018) Semidefinite relaxations for certifying robustness to adversarial examples. In: Neural information processing systems (NeurIPS), pp 10877\u201310887"},{"key":"790_CR41","doi-asserted-by":"crossref","unstructured":"Ruan W, Huang X, Kwiatkowska M (2018) Reachability analysis of deep neural networks with provable guarantees. In: International joint conference on artificial intelligence (IJCAI), pp 2651\u20132659","DOI":"10.24963\/ijcai.2018\/368"},{"key":"790_CR42","doi-asserted-by":"crossref","unstructured":"Ruan W, Wu M, Sun Y, Huang X, Kroening D, Kwiatkowska M (2019) Global robustness evaluation of deep neural networks with provable guarantees for the hamming distance. 
In: Proceedings of the twenty-eighth international joint conference on artificial intelligence (IJCAI), pp 5944\u20135952","DOI":"10.24963\/ijcai.2019\/824"},{"key":"790_CR43","doi-asserted-by":"crossref","unstructured":"Ruan W, Yi X, Huang X (2021) Adversarial robustness of deep learning: Theory, algorithms, and applications. In: Proceedings of the 30th ACM international conference on information & knowledge management, pp 4866\u20134869","DOI":"10.1145\/3459637.3482029"},{"issue":"3","key":"790_CR44","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M et al (2015) Imagenet large scale visual recognition challenge. Int J Comput Vis 115(3):211\u2013252","journal-title":"Int J Comput Vis"},{"key":"790_CR45","doi-asserted-by":"publisher","first-page":"354","DOI":"10.1038\/nature24270","volume":"550","author":"D Silver","year":"2017","unstructured":"Silver D, Schrittwieser J, Simonyan K, Antonoglou I, Huang A, Guez A, Hubert T, Baker L, Lai M, Bolton A, Chen Y, Lillicrap T, Hui F, Sifre L, van den Driessche G, Graepel T, Hassabis D (2017) Mastering the game of Go without human knowledge. Nature 550:354\u2013359","journal-title":"Nature"},{"issue":"5","key":"790_CR46","doi-asserted-by":"publisher","first-page":"828","DOI":"10.1109\/TEVC.2019.2890858","volume":"23","author":"J Su","year":"2019","unstructured":"Su J, Vargas DV, Sakurai K (2019) One pixel attack for fooling deep neural networks. IEEE Trans Evol Comput 23(5):828\u2013841","journal-title":"IEEE Trans Evol Comput"},{"key":"790_CR47","doi-asserted-by":"publisher","unstructured":"Sun Y, Huang X, Kroening D, Sharp J, Hill M, Ashmore R (2019) Structural test coverage criteria for deep neural networks. ACM Trans. Embed. Comput. Syst. 18(5s). 
https:\/\/doi.org\/10.1145\/3358233","DOI":"10.1145\/3358233"},{"key":"790_CR48","doi-asserted-by":"crossref","unstructured":"Sun Y, Wu M, Ruan W, Huang X, Kwiatkowska M, Kroening D (2018) Concolic testing for deep neural networks. In: ASE2018","DOI":"10.1145\/3238147.3238172"},{"key":"790_CR49","unstructured":"Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, Fergus R (2014) Intriguing properties of neural networks. In: International conference on learning representations (ICLR)"},{"key":"790_CR50","unstructured":"Tjeng V, Xiao K, Tedrake R (2019) Evaluating robustness of neural networks with mixed integer programming. In: International conference on learning representations (ICLR)"},{"key":"790_CR51","unstructured":"Tram\u00e8r F, Kurakin A, Papernot N, Goodfellow I, Boneh D, McDaniel PD (2018) Ensemble adversarial training: attacks and defenses. In: International conference on learning representations (ICLR)"},{"key":"790_CR52","unstructured":"Tsipras D, Santurkar S, Engstrom L, Turner A, Madry A (2019) Robustness may be at odds with accuracy. In: International conference on learning representations (ICLR)"},{"key":"790_CR53","doi-asserted-by":"publisher","first-page":"7693","DOI":"10.1038\/d41586-018-02174-z","volume":"554","author":"S Webb","year":"2018","unstructured":"Webb S (2018) Deep learning for biology. Nature 554:7693","journal-title":"Nature"},{"key":"790_CR54","unstructured":"Weng TW, Zhang H, Chen H, Song Z, Hsieh CJ, Boning D, Dhillon IS, Daniel L (2018) Towards fast computation of certified robustness for relu networks. In: The 35th international conference on machine learning (ICML), pp 5276\u20135285"},{"key":"790_CR55","doi-asserted-by":"crossref","unstructured":"Wicker M, Huang X, Kwiatkowska M (2018) Feature-guided black-box safety testing of deep neural networks. In: International conference on tools and algorithms for the construction and analysis of systems (TACAS). 
Springer, pp 408\u2013426","DOI":"10.1007\/978-3-319-89960-2_22"},{"key":"790_CR56","unstructured":"Wong E, Kolter JZ (2018) Provable defenses against adversarial examples via the convex outer adversarial polytope. In: International conference on machine learning (ICML), pp 5286\u20135295"},{"key":"790_CR57","unstructured":"Wu H, Ruan W (2021) Adversarial driving: attacking end-to-end autonomous driving systems. arXiv:2103.09151"},{"key":"790_CR58","doi-asserted-by":"publisher","first-page":"298","DOI":"10.1016\/j.tcs.2019.05.046","volume":"807","author":"M Wu","year":"2020","unstructured":"Wu M, Wicker M, Ruan W, Huang X, Kwiatkowska M (2020) A game-based approximate verification of deep neural networks with provable guarantees. Theor Comput Sci 807:298\u2013329","journal-title":"Theor Comput Sci"},{"key":"790_CR59","unstructured":"Xu K, Liu S, Zhao P, Chen PY, Zhang H, Fan Q, Erdogmus D, Wang Y, Lin X (2018) Structured adversarial attack: towards general implementation and better interpretability. In: International conference on learning representations (ICLR)"},{"key":"790_CR60","unstructured":"Xu P, Ruan W, Huang X (2020) Towards the quantification of safety risks in deep neural networks. arXiv:2009.06114"},{"key":"790_CR61","first-page":"4939","volume":"31","author":"H Zhang","year":"2018","unstructured":"Zhang H, Weng TW, Chen PY, Hsieh CJ, Daniel L (2018) Efficient neural network robustness certification with general activation functions. Adv Neural Inf Process Syst 31:4939\u20134948","journal-title":"Adv Neural Inf Process Syst"},{"key":"790_CR62","first-page":"5757","volume":"33","author":"H Zhang","year":"2019","unstructured":"Zhang H, Zhang P, Hsieh CJ (2019) Recurjac: an efficient recursive algorithm for bounding Jacobian matrix of neural networks and its applications. 
Proc AAAI Conf Artif Intell 33:5757\u20135764","journal-title":"Proc AAAI Conf Artif Intell"},{"key":"790_CR63","doi-asserted-by":"crossref","unstructured":"Zhang Y, Ruan W, Wang F, Huang X (2020) Generalizing universal adversarial attacks beyond additive perturbations. In: 2020 IEEE international conference on data mining (ICDM). IEEE, pp 1412\u20131417","DOI":"10.1109\/ICDM50108.2020.00186"},{"key":"790_CR64","unstructured":"Zhang Y, Wang F, Ruan W (2021) Fooling object detectors: adversarial attacks by half-neighbor masks. arXiv:2101.00989"},{"key":"790_CR65","doi-asserted-by":"crossref","unstructured":"Zhao X, Banks A, Sharp J, Robu V, Flynn D, Fisher M, Huang X (2020) A safety framework for critical systems utilising deep neural networks. In: SafeComp2020, pp 244\u2013259","DOI":"10.1007\/978-3-030-54549-9_16"},{"key":"790_CR66","unstructured":"Zhao X, Huang W, Banks A, Cox V, Flynn D, Schewe S, Huang X (2021) Assessing reliability of deep learning through robustness evaluation and operational testing. In: AISafety2021"},{"key":"790_CR67","unstructured":"Zhao X, Huang W, Bharti V, Dong Y, Cox V, Banks A, Wang S, Schewe S, Huang X (2021) Reliability assessment and safety arguments for machine learning components in assuring learning-enabled autonomous systems. 
arXiv:2112.00646 [CoRR]"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00790-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00790-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00790-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,27]],"date-time":"2023-07-27T13:13:53Z","timestamp":1690463633000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00790-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,7,9]]},"references-count":67,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2023,8]]}},"alternative-id":["790"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00790-x","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2022,7,9]]},"assertion":[{"value":"16 June 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 May 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 July 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of 
interest"}},{"value":"This article does not contain any studies involving animals performed by any of the authors. This article does not contain any studies involving human participants performed by any of the authors.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical statements"}}]}}