{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,30]],"date-time":"2026-04-30T17:19:29Z","timestamp":1777569569398,"version":"3.51.4"},"reference-count":95,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,4,20]],"date-time":"2023-04-20T00:00:00Z","timestamp":1681948800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"UK DSTL"},{"name":"UK EPSRC","award":["EP\/W001136\/1"],"award-info":[{"award-number":["EP\/W001136\/1"]}]},{"name":"End-to-End Conceptual Guarding of Neural Architectures","award":["EP\/T026995\/1"],"award-info":[{"award-number":["EP\/T026995\/1"]}]},{"name":"European Union\u2019s Horizon 2020 research and innovation programme","award":["956123"],"award-info":[{"award-number":["956123"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Embed. Comput. Syst."],"published-print":{"date-parts":[[2023,5,31]]},"abstract":"<jats:p>The increasing use of Machine Learning (ML) components embedded in autonomous systems\u2014so-called Learning-Enabled Systems (LESs)\u2014has resulted in the pressing need to assure their functional safety. As for traditional functional safety, the emerging consensus within both industry and academia is to use assurance cases for this purpose. Typically, assurance cases support claims of reliability in support of safety, and can be viewed as a structured way of organising arguments and evidence generated from safety analysis and reliability modelling activities. While such assurance activities are traditionally guided by consensus-based standards developed from vast engineering experience, LESs pose new challenges in safety-critical applications due to the characteristics and design of ML models. 
In this article, we first present an overall assurance framework for LESs with an emphasis on quantitative aspects, e.g., breaking down system-level safety targets to component-level requirements and supporting claims stated in reliability metrics. We then introduce a novel model-agnostic Reliability Assessment Model (RAM) for ML classifiers that utilises the operational profile and robustness verification evidence. We discuss the model assumptions and the inherent challenges of assessing ML reliability uncovered by our RAM, and propose solutions for practical use. Probabilistic safety argument templates at the lower ML component level are also developed based on the RAM. Finally, to evaluate and demonstrate our methods, we not only conduct experiments on synthetic\/benchmark datasets but also scope our methods with case studies on simulated Autonomous Underwater Vehicles and physical Unmanned Ground Vehicles.<\/jats:p>","DOI":"10.1145\/3570918","type":"journal-article","created":{"date-parts":[[2022,11,17]],"date-time":"2022-11-17T15:07:32Z","timestamp":1668697652000},"page":"1-48","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":24,"title":["Reliability Assessment and Safety Arguments for Machine Learning Components in System Assurance"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3047-7777","authenticated-orcid":false,"given":"Yi","family":"Dong","sequence":"first","affiliation":[{"name":"Department of Computer Science, University of Liverpool, Ashton Street, Liverpool, U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1418-6267","authenticated-orcid":false,"given":"Wei","family":"Huang","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Liverpool, Ashton Street, Liverpool, 
U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0360-5174","authenticated-orcid":false,"given":"Vibhav","family":"Bharti","sequence":"additional","affiliation":[{"name":"School of Engineering &amp; Physical Sciences, Heriot-Watt University, Edinburgh, U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3852-7855","authenticated-orcid":false,"given":"Victoria","family":"Cox","sequence":"additional","affiliation":[{"name":"Defence Science and Technology Laboratory, Salisbury, U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1350-0798","authenticated-orcid":false,"given":"Alec","family":"Banks","sequence":"additional","affiliation":[{"name":"Defence Science and Technology Laboratory, Salisbury, U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1537-8834","authenticated-orcid":false,"given":"Sen","family":"Wang","sequence":"additional","affiliation":[{"name":"Department of Electrical and Electronic Engineering, Imperial College London, U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3474-349X","authenticated-orcid":false,"given":"Xingyu","family":"Zhao","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Liverpool, Ashton Street, Liverpool, U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9093-9518","authenticated-orcid":false,"given":"Sven","family":"Schewe","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Liverpool, Ashton Street, Liverpool, U.K."}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6267-0366","authenticated-orcid":false,"given":"Xiaowei","family":"Huang","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Liverpool, Ashton Street, Liverpool, U.K."}]}],"member":"320","published-online":{"date-parts":[[2023,4,20]]},"reference":[{"key":"e_1_3_3_2_2","first-page":"172","volume-title":"Considerations in Assuring Safety of Increasingly Autonomous Systems","author":"Alves Erin","year":"2018","unstructured":"Erin Alves, Devesh Bhatt, 
Brendan Hall, Kevin Driscoll, Anitha Murugesan, and John Rushby. 2018. Considerations in Assuring Safety of Increasingly Autonomous Systems. Technical Report NASA\/CR-2018-220080. NASA. 172 pages."},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN48605.2020.9206696"},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2020.3022030"},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-54549-9_18"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3453444"},{"key":"e_1_3_3_7_2","first-page":"15773","volume-title":"Advances in Neural Information Processing Systems","author":"Backurs Arturs","year":"2019","unstructured":"Arturs Backurs, Piotr Indyk, and Tal Wagner. 2019. Space and time efficient kernel density estimation in high dimensions. In Advances in Neural Information Processing Systems, Vol. 32. Curran Associates, Inc., 15773\u201315782."},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2021\/591"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE-Companion52605.2021.00045"},{"issue":"2","key":"e_1_3_3_10_2","first-page":"281","article-title":"Random search for hyper-parameter optimization.","volume":"13","author":"Bergstra James","year":"2012","unstructured":"James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. J. of Machine Learning Research 13, 2 (2012), 281\u2013305.","journal-title":"J. 
of Machine Learning Research"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE.2017.56"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2019.2906187"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1063\/1.4823194"},{"key":"e_1_3_3_14_2","doi-asserted-by":"publisher","DOI":"10.1080\/09617353.2000.11690698"},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2010.67"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ress.2016.08.019"},{"key":"e_1_3_3_17_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-84996-086-1_4"},{"key":"e_1_3_3_18_2","article-title":"Safety case templates for autonomous systems","author":"Bloomfield Robin","year":"2021","unstructured":"Robin Bloomfield, Gareth Fletcher, Heidy Khlaaf, Luke Hinde, and Philippa Ryan. 2021. Safety case templates for autonomous systems. arXiv preprint arXiv:2102.02625 (2021).","journal-title":"arXiv preprint arXiv:2102.02625"},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2019.2914775"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSREW.2014.72"},{"key":"e_1_3_3_21_2","article-title":"Assurance 2.0: A manifesto","author":"Bloomfield Robin","year":"2020","unstructured":"Robin Bloomfield and John Rushby. 2020. Assurance 2.0: A manifesto. arXiv preprint arXiv:2004.10474 (2020).","journal-title":"arXiv preprint arXiv:2004.10474"},{"key":"e_1_3_3_22_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2019.103201"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2017.2738640"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1080\/24709360.2017.1396742"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2015.2491931"},{"key":"e_1_3_3_26_2","volume-title":"HAZOP: Guide to Best Practice","author":"Crawley Frank","year":"2015","unstructured":"Frank Crawley and Brian Tyler. 2015. 
HAZOP: Guide to Best Practice. Elsevier."},{"key":"e_1_3_3_27_2","first-page":"226","volume-title":"43rd IEEE\/ACM International Conference on Software Engineering, ICSE 2021, Madrid, Spain, 22\u201330 May 2021","author":"Dola Swaroopa","year":"2021","unstructured":"Swaroopa Dola, Matthew B. Dwyer, and Mary Lou Soffa. 2021. Distribution-aware testing of neural networks using generative models. In 43rd IEEE\/ACM International Conference on Software Engineering, ICSE 2021, Madrid, Spain, 22\u201330 May 2021. IEEE, 226\u2013237."},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-017-5663-3"},{"key":"e_1_3_3_29_2","doi-asserted-by":"publisher","DOI":"10.1109\/32.707695"},{"key":"e_1_3_3_30_2","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1109\/SP.2018.00058","volume-title":"2018 IEEE Symposium on Security and Privacy (SP)","author":"Gehr Timon","year":"2018","unstructured":"Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. 2018. AI2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP). IEEE, 3\u201318."},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/ISSREW51248.2020.00050"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00042"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jlp.2015.10.003"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/32.62448"},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1145\/3368089.3409754"},{"key":"e_1_3_3_36_2","series-title":"LNCS","doi-asserted-by":"crossref","first-page":"116","DOI":"10.1007\/978-3-030-63486-5_14","volume-title":"Towards Autonomous Robotic Systems","author":"Hereau Adrien","year":"2020","unstructured":"Adrien Hereau, Karen Godary-Dejean, J\u00e9r\u00e9mie Guiochet, Cl\u00e9ment Robert, Thomas Claverie, and Didier Crestani. 2020. 
Testing an underwater robot executing transect missions in Mayotte. In Towards Autonomous Robotic Systems(LNCS, Vol. 12228), Abdelkhalick Mohammad, Xin Dong, and Matteo Russo (Eds.). Springer, Cham, 116\u2013127."},{"key":"e_1_3_3_37_2"},{"key":"e_1_3_3_38_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cosrev.2020.100270"},{"key":"e_1_3_3_39_2","series-title":"LNCS","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1007\/978-3-319-63387-9_1","volume-title":"Computer Aided Verification","author":"Huang Xiaowei","year":"2017","unstructured":"Xiaowei Huang, Marta Kwiatkowska, Sen Wang, and Min Wu. 2017. Safety verification of deep neural networks. In Computer Aided Verification(LNCS, Vol. 10426). Springer International Publishing, Cham, 3\u201329."},{"key":"e_1_3_3_40_2","series-title":"LNCS","first-page":"14","volume-title":"SafeComp\u201918","author":"Ishikawa Fuyuki","year":"2018","unstructured":"Fuyuki Ishikawa and Yutaka Matsuno. 2018. Continuous argument engineering: Tackling uncertainty in machine learning based systems. In SafeComp\u201918(LNCS, Vol. 11094), Barbara Gallina, Amund Skavhaug, Erwin Schoitsch, and Friedemann Bitsch (Eds.). Springer, Cham, 14\u201321."},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.sysarc.2020.101914"},{"key":"e_1_3_3_42_2","first-page":"15","volume-title":"the 26th Safety-Critical Systems Symposium","author":"Johnson C. W.","year":"2018","unstructured":"C. W. Johnson. 2018. The increasing risks of risk assessment: On the rise of artificial intelligence and non-determinism in safety-critical systems. In the 26th Safety-Critical Systems Symposium. Safety-Critical Systems Club, York, UK, 15."},{"key":"e_1_3_3_43_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.tra.2016.09.010"},{"key":"e_1_3_3_44_2","series-title":"LNCS","first-page":"97","volume-title":"CAV\u201917","author":"Katz Guy","year":"2017","unstructured":"Guy Katz, Clark Barrett, David L. Dill, Kyle Julian, and Mykel J. Kochenderfer. 
2017. Reluplex: An efficient SMT solver for verifying deep neural networks. In CAV\u201917(LNCS, Vol. 10426). Springer, Cham, 97\u2013117."},{"key":"e_1_3_3_45_2","volume-title":"Arguing Safety: A Systematic Approach to Managing Safety Cases","author":"Kelly Timothy Patrick","year":"1999","unstructured":"Timothy Patrick Kelly. 1999. Arguing Safety: A Systematic Approach to Managing Safety Cases. PhD Thesis. University of York."},{"key":"e_1_3_3_46_2","volume-title":"AISafety\u201921 Workshop at IJCAI\u201921","author":"Kl\u00e4s Michael","year":"2021","unstructured":"Michael Kl\u00e4s, Rasmus Adler, Lisa J\u00f6ckel, Janek Gro\u00df, and Jan Reich. 2021. Using complementary risk acceptance criteria to structure assurance cases for safety-critical AI components. In AISafety\u201921 Workshop at IJCAI\u201921."},{"key":"e_1_3_3_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2015.68"},{"key":"e_1_3_3_48_2","volume-title":"27th Safety-Critical Systems Symp.","author":"Koopman Philip","year":"2019","unstructured":"Philip Koopman, Aaron Kane, and Jen Black. 2019. Credible autonomy safety argumentation. In 27th Safety-Critical Systems Symp. Safety-Critical Systems Club, Bristol, UK."},{"key":"e_1_3_3_49_2","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1201\/9781351251389-8","volume-title":"Artificial Intelligence Safety and Security","author":"Kurakin Alexey","year":"2018","unstructured":"Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2018. Adversarial examples in the physical world. In Artificial Intelligence Safety and Security. Chapman and Hall\/CRC, 99\u2013112."},{"key":"e_1_3_3_50_2","first-page":"65","volume-title":"New Foresight Review on Robotics and Autonomous Systems","author":"Lane David","year":"2016","unstructured":"David Lane, David Bisset, Rob Buckingham, Geoff Pegman, and Tony Prescott. 2016. New Foresight Review on Robotics and Autonomous Systems. Technical Report No. 2016.1. 
Lloyd\u2019s Register Foundation, London, U.K. 65 pages."},{"key":"e_1_3_3_51_2","doi-asserted-by":"publisher","DOI":"10.1200\/CCI.21.00177"},{"key":"e_1_3_3_52_2","doi-asserted-by":"publisher","DOI":"10.1109\/TR.1985.5222114"},{"key":"e_1_3_3_53_2","doi-asserted-by":"publisher","DOI":"10.1145\/3338906.3338930"},{"key":"e_1_3_3_54_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2011.80"},{"key":"e_1_3_3_55_2","doi-asserted-by":"publisher","DOI":"10.1145\/163359.163373"},{"key":"e_1_3_3_56_2","doi-asserted-by":"publisher","DOI":"10.1145\/336512.336551"},{"key":"e_1_3_3_57_2","doi-asserted-by":"publisher","DOI":"10.1111\/risa.13116"},{"key":"e_1_3_3_58_2","volume-title":"International Conference on Learning Representations","author":"Madry Aleksander","year":"2018","unstructured":"Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations."},{"key":"e_1_3_3_59_2","series-title":"LNCS","first-page":"398","volume-title":"SafeComp\u201919","author":"Matsuno Yutaka","year":"2019","unstructured":"Yutaka Matsuno, Fuyuki Ishikawa, and Susumu Tokumoto. 2019. Tackling uncertainty in safety assurance for machine learning: Continuous argument engineering with attributed tests. In SafeComp\u201919(LNCS, Vol. 11699). 
Springer, Cham, 398\u2013404."},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.5555\/1400067.1400071"},{"key":"e_1_3_3_61_2","doi-asserted-by":"publisher","DOI":"10.1109\/32.120314"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.17"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.282"},{"key":"e_1_3_3_64_2","doi-asserted-by":"publisher","DOI":"10.1109\/52.199724"},{"key":"e_1_3_3_65_2","series-title":"LNCS","doi-asserted-by":"crossref","first-page":"165","DOI":"10.1007\/978-3-030-26601-1_12","volume-title":"Computer Safety, Reliability, and Security","author":"Picardi Chiara","year":"2019","unstructured":"Chiara Picardi, Richard Hawkins, Colin Paterson, and Ibrahim Habli. 2019. A pattern for arguing the assurance of machine learning in medical diagnosis systems. In Computer Safety, Reliability, and Security(LNCS, Vol. 11698), Alexander Romanovsky, Elena Troubitsyna, and Friedemann Bitsch (Eds.). Springer, Cham, 165\u2013179."},{"key":"e_1_3_3_66_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ress.2020.107193"},{"key":"e_1_3_3_67_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2016.2626441"},{"key":"e_1_3_3_68_2","volume-title":"AISafety\u201922 Workshop at IJCAI\u201922","author":"Qi Yi","year":"2022","unstructured":"Yi Qi, Philippa Ryan Conmy, Wei Huang, Xingyu Zhao, and Xiaowei Huang. 2022. A hierarchical HAZOP-like safety analysis for learning-enabled systems. In AISafety\u201922 Workshop at IJCAI\u201922."},{"key":"e_1_3_3_69_2","article-title":"YOLOv3: An incremental improvement","author":"Redmon Joseph","year":"2018","unstructured":"Joseph Redmon and Ali Farhadi. 2018. YOLOv3: An incremental improvement. 
arXiv preprint arXiv:1804.02767 (2018).","journal-title":"arXiv preprint arXiv:1804.02767"},{"key":"e_1_3_3_70_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-020-09800-3"},{"key":"e_1_3_3_71_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cosrev.2015.03.001"},{"key":"e_1_3_3_72_2","volume-title":"The Uses of Argument","author":"Toulmin S.","year":"1958","unstructured":"S. Toulmin. 1958. The Uses of Argument. Cambridge University Press."},{"key":"e_1_3_3_73_2","doi-asserted-by":"publisher","DOI":"10.1002\/9781118575574"},{"key":"e_1_3_3_74_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4899-3324-9"},{"key":"e_1_3_3_75_2","doi-asserted-by":"publisher","DOI":"10.1145\/3290354"},{"key":"e_1_3_3_76_2","doi-asserted-by":"publisher","DOI":"10.1145\/2518106"},{"key":"e_1_3_3_77_2","volume-title":"Guidelines for Statistical Testing","author":"Strigini Lorenzo","year":"1997","unstructured":"Lorenzo Strigini and Bev Littlewood. 1997. Guidelines for Statistical Testing. Technical Report. City, University of London. http:\/\/openaccess.city.ac.uk\/254\/."},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-40793-2_10"},{"key":"e_1_3_3_79_2","doi-asserted-by":"publisher","DOI":"10.1016\/0950-4230(95)00041-0"},{"key":"e_1_3_3_80_2","first-page":"39","volume-title":"The Purpose, Scope and Content of Safety Cases","author":"Regulation UK Office for Nuclear","year":"2019","unstructured":"UK Office for Nuclear Regulation. 2019. The Purpose, Scope and Content of Safety Cases. Nuclear Safety Technical Assessment Guide NS-TAST-GD-051. Office for Nuclear Regulation. 39 pages. https:\/\/www.onr.org.uk\/operational\/tech_asst_guides\/ns-tast-gd-051.pdf."},{"key":"e_1_3_3_81_2","doi-asserted-by":"publisher","DOI":"10.1080\/15598608.2009.10411924"},{"key":"e_1_3_3_82_2","first-page":"1735","volume-title":"Proc. of the 37th Conf. 
on Uncertainty in Artificial Intelligence","author":"Wang Benjie","year":"2021","unstructured":"Benjie Wang, Stefan Webb, and Tom Rainforth. 2021. Statistically robust neural network classification. In Proc. of the 37th Conf. on Uncertainty in Artificial Intelligence, Vol. 161. PMLR, 1735\u20131745."},{"key":"e_1_3_3_83_2","volume-title":"7th Int. Conf. Learning Representations (ICLR\u201919)","author":"Webb Stefan","year":"2019","unstructured":"Stefan Webb, Tom Rainforth, Yee Whye Teh, and M. Pawan Kumar. 2019. A statistical approach to assessing neural network robustness. In 7th Int. Conf. Learning Representations (ICLR\u201919). OpenReview.net, New Orleans, LA, USA."},{"key":"e_1_3_3_84_2","first-page":"6727","volume-title":"Int. Conf. on Machine Learning","author":"Weng Lily","year":"2019","unstructured":"Lily Weng, Pin-Yu Chen, Lam Nguyen, Mark Squillante, Akhilan Boopathy, Ivan Oseledets, and Luca Daniel. 2019. PROVEN: Verifying robustness of neural networks with a probabilistic approach. In Int. Conf. on Machine Learning. PMLR, 6727\u20136736."},{"key":"e_1_3_3_85_2","volume-title":"International Conference on Learning Representations (ICLR)","author":"Weng T.-W.","year":"2018","unstructured":"T.-W. Weng, H. Zhang, P.-Y. Chen, J. Yi, D. Su, Y. Gao, C.-J. Hsieh, and L. Daniel. 2018. Evaluating the robustness of neural networks: An extreme value theory approach. In International Conference on Learning Representations (ICLR)."},{"key":"e_1_3_3_86_2","series-title":"NeurIPS\u201920","first-page":"8588","volume-title":"Advances in Neural Information Processing Systems","author":"Yang Yao-Yuan","year":"2020","unstructured":"Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Russ R. Salakhutdinov, and Kamalika Chaudhuri. 2020. A closer look at accuracy vs. robustness. In Advances in Neural Information Processing Systems(NeurIPS\u201920, Vol. 33), H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (Eds.). 
Curran Associates, Inc., 8588\u20138601."},{"key":"e_1_3_3_87_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00255"},{"key":"e_1_3_3_88_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-54549-9_16"},{"key":"e_1_3_3_89_2","doi-asserted-by":"publisher","DOI":"10.1145\/3324884.3416565"},{"key":"e_1_3_3_90_2","volume-title":"AISafety\u201921 Workshop at IJCAI\u201921","author":"Zhao Xingyu","year":"2021","unstructured":"Xingyu Zhao, Wei Huang, Alec Banks, Victoria Cox, David Flynn, Sven Schewe, and Xiaowei Huang. 2021. Assessing the reliability of deep learning classifiers through robustness evaluation and operational profiles. In AISafety\u201921 Workshop at IJCAI\u201921, Vol. 2916."},{"key":"e_1_3_3_91_2","volume-title":"51st Annual IEEE-IFIP Int. Conf. on Dependable Systems and Networks (DSN\u201921)","author":"Zhao Xingyu","year":"2021","unstructured":"Xingyu Zhao, Wei Huang, Sven Schewe, Yi Dong, and Xiaowei Huang. 2021. Detecting operational adversarial examples for reliable deep learning. In 51st Annual IEEE-IFIP Int. Conf. on Dependable Systems and Networks (DSN\u201921), Vol. Fast Abstract."},{"key":"e_1_3_3_92_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ress.2016.09.002"},{"key":"e_1_3_3_93_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v33i01.33018066"},{"key":"e_1_3_3_94_2","first-page":"13","volume-title":"the 30th Int. Symp. on Software Reliability Engineering","author":"Zhao Xingyu","year":"2019","unstructured":"Xingyu Zhao, Valentin Robu, David Flynn, Kizito Salako, and Lorenzo Strigini. 2019. Assessing the safety and reliability of autonomous vehicles from road testing. In the 30th Int. Symp. on Software Reliability Engineering. 
IEEE, Berlin, Germany, 13\u201323."},{"key":"e_1_3_3_95_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.infsof.2020.106393"},{"key":"e_1_3_3_96_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-71500-7_16"}],"container-title":["ACM Transactions on Embedded Computing Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3570918","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3570918","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:12Z","timestamp":1750182552000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3570918"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,4,20]]},"references-count":95,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,5,31]]}},"alternative-id":["10.1145\/3570918"],"URL":"https:\/\/doi.org\/10.1145\/3570918","relation":{},"ISSN":["1539-9087","1558-3465"],"issn-type":[{"value":"1539-9087","type":"print"},{"value":"1558-3465","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,4,20]]},"assertion":[{"value":"2022-01-12","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-10-12","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-04-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}