{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,3]],"date-time":"2025-12-03T18:07:15Z","timestamp":1764785235278,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":33,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,12,5]],"date-time":"2023-12-05T00:00:00Z","timestamp":1701734400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,12,8]]},"DOI":"10.1145\/3630050.3630177","type":"proceedings-article","created":{"date-parts":[[2023,11,30]],"date-time":"2023-11-30T00:29:45Z","timestamp":1701304185000},"page":"9-15","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Explainability-based Metrics to Help Cyber Operators Find and Correct Misclassified Cyberattacks"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8364-2989","authenticated-orcid":false,"given":"Robin","family":"Duraz","sequence":"first","affiliation":[{"name":"Chaire of Naval Cyberdefense, Lab-STICC, Brest, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3445-947X","authenticated-orcid":false,"given":"David","family":"Espes","sequence":"additional","affiliation":[{"name":"UBO, Lab-STICC, Brest, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4604-4522","authenticated-orcid":false,"given":"Julien","family":"Francq","sequence":"additional","affiliation":[{"name":"Naval Group, Naval Cyber Laboratory, Ollioules, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8940-6004","authenticated-orcid":false,"given":"Sandrine","family":"Vaton","sequence":"additional","affiliation":[{"name":"IMT Atlantique, Lab-STICC, Brest, 
France"}]}],"member":"320","published-online":{"date-parts":[[2023,12,5]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Creating an Explainable Intrusion Detection System Using Self Organizing Maps. CoRR","author":"Ables Jesse","year":"2022","unstructured":"Jesse Ables, Thomas Kirby, William Anderson, Sudip Mittal, Shahram Rahimi, Ioana Banicescu, and Maria Seale. 2022. Creating an Explainable Intrusion Detection System Using Self Organizing Maps. CoRR (2022)."},{"key":"e_1_3_2_1_2_1","volume-title":"Sanity Checks for Saliency Maps. CoRR","author":"Adebayo Julius","year":"2018","unstructured":"Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, and Been Kim. 2018. Sanity Checks for Saliency Maps. CoRR (2018)."},{"key":"e_1_3_2_1_3_1","volume-title":"Debugging Tests for Model Explanations. CoRR","author":"Adebayo Julius","year":"2020","unstructured":"Julius Adebayo, Michael Muelly, Ilaria Liccardi, and Been Kim. 2020. Debugging Tests for Model Explanations. CoRR (2020)."},{"volume-title":"WADI. In Proceedings of the 3rd International Workshop on Cyber-Physical Systems for Smart Water Networks.","author":"Ahmed Chuadhry Mujeeb","key":"e_1_3_2_1_4_1","unstructured":"Chuadhry Mujeeb Ahmed, Venkata Reddy Palleti, and Aditya P. Mathur. 2017. WADI. 
In Proceedings of the 3rd International Workshop on Cyber-Physical Systems for Smart Water Networks."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v35i8.16819"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2022.117144"},{"key":"e_1_3_2_1_7_1","volume-title":"Evaluating and Aggregating Feature-Based Model Explanations. CoRR","author":"Bhatt Umang","year":"2020","unstructured":"Umang Bhatt, Adrian Weller, and Jos\u00e9 M. F. Moura. 2020. Evaluating and Aggregating Feature-Based Model Explanations. CoRR (2020)."},{"key":"e_1_3_2_1_8_1","volume-title":"Explanations Can Be Manipulated and Geometry Is To Blame. CoRR","author":"Dombrowski Ann-Kathrin","year":"2019","unstructured":"Ann-Kathrin Dombrowski, Maximilian Alber, Christopher J. Anders, Marcel Ackermann, Klaus-Robert M\u00fcller, and Pan Kessel. 2019. Explanations Can Be Manipulated and Geometry Is To Blame. CoRR (2019)."},{"key":"e_1_3_2_1_9_1","volume-title":"Towards a Rigorous Science of Interpretable Machine Learning. CoRR","author":"Doshi-Velez Finale","year":"2017","unstructured":"Finale Doshi-Velez and Been Kim. 2017. Towards a Rigorous Science of Interpretable Machine Learning. CoRR (2017)."},{"key":"e_1_3_2_1_10_1","volume-title":"Quantus: an Explainable Ai Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. CoRR","author":"Hedstr\u00f6m Anna","year":"2022","unstructured":"Anna Hedstr\u00f6m, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, and Marina M. C. H\u00f6hne. 2022. Quantus: an Explainable Ai Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond. CoRR (2022)."},{"key":"e_1_3_2_1_11_1","volume-title":"Metrics for Explainable Ai: Challenges and Prospects. CoRR","author":"Hoffman Robert R.","year":"2018","unstructured":"Robert R. Hoffman, Shane T. Mueller, Gary Klein, and Jordan Litman. 2018. Metrics for Explainable Ai: Challenges and Prospects. CoRR (2018)."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/OJCOMS.2022.3188750"},{"key":"e_1_3_2_1_13_1","volume-title":"Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI). CoRR","author":"Hsiao Janet","year":"2021","unstructured":"Janet Hui-wen Hsiao, Hilary Hei Ting Ngai, Luyu Qiu, Yi Yang, and Caleb Chen Cao. 2021. Roadmap of Designing Cognitive Metrics for Explainable Artificial Intelligence (XAI). CoRR (2021)."},{"key":"e_1_3_2_1_14_1","volume-title":"The (Un)reliability of Saliency Methods. CoRR","author":"Kindermans Pieter-Jan","year":"2017","unstructured":"Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Sch\u00fctt, Sven D\u00e4hne, Dumitru Erhan, and Been Kim. 2017. The (Un)reliability of Saliency Methods. 
CoRR (2017)."},{"key":"e_1_3_2_1_15_1","unstructured":"Ding Li, Yan Liu, Jun Huang, and Zerui Wang. 2022. A Trustworthy View on XAI Method Evaluation."},{"key":"e_1_3_2_1_16_1","volume-title":"Do Explanations Reflect Decisions? a Machine-Centric Strategy To Quantify the Performance of Explainability Algorithms. CoRR","author":"Lin Zhong Qiu","year":"2019","unstructured":"Zhong Qiu Lin, Mohammad Javad Shafiee, Stanislav Bochkarev, Michael St. Jules, Xiao Yu Wang, and Alexander Wong. 2019. Do Explanations Reflect Decisions? a Machine-Centric Strategy To Quantify the Performance of Explainability Algorithms. CoRR (2019)."},{"key":"e_1_3_2_1_17_1","volume-title":"A Unified Approach To Interpreting Model Predictions. CoRR","author":"Lundberg Scott","year":"2017","unstructured":"Scott Lundberg and Su-In Lee. 2017. A Unified Approach To Interpreting Model Predictions. CoRR (2017)."},{"key":"e_1_3_2_1_18_1","volume-title":"Explaining Network Intrusion Detection System Using Explainable Ai Framework. CoRR","author":"Mane Shraddha","year":"2021","unstructured":"Shraddha Mane and Dattaraj Rao. 2021. Explaining Network Intrusion Detection System Using Explainable Ai Framework. CoRR (2021)."},{"key":"e_1_3_2_1_19_1","volume-title":"Explanation in Artificial Intelligence: Insights From the Social Sciences. CoRR","author":"Miller Tim","year":"2017","unstructured":"Tim Miller. 2017. Explanation in Artificial Intelligence: Insights From the Social Sciences. CoRR (2017)."},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/MilCIS.2015.7348942"},{"key":"e_1_3_2_1_21_1","volume-title":"From Anecdotal Evidence To Quantitative Evaluation Methods: a Systematic Review on Evaluating Explainable Ai. CoRR","author":"Nauta Meike","year":"2022","unstructured":"Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, J\u00f6rg Schl\u00f6tterer, Maurice van Keulen, and Christin Seifert. 2022. From Anecdotal Evidence To Quantitative Evaluation Methods: a Systematic Review on Evaluating Explainable Ai. CoRR (2022)."},{"key":"e_1_3_2_1_22_1","volume-title":"Explainable Intrusion Detection Systems (X-IDS): a Survey of Current Methods, Challenges, and Opportunities. CoRR","author":"Neupane Subash","year":"2022","unstructured":"Subash Neupane, Jesse Ables, William Anderson, Sudip Mittal, Shahram Rahimi, Ioana Banicescu, and Maria Seale. 2022. Explainable Intrusion Detection Systems (X-IDS): a Survey of Current Methods, Challenges, and Opportunities. CoRR (2022)."},{"key":"e_1_3_2_1_23_1","volume-title":"Finding and Fixing Spurious Patterns With Explanations. CoRR","author":"Plumb Gregory","year":"2021","unstructured":"
Gregory Plumb, Marco Tulio Ribeiro, and Ameet Talwalkar. 2021. Finding and Fixing Spurious Patterns With Explanations. CoRR (2021)."},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_1_25_1","volume-title":"Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. CoRR","author":"Rudin Cynthia","year":"2018","unstructured":"Cynthia Rudin. 2018. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. CoRR (2018)."},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3514094.3534128"},{"volume-title":"Proceedings of the 4th International Conference on Information Systems Security and Privacy","author":"Sharafaldin Iman","key":"e_1_3_2_1_27_1","unstructured":"Iman Sharafaldin, Arash Habibi Lashkari, and Ali A. Ghorbani. 2018. Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization. In Proceedings of the 4th International Conference on Information Systems Security and Privacy. 108--116."},{"key":"e_1_3_2_1_28_1","volume-title":"2022 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE)","author":"Siddiqui Kashif","year":"2022","unstructured":"Kashif Siddiqui and Thomas E. Doyle. 2022. 
Trust Metrics for Medical Deep Learning Using Explainable-AI Ensemble for Time Series Classification. In 2022 IEEE Canadian Conference on Electrical and Computer Engineering (CCECE). 370--377."},{"key":"e_1_3_2_1_29_1","volume-title":"An Experimental Investigation Into the Evaluation of Explainability Methods. CoRR","author":"Stassin S\u00e9drick","year":"2023","unstructured":"S\u00e9drick Stassin, Alexandre Englebert, G\u00e9raldin Nanfack, Julien Albert, Nassim Versbraegen, Gilles Peiffer, Miriam Doh, Nicolas Riche, Beno\u00eet Frenay, and Christophe De Vleeschouwer. 2023. An Experimental Investigation Into the Evaluation of Explainability Methods. CoRR (2023)."},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN48605.2020.9207199"},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"crossref","unstructured":"Syed Wali and Irfan Khan. 2021. 
Explainable AI and Random Forest Based Reliable Intrusion Detection system.","DOI":"10.36227\/techrxiv.17169080"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.2988359"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2022.3204051"}],"event":{"name":"CoNEXT 2023: The 19th International Conference on emerging Networking EXperiments and Technologies","sponsor":["SIGCOMM ACM Special Interest Group on Data Communication"],"location":"Paris France","acronym":"CoNEXT 2023"},"container-title":["Proceedings of the 2023 on Explainable and Safety Bounded, Fidelitous, Machine Learning for Networking"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3630050.3630177","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3630050.3630177","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T22:50:56Z","timestamp":1750287056000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3630050.3630177"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,12,5]]},"references-count":33,"alternative-id":["10.1145\/3630050.3630177","10.1145\/3630050"],"URL":"https:\/\/doi.org\/10.1145\/3630050.3630177","relation":{},"subject":[],"published":{"date-parts":[[2023,12,5]]},"assertion":[{"value":"2023-12-05","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}