{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T16:10:44Z","timestamp":1772554244096,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":38,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,17]],"date-time":"2022-10-17T00:00:00Z","timestamp":1665964800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Austrian Research Promotion Agency (FFG)","award":["21055551"],"award-info":[{"award-number":["21055551"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,10,17]]},"DOI":"10.1145\/3511808.3557418","type":"proceedings-article","created":{"date-parts":[[2022,10,16]],"date-time":"2022-10-16T01:22:22Z","timestamp":1665883342000},"page":"1798-1807","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":11,"title":["Perturbation Effect"],"prefix":"10.1145","author":[{"given":"Ilija","family":"\u0160imi\u0107","sequence":"first","affiliation":[{"name":"Know-Center GmbH, Graz, Austria"}]},{"given":"Vedran","family":"Sabol","sequence":"additional","affiliation":[{"name":"Know-Center GmbH, Graz, Austria"}]},{"given":"Eduardo","family":"Veas","sequence":"additional","affiliation":[{"name":"Graz University of Technology, Graz, Austria"}]}],"member":"320","published-online":{"date-parts":[[2022,10,17]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"e_1_3_2_1_2_1","volume-title":"Advances in Neural Information Processing Systems, S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi","author":"Adebayo Julius","year":"2018","unstructured":"Julius Adebayo , Justin Gilmer , Michael Muelly , Ian Goodfellow , Moritz Hardt , and Been Kim . 2018. 
Sanity Checks for Saliency Maps. In Advances in Neural Information Processing Systems, S Bengio, H Wallach, H Larochelle, K Grauman, N Cesa-Bianchi, and R Garnett (Eds.), Vol. 31. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2018\/file\/294a8ed24b1ad22ec2e7efea049b8737-Paper.pdf"},{"key":"e_1_3_2_1_3_1","volume-title":"Proceedings of the 32nd International Conference on Neural Information Processing Systems. 7786--7795","author":"Alvarez-Melis David","year":"2018","unstructured":"David Alvarez-Melis and Tommi S Jaakkola. 2018. Towards robust interpretability with self-explaining neural networks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems. 7786--7795."},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1186\/s12911-020-01332-6"},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"crossref","unstructured":"L. Arras, Gr\u00e9goire Montavon, K. M\u00fcller, and W. Samek. 2017. Explaining Recurrent Neural Network Predictions in Sentiment Analysis. 
In WASSA@EMNLP.","DOI":"10.18653\/v1\/W17-5221"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0130140"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/3447548.3467166"},{"key":"e_1_3_2_1_8_1","volume-title":"International conference on machine learning. PMLR, 115--123","author":"Bergstra James","year":"2013","unstructured":"James Bergstra, Daniel Yamins, and David Cox. 2013. Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures. In International conference on machine learning. PMLR, 115--123."},{"key":"e_1_3_2_1_9_1","volume-title":"International Conference on Machine Learning. PMLR, 883--892","author":"Chen Jianbo","year":"2018","unstructured":"Jianbo Chen, Le Song, Martin Wainwright, and Michael Jordan. 2018. Learning to explain: An information-theoretic perspective on model interpretation. In International Conference on Machine Learning. PMLR, 883--892."},{"key":"e_1_3_2_1_10_1","unstructured":"Yanping Chen, Eamonn Keogh, Bing Hu, Nurjahan Begum, Anthony Bagnall, Abdullah Mueen, and Gustavo Batista. 2015. The UCR Time Series Classification Archive. 
www.cs.ucr.edu\/~eamonn\/time_series_data\/."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.371"},{"key":"e_1_3_2_1_12_1","volume-title":"A survey of methods for explaining black box models. ACM computing surveys (CSUR)","author":"Guidotti Riccardo","year":"2018","unstructured":"Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2018. A survey of methods for explaining black box models. ACM computing surveys (CSUR), Vol. 51, 5 (2018), 1--42."},{"key":"e_1_3_2_1_13_1","volume-title":"Visual analytics in deep learning: An interrogative survey for the next frontiers","author":"Hohman Fred","year":"2018","unstructured":"Fred Hohman, Minsuk Kahng, Robert Pienta, and Duen Horng Chau. 2018. Visual analytics in deep learning: An interrogative survey for the next frontiers. IEEE transactions on visualization and computer graphics, Vol. 25, 8 (2018), 2674--2693."},{"key":"e_1_3_2_1_14_1","volume-title":"Pieter Jan Kindermans, and Been Kim","author":"Hooker Sara","year":"2019","unstructured":"Sara Hooker, Dumitru Erhan, Pieter Jan Kindermans, and Been Kim. 2019. A benchmark for interpretability methods in deep neural networks. In Advances in Neural Information Processing Systems, Vol. 32. 
arxiv: 1806.10758"},{"key":"e_1_3_2_1_15_1","series-title":"Time Series Predictions","volume-title":"Hector Corrada Bravo, and Soheil Feizi","author":"Ismail Aya Abdelsalam","year":"2020","unstructured":"Aya Abdelsalam Ismail, Mohamed Gunady, Hector Corrada Bravo, and Soheil Feizi. 2020. Benchmarking Deep Learning Interpretability in Time Series Predictions. In Advances in Neural Information Processing Systems, H Larochelle, M Ranzato, R Hadsell, M F Balcan, and H Lin (Eds.), Vol. 33. Curran Associates, Inc., 6441--6452. https:\/\/proceedings.neurips.cc\/paper\/2020\/file\/47a3893cc405396a5c30d91320572d6d-Paper.pdf"},{"key":"e_1_3_2_1_16_1","first-page":"00619","volume-title":"Data Mining and Knowledge Discovery","volume":"33","author":"Fawaz Hassan Ismail","year":"2019","unstructured":"Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. 2019. Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, Vol. 33, 4 (01 Jul 2019), 917--963. 
https:\/\/doi.org\/10.1007\/s10618-019-00619-1"},{"key":"e_1_3_2_1_17_1","volume-title":"Data Mining and Knowledge Discovery","volume":"34","author":"Fawaz Hassan Ismail","year":"2020","unstructured":"Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F. Schmidt, Jonathan Weber, Geoffrey I. Webb, Lhassane Idoumghar, Pierre-Alain Muller, and Fran\u00e7ois Petitjean. 2020. InceptionTime: Finding AlexNet for time series classification. Data Mining and Knowledge Discovery, Vol. 34, 6 (01 Nov 2020), 1936--1962. https:\/\/doi.org\/10.1007\/s10618-020-00710-y"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.2466\/11.IT.3.1"},{"key":"e_1_3_2_1_19_1","volume-title":"Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Wojciech Samek, Gr\u00e9goire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert M\u00fcller (Eds.)","author":"Kindermans Pieter-Jan","unstructured":"Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T Sch\u00fctt, Sven D\u00e4hne, Dumitru Erhan, and Been Kim. 2019. The (Un)reliability of Saliency Methods. In Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Wojciech Samek, Gr\u00e9goire Montavon, Andrea Vedaldi, Lars Kai Hansen, and Klaus-Robert M\u00fcller (Eds.). Springer International Publishing, Cham, 267--280. 
https:\/\/doi.org\/10.1007\/978-3-030-28954-6_14"},{"key":"e_1_3_2_1_20_1","volume-title":"International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=Hkn7CBaTW","author":"Kindermans Pieter-Jan","year":"2018","unstructured":"Pieter-Jan Kindermans, Kristof T Sch\u00fctt, Maximilian Alber, Klaus-Robert M\u00fcller, Dumitru Erhan, Been Kim, and Sven D\u00e4hne. 2018. Learning how to explain neural networks: PatternNet and PatternAttribution. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=Hkn7CBaTW"},{"key":"e_1_3_2_1_21_1","volume-title":"Captum: A unified and generic model interpretability library for PyTorch. arxiv","author":"Kokhlikyan Narine","year":"2020","unstructured":"Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. 2020. Captum: A unified and generic model interpretability library for PyTorch. 
arxiv: 2009.07896 [cs.LG]"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2949286"},{"key":"e_1_3_2_1_23_1","first-page":"4765","article-title":"A Unified Approach to Interpreting Model Predictions","volume":"30","author":"Lundberg Scott M","year":"2017","unstructured":"Scott M Lundberg and Su-In Lee. 2017. A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems, Vol. 30 (2017), 4765--4774.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1038\/s42256-019-0048-x"},{"key":"e_1_3_2_1_26_1","volume-title":"Evaluating the visualization of what a deep neural network has learned","author":"Samek Wojciech","year":"2016","unstructured":"Wojciech Samek, Alexander Binder, Gr\u00e9goire Montavon, Sebastian Lapuschkin, and Klaus-Robert M\u00fcller. 2016. Evaluating the visualization of what a deep neural network has learned. IEEE transactions on neural networks and learning systems, Vol. 
28, 11 (2016), 2660--2673."},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2021.3060483"},{"key":"e_1_3_2_1_28_1","volume-title":"Towards A Rigorous Evaluation Of XAI Methods On Time Series. 2019 IEEE\/CVF International Conference on Computer Vision Workshop (ICCVW)","author":"Schlegel Udo","year":"2019","unstructured":"Udo Schlegel, Hiba Arnout, Mennatallah El-Assady, D Oelke, and D Keim. 2019. Towards A Rigorous Evaluation Of XAI Methods On Time Series. 2019 IEEE\/CVF International Conference on Computer Vision Workshop (ICCVW) (2019), 4197--4201."},{"key":"e_1_3_2_1_29_1","volume-title":"Restricting the Flow: Information Bottlenecks for Attribution. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=S1xWh1rYwB","author":"Schulz Karl","year":"2020","unstructured":"Karl Schulz, Leon Sixt, Federico Tombari, and Tim Landgraf. 2020. Restricting the Flow: Information Bottlenecks for Attribution. In International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=S1xWh1rYwB"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.74"},{"key":"e_1_3_2_1_31_1","volume-title":"International Conference on Machine Learning. PMLR, 3145--3153","author":"Shrikumar Avanti","year":"2017","unstructured":"Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning. 
PMLR, 3145--3153."},{"key":"e_1_3_2_1_32_1","volume-title":"Not Just a Black Box: Learning Important Features Through Propagating Activation Differences. CoRR","author":"Shrikumar Avanti","year":"2016","unstructured":"Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. 2016. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences. CoRR, Vol. abs\/1605.01713 (2016). arxiv: 1605.01713 http:\/\/arxiv.org\/abs\/1605.01713"},{"key":"e_1_3_2_1_33_1","volume-title":"Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. CoRR","author":"Simonyan K","year":"2014","unstructured":"K Simonyan, A Vedaldi, and Andrew Zisserman. 2014. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. CoRR, Vol. abs\/1312.6 (2014)."},{"key":"e_1_3_2_1_34_1","volume-title":"Feature Importance Explanations for Temporal Black-Box Models. arXiv preprint arXiv:2102.11934","author":"Sood Akshay","year":"2021","unstructured":"Akshay Sood and Mark Craven. 2021. Feature Importance Explanations for Temporal Black-Box Models. 
arXiv preprint arXiv:2102.11934 (2021)."},{"key":"e_1_3_2_1_35_1","volume-title":"International Conference on Machine Learning. PMLR, 3319--3328","author":"Sundararajan Mukund","year":"2017","unstructured":"Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning. PMLR, 3319--3328."},{"key":"e_1_3_2_1_36_1","volume-title":"Advances in Neural Information Processing Systems","volume":"33","author":"Tonekaboni Sana","year":"2020","unstructured":"Sana Tonekaboni, Shalmali Joshi, Kieran Campbell, David K Duvenaud, and Anna Goldenberg. 2020. What went wrong and when? Instance-wise feature importance for time-series black-box models. Advances in Neural Information Processing Systems, Vol. 33 (2020)."},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10590-1_53"},{"key":"e_1_3_2_1_38_1","unstructured":"Ilija \u0160imi\u0107, Vedran Sabol, and Eduardo Veas. 2021. XAI Methods for Neural Time Series Classification: A Brief Review. 
arxiv: 2108.08009 [cs.LG]"}],"event":{"name":"CIKM '22: The 31st ACM International Conference on Information and Knowledge Management","location":"Atlanta GA USA","acronym":"CIKM '22","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGIR ACM Special Interest Group on Information Retrieval"]},"container-title":["Proceedings of the 31st ACM International Conference on Information &amp; Knowledge Management"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3511808.3557418","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3511808.3557418","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:48:55Z","timestamp":1750182535000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3511808.3557418"}},"subtitle":["A Metric to Counter Misleading Validation of Feature Attribution"],"short-title":[],"issued":{"date-parts":[[2022,10,17]]},"references-count":38,"alternative-id":["10.1145\/3511808.3557418","10.1145\/3511808"],"URL":"https:\/\/doi.org\/10.1145\/3511808.3557418","relation":{},"subject":[],"published":{"date-parts":[[2022,10,17]]},"assertion":[{"value":"2022-10-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}