{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,17]],"date-time":"2025-09-17T03:16:31Z","timestamp":1758078991728,"version":"3.44.0"},"reference-count":9,"publisher":"Association for Computing Machinery (ACM)","issue":"12","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2025,8]]},"abstract":"<jats:p><jats:italic toggle=\"yes\">Learned Cost Models<\/jats:italic> (LCMs) have shown superior results over traditional database cost models as they can significantly improve the accuracy of cost predictions. However, LCMs still fail for some query plans, as prediction errors can be large in the tail. Unfortunately, recent LCMs are based on complex deep neural models, and thus, there is no easy way to understand where this accuracy drop is rooted, which critically prevents systematic troubleshooting. In this demo paper, we present the very first approach for opening the black box by bringing AI explainability approaches to LCMs. As a core contribution, we developed new explanation techniques that extend existing methods that are available for the general explainability of AI models and adapt them significantly to be usable for LCMs. In our demo, we provide an interactive tool to showcase how explainability for LCMs works. 
We believe this is a first step for making LCMs debuggable and thus paving the road for new approaches for systematically fixing problems in LCMs.\n          <\/jats:p>","DOI":"10.14778\/3750601.3750645","type":"journal-article","created":{"date-parts":[[2025,9,16]],"date-time":"2025-09-16T13:38:05Z","timestamp":1758029885000},"page":"5255-5258","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Opening the Black-Box: Explaining Learned Cost Models for Databases"],"prefix":"10.14778","volume":"18","author":[{"given":"Roman","family":"Heinrich","sequence":"first","affiliation":[{"name":"TU Darmstadt &amp; DFKI"}]},{"given":"Oleksandr","family":"Havrylov","sequence":"additional","affiliation":[{"name":"TU Darmstadt"}]},{"given":"Manisha","family":"Luthra","sequence":"additional","affiliation":[{"name":"TU Darmstadt &amp; DFKI"}]},{"given":"Johannes","family":"Wehrstein","sequence":"additional","affiliation":[{"name":"TU Darmstadt"}]},{"given":"Carsten","family":"Binnig","sequence":"additional","affiliation":[{"name":"TU Darmstadt &amp; DFKI"}]}],"member":"320","published-online":{"date-parts":[[2025,9,16]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"How to Explain Individual Classification Decisions. J. Mach. Learn. Res. 11","author":"Baehrens David","year":"2010","unstructured":"David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, and Klaus-Robert M\u00fcller. 2010. How to Explain Individual Classification Decisions. J. Mach. Learn. Res. 11 (2010)."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3725309"},{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.14778\/3551793.3551799"},{"key":"e_1_2_1_4_1","volume-title":"A Survey on Explainability of Graph Neural Networks","author":"Kakkad Jaykumar","year":"2023","unstructured":"Jaykumar Kakkad, Jaspal Jannu, Kartik Sharma, Charu Aggarwal, and Sourav Medya. 2023. 
A Survey on Explainability of Graph Neural Networks. IEEE Data Eng. Bull. 46, 2 (2023)."},{"key":"e_1_2_1_5_1","volume-title":"Learned Cardinalities: Estimating Correlated Joins with Deep Learning. In CIDR","author":"Kipf Andreas","year":"2019","unstructured":"Andreas Kipf, Thomas Kipf, Bernhard Radke, Viktor Leis, Peter A. Boncz, and Alfons Kemper. 2019. Learned Cardinalities: Estimating Correlated Joins with Deep Learning. In CIDR 2019."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.14778\/3342263.3342646"},{"key":"e_1_2_1_7_1","volume-title":"Striving for Simplicity: The All Convolutional Net. In ICLR","author":"Springenberg Jost Tobias","year":"2015","unstructured":"Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. 2015. Striving for Simplicity: The All Convolutional Net. In ICLR 2015."},{"key":"e_1_2_1_8_1","volume-title":"GNNExplainer: Generating Explanations for Graph Neural Networks. In NeurIPS","author":"Ying Zhitao","year":"2019","unstructured":"Zhitao Ying, Dylan Bourgeois, Jiaxuan You, Marinka Zitnik, and Jure Leskovec. 2019. GNNExplainer: Generating Explanations for Graph Neural Networks. In NeurIPS 2019."},{"key":"e_1_2_1_9_1","article-title":"Explainability in Graph Neural Networks: A Taxonomic Survey","volume":"45","author":"Yuan Hao","year":"2023","unstructured":"Hao Yuan, Haiyang Yu, Shurui Gui, and Shuiwang Ji. 2023. Explainability in Graph Neural Networks: A Taxonomic Survey. IEEE Trans. Pattern Anal. Mach. Intell. 45, 5 (2023).","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3750601.3750645","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,16]],"date-time":"2025-09-16T13:38:53Z","timestamp":1758029933000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3750601.3750645"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8]]},"references-count":9,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2025,8]]}},"alternative-id":["10.14778\/3750601.3750645"],"URL":"https:\/\/doi.org\/10.14778\/3750601.3750645","relation":{},"ISSN":["2150-8097"],"issn-type":[{"value":"2150-8097","type":"print"}],"subject":[],"published":{"date-parts":[[2025,8]]},"assertion":[{"value":"2025-09-16","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}