{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T17:56:53Z","timestamp":1776103013838,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":25,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,3,17]],"date-time":"2020-03-17T00:00:00Z","timestamp":1584403200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"DARPA D3M program"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,3,17]]},"DOI":"10.1145\/3377325.3377536","type":"proceedings-article","created":{"date-parts":[[2020,3,4]],"date-time":"2020-03-04T23:14:49Z","timestamp":1583363689000},"page":"531-535","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":80,"title":["ViCE"],"prefix":"10.1145","author":[{"given":"Oscar","family":"Gomez","sequence":"first","affiliation":[{"name":"New York University Abu Dhabi"}]},{"given":"Steffen","family":"Holter","sequence":"additional","affiliation":[{"name":"New York University Abu Dhabi"}]},{"given":"Jun","family":"Yuan","sequence":"additional","affiliation":[{"name":"NYU"}]},{"given":"Enrico","family":"Bertini","sequence":"additional","affiliation":[{"name":"NYU"}]}],"member":"320","published-online":{"date-parts":[[2020,3,17]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2018.2870052"},{"key":"e_1_3_2_1_2_1","volume-title":"IJCAI-17 workshop on explainable AI (XAI)","volume":"8","author":"Biran Or","year":"2017","unstructured":"Or Biran and Courtenay Cotton . 2017 . Explanation and justification in machine learning: A survey . In IJCAI-17 workshop on explainable AI (XAI) , Vol. 8 . Or Biran and Courtenay Cotton. 2017. 
Explanation and justification in machine learning: A survey. In IJCAI-17 workshop on explainable AI (XAI), Vol. 8."},{"key":"e_1_3_2_1_3_1","first-page":"832","article-title":"Machine Learning Interpretability","volume":"8","author":"Carvalho Diogo V","year":"2019","unstructured":"Diogo V Carvalho , Eduardo M Pereira , and Jaime S Cardoso . 2019 . Machine Learning Interpretability : A Survey on Methods and Metrics. Electronics 8 , 8 (2019), 832 . Diogo V Carvalho, Eduardo M Pereira, and Jaime S Cardoso. 2019. Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics 8, 8 (2019), 832.","journal-title":"A Survey on Methods and Metrics. Electronics"},{"key":"e_1_3_2_1_4_1","volume-title":"Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608","author":"Doshi-Velez Finale","year":"2017","unstructured":"Finale Doshi-Velez and Been Kim . 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 ( 2017 ). Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)."},{"key":"e_1_3_2_1_5_1","unstructured":"FICO. 2018. Explainable Machine Learning Challenge. https:\/\/community.fico.com\/s\/explainable-machine-learning-challenge?tabset-3158a=2.  FICO. 2018. Explainable Machine Learning Challenge. https:\/\/community.fico.com\/s\/explainable-machine-learning-challenge?tabset-3158a=2."},{"key":"e_1_3_2_1_6_1","volume-title":"A survey of methods for explaining black box models. ACM computing surveys (CSUR) 51, 5","author":"Guidotti Riccardo","year":"2019","unstructured":"Riccardo Guidotti , Anna Monreale , Salvatore Ruggieri , Franco Turini , Fosca Giannotti , and Dino Pedreschi . 2019. A survey of methods for explaining black box models. ACM computing surveys (CSUR) 51, 5 ( 2019 ), 93. Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. 2019. 
A survey of methods for explaining black box models. ACM computing surveys (CSUR) 51, 5 (2019), 93."},{"key":"e_1_3_2_1_7_1","first-page":"262","article-title":"Using the ADAP learning algorithm to forecast the onset of diabetes mellitus","volume":"10","author":"Johannes R S","year":"1988","unstructured":"R S Johannes . 1988 . Using the ADAP learning algorithm to forecast the onset of diabetes mellitus . Johns Hopkins APL Technical Digest 10 (1988), 262 -- 266 . R S Johannes. 1988. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. Johns Hopkins APL Technical Digest 10 (1988), 262--266.","journal-title":"Johns Hopkins APL Technical Digest"},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/VAST.2017.8585720"},{"key":"e_1_3_2_1_9_1","volume-title":"Inverse Classification for Comparison-based Interpretability in Machine Learning. arXiv preprint arXiv:1712.08443","author":"Laugel Thibault","year":"2017","unstructured":"Thibault Laugel , Marie-Jeanne Lesot , Christophe Marsala , Xavier Renard , and Marcin Detyniecki . 2017. Inverse Classification for Comparison-based Interpretability in Machine Learning. arXiv preprint arXiv:1712.08443 ( 2017 ). Thibault Laugel, Marie-Jeanne Lesot, Christophe Marsala, Xavier Renard, and Marcin Detyniecki. 2017. Inverse Classification for Comparison-based Interpretability in Machine Learning. arXiv preprint arXiv:1712.08443 (2017)."},{"key":"e_1_3_2_1_10_1","volume-title":"The mythos of model interpretability. arXiv preprint arXiv:1606.03490","author":"Lipton Zachary C","year":"2016","unstructured":"Zachary C Lipton . 2016. The mythos of model interpretability. arXiv preprint arXiv:1606.03490 ( 2016 ). Zachary C Lipton. 2016. The mythos of model interpretability. arXiv preprint arXiv:1606.03490 (2016)."},{"key":"e_1_3_2_1_11_1","unstructured":"Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems. 
4765--4774.  Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems. 4765--4774."},{"key":"e_1_3_2_1_12_1","unstructured":"David Martens and Foster Provost. 2013. Explaining data-driven document classifications. (2013).  David Martens and Foster Provost. 2013. Explaining data-driven document classifications. (2013)."},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.artint.2018.07.007"},{"key":"e_1_3_2_1_14_1","volume-title":"RuleMatrix: Visualizing and Understanding Classifiers with Rules","author":"Ming Yao","year":"2019","unstructured":"Yao Ming , Huamin Qu , and Enrico Bertini . 2019. RuleMatrix: Visualizing and Understanding Classifiers with Rules . IEEE transactions on visualization and computer graphics 25, 1 ( 2019 ), 342--352. Yao Ming, Huamin Qu, and Enrico Bertini. 2019. RuleMatrix: Visualizing and Understanding Classifiers with Rules. IEEE transactions on visualization and computer graphics 25, 1 (2019), 342--352."},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"crossref","unstructured":"Christoph Molnar. 2019. Interpretable Machine Learning. https:\/\/christophm.github.io\/interpretable-ml-book\/.  Christoph Molnar. 2019. Interpretable Machine Learning. https:\/\/christophm.github.io\/interpretable-ml-book\/.","DOI":"10.21105\/joss.00786"},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3351095.3372850"},{"key":"e_1_3_2_1_17_1","volume-title":"Jennifer Wortman Vaughan, and Hanna Wallach","author":"Poursabzi-Sangdeh Forough","year":"2018","unstructured":"Forough Poursabzi-Sangdeh , Daniel G Goldstein , Jake M Hofman , Jennifer Wortman Vaughan, and Hanna Wallach . 2018 . Manipulating and measuring model interpretability. arXiv preprint arXiv:1802.07810 (2018). Forough Poursabzi-Sangdeh, Daniel G Goldstein, Jake M Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. 2018. Manipulating and measuring model interpretability. 
arXiv preprint arXiv:1802.07810 (2018)."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/1518701.1518895"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3077257.3077260"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3287560.3287566"},{"key":"e_1_3_2_1_22_1","first-page":"841","article-title":"Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR","volume":"31","author":"Wachter Sandra","year":"2017","unstructured":"Sandra Wachter , Brent Mittelstadt , and Chris Russell . 2017 . Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR . Harv. JL & Tech. 31 (2017), 841 . Sandra Wachter, Brent Mittelstadt, and Chris Russell. 2017. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harv. JL & Tech. 31 (2017), 841.","journal-title":"Harv. JL & Tech."},{"key":"e_1_3_2_1_23_1","volume-title":"The What-If Tool: Interactive Probing of Machine Learning Models","author":"Wexler James","year":"2019","unstructured":"James Wexler , Mahima Pushkarna , Tolga Bolukbasi , Martin Wattenberg , Fernanda Vi\u00e9gas , and Jimbo Wilson . 2019. The What-If Tool: Interactive Probing of Machine Learning Models . IEEE transactions on visualization and computer graphics ( 2019 ). James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Vi\u00e9gas, and Jimbo Wilson. 2019. The What-If Tool: Interactive Probing of Machine Learning Models. IEEE transactions on visualization and computer graphics (2019)."},{"key":"e_1_3_2_1_24_1","volume-title":"Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models","author":"Zhang Jiawei","year":"2018","unstructured":"Jiawei Zhang , Yang Wang , Piero Molino , Lezhi Li , and David S Ebert . 2018 . 
Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models . IEEE transactions on visualization and computer graphics 25, 1 (2018), 364--373. Jiawei Zhang, Yang Wang, Piero Molino, Lezhi Li, and David S Ebert. 2018. Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models. IEEE transactions on visualization and computer graphics 25, 1 (2018), 364--373."},{"key":"e_1_3_2_1_25_1","volume-title":"Dik Lun Lee, and Weiwei Cui","author":"Zhao Xun","year":"2019","unstructured":"Xun Zhao , Yanhong Wu , Dik Lun Lee, and Weiwei Cui . 2019 . iForest: Interpreting Random Forests via Visual Analytics. IEEE transactions on visualization and computer graphics 25, 1 (2019), 407--416. Xun Zhao, Yanhong Wu, Dik Lun Lee, and Weiwei Cui. 2019. iForest: Interpreting Random Forests via Visual Analytics. IEEE transactions on visualization and computer graphics 25, 1 (2019), 407--416."}],"event":{"name":"IUI '20: 25th International Conference on Intelligent User Interfaces","location":"Cagliari Italy","acronym":"IUI '20","sponsor":["SIGAI ACM Special Interest Group on Artificial Intelligence","SIGCHI ACM Special Interest Group on Computer-Human Interaction"]},"container-title":["Proceedings of the 25th International Conference on Intelligent User Interfaces"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3377325.3377536","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3377325.3377536","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T22:33:17Z","timestamp":1750199597000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3377325.3377536"}},"subtitle":["visual counterfactual explanations for machine learning 
models"],"short-title":[],"issued":{"date-parts":[[2020,3,17]]},"references-count":25,"alternative-id":["10.1145\/3377325.3377536","10.1145\/3377325"],"URL":"https:\/\/doi.org\/10.1145\/3377325.3377536","relation":{},"subject":[],"published":{"date-parts":[[2020,3,17]]},"assertion":[{"value":"2020-03-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}