{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,17]],"date-time":"2025-11-17T21:40:52Z","timestamp":1763415652713,"version":"3.37.3"},"reference-count":71,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,3,11]],"date-time":"2023-03-11T00:00:00Z","timestamp":1678492800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,3,11]],"date-time":"2023-03-11T00:00:00Z","timestamp":1678492800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100005416","name":"Norges Forskningsr\u00e5d","doi-asserted-by":"publisher","award":["309439","315029","303514"],"award-info":[{"award-number":["309439","315029","303514"]}],"id":[{"id":"10.13039\/501100005416","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2023,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Despite the significant improvements that self-supervised representation learning has led to when learning from unlabeled data, no methods have been developed that explain what influences the learned representation. We address this need through our proposed approach, RELAX, which is the first approach for attribution-based explanations of representations. Our approach can also model the uncertainty in its explanations, which is essential to produce trustworthy explanations. RELAX explains representations by measuring similarities in the representation space between an input and masked out versions of itself, providing intuitive explanations that significantly outperform the gradient-based baselines. 
We provide theoretical interpretations of RELAX and conduct a novel analysis of feature extractors trained using supervised and unsupervised learning, providing insights into different learning strategies. Moreover, we conduct a user study to assess how well the proposed approach aligns with human intuition and show that the proposed method outperforms the baselines in both the quantitative and human evaluation studies. Finally, we illustrate the usability of RELAX in several use cases and highlight that incorporating uncertainty can be essential for providing faithful explanations, taking a crucial step towards explaining representations.<\/jats:p>","DOI":"10.1007\/s11263-023-01773-2","type":"journal-article","created":{"date-parts":[[2023,3,11]],"date-time":"2023-03-11T07:02:55Z","timestamp":1678518175000},"page":"1584-1610","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["RELAX: Representation Learning Explainability"],"prefix":"10.1007","volume":"131","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1395-7154","authenticated-orcid":false,"given":"Kristoffer K.","family":"Wickstr\u00f8m","sequence":"first","affiliation":[]},{"given":"Daniel J.","family":"Trosten","sequence":"additional","affiliation":[]},{"given":"Sigurd","family":"L\u00f8kse","sequence":"additional","affiliation":[]},{"given":"Ahc\u00e8ne","family":"Boubekki","sequence":"additional","affiliation":[]},{"given":"Karl \u00d8yvind","family":"Mikalsen","sequence":"additional","affiliation":[]},{"given":"Michael C.","family":"Kampffmeyer","sequence":"additional","affiliation":[]},{"given":"Robert","family":"Jenssen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,3,11]]},"reference":[{"key":"1773_CR1","unstructured":"Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M., & Kim, B. (2018). Sanity checks for saliency maps.
In Advances in neural information processing systems. Curran Associates, Inc."},{"key":"1773_CR2","unstructured":"Alvarez-Melis, D., & Jaakkola, T. S. (2018). Towards robust interpretability with self-explaining neural networks. In Proceedings of the 32nd international conference on neural information processing systems (pp. 7786\u20137795). Curran Associates Inc., Red Hook, NY, USA, NIPS\u201918."},{"key":"1773_CR3","unstructured":"Antoran, J., Bhatt, U., Adel, T., Weller, A., & Hernandez-Lobato, J. M. (2021). Getting a clue: A method for explaining uncertainty estimates. In International conference on learning representations."},{"key":"1773_CR4","doi-asserted-by":"publisher","first-page":"14","DOI":"10.1016\/j.inffus.2021.11.008","volume":"81","author":"L Arras","year":"2022","unstructured":"Arras, L., Osman, A., & Samek, W. (2022). Clevr-xai: A benchmark dataset for the ground truth evaluation of neural network explanations. Information Fusion, 81, 14\u201340. https:\/\/doi.org\/10.1016\/j.inffus.2021.11.008","journal-title":"Information Fusion"},{"issue":"7","key":"1773_CR5","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0130140","volume":"10","author":"S Bach","year":"2015","unstructured":"Bach, S., Binder, A., Montavon, G., Klauschen, F., M\u00fcller, K. R., & Samek, W. (2015). On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE, 10(7), e0130140. https:\/\/doi.org\/10.1371\/journal.pone.0130140","journal-title":"PLoS ONE"},{"key":"1773_CR6","doi-asserted-by":"crossref","unstructured":"Bau, D., Zhou, B., Khosla, A., Oliva, A., & Torralba, A. (2017). Network dissection: Quantifying interpretability of deep visual representations. In IEEE computer vision and pattern recognition.","DOI":"10.1109\/CVPR.2017.354"},{"key":"1773_CR7","unstructured":"Bykov, K., H\u00f6hne, M. M. C., M\u00fcller, K. R., Nakajima, S., & Kloft, M.
(2020) How much can I trust you?\u2014Quantifying uncertainties in explaining neural networks. CoRR arXiv:2006.09000"},{"key":"1773_CR8","unstructured":"Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., & Joulin, A. (2020). Unsupervised learning of visual features by contrasting cluster assignments. In Advances in neural information processing systems (pp. 9912\u20139924)."},{"key":"1773_CR9","unstructured":"Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., & Su, J. K. (2019). This looks like that: Deep learning for interpretable image recognition. In International conference on neural information processing systems."},{"key":"1773_CR10","unstructured":"Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020). A simple framework for contrastive learning of visual representations. In International conference on machine learning (pp. 1597\u20131607)."},{"key":"1773_CR11","doi-asserted-by":"crossref","unstructured":"Chen, X., & He, K. (2021). Exploring simple siamese representation learning. In IEEE computer vision and pattern recognition (pp. 15750\u201315758).","DOI":"10.1109\/CVPR46437.2021.01549"},{"key":"1773_CR12","doi-asserted-by":"publisher","first-page":"886","DOI":"10.1109\/CVPR.2005.177","volume":"1","author":"N Dalal","year":"2005","unstructured":"Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. IEEE Computer Vision and Pattern Recognition, 1, 886\u2013893. https:\/\/doi.org\/10.1109\/CVPR.2005.177","journal-title":"IEEE Computer Vision and Pattern Recognition"},{"key":"1773_CR13","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In IEEE Computer Vision and Pattern Recognition (pp. 248\u2013255). IEEE.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"1773_CR14","unstructured":"Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. 
arXiv:1702.08608"},{"key":"1773_CR15","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., & Houlsby, N. (2021) An image is worth 16x16 words: Transformers for image recognition at scale. In International conference on learning representations. https:\/\/openreview.net\/forum?id=YicbFdNTTy"},{"key":"1773_CR16","doi-asserted-by":"publisher","first-page":"303","DOI":"10.1007\/s11263-009-0275-4","volume":"88","author":"M Everingham","year":"2009","unstructured":"Everingham, M., Van Gool, L., Williams, C. K., Winn, J., & Zisserman, A. (2009). The pascal visual object classes (VOC) challenge. International Journal of Computer Vision, 88, 303\u2013338. https:\/\/doi.org\/10.1007\/s11263-009-0275-4","journal-title":"International Journal of Computer Vision"},{"key":"1773_CR17","unstructured":"Falcon, W., & Cho, K. (2020). A framework for contrastive self-supervised learning and designing a new approach. arXiv preprint arXiv:2009.00104."},{"key":"1773_CR18","doi-asserted-by":"publisher","unstructured":"Fong, R., & Vedaldi, A. (2018). Net2vec: Quantifying and explaining how concepts are encoded by filters in deep neural networks. In IEEE computer vision and pattern recognition (pp. 8730\u20138738). https:\/\/doi.org\/10.1109\/CVPR.2018.00910","DOI":"10.1109\/CVPR.2018.00910"},{"key":"1773_CR19","doi-asserted-by":"crossref","unstructured":"Fong, R., Patrick, M., & Vedaldi, A. (2019). Understanding deep networks via extremal perturbations and smooth masks. In IEEE International Conference on Computer Vision.","DOI":"10.1109\/ICCV.2019.00304"},{"key":"1773_CR20","doi-asserted-by":"publisher","unstructured":"Fong, R. C., & Vedaldi, A. (2017). Interpretable explanations of black boxes by meaningful perturbation. In IEEE international conference on computer vision (pp. 3449\u20133457). 
https:\/\/doi.org\/10.1109\/ICCV.2017.371","DOI":"10.1109\/ICCV.2017.371"},{"key":"1773_CR21","unstructured":"Gal, Y., & Ghahramani, Z. (2016). Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In International conference on machine learning (pp. 1050\u20131059)."},{"key":"1773_CR22","unstructured":"Ghiasi, G., Lin, T. Y., & Le, Q. V. (2018). Dropblock: A regularization method for convolutional networks. In International conference on neural information processing systems (pp. 10750\u201310760)."},{"key":"1773_CR23","doi-asserted-by":"publisher","unstructured":"He, K., Zhang, X., Ren, S., & Sun, J. (2016) Deep residual learning for image recognition. In 2016 CVPR (pp. 770\u2013778). https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"1773_CR24","doi-asserted-by":"crossref","unstructured":"He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. In IEEE computer vision and pattern recognition.","DOI":"10.1109\/CVPR42600.2020.00975"},{"key":"1773_CR25","doi-asserted-by":"crossref","unstructured":"He, K., Chen, X., Xie, S., Li, Y., Dollar, P., & Girshick, R. (2022). Masked autoencoders are scalable vision learners. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR) (pp. 16000\u201316009).","DOI":"10.1109\/CVPR52688.2022.01553"},{"key":"1773_CR26","unstructured":"Karimi, A. H., Barthe, G., Balle, B., & Valera, I. (2020). Model-agnostic counterfactual explanations for consequential decisions. In International conference on artificial intelligence and statistics (pp. 895\u2013905)."},{"key":"1773_CR27","unstructured":"Kim, B., Wattenberg, M., Gilmer, J., Cai, C., Wexler, J., & Viegas, F. (2018). Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International conference on machine learning (pp. 
2673\u20132682)."},{"key":"1773_CR28","unstructured":"Koh, P. W., & Liang, P. (2017). Understanding black-box predictions via influence functions. In International conference on machine learning (pp. 1885\u20131894)."},{"key":"1773_CR29","doi-asserted-by":"crossref","unstructured":"Kolek, S., Nguyen, D. A., Levie, R., Bruna, J., & Kutyniok, G.(2021). A rate-distortion framework for explaining black-box model decisions. arXiv:2110.08252","DOI":"10.1007\/978-3-031-04083-2_6"},{"key":"1773_CR30","unstructured":"Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In: F. Pereira, C. Burges, L. Bottou, et\u00a0al. (Eds.), Advances in Neural Information Processing Systems (Vol\u00a025). Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2012\/file\/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf"},{"key":"1773_CR31","unstructured":"Laina, I., Fong, R., & Vedaldi, A. (2020). Quantifying learnability and describability of visual concepts emerging in representation learning. In Advances in neural information processing systems."},{"issue":"4","key":"1773_CR32","doi-asserted-by":"publisher","first-page":"764","DOI":"10.1016\/j.jesp.2013.03.013","volume":"49","author":"C Leys","year":"2013","unstructured":"Leys, C., Ley, C., Klein, O., Bernard, P., & Licata, L. (2013). Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median. Journal of Experimental Social Psychology, 49(4), 764\u2013766. https:\/\/doi.org\/10.1016\/j.jesp.2013.03.013","journal-title":"Journal of Experimental Social Psychology"},{"key":"1773_CR33","doi-asserted-by":"publisher","unstructured":"Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollar, P. & Lawrence Zitnick, C. (2014). Microsoft COCO: Common objects in context. In: Computer Vision\u2014ECCV 2014 (pp. 740\u2013755). Springer International Publishing. 
https:\/\/doi.org\/10.1007\/978-3-319-10602-1_48","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"1773_CR34","doi-asserted-by":"crossref","unstructured":"Lin, Y., Gou, Y., Liu, Z., Li, B., Lv, J., & Peng, X. (2021). Completer: Incomplete multi-view clustering via contrastive prediction. In IEEE computer vision and pattern recognition (pp. 11174\u201311183).","DOI":"10.1109\/CVPR46437.2021.01102"},{"key":"1773_CR35","unstructured":"Liu, W., Lin, R., Liu, Z., Xiong, L., Scholkopf, B., & Weller, A. (2021). Learning with hyperspherical uniformity. In Proceedings of the 24th international conference on artificial intelligence and statistics, Proceedings of machine learning research (Vol. 130, pp. 1180\u20131188). PMLR. http:\/\/proceedings.mlr.press\/v130\/liu21d.html"},{"issue":"11","key":"1773_CR36","doi-asserted-by":"publisher","first-page":"3136","DOI":"10.1007\/s11263-021-01498-0","volume":"129","author":"M Losch","year":"2021","unstructured":"Losch, M., Fritz, M., & Schiele, B. (2021). Semantic bottlenecks: Quantifying and improving inspectability of deep representations. International Journal of Computer Vision, 129(11), 3136\u20133153. https:\/\/doi.org\/10.1007\/s11263-021-01498-0","journal-title":"International Journal of Computer Vision"},{"key":"1773_CR37","doi-asserted-by":"crossref","unstructured":"McCullagh, P., & Nelder, J. (1989). Generalized linear models (2nd ed.). Chapman & Hall.","DOI":"10.1007\/978-1-4899-3242-6"},{"key":"1773_CR38","doi-asserted-by":"publisher","unstructured":"McDiarmid, C. (1989). On the method of bounded difference (pp. 148\u2013188). Cambridge University Press. https:\/\/doi.org\/10.1017\/CBO9781107359949.008","DOI":"10.1017\/CBO9781107359949.008"},{"key":"1773_CR39","doi-asserted-by":"crossref","first-page":"415","DOI":"10.1098\/rsta.1909.0016","volume":"209","author":"J Mercer","year":"1909","unstructured":"Mercer, J. (1909). Functions of positive and negative type, and their connection with the theory of integral equations. 
Philosophical Transactions of the Royal Society, London, 209, 415\u2013446.","journal-title":"Philosophical Transactions of the Royal Society, London"},{"key":"1773_CR40","unstructured":"Molnar, C. (2022). Interpretable machine learning. 2nd edn. https:\/\/christophm.github.io\/interpretable-ml-book"},{"key":"1773_CR41","unstructured":"Mordvintsev, A., Olah, C., & Tyka, M. (2015). Inceptionism: Going deeper into neural networks. https:\/\/research.googleblog.com\/2015\/06\/inceptionism-going-deeper-into-neural.html"},{"key":"1773_CR42","unstructured":"Nguyen, A., & Martinez, M. R. (2020). On quantitative aspects of model interpretability. arXiv:2007.07584"},{"key":"1773_CR43","doi-asserted-by":"publisher","first-page":"491","DOI":"10.1016\/j.patcog.2017.11.023","volume":"76","author":"J Nordhaug Myhre","year":"2018","unstructured":"Nordhaug Myhre, J., \u00d8yvind Mikalsen, K., & L\u00f8kse, S. (2018). Robust clustering using a kNN mode seeking ensemble. Pattern Recognition, 76, 491\u2013505. https:\/\/doi.org\/10.1016\/j.patcog.2017.11.023","journal-title":"Pattern Recognition"},{"issue":"3","key":"1773_CR44","doi-asserted-by":"publisher","first-page":"1065","DOI":"10.1214\/aoms\/1177704472","volume":"33","author":"E Parzen","year":"1962","unstructured":"Parzen, E. (1962). On estimation of a probability density function and mode. The Annals of Mathematical Statistics, 33(3), 1065\u20131076. https:\/\/doi.org\/10.1214\/aoms\/1177704472","journal-title":"The Annals of Mathematical Statistics"},{"key":"1773_CR45","unstructured":"Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., & Chintala, S. (2019). Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems (pp.
8024\u20138035)."},{"key":"1773_CR46","doi-asserted-by":"publisher","first-page":"9780","DOI":"10.1609\/aaai.v33i01.33019780","volume":"33","author":"D Pedreschi","year":"2019","unstructured":"Pedreschi, D., Giannotti, F., Guidotti, R., Monreale, A., & Ruggieri, S. (2019). Meaningful explanations of black box AI decision systems. Proceedings of the AAAI Conference on Artificial Intelligence, 33, 9780\u20139784. https:\/\/doi.org\/10.1609\/aaai.v33i01.33019780","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"1773_CR47","unstructured":"Petsiuk, V., Das, A., & Saenko, K. (2018). Rise: Randomized input sampling for explanation of black-box models. In Proceedings of the British machine vision conference."},{"key":"1773_CR48","doi-asserted-by":"crossref","unstructured":"Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). \u201cWhy should I trust you?\u201d: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135\u20131144), San Francisco, CA, USA, August 13\u201317, 2016.","DOI":"10.1145\/2939672.2939778"},{"key":"1773_CR49","doi-asserted-by":"publisher","unstructured":"Samek, W., Binder, A., Montavon, G., Lapuschkin, S., & M\u00fcller, K. R. (2017). Evaluating the visualization of what a deep neural network has learned. In IEEE TNNLS (pp. 2660\u20132673). https:\/\/doi.org\/10.1109\/TNNLS.2016.2599820","DOI":"10.1109\/TNNLS.2016.2599820"},{"key":"1773_CR50","doi-asserted-by":"crossref","unstructured":"Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & M\u00fcller, K. R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. In Proceedings of the IEEE (pp. 247\u2013278).","DOI":"10.1109\/JPROC.2021.3060483"},{"key":"1773_CR51","unstructured":"Schulz, K., Sixt, L., Tombari, F., & Landgraf, T. (2020). Restricting the flow: Information bottlenecks for attribution. 
In International conference on learning representations."},{"key":"1773_CR52","doi-asserted-by":"publisher","unstructured":"Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE international conference on computer vision (ICCV) (pp. 618\u2013626). https:\/\/doi.org\/10.1109\/ICCV.2017.74","DOI":"10.1109\/ICCV.2017.74"},{"issue":"6B","key":"1773_CR53","doi-asserted-by":"publisher","first-page":"3960","DOI":"10.1214\/09-AOS700","volume":"37","author":"T Shi","year":"2009","unstructured":"Shi, T., Belkin, M., & Yu, B. (2009). Data spectroscopy: Eigenspaces of convolution operators and clustering. The Annals of Statistics, 37(6B), 3960\u20133984. https:\/\/doi.org\/10.1214\/09-AOS700","journal-title":"The Annals of Statistics"},{"key":"1773_CR54","unstructured":"Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. In International conference on learning representations."},{"key":"1773_CR55","unstructured":"Smilkov, D., Thorat, N., Kim, B., Viegas, F., & Wattenberg, M. (2017). Smoothgrad: Removing noise by adding noise. In International conference on machine learning visualization workshop."},{"key":"1773_CR56","unstructured":"Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. (2015). Striving for simplicity: The all convolutional net. In ICLR Workshop."},{"key":"1773_CR57","first-page":"1929","volume":"15","author":"N Srivastava","year":"2014","unstructured":"Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., & Salakhutdinov, R. (2014). Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15, 1929\u20131958.","journal-title":"Journal of Machine Learning Research"},{"key":"1773_CR58","unstructured":"Sundararajan, M., Taly, A., & Yan, Q. (2017). Axiomatic attribution for deep networks. In D. Precup, & Y. W. 
Teh (Eds.), Proceedings of the 34th international conference on machine learning, ICML 2017, Sydney, NSW, Australia, 6\u201311 August 2017, Proceedings of Machine Learning Research (Vol.\u00a070, pp. 3319\u20133328). PMLR. http:\/\/proceedings.mlr.press\/v70\/sundararajan17a.html"},{"key":"1773_CR59","unstructured":"Teye, M., Azizpour, H., & Smith, K. (2018). Bayesian uncertainty estimation for batch normalized deep networks. In International conference on machine learning (pp. 4907\u20134916)."},{"key":"1773_CR60","unstructured":"Theodoridis, S., & Koutroumbas, K. (2009). Pattern recognition (4th ed.). Academic Press."},{"key":"1773_CR61","unstructured":"Tonekaboni, S., Joshi, S., McCradden, M. D., & Goldenberg, A. (2019). What clinicians want: Contextualizing explainable machine learning for clinical end use. In Machine learning for healthcare conference (pp. 359\u2013380)."},{"key":"1773_CR62","unstructured":"Wang, W., Arora, R., Livescu, K., & Bilmes, J. (2015) On deep multi-view representation learning. In International conference on machine learning (pp. 1083\u20131092)."},{"key":"1773_CR63","doi-asserted-by":"crossref","unstructured":"Wen, J., Wu, Z., Zhang, Z., Fei, L., Zhang, B., & Xu, Y. (2020). Cdimc-net: Cognitive deep incomplete multi-view clustering network. In International joint conference on artificial intelligence.","DOI":"10.24963\/ijcai.2020\/447"},{"issue":"9","key":"1773_CR64","doi-asserted-by":"publisher","first-page":"532","DOI":"10.1145\/359146.359153","volume":"22","author":"DHD West","year":"1979","unstructured":"West, D. H. D. (1979). Updating mean and variance estimates: An improved method. Communications of the ACM, 22(9), 532\u2013535. https:\/\/doi.org\/10.1145\/359146.359153","journal-title":"Communications of the ACM"},{"key":"1773_CR65","doi-asserted-by":"crossref","unstructured":"Wickstr\u00f8m, K., Kampffmeyer, M., & Jenssen, R. (2018). 
Uncertainty modeling and interpretability in convolutional neural networks for polyp segmentation. In IEEE International workshop on machine learning for signal processing (pp. 1\u20136).","DOI":"10.1109\/MLSP.2018.8516998"},{"key":"1773_CR66","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2019.101619","volume":"60","author":"K Wickstr\u00f8m","year":"2020","unstructured":"Wickstr\u00f8m, K., Kampffmeyer, M., & Jenssen, R. (2020). Uncertainty and interpretability in convolutional neural networks for semantic segmentation of colorectal polyps. Medical Image Analysis, 60, 101619.","journal-title":"Medical Image Analysis"},{"issue":"7","key":"1773_CR67","doi-asserted-by":"publisher","first-page":"2435","DOI":"10.1109\/JBHI.2020.3042637","volume":"25","author":"K Wickstr\u00f8m","year":"2021","unstructured":"Wickstr\u00f8m, K., Mikalsen, K., Kampffmeyer, M., Revhaug, A., & Jenssen, R. (2021). Uncertainty-aware deep ensembles for reliable and explainable predictions of clinical time series. IEEE Journal of Biomedical and Health Informatics, 25(7), 2435\u20132444. https:\/\/doi.org\/10.1109\/JBHI.2020.3042637","journal-title":"IEEE Journal of Biomedical and Health Informatics"},{"key":"1773_CR68","unstructured":"Yang, B., Fu, X., Sidiropoulos, N. D., & Hong, M. (2017.) Towards k-means-friendly spaces: Simultaneous deep learning and clustering. In International conference on machine learning (pp. 3861\u20133870)."},{"key":"1773_CR69","doi-asserted-by":"crossref","unstructured":"Zeiler, M. D., & Fergus, R. (2014). Visualizing and understanding convolutional networks. In D. Fleet, T. Pajdla, B. Schiele, et\u00a0al. (Eds.), European conference on computer vision (pp. 818\u2013833).","DOI":"10.1007\/978-3-319-10590-1_53"},{"issue":"10","key":"1773_CR70","doi-asserted-by":"publisher","first-page":"1084","DOI":"10.1007\/s11263-017-1059-x","volume":"126","author":"J Zhang","year":"2017","unstructured":"Zhang, J., Bargal, S. 
A., Lin, Z., Brandt, J., Shen, X., & Sclaroff, S. (2017). Top-down neural attention by excitation backprop. International Journal of Computer Vision, 126(10), 1084\u20131102. https:\/\/doi.org\/10.1007\/s11263-017-1059-x","journal-title":"International Journal of Computer Vision"},{"key":"1773_CR71","unstructured":"Zhang, Y., Song, K., Sun, Y., Tan, S., & Udell, M. (2019). \u201cWhy should you trust my explanation?\u201d Understanding uncertainty in LIME explanations. In Workshop on AI for social good."}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-023-01773-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-023-01773-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-023-01773-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,12,8]],"date-time":"2023-12-08T19:01:00Z","timestamp":1702062060000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-023-01773-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,3,11]]},"references-count":71,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,6]]}},"alternative-id":["1773"],"URL":"https:\/\/doi.org\/10.1007\/s11263-023-01773-2","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"type":"print","value":"0920-5691"},{"type":"electronic","value":"1573-1405"}],"subject":[],"published":{"date-parts":[[2023,3,11]]},"assertion":[{"value":"4 February 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 February 
2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 March 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no competing interests to declare that are relevant to the content of this article.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}