{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T16:01:48Z","timestamp":1774540908331,"version":"3.50.1"},"reference-count":58,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,1,12]],"date-time":"2024-01-12T00:00:00Z","timestamp":1705017600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,1,12]],"date-time":"2024-01-12T00:00:00Z","timestamp":1705017600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Hum-Cent Intell Syst"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Topic modelling is a Natural Language Processing (NLP) technique that has gained popularity in recent years. It identifies word co-occurrence patterns inside a document corpus to reveal hidden topics. The Graph Neural Topic Model (GNTM) is a topic modelling technique that uses Graph Neural Networks (GNNs) to learn document representations effectively. It provides high-precision documents-topics and topics-words probability distributions. Such models find immense application in many sectors, including healthcare, financial services, and safety-critical systems like autonomous cars. However, GNTM is not explainable, as the user cannot comprehend its underlying decision-making process. This paper introduces a technique to explain the documents-topics probability distributions output of GNTM. The explanation is achieved by building a local explainable model, such as a probabilistic Na\u00efve Bayes classifier. Experimental results on various benchmark NLP datasets show a fidelity of 88.39% between the predictions of GNTM and the local explainable model. 
This similarity implies that the proposed technique can effectively explain the documents-topics probability distribution output of GNTM.<\/jats:p>","DOI":"10.1007\/s44230-023-00058-8","type":"journal-article","created":{"date-parts":[[2024,1,12]],"date-time":"2024-01-12T18:01:58Z","timestamp":1705082518000},"page":"53-76","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["A Local Explainability Technique for Graph Neural Topic Models"],"prefix":"10.1007","volume":"4","author":[{"given":"Bharathwajan","family":"Rajendran","sequence":"first","affiliation":[]},{"given":"Chandran G.","family":"Vidya","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2735-9414","authenticated-orcid":false,"given":"J.","family":"Sanil","sequence":"additional","affiliation":[]},{"given":"S.","family":"Asharaf","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,1,12]]},"reference":[{"key":"58_CR1","doi-asserted-by":"publisher","DOI":"10.1016\/j.is.2022.102131","volume":"112","author":"A Abdelrazek","year":"2023","unstructured":"Abdelrazek A, Eid Y, Gawish E, Medhat W, Hassan A. Topic modeling algorithms and applications: a survey. Inform Syst. 2023;112: 102131. https:\/\/doi.org\/10.1016\/j.is.2022.102131.","journal-title":"Inform Syst"},{"issue":"10","key":"58_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3507900","volume":"54","author":"R Churchill","year":"2022","unstructured":"Churchill R, Singh L. The evolution of topic modeling. ACM Comput Surv. 2022;54(10):1\u201335. https:\/\/doi.org\/10.1145\/3507900.","journal-title":"ACM Comput Surv"},{"key":"58_CR3","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0266325","author":"M R\u00fcdiger","year":"2022","unstructured":"R\u00fcdiger M, Antons D, Joshi AM, Torsten-Oliver S. 
Topic modeling revisited: new evidence on algorithm performance and quality metrics. PLoS ONE. 2022. https:\/\/doi.org\/10.1371\/journal.pone.0266325.","journal-title":"PLoS ONE"},{"issue":"24","key":"58_CR4","doi-asserted-by":"publisher","first-page":"16","DOI":"10.4108\/eai.13-7-2018.159623","volume":"7","author":"P Kherwa","year":"2019","unstructured":"Kherwa P, Bansal P. Topic modeling: a comprehensive review. EAI Endors Trans Scalable Inf Syst. 2019;7(24):16. https:\/\/doi.org\/10.4108\/eai.13-7-2018.159623.","journal-title":"EAI Endors Trans Scalable Inf Syst"},{"issue":"2","key":"58_CR5","doi-asserted-by":"publisher","first-page":"189","DOI":"10.3233\/IDT-200001","volume":"15","author":"VS Anoop","year":"2021","unstructured":"Anoop VS, Deepak P, Asharaf S. A distributional semantics-based information retrieval framework for online social networks. Intell Decis Technol. 2021;15(2):189\u201399. https:\/\/doi.org\/10.3233\/IDT-200001.","journal-title":"Intell Decis Technol"},{"issue":"3","key":"58_CR6","doi-asserted-by":"publisher","first-page":"273","DOI":"10.3233\/IDT-150252","volume":"10","author":"J Qi","year":"2016","unstructured":"Qi J, Ohsawa Y. Matrix-like visualization based on topic modeling for discovering connections between disjoint disciplines. Intell Decis Technol. 2016;10(3):273\u201383. https:\/\/doi.org\/10.3233\/IDT-150252.","journal-title":"Intell Decis Technol"},{"key":"58_CR7","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-019-0255-7","author":"CB Asmussen","year":"2019","unstructured":"Asmussen CB, M\u00f8ller C. Smart literature review: a practical topic modelling approach to exploratory literature review. J Big Data. 2019. https:\/\/doi.org\/10.1186\/s40537-019-0255-7.","journal-title":"J Big Data"},{"key":"58_CR8","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-021-10026-0","author":"CC Silva","year":"2021","unstructured":"Silva CC, Galster M, Gilson F. Topic modeling in software engineering research. Empir Softw Eng. 2021. 
https:\/\/doi.org\/10.1007\/s10664-021-10026-0.","journal-title":"Empir Softw Eng"},{"key":"58_CR9","doi-asserted-by":"publisher","DOI":"10.3389\/fsoc.2022.886498","author":"R Egger","year":"2022","unstructured":"Egger R, Yu J. A topic modeling comparison between LDA, NMF, Top2Vec, and BERTopic to demystify twitter posts. Front Sociol. 2022. https:\/\/doi.org\/10.3389\/fsoc.2022.886498.","journal-title":"Front Sociol."},{"key":"58_CR10","doi-asserted-by":"publisher","unstructured":"Hagerer G, Leung WS, Liu Q, Danner H, Groh G. A case study and qualitative analysis of simple cross-lingual opinion mining. In: Proceedings of the 13th international joint conference on knowledge discovery, knowledge engineering and knowledge management\u2014KDIR. 2021; pp. 17\u201326. SciTePress, Portugal. https:\/\/doi.org\/10.5220\/0010649500003064. INSTICC","DOI":"10.5220\/0010649500003064"},{"key":"58_CR11","doi-asserted-by":"publisher","DOI":"10.1007\/s44196-021-00055-4","author":"W Liu","year":"2021","unstructured":"Liu W, Pang J, Li N, Zhou X, Yue F. Research on multi-label text classification method based on tALBERT-CNN. Int J Comput Intell Syst. 2021. https:\/\/doi.org\/10.1007\/s44196-021-00055-4.","journal-title":"Int J Comput Intell Syst"},{"issue":"7","key":"58_CR12","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3462478","volume":"54","author":"U Chauhan","year":"2022","unstructured":"Chauhan U, Shah A. Topic modeling using latent dirichlet allocation: A survey. ACM Comput Surv. 2022;54(7):1\u201335. https:\/\/doi.org\/10.1145\/3462478.","journal-title":"ACM Comput Surv"},{"key":"58_CR13","doi-asserted-by":"publisher","first-page":"15169","DOI":"10.1007\/s11042-018-6894-4","volume":"78","author":"H Jelodar","year":"2019","unstructured":"Jelodar H, Wang Y, Yuan C, Feng X, Jiang X, Li Y, Zhao L. Latent Dirichlet allocation (LDA) and topic modeling: models, applications, a survey. Multimed Tools Appl. 2019;78:15169\u2013211. 
https:\/\/doi.org\/10.1007\/s11042-018-6894-4.","journal-title":"Multimed Tools Appl"},{"key":"58_CR14","first-page":"993","volume":"3","author":"DM Blei","year":"2003","unstructured":"Blei DM, Ng AY, Jordan MI. Latent Dirichlet allocation. J Mach Learn Res. 2003;3:993\u20131022.","journal-title":"J Mach Learn Res"},{"key":"58_CR15","doi-asserted-by":"publisher","unstructured":"Shakeel K, Tahir GR, Tehseen I, Ali M. A framework of URDU topic modeling using Latent Dirichlet Allocation (LDA). In: 2018 IEEE 8th annual computing and communication workshop and conference (CCWC), Las Vegas, NV, USA; 2018. https:\/\/doi.org\/10.1109\/CCWC.2018.8301655.","DOI":"10.1109\/CCWC.2018.8301655"},{"key":"58_CR16","doi-asserted-by":"publisher","first-page":"102470","DOI":"10.1016\/j.media.2022.102470","volume":"79","author":"BHM van der Velden","year":"2022","unstructured":"van der Velden BHM, Kuijf HJ, Gilhuijs KGA, Viergever MA. Explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Med Image Anal. 2022;79:102470. https:\/\/doi.org\/10.1016\/j.media.2022.102470.","journal-title":"Med Image Anal"},{"key":"58_CR17","doi-asserted-by":"publisher","DOI":"10.1126\/scirobotics.aay7120","author":"D Gunning","year":"2019","unstructured":"Gunning D, Stefik M, Choi J, Miller T, Stumpf S, Yang G-Z. XAI-explainable artificial intelligence. Sci Robot. 2019. https:\/\/doi.org\/10.1126\/scirobotics.aay7120.","journal-title":"Sci Robot"},{"key":"58_CR18","doi-asserted-by":"publisher","unstructured":"Samek W, Wiegand T, M\u00fcller KR. Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models. ITU J ICT Discover. https:\/\/doi.org\/10.48550\/arXiv.1708.08296.","DOI":"10.48550\/arXiv.1708.08296"},{"issue":"5","key":"58_CR19","doi-asserted-by":"publisher","first-page":"1424","DOI":"10.1002\/widm.1424","volume":"11","author":"PP Angelov","year":"2021","unstructured":"Angelov PP, Soares EA, Jiang R, Arnold NI, Atkinson PM. 
Explainable artificial intelligence: an analytical review. WIREs Data Min Knowl Disc. 2021;11(5):1424. https:\/\/doi.org\/10.1002\/widm.1424.","journal-title":"WIREs Data Min Knowl Disc"},{"key":"58_CR20","doi-asserted-by":"crossref","unstructured":"Samek W, Montavon G, Vedaldi A, Hansen LK, M\u00fcller KR. Explainable AI: interpreting, explaining and visualizing deep learning, vol. 11700. Lecture Notes in Artificial Intelligence. Switzerland: Springer; 2019.","DOI":"10.1007\/978-3-030-28954-6"},{"key":"58_CR21","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2023.110273","volume":"263","author":"W Saeed","year":"2023","unstructured":"Saeed W, Omlin C. Explainable AI (XAI): a systematic meta-survey of current challenges and future opportunities. Knowl-Based Syst. 2023;263: 110273. https:\/\/doi.org\/10.1016\/j.knosys.2023.110273.","journal-title":"Knowl-Based Syst"},{"key":"58_CR22","unstructured":"Shen D, Qin C, Wang C, Dong Z, Zhu H, Xiong H. Topic modeling revisited: a document graph-based neural network perspective. In: Ranzato M, Beygelzimer A, Dauphin Y, Liang PS, Vaughan JW (eds.) Advances in neural information processing systems, vol 34. Curran Associates, Inc., Virtual Mode; 2021. p. 14681\u201393. https:\/\/openreview.net\/pdf?id=yewqeLly5D8."},{"issue":"2","key":"58_CR23","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2022.103215","volume":"60","author":"B Zhu","year":"2023","unstructured":"Zhu B, Cai Y, Ren H. Graph neural topic model with commonsense knowledge. Inf Process Manag. 2023;60(2): 103215. https:\/\/doi.org\/10.1016\/j.ipm.2022.103215.","journal-title":"Inf Process Manag"},{"key":"58_CR24","doi-asserted-by":"publisher","DOI":"10.3390\/s22030852","author":"R Murakami","year":"2022","unstructured":"Murakami R, Chakraborty B. Investigating the efficient use of word embedding with neural-topic models for interpretable topics from short texts. Sensors. 2022. 
https:\/\/doi.org\/10.3390\/s22030852.","journal-title":"Sensors"},{"key":"58_CR25","doi-asserted-by":"publisher","DOI":"10.1016\/j.bdr.2022.100344","volume":"30","author":"X Kang","year":"2022","unstructured":"Kang X, Xiaoqiu L, Yuan-fang L, Tongtong W, Guilin Q, Ning Y, Dong W, Zheng Z. Neural topic modeling with deep mutual information estimation. Big Data Res. 2022;30: 100344. https:\/\/doi.org\/10.1016\/j.bdr.2022.100344.","journal-title":"Big Data Res"},{"key":"58_CR26","doi-asserted-by":"publisher","unstructured":"Garg R, Kiwelekar AW, Netak LD, Bhate SS. In: Gunjan, V.K., Zurada, J.M. (eds.) Personalization of news for a logistics organisation by finding relevancy using NLP. Cham: Springer; 2021. p. 215\u2013226. https:\/\/doi.org\/10.1007\/978-3-030-68291-0_16.","DOI":"10.1007\/978-3-030-68291-0_16"},{"key":"58_CR27","doi-asserted-by":"publisher","unstructured":"Garg R, Kiwelekar AW, Netak LD, Bhate SS. In: Gunjan VK, Zurada JM (eds) Potential use-cases of natural language processing for a logistics organization. Cham: Springer; 2021. p. 157\u2013191. https:\/\/doi.org\/10.1007\/978-3-030-68291-0_13.","DOI":"10.1007\/978-3-030-68291-0_13"},{"key":"58_CR28","doi-asserted-by":"publisher","unstructured":"Sammut C. In: Sammut C, Webb GI (eds) Markov Chain Monte Carlo. Encyclopedia of machine learning Boston: Springer;. 2011. p. 639\u201342. https:\/\/doi.org\/10.1007\/978-0-387-30164-8_511.","DOI":"10.1007\/978-0-387-30164-8_511"},{"key":"58_CR29","doi-asserted-by":"publisher","unstructured":"Haugh MB. A tutorial on Markov chain Monte-Carlo and Bayesian modeling. Report; 2021. https:\/\/doi.org\/10.2139\/ssrn.3759243. https:\/\/papers.ssrn.com\/sol3\/papers.cfm?abstract_id=3759243.","DOI":"10.2139\/ssrn.3759243"},{"key":"58_CR30","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1023\/A:1007665907178","volume":"37","author":"MI Jordan","year":"1999","unstructured":"Jordan MI, Ghahramani Z, Jaakkola TS, Saul LK. 
An introduction to variational methods for graphical models. Mach Learn. 1999;37:183\u2013233. https:\/\/doi.org\/10.1023\/A:1007665907178.","journal-title":"Mach Learn"},{"key":"58_CR31","unstructured":"Kingma DP, Welling M. Auto-encoding variational bayes. In: 2nd international conference on learning representations (ICLR2014). Ithaca, NY. arXiv.org. Rimrock Resort, Canada. 2014; https:\/\/arxiv.org\/abs\/1312.6114."},{"key":"58_CR32","unstructured":"Miao Y, Grefenstette E, Blunsom P. Discovering discrete latent topics with neural variational inference. In: Precup D, Teh YW (eds) Proceedings of the 34th international conference on machine learning. Proceedings of machine learning research, vol. 70. PMLR, Sydney, Australia; 2017. p. 2410\u201319. https:\/\/proceedings.mlr.press\/v70\/miao17a.html."},{"key":"58_CR33","doi-asserted-by":"publisher","unstructured":"Zhao H, Phung D, Huynh V, Jin Y, Du L, Buntine W. Topic modelling meets deep neural networks: a survey. In: Proceedings of the thirtieth international joint conference on artificial intelligence (IJCAI-21) survey track; 2021. p. 4713\u201320. https:\/\/doi.org\/10.24963\/ijcai.2021\/638.","DOI":"10.24963\/ijcai.2021\/638"},{"issue":"11","key":"58_CR34","doi-asserted-by":"publisher","first-page":"13609","DOI":"10.1609\/aaai.v37i11.26595","volume":"37","author":"H Sun","year":"2023","unstructured":"Sun H, Tu Q, Li J, Yan R. Convntm: conversational neural topic model. Proc AAAI Conf Artif Intell. 2023;37(11):13609\u201317. https:\/\/doi.org\/10.1609\/aaai.v37i11.26595.","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"58_CR35","doi-asserted-by":"publisher","unstructured":"Zhao H, Phung D, Huynh V, Jin Y, Du L, Buntine W. Topic modelling meets deep neural networks: a survey. In: Zhou Z-H (ed) Proceedings of the thirtieth international joint conference on artificial intelligence. Association for the Advancement of Artificial Intelligence (AAAI), United States of America; 2021. p. 4713\u201320. 
https:\/\/doi.org\/10.24963\/ijcai.2021\/638. https:\/\/www.ijcai.org\/proceedings\/2021\/. https:\/\/ijcai-21.org.","DOI":"10.24963\/ijcai.2021\/638"},{"issue":"1","key":"58_CR36","doi-asserted-by":"publisher","first-page":"4","DOI":"10.1109\/tnnls.2020.2978386","volume":"32","author":"Z Wu","year":"2021","unstructured":"Wu Z, Pan S, Chen F, Long G, Zhang C, Yu PS. A comprehensive survey on graph neural networks. IEEE Trans Neural Netw Learn Syst. 2021;32(1):4\u201324. https:\/\/doi.org\/10.1109\/tnnls.2020.2978386.","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"58_CR37","doi-asserted-by":"publisher","unstructured":"Zhou D, Hu X, Wang R. Neural topic modeling by incorporating document relationship graph. In: Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP). Association for Computational Linguistics, Online; 2020. p. 3790\u20136. https:\/\/doi.org\/10.18653\/v1\/2020.emnlp-main.310.","DOI":"10.18653\/v1\/2020.emnlp-main.310"},{"key":"58_CR38","doi-asserted-by":"publisher","unstructured":"Ying R, Bourgeois D, You J, Zitnik M, Leskovec J. GNNexplainer: generating explanations for graph neural networks; 2019. arXiv:1903.03894. https:\/\/doi.org\/10.48550\/arXiv.1903.03894.","DOI":"10.48550\/arXiv.1903.03894"},{"key":"58_CR39","doi-asserted-by":"publisher","unstructured":"Yuan H, Tang J, Hu X, Ji S. XGNN: towards model-level explanations of graph neural networks. In: Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery and data mining. KDD\u201920. Association for Computing Machinery, New York, NY, USA; 2020. p. 430\u201338. https:\/\/doi.org\/10.1145\/3394486.3403085.","DOI":"10.1145\/3394486.3403085"},{"key":"58_CR40","unstructured":"Yuan H, Yu H, Wang J, Li K, Ji S. On explainability of graph neural networks via subgraph explorations. In: Meila M, Zhang T (eds) Proceedings of the 38th international conference on machine learning. 
Proceedings of machine learning research, vol. 139. p. 12241\u201352. PMLR, Virtual Mode; 2021. https:\/\/proceedings.mlr.press\/v139\/yuan21c.html."},{"key":"58_CR41","doi-asserted-by":"publisher","unstructured":"Vu MN, Thai MT. PGM-explainer: probabilistic graphical model explanations for graph neural networks; 2020. arXiv:2010.05788. https:\/\/doi.org\/10.48550\/arXiv.2010.05788.","DOI":"10.48550\/arXiv.2010.05788"},{"key":"58_CR42","doi-asserted-by":"publisher","unstructured":"Ribeiro MT, Singh S, Guestrin C. \u201cWhy should I trust you?\u201d: Explaining the predictions of any classifier; 2016. arXiv:1602.04938. https:\/\/doi.org\/10.48550\/arXiv.1602.04938.","DOI":"10.48550\/arXiv.1602.04938"},{"key":"58_CR43","doi-asserted-by":"publisher","unstructured":"Huang Q, Yamada M, Yuan\u00a0Tian DS, Yin D, Chang Y. GraphLIME: local interpretable model explanations for graph neural networks; 2020. arXiv:2001.06216. https:\/\/doi.org\/10.48550\/arXiv.2001.06216.","DOI":"10.48550\/arXiv.2001.06216"},{"issue":"5","key":"58_CR44","doi-asserted-by":"publisher","first-page":"5782","DOI":"10.1109\/TPAMI.2022.3204236","volume":"45","author":"H Yuan","year":"2023","unstructured":"Yuan H, Yu H, Gui S, Ji S. Explainability in graph neural networks: a taxonomic survey. IEEE Trans Pattern Anal Mach Intell. 2023;45(5):5782\u201399. https:\/\/doi.org\/10.1109\/TPAMI.2022.3204236.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"58_CR45","doi-asserted-by":"publisher","DOI":"10.1145\/3589964","author":"L Wu","year":"2023","unstructured":"Wu L, Zhao H, Li Z, Huang Z, Liu Q, Chen E. Learning the explainable semantic relations via unified graph topic-disentangled neural networks. ACM Trans Knowl Discov Data. 2023. 
https:\/\/doi.org\/10.1145\/3589964.","journal-title":"ACM Trans Knowl Discov Data"},{"key":"58_CR46","doi-asserted-by":"publisher","first-page":"28","DOI":"10.1016\/j.inffus.2021.01.008","volume":"71","author":"A Holzinger","year":"2021","unstructured":"Holzinger A, Malle B, Saranti A, Pfeifer B. Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI. Inform Fus. 2021;71:28\u201337. https:\/\/doi.org\/10.1016\/j.inffus.2021.01.008.","journal-title":"Inform Fus"},{"key":"58_CR47","doi-asserted-by":"publisher","DOI":"10.1016\/j.ipm.2021.102614","author":"Q Xie","year":"2021","unstructured":"Xie Q, Tiwari P, Gupta D, Huang J, Peng M. Neural variational sparse topic model for sparse explainable text representation. Inf Process Manag. 2021. https:\/\/doi.org\/10.1016\/j.ipm.2021.102614.","journal-title":"Inf Process Manag"},{"key":"58_CR48","doi-asserted-by":"publisher","unstructured":"Berrar D. Bayes\u2019 theorem and Naive Bayes classifier. In: Ranganathan S, Gribskov M, Nakai K, Sch\u00f6nbach C (eds) Encyclopedia of bioinformatics and computational biology. Academic Press, Oxford; 2019. p. 403\u201312. https:\/\/doi.org\/10.1016\/B978-0-12-809633-8.20473-1. https:\/\/www.sciencedirect.com\/science\/article\/pii\/B9780128096338204731.","DOI":"10.1016\/B978-0-12-809633-8.20473-1"},{"key":"58_CR49","doi-asserted-by":"publisher","unstructured":"Chang V, Ali MA, Hossain A. Chapter 2-Investigation of Covid-19 and scientific analysis big data analytics with the help of machine learning. In: Chang V, Abdel-Basset M, Ramachandran M, Green NG, Wills G (eds) Novel AI and data science advancements for sustainability in the era of COVID-19. Academic Press, Oxford; 2022. p. 21\u201366. https:\/\/doi.org\/10.1016\/B978-0-323-90054-6.00007-6. 
https:\/\/www.sciencedirect.com\/science\/article\/pii\/B9780323900546000076.","DOI":"10.1016\/B978-0-323-90054-6.00007-6"},{"key":"58_CR50","doi-asserted-by":"publisher","unstructured":"Theodoridis S. Chapter 2-Probability and stochastic processes. In: Theodoridis S (ed) Machine learning (Second Edition), Second edition. Academic Press, Oxford; 2020. p. 19\u201365. https:\/\/doi.org\/10.1016\/B978-0-12-818803-3.00011-8. https:\/\/www.sciencedirect.com\/science\/article\/pii\/B9780128188033000118.","DOI":"10.1016\/B978-0-12-818803-3.00011-8"},{"key":"58_CR51","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s00355-008-0353-5","volume":"33","author":"M D\u2019Agostino","year":"2009","unstructured":"D\u2019Agostino M, Dardanoni V. What\u2019s so special about Euclidean distance? Soc Choice Welf. 2009;33:211\u201333. https:\/\/doi.org\/10.1007\/s00355-008-0353-5.","journal-title":"Soc Choice Welf"},{"issue":"1","key":"58_CR52","doi-asserted-by":"publisher","DOI":"10.1088\/1742-6596\/1566\/1\/012058","volume":"1566","author":"R Suwanda","year":"2020","unstructured":"Suwanda R, Syahputra Z, Zamzami EM. Analysis of Euclidean distance and Manhattan distance in the K-means algorithm for variations number of centroid K. J Phys Conf Ser. 2020;1566(1): 012058. https:\/\/doi.org\/10.1088\/1742-6596\/1566\/1\/012058.","journal-title":"J Phys Conf Ser"},{"key":"58_CR53","doi-asserted-by":"publisher","DOI":"10.3390\/info14080469","author":"N Alangari","year":"2023","unstructured":"Alangari N, El Bachir MM, Mathkour H, Almosallam I. Exploring evaluation methods for interpretable machine learning: a survey. Information. 2023. https:\/\/doi.org\/10.3390\/info14080469.","journal-title":"Information."},{"key":"58_CR54","unstructured":"Craven MW, Shavlik JW. Extracting tree-structured representations of trained networks. In: Proceedings of the 8th international conference on neural information processing systems. MIT Press, Cambridge, MA, USA; 1995. p. 24\u201330. 
https:\/\/dl.acm.org\/doi\/10.5555\/2998828.2998832."},{"key":"58_CR55","first-page":"993","volume":"3","author":"DM Blei","year":"2003","unstructured":"Blei DM, Andrew MIJ, Ng Y. Latent Dirichlet allocation. J Mach Learn Res. 2003;3:993\u20131022.","journal-title":"J Mach Learn Res"},{"issue":"1","key":"58_CR56","doi-asserted-by":"publisher","first-page":"93","DOI":"10.1186\/s40537-019-0255-7","volume":"6","author":"CB Asmussen","year":"2019","unstructured":"Asmussen CB, M\u00f8ller C. Smart literature review: a practical topic modelling approach to exploratory literature review. J Big Data. 2019;6(1):93. https:\/\/doi.org\/10.1186\/s40537-019-0255-7.","journal-title":"J Big Data"},{"key":"58_CR57","doi-asserted-by":"publisher","DOI":"10.3389\/frai.2020.00042","author":"R Albalawi","year":"2020","unstructured":"Albalawi R, Yeap TH, Benyoucef M. Using topic modeling methods for short-text data: a comparative analysis. Front Artif Intell. 2020. https:\/\/doi.org\/10.3389\/frai.2020.00042.","journal-title":"Front Artif Intell"},{"key":"58_CR58","doi-asserted-by":"publisher","unstructured":"Grootendorst M. Bertopic: neural topic modeling with a class-based TF-IDF procedure. arXiv:2203.05794 [cs.CL], 10; 2022. 
https:\/\/doi.org\/10.48550\/arXiv.2203.05794.","DOI":"10.48550\/arXiv.2203.05794"}],"container-title":["Human-Centric Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44230-023-00058-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44230-023-00058-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44230-023-00058-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,4,17]],"date-time":"2024-04-17T10:57:28Z","timestamp":1713351448000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44230-023-00058-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,12]]},"references-count":58,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,3]]}},"alternative-id":["58"],"URL":"https:\/\/doi.org\/10.1007\/s44230-023-00058-8","relation":{},"ISSN":["2667-1336"],"issn-type":[{"value":"2667-1336","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,12]]},"assertion":[{"value":"14 July 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 December 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 January 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of 
interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for Publication"}},{"value":"Not applicable.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics Approval and Consent to Participate"}}]}}