{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,26]],"date-time":"2025-03-26T10:42:59Z","timestamp":1742985779395,"version":"3.40.3"},"publisher-location":"Cham","reference-count":51,"publisher":"Springer Nature Switzerland","isbn-type":[{"type":"print","value":"9783031746291"},{"type":"electronic","value":"9783031746307"}],"license":[{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,2,8]],"date-time":"2025-02-08T00:00:00Z","timestamp":1738972800000},"content-version":"vor","delay-in-days":38,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>\nSafety-critical applications require transparency in artificial intelligence (AI) components, but widely used convolutional neural networks (CNNs) widely used for perception tasks lack inherent interpretability. Hence, insights into what CNNs have learned are primarily based on performance metrics, because these allow, e.g., for cross-architecture CNN comparison. However, these neglect how knowledge is stored inside. To tackle this yet unsolved problem, our work proposes two methods for estimating the layer-wise similarity between semantic information inside CNN latent spaces. These allow insights into both the flow and likeness of semantic information within CNN layers, and into the degree of their similarity between different network architectures. As a basis, we use two renowned explainable artificial intelligence (XAI) techniques, which are used to obtain concept activation vectors, i.e., global vector representations in the latent space. 
These are compared with respect to their activation on test inputs. When applied to three diverse object detectors and two datasets, our methods reveal that (1) similar semantic concepts are learned <jats:italic>regardless of the CNN architecture<\/jats:italic>, and (2) similar concepts emerge in similar <jats:italic>relative<\/jats:italic> layer depth, independent of the total number of layers. Finally, our approach poses a promising step towards semantic model comparability and comprehension of how different CNNs process semantic information.<\/jats:p>","DOI":"10.1007\/978-3-031-74630-7_1","type":"book-chapter","created":{"date-parts":[[2025,2,8]],"date-time":"2025-02-08T04:26:44Z","timestamp":1738988804000},"page":"3-20","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Revealing Similar Semantics Inside CNNs: An Interpretable Concept-Based Comparison of\u00a0Feature Spaces"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2494-6285","authenticated-orcid":false,"given":"Georgii","family":"Mikriukov","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2690-2478","authenticated-orcid":false,"given":"Gesina","family":"Schwalbe","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5781-6575","authenticated-orcid":false,"given":"Christian","family":"Hellert","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9139-8947","authenticated-orcid":false,"given":"Korinna","family":"Bade","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,2,8]]},"reference":[{"key":"1_CR1","unstructured":"ISO: ISO 26262-1:2018(En): Road Vehicles \u2013 Functional Safety \u2013 Part 1: Vocabulary (2018). 
https:\/\/www.iso.org\/standard\/68383.html"},{"key":"1_CR2","unstructured":"Achtibat, R., et al.: From \u201cwhere\u201d to \u201cwhat\u201d: towards human-understandable explanations through concept relevance propagation. arXiv preprint arXiv:2206.03208 (2022)"},{"key":"1_CR3","doi-asserted-by":"publisher","first-page":"82","DOI":"10.1016\/j.inffus.2019.12.012","volume":"58","author":"AB Arrieta","year":"2020","unstructured":"Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fus. 58, 82\u2013115 (2020)","journal-title":"Inf. Fus."},{"issue":"7","key":"1_CR4","doi-asserted-by":"publisher","first-page":"e0130140","DOI":"10.1371\/journal.pone.0130140","volume":"10","author":"S Bach","year":"2015","unstructured":"Bach, S., Binder, A., Montavon, G., Klauschen, F., M\u00fcller, K.R., Samek, W.: On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7), e0130140 (2015)","journal-title":"PLoS ONE"},{"key":"1_CR5","doi-asserted-by":"crossref","unstructured":"Bau, D., Zhou, B., Khosla, A., Oliva, A., Torralba, A.: Network dissection: Quantifying interpretability of deep visual representations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6541\u20136549 (2017)","DOI":"10.1109\/CVPR.2017.354"},{"key":"1_CR6","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"558","DOI":"10.1007\/978-3-030-58580-8_33","volume-title":"Computer Vision \u2013 ECCV 2020","author":"D Bolya","year":"2020","unstructured":"Bolya, D., Foley, S., Hays, J., Hoffman, J.: TIDE: a general toolbox for identifying object detection errors. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12348, pp. 558\u2013573. Springer, Cham (2020). 
https:\/\/doi.org\/10.1007\/978-3-030-58580-8_33"},{"key":"1_CR7","unstructured":"Chen, C., Li, O., Tao, D., Barnett, A., Rudin, C., Su, J.K.: This looks like that: deep learning for interpretable image recognition. In: Advances in Neural Information Processing Systems, vol. 32 (2019)"},{"key":"1_CR8","unstructured":"Chyung, C., Tsang, M., Liu, Y.: Extracting interpretable concept-based decision trees from CNNs. In: Proceedings of the 2019 ICML Workshop Human in the Loop Learning. CoRR 1906.04664, June 2019"},{"key":"1_CR9","doi-asserted-by":"publisher","unstructured":"Esser, P., Rombach, R., Ommer, B.: A disentangling invertible interpretation network for explaining latent representations. In: Proceedings of the 2020 IEEE Conference on Computer Vision and Pattern Recognition, June 2020, pp. 9220\u20139229. IEEE (2020). https:\/\/doi.org\/10.1109\/CVPR42600.2020.00924","DOI":"10.1109\/CVPR42600.2020.00924"},{"key":"1_CR10","doi-asserted-by":"crossref","unstructured":"Fong, R., Vedaldi, A.: Net2Vec: quantifying and explaining how concepts are encoded by filters in deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8730\u20138738 (2018)","DOI":"10.1109\/CVPR.2018.00910"},{"key":"1_CR11","doi-asserted-by":"crossref","unstructured":"Ge, Y., et al.: A peek into the reasoning of neural networks: interpreting with structural visual concepts. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 2195\u20132204 (2021)","DOI":"10.1109\/CVPR46437.2021.00223"},{"key":"1_CR12","unstructured":"Ghorbani, A., Wexler, J., Zou, J.Y., Kim, B.: Towards automatic concept-based explanations. In: Advances in Neural Information Processing Systems, vol. 32 (2019)"},{"key":"1_CR13","unstructured":"Giunchiglia, E., Stoian, M., Khan, S., Cuzzolin, F., Lukasiewicz, T.: ROAD-R: the autonomous driving dataset with logical requirements. 
In: IJCLR 2022 Workshops, June 2022"},{"issue":"3","key":"1_CR14","first-page":"50","volume":"38","author":"B Goodman","year":"2017","unstructured":"Goodman, B., Flaxman, S.: European union regulations on algorithmic decision-making and a \u201cright to explanation\u2019\u2019. AI Mag. 38(3), 50\u201357 (2017)","journal-title":"AI Mag."},{"key":"1_CR15","doi-asserted-by":"publisher","first-page":"103865","DOI":"10.1016\/j.compbiomed.2020.103865","volume":"123","author":"M Graziani","year":"2020","unstructured":"Graziani, M., Andrearczyk, V., Marchand-Maillet, S., M\u00fcller, H.: Concept attribution: explaining CNN decisions to physicians. Comput. Biol. Med. 123, 103865 (2020). https:\/\/doi.org\/10.1016\/j.compbiomed.2020.103865","journal-title":"Comput. Biol. Med."},{"key":"1_CR16","unstructured":"Gu, J., Tresp, V.: Semantics for global and local interpretation of deep neural networks. CoRR abs\/1910.09085, October 2019"},{"key":"1_CR17","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770\u2013778 (2016)","DOI":"10.1109\/CVPR.2016.90"},{"issue":"1","key":"1_CR18","doi-asserted-by":"publisher","first-page":"1096","DOI":"10.1109\/TVCG.2019.2934659","volume":"26","author":"F Hohman","year":"2020","unstructured":"Hohman, F., Park, H., Robinson, C., Polo Chau, D.H.: SUMMIT: scaling deep learning interpretability by visualizing activation and attribution summarizations. IEEE Trans. Vis. Comput. Graph. 26(1), 1096\u20131106 (2020). https:\/\/doi.org\/10.1109\/TVCG.2019.2934659","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"1_CR19","doi-asserted-by":"crossref","unstructured":"Howard, A., et\u00a0al.: Searching for MobileNetV3. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 
1314\u20131324 (2019)","DOI":"10.1109\/ICCV.2019.00140"},{"key":"1_CR20","doi-asserted-by":"publisher","unstructured":"Jocher, G.: YOLOv5 in PyTorch, ONNX, CoreML, TFLite, October 2020. https:\/\/github.com\/ultralytics\/yolov5. https:\/\/doi.org\/10.5281\/zenodo.4154370","DOI":"10.5281\/zenodo.4154370"},{"key":"1_CR21","unstructured":"Kazhdan, D., Dimanov, B., Jamnik, M., Li\u00f2, P., Weller, A.: Now you see me (CME): concept-based model extraction. In: Proceedings of the 29th ACM International on Conference on Information and Knowledge Management Workshops. CEUR Workshop Proceedings, vol.\u00a02699. CEUR-WS.org (2020)"},{"key":"1_CR22","unstructured":"Kim, B., et\u00a0al.: Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV). In: International Conference on Machine Learning, pp. 2668\u20132677. PMLR (2018)"},{"key":"1_CR23","unstructured":"Koh, P.W., et al.: Concept bottleneck models. In: International Conference on Machine Learning, pp. 5338\u20135348. PMLR (2020)"},{"issue":"1","key":"1_CR24","doi-asserted-by":"publisher","first-page":"1096","DOI":"10.1038\/s41467-019-08987-4","volume":"10","author":"S Lapuschkin","year":"2019","unstructured":"Lapuschkin, S., W\u00e4ldchen, S., Binder, A., Montavon, G., Samek, W., M\u00fcller, K.R.: Unmasking Clever Hans predictors and assessing what machines really learn. Nat. Commun. 10(1), 1096 (2019). https:\/\/doi.org\/10.1038\/s41467-019-08987-4","journal-title":"Nat. Commun."},{"key":"1_CR25","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"740","DOI":"10.1007\/978-3-319-10602-1_48","volume-title":"Computer Vision \u2013 ECCV 2014","author":"T-Y Lin","year":"2014","unstructured":"Lin, T.-Y., et al.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740\u2013755. Springer, Cham (2014). 
https:\/\/doi.org\/10.1007\/978-3-319-10602-1_48"},{"key":"1_CR26","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1007\/978-3-319-46448-0_2","volume-title":"Computer Vision \u2013 ECCV 2016","author":"W Liu","year":"2016","unstructured":"Liu, W., et al.: SSD: single shot multibox detector. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9905, pp. 21\u201337. Springer, Cham (2016). https:\/\/doi.org\/10.1007\/978-3-319-46448-0_2"},{"key":"1_CR27","doi-asserted-by":"crossref","unstructured":"Liu, Z., Luo, P., Wang, X., Tang, X.: Deep learning face attributes in the wild. In: Proceedings of International Conference on Computer Vision (ICCV), December 2015","DOI":"10.1109\/ICCV.2015.425"},{"key":"1_CR28","unstructured":"Losch, M., Fritz, M., Schiele, B.: Interpretability beyond classification output: semantic bottleneck networks. In: Proceedings of the 3rd ACM Computer Science in Cars Symposium on Extended Abstracts, October 2019"},{"key":"1_CR29","series-title":"Communications in Computer and Information Science","doi-asserted-by":"publisher","first-page":"185","DOI":"10.1007\/978-3-030-63820-7_21","volume-title":"Neural Information Processing","author":"A Lucieri","year":"2020","unstructured":"Lucieri, A., Bajwa, M.N., Dengel, A., Ahmed, S.: Explaining AI-based decision support systems using concept localization maps. In: Yang, H., Pasupa, K., Leung, A.C.-S., Kwok, J.T., Chan, J.H., King, I. (eds.) ICONIP 2020. CCIS, vol. 1332, pp. 185\u2013193. Springer, Cham (2020). https:\/\/doi.org\/10.1007\/978-3-030-63820-7_21"},{"key":"1_CR30","doi-asserted-by":"crossref","unstructured":"Mikriukov, G., Schwalbe, G., Hellert, C., Bade, K.: Evaluating the stability of semantic concept representations in CNNs for robust explainability. 
arXiv preprint arXiv:2304.14864 (2023)","DOI":"10.1007\/978-3-031-44067-0_26"},{"issue":"3","key":"1_CR31","doi-asserted-by":"publisher","first-page":"8510","DOI":"10.1109\/LRA.2022.3187831","volume":"7","author":"D Miller","year":"2022","unstructured":"Miller, D., Moghadam, P., Cox, M., Wildie, M., Jurdak, R.: What\u2019s in the black box? The false negative mechanisms inside object detectors. IEEE Robot. Autom. Lett. 7(3), 8510\u20138517 (2022)","journal-title":"IEEE Robot. Autom. Lett."},{"key":"1_CR32","doi-asserted-by":"crossref","unstructured":"Mittelstadt, B., Russell, C., Wachter, S.: Explaining explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 279\u2013288 (2019)","DOI":"10.1145\/3287560.3287574"},{"key":"1_CR33","series-title":"Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence)","doi-asserted-by":"publisher","first-page":"55","DOI":"10.1007\/978-3-030-28954-6_4","volume-title":"Explainable AI: Interpreting, Explaining and Visualizing Deep Learning","author":"A Nguyen","year":"2019","unstructured":"Nguyen, A., Yosinski, J., Clune, J.: Understanding neural networks via feature visualization: a survey. In: Samek, W., Montavon, G., Vedaldi, A., Hansen, L.K., M\u00fcller, K.-R. (eds.) Explainable AI: Interpreting, Explaining and Visualizing Deep Learning. LNCS (LNAI), vol. 11700, pp. 55\u201376. Springer, Cham (2019). https:\/\/doi.org\/10.1007\/978-3-030-28954-6_4"},{"key":"1_CR34","series-title":"Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence)","doi-asserted-by":"publisher","first-page":"148","DOI":"10.1007\/978-3-030-58285-2_11","volume-title":"KI 2020: Advances in Artificial Intelligence","author":"J Rabold","year":"2020","unstructured":"Rabold, J., Schwalbe, G., Schmid, U.: Expressive explanations of DNNs by\u00a0combining concept analysis with ILP. In: Schmid, U., Kl\u00fcgl, F., Wolter, D. (eds.) KI 2020. LNCS (LNAI), vol. 12325, pp. 
148\u2013162. Springer, Cham (2020). https:\/\/doi.org\/10.1007\/978-3-030-58285-2_11"},{"key":"1_CR35","series-title":"Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence)","doi-asserted-by":"publisher","first-page":"105","DOI":"10.1007\/978-3-319-99960-9_7","volume-title":"Inductive Logic Programming","author":"J Rabold","year":"2018","unstructured":"Rabold, J., Siebers, M., Schmid, U.: Explaining black-box classifiers with ILP \u2013 empowering LIME with Aleph to approximate non-linear decisions with relational rules. In: Riguzzi, F., Bellodi, E., Zese, R. (eds.) ILP 2018. LNCS (LNAI), vol. 11105, pp. 105\u2013117. Springer, Cham (2018). https:\/\/doi.org\/10.1007\/978-3-319-99960-9_7"},{"key":"1_CR36","unstructured":"Redmon, J., Farhadi, A.: YOLOv3: an incremental improvement. arXiv preprint arXiv:1804.02767 (2018)"},{"key":"1_CR37","unstructured":"Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in Neural Information Processing Systems, vol. 28 (2015)"},{"key":"1_CR38","unstructured":"Ribeiro, M.T., Singh, S., Guestrin, C.: Model-agnostic interpretability of machine learning. arXiv preprint arXiv:1606.05386 (2016)"},{"issue":"5","key":"1_CR39","doi-asserted-by":"publisher","first-page":"206","DOI":"10.1038\/s42256-019-0048-x","volume":"1","author":"C Rudin","year":"2019","unstructured":"Rudin, C.: Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1(5), 206\u2013215 (2019). https:\/\/doi.org\/10.1038\/s42256-019-0048-x","journal-title":"Nat. Mach. Intell."},{"key":"1_CR40","unstructured":"Schwalbe, G.: Concept embedding analysis: a review. arXiv arXiv:2203.13909 [cs, stat], March 2022"},{"key":"1_CR41","unstructured":"Schwalbe, G., Finzel, B.: A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts. 
arXiv e-prints, pp. arXiv\u20132105 (2021)"},{"key":"1_CR42","unstructured":"Schwalbe, G., Wirth, C., Schmid, U.: Enabling verification of deep neural networks in perception tasks using fuzzy logic and concept embeddings, March 2022"},{"key":"1_CR43","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-CAM: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 618\u2013626 (2017)","DOI":"10.1109\/ICCV.2017.74"},{"key":"1_CR44","unstructured":"Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)"},{"key":"1_CR45","doi-asserted-by":"publisher","unstructured":"Varghese, S., et al.: An unsupervised temporal consistency (TC) loss to improve the performance of semantic segmentation networks. In: 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, June 2021, pp. 12\u201320 (2021). https:\/\/doi.org\/10.1109\/CVPRW53098.2021.00010","DOI":"10.1109\/CVPRW53098.2021.00010"},{"key":"1_CR46","unstructured":"Wan, A., et al.: NBDT: neural-backed decision tree. In: Posters 2021 International Conference on Learning Representations, September 2020"},{"key":"1_CR47","doi-asserted-by":"crossref","unstructured":"Wang, A., Lee, W.N., Qi, X.: HINT: hierarchical neuron concept explainer. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 10254\u201310264 (2022)","DOI":"10.1109\/CVPR52688.2022.01001"},{"key":"1_CR48","unstructured":"Wang, D., Cui, X., Wang, Z.J.: CHAIN: concept-harmonized hierarchical inference interpretation of deep convolutional neural networks. CoRR abs\/2002.01660 (2020)"},{"key":"1_CR49","doi-asserted-by":"crossref","unstructured":"Zhang, Q., Cao, R., Shi, F., Wu, Y.N., Zhu, S.C.: Interpreting CNN knowledge via an explanatory graph. 
In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pp. 4454\u20134463. AAAI Press (2018)","DOI":"10.1609\/aaai.v32i1.11819"},{"key":"1_CR50","doi-asserted-by":"crossref","unstructured":"Zhang, R., Madumal, P., Miller, T., Ehinger, K.A., Rubinstein, B.I.: Invertible concept-based explanations for CNN models with non-negative concept activation vectors. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp. 11682\u201311690 (2021)","DOI":"10.1609\/aaai.v35i13.17389"},{"key":"1_CR51","doi-asserted-by":"crossref","unstructured":"Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., Torralba, A.: Learning deep features for discriminative localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921\u20132929 (2016)","DOI":"10.1109\/CVPR.2016.319"}],"container-title":["Communications in Computer and Information Science","Machine Learning and Principles and Practice of Knowledge Discovery in Databases"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-74630-7_1","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,2,8]],"date-time":"2025-02-08T04:27:17Z","timestamp":1738988837000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-74630-7_1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025]]},"ISBN":["9783031746291","9783031746307"],"references-count":51,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-74630-7_1","relation":{},"ISSN":["1865-0929","1865-0937"],"issn-type":[{"type":"print","value":"1865-0929"},{"type":"electronic","value":"1865-0937"}],"subject":[],"published":{"date-parts":[[2025]]},"assertion":[{"value":"8 February 2025","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"ECML 
PKDD","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Joint European Conference on Machine Learning and Knowledge Discovery in Databases","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Turin","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Italy","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2023","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"18 September 2023","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"22 September 2023","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"23","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"ecml2023","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/2023.ecmlpkdd.org\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}