{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T14:17:28Z","timestamp":1773843448182,"version":"3.50.1"},"reference-count":89,"publisher":"Association for Computing Machinery (ACM)","issue":"6","funder":[{"name":"National Science Foundation","award":["##2512857, #2512858, 15-18897, 15-13263, 21- 20448, 19-34884, and 22-23812"],"award-info":[{"award-number":["##2512857, #2512858, 15-18897, 15-13263, 21- 20448, 19-34884, and 22-23812"]}]},{"name":"Fonds de Recherche du Quebec"},{"DOI":"10.13039\/100007631","name":"Canadian Institute for Advanced Research","doi-asserted-by":"crossref","id":[{"id":"10.13039\/100007631","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100000038","name":"Natural Sciences and Engineering Research Council of Canada","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100000038","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Softw. Eng. Methodol."],"published-print":{"date-parts":[[2025,7,31]]},"abstract":"<jats:p>\n            Deep Learning (DL) is a class of machine learning algorithms that are used in a wide variety of applications. Like any software system, DL programs can have bugs. To support bug localization in DL programs, several tools have been proposed in the past. As most of the bugs that occur due to improper model structure known as structural bugs lead to inadequate performance during training, it is challenging for developers to identify the root cause and address these bugs. To support bug detection and localization in DL programs, in this article, we propose Theia, which detects and localizes structural bugs in DL programs. 
Unlike previous work, Theia considers the characteristics of the training dataset to automatically detect bugs in DL programs developed using two DL libraries,\n            <jats:italic toggle=\"yes\">Keras<\/jats:italic>\n            and\n            <jats:italic toggle=\"yes\">PyTorch<\/jats:italic>\n            . Since training DL models is a time-consuming process, Theia detects these bugs at the beginning of the training process and alerts the developer with informative messages containing the bug\u2019s location and actionable fixes that help them improve the structure of the model. We evaluated Theia on a benchmark of 40 real-world buggy DL programs obtained from\n            <jats:italic toggle=\"yes\">Stack Overflow<\/jats:italic>\n            . Our results show that Theia successfully localizes 57\/75 structural bugs in the 40 buggy programs, whereas NeuraLint, a state-of-the-art approach capable of localizing structural bugs before training, localizes 17\/75 bugs.\n          <\/jats:p>","DOI":"10.1145\/3708473","type":"journal-article","created":{"date-parts":[[2024,12,16]],"date-time":"2024-12-16T15:52:22Z","timestamp":1734364342000},"page":"1-29","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Leveraging Data Characteristics for Bug Localization in Deep Learning Programs"],"prefix":"10.1145","volume":"34","author":[{"ORCID":"https:\/\/orcid.org\/0009-0007-4729-8421","authenticated-orcid":false,"given":"Ruchira","family":"Manke","sequence":"first","affiliation":[{"name":"Department of Computer Science, Tulane University, New Orleans, Louisiana, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-0213-725X","authenticated-orcid":false,"given":"Mohammad","family":"Wardat","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Oakland University, Rochester, Michigan, 
USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5704-4173","authenticated-orcid":false,"given":"Foutse","family":"Khomh","sequence":"additional","affiliation":[{"name":"Polytechnique Montr\u00e9al, Montreal, Quebec, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9410-9562","authenticated-orcid":false,"given":"Hridesh","family":"Rajan","sequence":"additional","affiliation":[{"name":"School of Science and Engineering, Tulane University, New Orleans, Louisiana, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,7]]},"reference":[{"key":"e_1_3_1_2_2","unstructured":"Cannot Train a Neural Network Solving XOR Mapping. 2015. Retrieved from https:\/\/stackoverflow.com\/questions\/34311586\/"},{"key":"e_1_3_1_3_2","unstructured":"How to Prepare a Dataset for Keras? 2015. Retrieved from https:\/\/stackoverflow.com\/questions\/31880720\/"},{"key":"e_1_3_1_4_2","unstructured":"Trying Kaggle Titanic with Keras.. Getting Loss and Valid_Loss -0.0000. 2015. Retrieved from https:\/\/stackoverflow.com\/questions\/31627380\/"},{"key":"e_1_3_1_5_2","unstructured":"Accuracy Not High Enough for Dogs_Cats Classification Dataset Using CNN with Keras-tf Python. 2016. Retrieved from https:\/\/stackoverflow.com\/questions\/40045159\/"},{"key":"e_1_3_1_6_2","unstructured":"How to Train and Tune an Artificial Multilayer Perceptron Neural Network Using Keras? 2016. Retrieved from https:\/\/stackoverflow.com\/questions\/34673164\/"},{"key":"e_1_3_1_7_2","unstructured":"Keras Low Accuracy Classification Task. 2016. Retrieved from https:\/\/stackoverflow.com\/questions\/38648195\/"},{"key":"e_1_3_1_8_2","unstructured":"Neural Network Accuracy Optimization. 2016. Retrieved from https:\/\/stackoverflow.com\/questions\/39525358\/"},{"key":"e_1_3_1_9_2","unstructured":"Why Can\u2019t my CNN Learn? 2016. Retrieved from https:\/\/stackoverflow.com\/questions\/37229086\/"},{"key":"e_1_3_1_10_2","unstructured":"How Does Keras Handle Multilabel Classification? 2017. 
Retrieved from https:\/\/stackoverflow.com\/questions\/44164749\/"},{"key":"e_1_3_1_11_2","unstructured":"Keras: Training Loss Decreases (Accuracy Increase) While Validation Loss Increases (Accuracy Decrease). 2017. Retrieved from https:\/\/stackoverflow.com\/questions\/47272383\/"},{"key":"e_1_3_1_12_2","unstructured":"Why Does a Binary Keras CNN Always Predict 1? 2017. Retrieved from https:\/\/stackoverflow.com\/questions\/45378493\/"},{"key":"e_1_3_1_13_2","unstructured":"CNN Not Efficient on My Dataset in Keras. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/51749207\/"},{"key":"e_1_3_1_14_2","unstructured":"CNN Train Accuracy Gets Better during training but Test Accuracy Stays around 40%. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/48594888\/"},{"key":"e_1_3_1_15_2","unstructured":"CNN with Keras Accuracy Not Improving. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/50079585\/"},{"key":"e_1_3_1_16_2","unstructured":"Create a Square Function Estimator with Keras. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/48221692\/"},{"key":"e_1_3_1_17_2","unstructured":"How to Improve the Performance of CNN Model for a Specific Dataset? Getting Low Accuracy on Both Training and Testing Dataset. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/70554413\/"},{"key":"e_1_3_1_18_2","unstructured":"Input Nodes in Keras NN. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/51930566\/"},{"key":"e_1_3_1_19_2","unstructured":"Keras Overfits on One Class Cifar-10. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/51118032\/"},{"key":"e_1_3_1_20_2","unstructured":"My Keras Model Does Not Predict Negative Values. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/48251943\/"},{"key":"e_1_3_1_21_2","unstructured":"Non Linear Regression: Why Isn\u2019t the Model Learning? 2018. 
Retrieved from https:\/\/stackoverflow.com\/questions\/48934338\/"},{"key":"e_1_3_1_22_2","unstructured":"Simple Keras Neural Network Isn\u2019t Learning. 2018. Retrieved from https:\/\/stackoverflow.com\/questions\/48385830\/"},{"key":"e_1_3_1_23_2","unstructured":"Accuracy Equals 0 CNN Python Keras. Retrieved from https:\/\/stackoverflow.com\/questions\/58844149\/"},{"key":"e_1_3_1_24_2","unstructured":"Keras CNN Intermediate Level Has No Feature Changes. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/54923573\/"},{"key":"e_1_3_1_25_2","unstructured":"Keras CNN Model with a Wrong ROC Curve and Low Accuracy. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/56914715\/"},{"key":"e_1_3_1_26_2","unstructured":"Loss Doesn\u2019t Decrease in PyTorch CNN. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/58666904\/"},{"key":"e_1_3_1_27_2","unstructured":"Low Accuracy after Training a CNN. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/59325381\/"},{"key":"e_1_3_1_28_2","unstructured":"Manual Predictions of Neural Net Go Wrong. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/58609115\/"},{"key":"e_1_3_1_29_2","unstructured":"My CNN Accuracy Goes down after Adding One More Feature. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/55343875\/"},{"key":"e_1_3_1_30_2","unstructured":"Sudden 50% Accuracy Drop while Training Convolutional NN. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/55198221\/"},{"key":"e_1_3_1_31_2","unstructured":"Super Low Accuracy for Neural Network Model. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/59278771\/"},{"key":"e_1_3_1_32_2","unstructured":"tf.keras Loss Becomes NaN. 2019. Retrieved from https:\/\/stackoverflow.com\/questions\/55328966\/"},{"key":"e_1_3_1_33_2","unstructured":"Getting Pretty Bad Accuracy Using CNN Model in Keras. 2020. 
Retrieved from https:\/\/stackoverflow.com\/questions\/65275387\/"},{"key":"e_1_3_1_34_2","unstructured":"Keras Model Not Training Layers Validation Accuracy Always 0.5. 2020. Retrieved from https:\/\/stackoverflow.com\/questions\/60261103\/"},{"key":"e_1_3_1_35_2","unstructured":"Normalize Training Data with Channel Means and Standard Deviation in CNN Model. 2020. Retrieved from https:\/\/stackoverflow.com\/questions\/63027146\/"},{"key":"e_1_3_1_36_2","unstructured":"Poor Accuracy of CNN Model with Keras. 2020. Retrieved from https:\/\/stackoverflow.com\/questions\/64522751\/"},{"key":"e_1_3_1_37_2","unstructured":"Pytorch CNN Loss Is Not Changing. 2020. Retrieved from https:\/\/stackoverflow.com\/questions\/60003876\/"},{"key":"e_1_3_1_38_2","unstructured":"Why Is My Model Performing Poorly for a Keras Sequential Model? 2020. Retrieved from https:\/\/stackoverflow.com\/questions\/64188884\/"},{"key":"e_1_3_1_39_2","unstructured":"Pytorch CNN Not Learning. 2021. Retrieved from https:\/\/stackoverflow.com\/questions\/65659888\/"},{"key":"e_1_3_1_40_2","unstructured":"Why Does the Loss Decreases and the Accuracy Doesn\u2019t Increases? PyTorch. 2021. Retrieved from https:\/\/stackoverflow.com\/questions\/70428592\/"},{"key":"e_1_3_1_41_2","first-page":"1103","volume-title":"International Conference on Learning Representations (ICLR \u201917)","volume":"2","author":"Baker Bowen","year":"2017","unstructured":"Bowen Baker, Otkrist Gupta, Nikhil Naik, and Ramesh Raskar. 2017. Designing neural network architectures using reinforcement learning. In International Conference on Learning Representations (ICLR \u201917), Vol. 
2, 1103\u20131120."},{"key":"e_1_3_1_42_2","doi-asserted-by":"publisher","DOI":"10.1145\/3529318"},{"key":"e_1_3_1_43_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-35289-8_26"},{"key":"e_1_3_1_44_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510057"},{"key":"e_1_3_1_45_2","doi-asserted-by":"publisher","DOI":"10.1145\/3540250.3549123"},{"key":"e_1_3_1_46_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510099"},{"key":"e_1_3_1_47_2","unstructured":"Chigozie Nwankpa, Winifred Ijomah, Anthony Gachagan, and Stephen Marshall. 2018. Activation functions: Comparison of trends in practice and research for deep learning. arXiv:1811.03378. Retrieved from https:\/\/arxiv.org\/abs\/1811.03378"},{"key":"e_1_3_1_48_2","first-page":"248","article-title":"ImageNet: A large-scale hierarchical image database","author":"Deng J.","year":"2009","unstructured":"J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In IEEE Computer Vision and Pattern Recognition (CVPR), 248\u2013255.","journal-title":"IEEE Computer Vision and Pattern Recognition (CVPR)"},{"key":"e_1_3_1_49_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-16722-6_10"},{"key":"e_1_3_1_50_2","first-page":"265","article-title":"TensorFlow: A system for large-scale machine learning","author":"Abadi Martin","year":"2016","unstructured":"Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation, 265\u2013283.","journal-title":"12th USENIX Symposium on Operating Systems Design and Implementation"},{"key":"e_1_3_1_51_2","unstructured":"Francois Chollet. 2015. Keras: The Python deep learning library. Retrieved from https:\/\/keras.io\/"},{"key":"e_1_3_1_52_2","unstructured":"Francois Chollet. 2015. 
Keras: The Python deep learning library. Retrieved from https:\/\/keras.io\/api\/losses\/"},{"key":"e_1_3_1_53_2","doi-asserted-by":"publisher","DOI":"10.1109\/ASE56229.2023.00171"},{"key":"e_1_3_1_54_2","doi-asserted-by":"publisher","DOI":"10.1145\/263698.264352"},{"key":"e_1_3_1_55_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_56_2","doi-asserted-by":"publisher","DOI":"10.1016\/B978-0-12-741252-8.50010-8"},{"key":"e_1_3_1_57_2","first-page":"1","article-title":"Exploring strategies for training deep neural networks","volume":"10","author":"Hugo Larochelle","year":"2009","unstructured":"Larochelle Hugo, Yoshua Bengio, J\u00e9r\u00f4me Louradour, and Pascal Lamblin. 2009. Exploring strategies for training deep neural networks. In Journal of Machine Learning Research, Vol. 10, 1\u201340.","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_3_1_58_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380395"},{"key":"e_1_3_1_59_2","doi-asserted-by":"publisher","DOI":"10.1109\/HSI.2018.8431232"},{"key":"e_1_3_1_60_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-88389-8_12"},{"key":"e_1_3_1_61_2","doi-asserted-by":"publisher","DOI":"10.1145\/3338906.3338955"},{"key":"e_1_3_1_62_2","doi-asserted-by":"publisher","DOI":"10.1145\/3377811.3380378"},{"key":"e_1_3_1_63_2","doi-asserted-by":"publisher","DOI":"10.1145\/3292500.3330648"},{"key":"e_1_3_1_64_2","first-page":"2874","volume-title":"International Conference on Learning Representations (ICLR \u201917)","volume":"4","author":"Keskar Nitish Shirish","year":"2017","unstructured":"Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Peter Tang. 2017. On large-batch training for deep learning: Generalization gap and sharp minima. In International Conference on Learning Representations (ICLR \u201917), Vol. 4, 2874\u20132889."},{"key":"e_1_3_1_65_2","author":"Krizhevsky Alex","year":"2009","unstructured":"Alex Krizhevsky. 
2009. Learning Multiple Layers of Features from Tiny Images. Technical Report.","journal-title":"Learning Multiple Layers of Features from Tiny Images"},{"issue":"7","key":"e_1_3_1_66_2","first-page":"1","article-title":"Convolutional deep belief networks on Cifar-10","volume":"40","author":"Krizhevsky Alex","year":"2010","unstructured":"Alex Krizhevsky and Geoff Hinton. 2010. Convolutional deep belief networks on Cifar-10. Unpublished Manuscript 40 7 (2010), 1\u20139.","journal-title":"Unpublished Manuscript"},{"key":"e_1_3_1_67_2","first-page":"1097","volume-title":"the 25th International Conference on Neural Information Processing Systems (NIPS \u201912)","volume":"1","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional networks. In the 25th International Conference on Neural Information Processing Systems (NIPS \u201912), Vol. 1, 1097\u20131105."},{"key":"e_1_3_1_68_2","volume-title":"Neural Networks: Tricks of the Trade, Springer","author":"LeCun Yann A.","year":"2012","unstructured":"Yann A. LeCun, L\u00e9on Bottou, Genevieve B. Orr, and Klaus-Robert Muller. 2012. Efficient backprop. In Neural Networks: Tricks of the Trade, Springer, Berlin, 9\u201350."},{"key":"e_1_3_1_69_2","doi-asserted-by":"publisher","DOI":"10.1162\/neco.1989.1.4.541"},{"key":"e_1_3_1_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/5.726791"},{"key":"e_1_3_1_71_2","doi-asserted-by":"publisher","DOI":"10.1145\/3236024.3236082"},{"key":"e_1_3_1_72_2","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.14292112"},{"key":"e_1_3_1_73_2","volume-title":"Two Approaches to Interprocedural Data Flow Analysis","author":"Micha Sharir","year":"1978","unstructured":"Sharir Micha and Amir Pnueli. 1978. Two Approaches to Interprocedural Data Flow Analysis. 
New York University, Courant Institute of Mathematical Sciences."},{"key":"e_1_3_1_74_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-023-10291-1"},{"key":"e_1_3_1_75_2","first-page":"807","volume-title":"International Conference on Machine Learning (ICML \u201910)","author":"Nair Vinod","year":"2010","unstructured":"Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In International Conference on Machine Learning (ICML \u201910), 807\u2013814."},{"key":"e_1_3_1_76_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3470006","article-title":"Automatic fault detection for deep learning programs using graph transformations","volume":"31","author":"Nikanjam Amin","year":"2021","unstructured":"Amin Nikanjam, Braiek Ben Houssem, Morovati Mehdi Mohammad, and Khomh Foutse. 2021. Automatic fault detection for deep learning programs using graph transformations. ACM Trans. Softw. Eng. Methodol. 31 (2021), 1\u201327.","journal-title":"ACM Trans. Softw. Eng. Methodol."},{"key":"e_1_3_1_77_2","unstructured":"Keiron O\u2019Shea and Ryan Nash. 2015. An introduction to convolutional neural networks. arXiv:1511.08458. Retrieved from https:\/\/arxiv.org\/abs\/1511.08458"},{"key":"e_1_3_1_78_2","unstructured":"Adam Paszke, Sam Gross, Soumith Chintala, and Gregory Chanan. 2016. PyTorch: Open Source Machine Learning Framework. Retrieved from https:\/\/pytorch.org\/"},{"key":"e_1_3_1_79_2","unstructured":"Richard Csaky. 2019. Deep learning based chatbot models. arXiv:1908.08835. Retrieved from https:\/\/arxiv.org\/abs\/1908.08835"},{"key":"e_1_3_1_80_2","first-page":"129","article-title":"Deep learning detecting fraud in credit card transactions","author":"Roy Abhimanyu","year":"2018","unstructured":"Abhimanyu Roy, Jingyi Sun, Robert Mahoney, Loreto Alonzi, Stephen Adams, and Peter Beling. 2018. Deep learning detecting fraud in credit card transactions. 
In 2018 Systems and Information Engineering Design Symposium, 129\u2013134.","journal-title":"2018 Systems and Information Engineering Design Symposium"},{"key":"e_1_3_1_81_2","doi-asserted-by":"publisher","DOI":"10.1145\/3411764.3445538"},{"key":"e_1_3_1_82_2","first-page":"448","volume-title":"International Conference on Machine Learning","author":"Sergey Ioffe","year":"2015","unstructured":"Ioffe Sergey and Christian Szegedy. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 448\u2013456."},{"key":"e_1_3_1_83_2","volume-title":"Topics in Advanced Language Implementation","author":"Shivers Olin","year":"1991","unstructured":"Olin Shivers. 1991. Data-flow analysis and type recovery in Scheme. In Topics in Advanced Language Implementation. Peter Lee (Ed.), MIT Press."},{"key":"e_1_3_1_84_2","first-page":"712","volume-title":"29th European Conference on Object-Oriented Programming (ECOOP \u201915)","author":"Shiyi Wei","year":"2015","unstructured":"Wei Shiyi and Barbara G. Ryder. 2015. Adaptive context-sensitive analysis for JavaScript. In 29th European Conference on Object-Oriented Programming (ECOOP \u201915), 712\u2013734."},{"key":"e_1_3_1_85_2","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556. 
Retrieved from https:\/\/arxiv.org\/abs\/1409.1556"},{"key":"e_1_3_1_86_2","doi-asserted-by":"publisher","DOI":"10.5555\/2627435.2670313"},{"key":"e_1_3_1_87_2","doi-asserted-by":"publisher","DOI":"10.1145\/3510003.3510071"},{"key":"e_1_3_1_88_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00034"},{"key":"e_1_3_1_89_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00043"},{"key":"e_1_3_1_90_2","first-page":"129","article-title":"An empirical study on TensorFlow program bugs","author":"Zhang Yuhao","year":"2018","unstructured":"Yuhao Zhang, Chen Yifan, Cheung Shing-Chi, Xiong Yingfei, and Zhang Lu. 2018. An empirical study on TensorFlow program bugs. In 27th ACM SIGSOFT International Symposium on Software Testing and Analysis, 129\u2013140.","journal-title":"27th ACM SIGSOFT International Symposium on Software Testing and Analysis"}],"container-title":["ACM Transactions on Software Engineering and Methodology"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3708473","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,1]],"date-time":"2025-07-01T13:30:12Z","timestamp":1751376612000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3708473"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7]]},"references-count":89,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,7,31]]}},"alternative-id":["10.1145\/3708473"],"URL":"https:\/\/doi.org\/10.1145\/3708473","relation":{},"ISSN":["1049-331X","1557-7392"],"issn-type":[{"value":"1049-331X","type":"print"},{"value":"1557-7392","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7]]},"assertion":[{"value":"2023-10-25","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2024-11-20","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-07-01","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}