{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,28]],"date-time":"2026-04-28T01:15:06Z","timestamp":1777338906325,"version":"3.51.4"},"publisher-location":"New York, NY, USA","reference-count":66,"publisher":"ACM","license":[{"start":{"date-parts":[[2025,4,22]],"date-time":"2025-04-22T00:00:00Z","timestamp":1745280000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["NSFC62306276\/NSFCU23B2055\/NSFCU19B2027"],"award-info":[{"award-number":["NSFC62306276\/NSFCU23B2055\/NSFCU19B2027"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Zhejiang Provincial Natural Science Foundation of China","award":["No. LQ23F020017"],"award-info":[{"award-number":["No. 
LQ23F020017"]}]},{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["226-2023-00138"],"award-info":[{"award-number":["226-2023-00138"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Yongjiang Talent Introduction Programme","award":["2022A-238-G"],"award-info":[{"award-number":["2022A-238-G"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2025,4,22]]},"DOI":"10.1145\/3696410.3714816","type":"proceedings-article","created":{"date-parts":[[2025,4,22]],"date-time":"2025-04-22T22:52:18Z","timestamp":1745362338000},"page":"119-133","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["OntoTune: Ontology-Driven Self-training for Aligning Large Language Models"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-5656-6859","authenticated-orcid":false,"given":"Zhiqiang","family":"Liu","sequence":"first","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-0228-3477","authenticated-orcid":false,"given":"Chengtao","family":"Gan","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-5004-1705","authenticated-orcid":false,"given":"Junjie","family":"Wang","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-4046-1003","authenticated-orcid":false,"given":"Yichi","family":"Zhang","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0009-4863-337X","authenticated-orcid":false,"given":"Zhongpu","family":"Bo","sequence":"additional","affiliation":[{"name":"Ant Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2639-9462","authenticated-orcid":false,"given":"Mengshu","family":"Sun","sequence":"additional","affiliation":[{"name":"Ant Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5496-7442","authenticated-orcid":false,"given":"Huajun","family":"Chen","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8429-9326","authenticated-orcid":false,"given":"Wen","family":"Zhang","sequence":"additional","affiliation":[{"name":"Zhejiang University, Hangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2025,4,22]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2404.16621"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1016\/J.ARTINT.2024.104145"},{"key":"e_1_3_2_1_3_1","volume-title":"A Survey. CoRR abs\/2202.12040","author":"Amini Massih-Reza","year":"2022","unstructured":"Massih-Reza Amini, Vasilii Feofanov, Lo\u00efc Pauletto, Emilie Devijver, and Yury Maximov. 2022. Self-Training: A Survey. CoRR abs\/2202.12040 (2022). arXiv:2202.12040 https:\/\/arxiv.org\/abs\/2202.12040"},{"key":"e_1_3_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1"},{"key":"e_1_3_2_1_5_1","volume-title":"Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2018","author":"Berend G\u00e1bor","year":"2018","unstructured":"G\u00e1bor Berend, M\u00e1rton Makrai, and Peter F\u00f6ldi\u00e1k. 2018. 300-sparsans at SemEval2018 Task 9: Hypernymy as interaction of sparse attributes. In Proceedings of The 12th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2018, New Orleans, Louisiana, USA, June 5--6, 2018, Marianna Apidianaki, Saif M. 
Mohammad, Jonathan May, Ekaterina Shutova, Steven Bethard, and Marine Carpuat (Eds.). Association for Computational Linguistics, 928--934. doi:10.18653\/V1\/S18-1152"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S18-1116"},{"key":"e_1_3_2_1_7_1","volume-title":"ACL 2024, Bangkok, Thailand and virtual meeting, August 11--16","author":"Bhatia Gagan","year":"2024","unstructured":"Gagan Bhatia, El Moatez Billah Nagoudi, Hasan Cavusoglu, and Muhammad Abdul-Mageed. 2024. FinTral: A Family of GPT-4 Level Multimodal Financial Large Language Models. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11--16, 2024, Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, 13064--13087. https:\/\/aclanthology.org\/2024.findings-acl.774"},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/S18-1115"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDE55515.2023.00265"},{"key":"e_1_3_2_1_10_1","volume-title":"The Twelfth International Conference on Learning Representations, ICLR 2024","author":"Cheng Daixuan","year":"2024","unstructured":"Daixuan Cheng, Shaohan Huang, and Furu Wei. 2024. Adapting Large Language Models via Reading Comprehension. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7--11, 2024. OpenReview.net. https:\/\/openreview.net\/forum?id=y886UXPEZ0"},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2408.06142"},{"key":"e_1_3_2_1_12_1","volume-title":"Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. CoRR abs\/1803.05457","author":"Clark Peter","year":"2018","unstructured":"Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge. 
CoRR abs\/1803.05457 (2018). arXiv:1803.05457 http:\/\/arxiv.org\/abs\/1803.05457"},{"key":"e_1_3_2_1_13_1","unstructured":"OpenCompass Contributors. 2023. OpenCompass: A Universal Evaluation Platform for Foundation Models. https:\/\/github.com\/open-compass\/opencompass."},{"key":"e_1_3_2_1_14_1","unstructured":"Felix J Dorfner Amin Dada Felix Busch Marcus R Makowski Tianyu Han Daniel Truhn Jens Kleesiek Madhumita Sushil Jacqueline Lammert Lisa C Adams et al. 2024. Biomedical Large Languages Models Seem not to be Superior to Generalist Models on Unseen Medical Data. arXiv preprint arXiv:2408.13833 (2024)."},{"key":"e_1_3_2_1_15_1","unstructured":"Abhimanyu Dubey Abhinav Jauhri Abhinav Pandey Abhishek Kadian Ahmad Al-Dahle Aiesha Letman Akhil Mathur Alan Schelten Amy Yang Angela Fan et al. 2024. The llama 3 herd of models. arXiv preprint arXiv:2407.21783 (2024)."},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2405.05904"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2405.01886"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2304.08247"},{"key":"e_1_3_2_1_19_1","volume-title":"Revisiting Self-Training for Neural Sequence Generation. In 8th International Conference on Learning Representations, ICLR 2020","author":"He Junxian","year":"2020","unstructured":"Junxian He, Jiatao Gu, Jiajun Shen, and Marc'Aurelio Ranzato. 2020. Revisiting Self-Training for Neural Sequence Generation. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26--30, 2020. OpenReview.net. https:\/\/openreview.net\/forum?id=SJgdnAVKDH"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1"},{"key":"e_1_3_2_1_21_1","volume-title":"Measuring Massive Multitask Language Understanding. 
In 9th International Conference on Learning Representations, ICLR 2021","author":"Hendrycks Dan","year":"2021","unstructured":"Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring Massive Multitask Language Understanding. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3--7, 2021. OpenReview.net. https:\/\/openreview.net\/forum?id=d7KBjmI3GmQ"},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2402.06457"},{"key":"e_1_3_2_1_23_1","volume-title":"LoRA: Low-Rank Adaptation of Large Language Models. In The Tenth International Conference on Learning Representations, ICLR 2022","author":"Hu Edward J.","year":"2022","unstructured":"Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-Rank Adaptation of Large Language Models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25--29, 2022. OpenReview.net. https:\/\/openreview.net\/forum?id=nZeVKeeFYf9"},{"key":"e_1_3_2_1_24_1","volume-title":"Large Language Models Cannot Self-Correct Reasoning Yet. In The Twelfth International Conference on Learning Representations, ICLR 2024","author":"Huang Jie","year":"2024","unstructured":"Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. 2024. Large Language Models Cannot Self-Correct Reasoning Yet. In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7--11, 2024. OpenReview.net. https:\/\/openreview.net\/forum?id=IkmD3fKBPQ"},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.67"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.findings-acl.642"},{"key":"e_1_3_2_1_27_1","unstructured":"Albert Q. 
Jiang Alexandre Sablayrolles Arthur Mensch Chris Bamford Devendra Singh Chaplot Diego de Las Casas Florian Bressand Gianna Lengyel Guillaume Lample Lucile Saulnier L\u00e9lio Renard Lavaud Marie-Anne Lachaux Pierre Stock Teven Le Scao Thibaut Lavril Thomas Wang Timoth\u00e9e Lacroix and William El Sayed. 2023. Mistral 7B. CoRR abs\/2310.06825 (2023). doi:10.48550\/ARXIV.2310.06825 arXiv:2310.06825"},{"key":"e_1_3_2_1_28_1","volume-title":"What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams. CoRR abs\/2009.13081","author":"Jin Di","year":"2020","unstructured":"Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020. What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams. CoRR abs\/2009.13081 (2020). arXiv:2009.13081 https:\/\/arxiv.org\/abs\/2009.13081"},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-1259"},{"key":"e_1_3_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1"},{"key":"e_1_3_2_1_31_1","volume-title":"ACL 2024, Bangkok, Thailand and virtual meeting, August 11--16","author":"Labrak Yanis","year":"2024","unstructured":"Yanis Labrak, Adrien Bazoge, Emmanuel Morin, Pierre-Antoine Gourraud, Mickael Rouvier, and Richard Dufour. 2024. BioMistral: A Collection of Open-Source Pretrained Large Language Models for Medical Domains. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11--16, 2024, Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, 5848--5864. 
https:\/\/aclanthology.org\/2024.findings-acl.348"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.599"},{"key":"e_1_3_2_1_33_1","volume-title":"Hashimoto","author":"Li Xuechen","year":"2023","unstructured":"Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. AlpacaEval: An Automatic Evaluator of Instruction-following Models. https:\/\/github.com\/tatsu-lab\/alpaca_eval."},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2404.07965"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1093\/BIB"},{"key":"e_1_3_2_1_36_1","volume-title":"International Conference on Machine Learning, ICML 2023","volume":"24477","author":"Meng Yu","year":"2023","unstructured":"Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek F. Abdelzaher, and Jiawei Han. 2023. Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning. In International Conference on Machine Learning, ICML 2023, 23--29 July 2023, Honolulu, Hawaii, USA (Proceedings of Machine Learning Research, Vol. 202), Andreas Krause, Emma Brunskill, Kyunghyun Cho, Barbara Engelhardt, Sivan Sabato, and Jonathan Scarlett (Eds.). PMLR, 24457--24477. https:\/\/proceedings.mlr.press\/v202\/meng23b.html"},{"key":"e_1_3_2_1_37_1","volume-title":"Proceedings of a Workshop held at Plainsboro","author":"Miller George A.","year":"1994","unstructured":"George A. Miller. 1994. WORDNET: A Lexical Database for English. In Human Language Technology, Proceedings of a Workshop held at Plainsboro, New Jersey, USA, March 8--11, 1994. Morgan Kaufmann. 
https:\/\/aclanthology.org\/H94-1111\/"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.127"},{"key":"e_1_3_2_1_39_1","volume-title":"Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC\/COLING 2024","author":"Moskvoretskii Viktor","year":"2024","unstructured":"Viktor Moskvoretskii, Alexander Panchenko, and Irina Nikishina. 2024. Are Large Language Models Good at Lexical Semantics? A Case of Taxonomy Learning. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC\/COLING 2024, 20--25 May, 2024, Torino, Italy, Nicoletta Calzolari, Min-Yen Kan, V\u00e9ronique Hoste, Alessandro Lenci, Sakriani Sakti, and Nianwen Xue (Eds.). ELRA and ICCL, 1498--1510. https:\/\/aclanthology.org\/2024.lrec-main.133"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1"},{"key":"e_1_3_2_1_42_1","volume-title":"Conference on Health, Inference, and Learning, CHIL 2022","volume":"260","author":"Pal Ankit","year":"2022","unstructured":"Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. 2022. MedMCQA: A Large-scale Multi-Subject Multi-Choice Dataset for Medical domain Question Answering. In Conference on Health, Inference, and Learning, CHIL 2022, 7--8 April 2022, Virtual Event (Proceedings of Machine Learning Research, Vol. 174), Gerardo Flores, George H. Chen, Tom J. Pollard, Joyce C. Ho, and Tristan Naumann (Eds.). PMLR, 248--260. https:\/\/proceedings.mlr.press\/v174\/pal22a.html"},{"key":"e_1_3_2_1_43_1","volume-title":"The Twelfth International Conference on Learning Representations, ICLR 2024","author":"Qi Xiangyu","year":"2024","unstructured":"Xiangyu Qi, Yi Zeng, Tinghao Xie, Pin-Yu Chen, Ruoxi Jia, Prateek Mittal, and Peter Henderson. 2024. Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!. 
In The Twelfth International Conference on Learning Representations, ICLR 2024, Vienna, Austria, May 7--11, 2024. OpenReview.net. https:\/\/openreview.net\/forum?id=hTEGyKf0dZ"},{"key":"e_1_3_2_1_44_1","first-page":"1","article-title":"2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 21 (2020), 140:1--140:67. http:\/\/jmlr.org\/papers\/v21\/20-074.html","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.330"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1186\/1472-6947-8-S1-S1"},{"key":"e_1_3_2_1_47_1","unstructured":"Avi Singh John D. Co-Reyes Rishabh Agarwal Ankesh Anand Piyush Patil Xavier Garcia Peter J. Liu James Harrison Jaehoon Lee Kelvin Xu Aaron T. Parisi Abhishek Kumar Alexander A. Alemi Alex Rizkowsky Azade Nova Ben Adlam Bernd Bohnet Gamaleldin Fathy Elsayed Hanie Sedghi Igor Mordatch Isabelle Simpson Izzeddin Gur Jasper Snoek Jeffrey Pennington Jiri Hron Kathleen Kenealy Kevin Swersky Kshiteej Mahajan Laura Culp Lechao Xiao Maxwell L. Bileschi Noah Constant Roman Novak Rosanne Liu Tris Warkentin Yundi Qian Yamini Bansal Ethan Dyer Behnam Neyshabur Jascha Sohl-Dickstein and Noah Fiedel. 2024. Beyond Human Data: Scaling Self-Training for Problem-Solving with Language Models. Trans. Mach. Learn. Res. 2024 (2024). 
https:\/\/openreview.net\/forum?id=lNAyUngGFK"},{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","unstructured":"Hugo Touvron Louis Martin Kevin Stone Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton-Ferrer Moya Chen Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi Alan Schelten Ruan Silva Eric Michael Smith Ranjan Subramanian Xiaoqing Ellen Tan Binh Tang Ross Taylor Adina Williams Jian Xiang Kuan Puxin Xu Zheng Yan Iliyan Zarov Yuchen Zhang Angela Fan Melanie Kambadur Sharan Narang Aur\u00e9lien Rodriguez Robert Stojnic Sergey Edunov and Thomas Scialom. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. CoRR abs\/2307.09288 (2023). doi:10.48550\/ARXIV.2307.09288 arXiv:2307.09288","DOI":"10.48550\/ARXIV.2307.09288"},{"key":"e_1_3_2_1_49_1","volume-title":"ACL 2024, Bangkok, Thailand and virtual meeting, August 11--16","author":"Tyen Gladys","year":"2024","unstructured":"Gladys Tyen, Hassan Mansoor, Victor Carbune, Peter Chen, and Tony Mak. 2024. LLMs cannot find reasoning errors, but can correct them given the error location. In Findings of the Association for Computational Linguistics, ACL 2024, Bangkok, Thailand and virtual meeting, August 11--16, 2024, Lun-Wei Ku, Andre Martins, and Vivek Srikumar (Eds.). Association for Computational Linguistics, 13894--13908. 
https:\/\/aclanthology.org\/2024.findings-acl.826"},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2406.14282"},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2407"},{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1"},{"key":"e_1_3_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.acl-long.754"},{"key":"e_1_3_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2304.14454"},{"key":"e_1_3_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2303.17564"},{"key":"e_1_3_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2018\/777"},{"key":"e_1_3_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01070"},{"key":"e_1_3_2_1_59_1","doi-asserted-by":"publisher","unstructured":"Xi Yang Nima M. Pournejatian Hoo Chang Shin Kaleb E. Smith Christopher Parisien Colin Compas Cheryl Martin Mona G. Flores Ying Zhang Tanja Magoc Christopher A. Harle Gloria P. Lipori Duane A. Mitchell William R. Hogan Elizabeth A. Shenkman Jiang Bian and Yonghui Wu. 2022. GatorTron: A Large Clinical Language Model to Unlock Patient Information from Unstructured Electronic Health Records. CoRR abs\/2203.03540 (2022). doi:10.48550\/ARXIV.2203.03540 arXiv:2203.03540","DOI":"10.48550\/ARXIV.2203.03540"},{"key":"e_1_3_2_1_60_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-long.58"},{"key":"e_1_3_2_1_61_1","volume-title":"STaR: Bootstrapping Reasoning With Reasoning. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022","author":"Zelikman Eric","year":"2022","unstructured":"Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. 2022. STaR: Bootstrapping Reasoning With Reasoning. 
In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022, Sanmi Koyejo, S. Mohamed, A. Agarwal, Danielle Belgrave, K. Cho, and A. Oh (Eds.). http:\/\/papers.nips.cc\/paper_files\/paper\/2022\/hash\/639a9a172c044fbb64175b5fad42e9a5-Abstract-Conference.html"},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3308558.3313612"},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.18653\/V1"},{"key":"e_1_3_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.acl-demos.38"},{"key":"e_1_3_2_1_65_1","volume-title":"LIMA: Less Is More for Alignment. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023","author":"Zhou Chunting","year":"2023","unstructured":"Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023. LIMA: Less Is More for Alignment. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine (Eds.). 
http:\/\/papers.nips.cc\/paper_files\/paper\/2023\/hash\/ac662d74829e4407ce1d126477f4a03a-Abstract-Conference.html"},{"key":"e_1_3_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2311.05112"},{"key":"e_1_3_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2406.04614"},{"key":"e_1_3_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.48550\/ARXIV.2307.15043"}],"event":{"name":"WWW '25: The ACM Web Conference 2025","location":"Sydney NSW Australia","acronym":"WWW '25","sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web"]},"container-title":["Proceedings of the ACM on Web Conference 2025"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3696410.3714816","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3696410.3714816","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:42Z","timestamp":1750295922000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3696410.3714816"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,4,22]]},"references-count":66,"alternative-id":["10.1145\/3696410.3714816","10.1145\/3696410"],"URL":"https:\/\/doi.org\/10.1145\/3696410.3714816","relation":{},"subject":[],"published":{"date-parts":[[2025,4,22]]},"assertion":[{"value":"2025-04-22","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}