{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T18:26:15Z","timestamp":1776104775465,"version":"3.50.1"},"reference-count":69,"publisher":"Association for Computing Machinery (ACM)","issue":"7","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2025,10,18]]},"abstract":"<jats:p>In various work contexts, such as meeting scheduling, collaborating, and project planning, collective decision-making is essential but often challenging due to diverse individual preferences, varying work focuses, and power dynamics among members. To address this, we propose a system leveraging Large Language Models (LLMs) to facilitate group decision-making by managing conversations and balancing preferences among individuals. Our system aims to extract individual preferences from each member's conversation with the system and suggest options that satisfy the preferences of the members. We specifically apply this system to corporate meeting scheduling. We create synthetic employee profiles and simulate conversations at scale, leveraging LLMs to evaluate the system performance as a novel approach to conducting a user study. Our results indicate efficient coordination with reduced interactions between the members and the LLM-based system. The system refines and improves its proposed options over time, ensuring that many of the members' individual preferences are satisfied in an equitable way. Finally, we conduct a survey study involving human participants to assess our system's ability to aggregate preferences and reason about them. 
Our findings show that the system exhibits strong performance in both dimensions.<\/jats:p>","DOI":"10.1145\/3757418","type":"journal-article","created":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T17:06:01Z","timestamp":1760634361000},"page":"1-44","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Leveraging Large Language Models for Collective Decision-Making"],"prefix":"10.1145","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1728-0729","authenticated-orcid":false,"given":"Marios","family":"Papachristou","sequence":"first","affiliation":[{"name":"Cornell University, Ithaca, NY, USA and Arizona State University, Tempe, AZ, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6615-8615","authenticated-orcid":false,"given":"Longqi","family":"Yang","sequence":"additional","affiliation":[{"name":"Microsoft, Redmond, WA, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0144-9822","authenticated-orcid":false,"given":"Chin-Chia","family":"Hsu","sequence":"additional","affiliation":[{"name":"Microsoft, Redmond, WA, USA"}]}],"member":"320","published-online":{"date-parts":[[2025,10,16]]},"reference":[{"key":"e_1_2_2_1_1","volume-title":"Using Large Language Models to Simulate Multiple Humans. arXiv preprint arXiv:2208.10264","author":"Aher Gati","year":"2022","unstructured":"Gati Aher, Rosa I Arriaga, and Adam Tauman Kalai. 2022. Using Large Language Models to Simulate Multiple Humans. arXiv preprint arXiv:2208.10264 (2022)."},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-44799-5_12"},{"key":"e_1_2_2_3_1","unstructured":"Lisa P. Argyle Ethan Busby Joshua Gubler Chris Bail Thomas Howe Christopher Rytting and David Wingate. 2023. AI Chat Assistants can Improve Conversations about Divisive Topics. 
arXiv:2302.07268 [cs.HC]"},{"key":"e_1_2_2_4_1","volume-title":"Out of One","author":"Argyle Lisa P","year":"2022","unstructured":"Lisa P Argyle, Ethan C Busby, Nancy Fulda, Joshua Gubler, Christopher Rytting, and David Wingate. 2022. Out of One, Many: Using Language Models to Simulate Human Samples. arXiv preprint arXiv:2209.06899 (2022)."},{"key":"e_1_2_2_5_1","unstructured":"Yuntao Bai Andy Jones Kamal Ndousse Amanda Askell Anna Chen Nova DasSarma Dawn Drain Stanislav Fort Deep Ganguli Tom Henighan et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862 (2022)."},{"key":"e_1_2_2_6_1","first-page":"38176","article-title":"Fine-tuning language models to find agreement among humans with diverse preferences","volume":"35","author":"Bakker Michiel","year":"2022","unstructured":"Michiel Bakker, Martin Chadwick, Hannah Sheahan, Michael Tessler, Lucy Campbell-Gillingham, Jan Balaguer, Nat McAleese, Amelia Glaese, John Aslanides, Matt Botvinick, et al., 2022. Fine-tuning language models to find agreement among humans with diverse preferences. Advances in Neural Information Processing Systems, Vol. 35 (2022), 38176-38189.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2006.10.004"},{"key":"e_1_2_2_8_1","volume-title":"Using gpt for market research. Available at SSRN 4395751","author":"Brand James","year":"2023","unstructured":"James Brand, Ayelet Israeli, and Donald Ngwe. 2023. Using gpt for market research. Available at SSRN 4395751 (2023)."},{"key":"e_1_2_2_9_1","volume-title":"Ranking with Long-Term Constraints. arXiv preprint arXiv:2307.04923","author":"Brantley Kiant\u00e9","year":"2023","unstructured":"Kiant\u00e9 Brantley, Zhichong Fang, Sarah Dean, and Thorsten Joachims. 2023. Ranking with Long-Term Constraints. 
arXiv preprint arXiv:2307.04923 (2023)."},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3534642"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/1124772.1124929"},{"key":"e_1_2_2_12_1","volume-title":"Yuanzhi Li, Scott Lundberg, et al.","author":"Bubeck S\u00e9bastien","year":"2023","unstructured":"S\u00e9bastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al., 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 (2023)."},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3209978.3209998"},{"key":"e_1_2_2_14_1","volume-title":"Retrieved April","volume":"29","author":"Carlini Nicholas","year":"2020","unstructured":"Nicholas Carlini. 2020. Privacy Considerations in Large Language Models. Retrieved April, Vol. 29 (2020), 2021."},{"key":"e_1_2_2_15_1","volume-title":"Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201","author":"Chan Chi-Min","year":"2023","unstructured":"Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, and Zhiyuan Liu. 2023. Chateval: Towards better llm-based evaluators through multi-agent debate. arXiv preprint arXiv:2308.07201 (2023)."},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.2310431120"},{"key":"e_1_2_2_17_1","volume-title":"Conducting Qualitative Interviews with AI. Available at SSRN","author":"Chopra Felix","year":"2023","unstructured":"Felix Chopra and Ingar Haaland. 2023. Conducting Qualitative Interviews with AI. Available at SSRN (2023)."},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025780"},{"key":"e_1_2_2_19_1","unstructured":"Elisabeth Crawford. 2009. Learning to improve negotiation in semi-cooperative agreement problems. Ph.D. Dissertation. 
Carnegie Mellon University."},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10458-006-0010-2"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3490486.3538346"},{"key":"e_1_2_2_22_1","volume-title":"Improving Factuality and Reasoning in Language Models through Multiagent Debate. arXiv preprint arXiv:2305.14325","author":"Du Yilun","year":"2023","unstructured":"Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving Factuality and Reasoning in Language Models through Multiagent Debate. arXiv preprint arXiv:2305.14325 (2023)."},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/192844.193049"},{"key":"e_1_2_2_24_1","doi-asserted-by":"crossref","unstructured":"Sara Fish Paul G\u00f6lz David C. Parkes Ariel D. Procaccia Gili Rusak Itai Shapira and Manuel W\u00fcthrich. 2023. Generative Social Choice. arXiv:2309.01291 [cs.GT]","DOI":"10.1145\/3670865.3673547"},{"key":"e_1_2_2_25_1","unstructured":"James Fishkin. 2009. When the people speak: Deliberative democracy and public consultation. Oup Oxford."},{"key":"e_1_2_2_26_1","volume-title":"On the creativity of large language models. arXiv preprint arXiv:2304.00008","author":"Franceschelli Giorgio","year":"2023","unstructured":"Giorgio Franceschelli and Mirco Musolesi. 2023. On the creativity of large language models. arXiv preprint arXiv:2304.00008 (2023)."},{"key":"e_1_2_2_27_1","volume-title":"Sparks: Inspiration for science writing using language models. In Designing interactive systems conference. 1002-1019.","author":"Gero Katy Ilonka","year":"2022","unstructured":"Katy Ilonka Gero, Vivian Liu, and Lydia Chilton. 2022. Sparks: Inspiration for science writing using language models. In Designing interactive systems conference. 
1002-1019."},{"key":"e_1_2_2_28_1","first-page":"73","article-title":"On the measure of concentration with special reference to income and statistics","volume":"208","author":"Gini Corrado","year":"1936","unstructured":"Corrado Gini. 1936. On the measure of concentration with special reference to income and statistics. Colorado College Publication, General Series, Vol. 208, 1 (1936), 73-79.","journal-title":"Colorado College Publication, General Series"},{"key":"e_1_2_2_29_1","unstructured":"GitHub. [n.d.]. GitHub Copilot. https:\/\/github.com\/features\/copilot."},{"key":"e_1_2_2_30_1","first-page":"1108","volume-title":"Science","volume":"380","author":"Grossmann Igor","year":"2023","unstructured":"Igor Grossmann, Matthew Feinberg, Dawn C Parker, Nicholas A Christakis, Philip E Tetlock, and William A Cunningham. 2023. AI and the transformation of social science research. Science, Vol. 380, 6650 (2023), 1108-1109."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10726-021-09731-4"},{"key":"e_1_2_2_32_1","doi-asserted-by":"crossref","unstructured":"John J Horton. 2023. Large language models as simulated economic agents: What can we learn from homo silicus? Technical Report. National Bureau of Economic Research.","DOI":"10.3386\/w31122"},{"key":"e_1_2_2_33_1","volume-title":"International Conference on Machine Learning. PMLR, 10697-10707","author":"Kandpal Nikhil","year":"2022","unstructured":"Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating training data mitigates privacy risks in language models. In International Conference on Machine Learning. PMLR, 10697-10707."},{"key":"e_1_2_2_34_1","volume-title":"Chi, and Derek Zhiyuan Cheng","author":"Kang Wang-Cheng","year":"2023","unstructured":"Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, and Derek Zhiyuan Cheng. 2023. Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction. 
arXiv preprint arXiv:2305.06474 (2023)."},{"key":"e_1_2_2_35_1","volume-title":"Open democracy: Reinventing popular rule for the twenty-first century","author":"Landemore H\u00e9l\u00e8ne","unstructured":"H\u00e9l\u00e8ne Landemore. 2020. Open democracy: Reinventing popular rule for the twenty-first century. Princeton University Press."},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3685053"},{"key":"e_1_2_2_37_1","volume-title":"Faisal Ladhak, Frieda Rong, et al.","author":"Lee Mina","year":"2022","unstructured":"Mina Lee, Megha Srivastava, Amelia Hardy, John Thickstun, Esin Durmus, Ashwin Paranjape, Ines Gerard-Ursin, Xiang Lisa Li, Faisal Ladhak, Frieda Rong, et al., 2022. Evaluating Human-Language Model Interaction. arXiv preprint arXiv:2212.09746 (2022)."},{"key":"e_1_2_2_38_1","volume-title":"PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations. arXiv preprint arXiv:2307.02762","author":"Li Ruosen","year":"2023","unstructured":"Ruosen Li, Teerth Patel, and Xinya Du. 2023. PRD: Peer Rank and Discussion Improve Large Language Model based Evaluations. arXiv preprint arXiv:2307.02762 (2023)."},{"key":"e_1_2_2_39_1","unstructured":"Percy Liang Rishi Bommasani Tony Lee Dimitris Tsipras Dilara Soylu Michihiro Yasunaga Yian Zhang Deepak Narayanan Yuhuai Wu Ananya Kumar et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022)."},{"key":"e_1_2_2_40_1","volume-title":"Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. arXiv preprint arXiv:2305.19118","author":"Liang Tian","year":"2023","unstructured":"Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and Shuming Shi. 2023. Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate. arXiv preprint arXiv:2305.19118 (2023)."},{"key":"e_1_2_2_41_1","volume-title":"Training Socially Aligned Language Models in Simulated Human Society. 
arXiv preprint arXiv:2305.16960","author":"Liu Ruibo","year":"2023","unstructured":"Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. 2023. Training Socially Aligned Language Models in Simulated Human Society. arXiv preprint arXiv:2305.16960 (2023)."},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3581141"},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3178876.3186103"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/3524842.3528470"},{"key":"e_1_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1007\/s40558-017-0099-y"},{"key":"e_1_2_2_46_1","volume-title":"GPT-4 technical report. arXiv","author":"AI.","year":"2023","unstructured":"OpenAI. 2023. GPT-4 technical report. arXiv (2023), 2303-08774."},{"key":"e_1_2_2_47_1","volume-title":"Percy Liang, and Michael S Bernstein.","author":"Park Joon Sung","year":"2023","unstructured":"Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442 (2023), 1-22."},{"key":"e_1_2_2_48_1","volume-title":"The impact of ai on developer productivity: Evidence from github copilot. arXiv preprint arXiv:2302.06590","author":"Peng Sida","year":"2023","unstructured":"Sida Peng, Eirini Kalliamvakou, Peter Cihon, and Mert Demirer. 2023. The impact of ai on developer productivity: Evidence from github copilot. arXiv preprint arXiv:2302.06590 (2023)."},{"key":"e_1_2_2_49_1","volume-title":"Communicative agents for software development. arXiv preprint arXiv:2307.07924","author":"Qian Chen","year":"2023","unstructured":"Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. 2023. Communicative agents for software development. 
arXiv preprint arXiv:2307.07924 (2023)."},{"key":"e_1_2_2_50_1","unstructured":"Jack W Rae Sebastian Borgeaud Trevor Cai Katie Millican Jordan Hoffmann Francis Song John Aslanides Sarah Henderson Roman Ring Susannah Young et al. 2021. Scaling language models: Methods analysis & insights from training gopher. arXiv preprint arXiv:2112.11446 (2021)."},{"key":"e_1_2_2_51_1","volume-title":"Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125","author":"Ramesh Aditya","year":"2022","unstructured":"Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 (2022)."},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/2441776.2441784"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544549.3585688"},{"key":"e_1_2_2_54_1","first-page":"36479","article-title":"Photorealistic text-to-image diffusion models with deep language understanding","volume":"35","author":"Saharia Chitwan","year":"2022","unstructured":"Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al., 2022. Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, Vol. 35 (2022), 36479-36494.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_2_55_1","first-page":"1","article-title":"Can artificial intelligence help for scientific writing","volume":"27","author":"Salvagno Michele","year":"2023","unstructured":"Michele Salvagno, Fabio Silvio Taccone, Alberto Giovanni Gerli, et al., 2023. Can artificial intelligence help for scientific writing? Critical care, Vol. 
27, 1 (2023), 1-5.","journal-title":"Critical care"},{"key":"e_1_2_2_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/3604915.3608845"},{"key":"e_1_2_2_57_1","unstructured":"Christopher T. Small Ivan Vendrov Esin Durmus Hadjar Homaei Elizabeth Barry Julien Cornebise Ted Suzman Deep Ganguli and Colin Megill. 2023. Opportunities and Risks of LLMs for Scalable Deliberation with Polis. arXiv:2306.11932 [cs.SI]"},{"key":"e_1_2_2_58_1","unstructured":"Hugo Touvron Louis Martin Kevin Stone Peter Albert Amjad Almahairi Yasmine Babaei Nikolay Bashlykov Soumya Batra Prajjwal Bhargava Shruti Bhosale Dan Bikel Lukas Blecher Cristian Canton Ferrer Moya Chen Guillem Cucurull David Esiobu Jude Fernandes Jeremy Fu Wenyin Fu Brian Fuller Cynthia Gao Vedanuj Goswami Naman Goyal Anthony Hartshorn Saghar Hosseini Rui Hou Hakan Inan Marcin Kardas Viktor Kerkez Madian Khabsa Isabel Kloumann Artem Korenev Punit Singh Koura Marie-Anne Lachaux Thibaut Lavril Jenya Lee Diana Liskovich Yinghai Lu Yuning Mao Xavier Martinet Todor Mihaylov Pushkar Mishra Igor Molybog Yixin Nie Andrew Poulton Jeremy Reizenstein Rashi Rungta Kalyan Saladi Alan Schelten Silva Ruan Smith Eric Michael Subramanian Ranjan Tan Xiaoqing Ellen Tang Binh Taylor Ross Williams Adina Xiang Jian Xu Kuan Puxin Yan Zheng Zarov Iliyan Zhang Yuchen Fan Angela Kambadur Melanie Narang Sharan Rodriguez Aurelien Stojnic Robert Edunov Sergey and Scialom Thomas. 2023. Llama 2: Open Foundation and Fine-Tuned Chat Models. arXiv preprint arXiv:2307.09288 (2023)."},{"key":"e_1_2_2_59_1","volume-title":"Manoel Horta Ribeiro, and Robert West","author":"Veselovsky Veniamin","year":"2023","unstructured":"Veniamin Veselovsky, Manoel Horta Ribeiro, and Robert West. 2023. Artificial Artificial Artificial Intelligence: Crowd Workers Widely Use Large Language Models for Text Production Tasks. 
arXiv:2306.07899 [cs.CL]"},{"key":"e_1_2_2_60_1","volume-title":"Chi, Quoc Le, and Denny Zhou","author":"Wei Jason","year":"2022","unstructured":"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 (2022)."},{"key":"e_1_2_2_61_1","volume-title":"Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155","author":"Wu Qingyun","year":"2023","unstructured":"Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li, Li Jiang, Xiaoyun Zhang, and Chi Wang. 2023. Autogen: Enabling next-gen llm applications via multi-agent conversation framework. arXiv preprint arXiv:2308.08155 (2023)."},{"key":"e_1_2_2_62_1","volume-title":"Large language models as optimizers. arXiv preprint arXiv:2309.03409","author":"Yang Chengrun","year":"2023","unstructured":"Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. arXiv preprint arXiv:2309.03409 (2023)."},{"key":"e_1_2_2_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/3538638"},{"key":"e_1_2_2_64_1","doi-asserted-by":"crossref","unstructured":"Longqi Yang David Holtz Sonia Jaffe Siddharth Suri Shilpi Sinha Jeffrey Weston Connor Joyce Neha Shah Kevin Sherman Brent Hecht et al. 2022b. The effects of remote work on collaboration among information workers. Nature human behaviour Vol. 
6 1 (2022) 43-54.","DOI":"10.1038\/s41562-021-01196-4"},{"key":"e_1_2_2_65_1","doi-asserted-by":"publisher","DOI":"10.1145\/1099203.1099223"},{"key":"e_1_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1145\/3543507.3583400"},{"key":"e_1_2_2_67_1","doi-asserted-by":"publisher","DOI":"10.1145\/3274467"},{"key":"e_1_2_2_68_1","unstructured":"Wayne Xin Zhao Kun Zhou Junyi Li Tianyi Tang Xiaolei Wang Yupeng Hou Yingqian Min Beichen Zhang Junjie Zhang Zican Dong et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023)."},{"key":"e_1_2_2_69_1","volume-title":"Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593","author":"Ziegler Daniel M","year":"2019","unstructured":"Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593 (2019)."}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3757418","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,17]],"date-time":"2025-10-17T01:52:33Z","timestamp":1760665953000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3757418"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,16]]},"references-count":69,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2025,10,18]]}},"alternative-id":["10.1145\/3757418"],"URL":"https:\/\/doi.org\/10.1145\/3757418","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,16]]},"assertion":[{"value":"2025-10-16","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}