{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T19:36:34Z","timestamp":1776108994796,"version":"3.50.1"},"reference-count":44,"publisher":"Association for Computing Machinery (ACM)","issue":"7","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Hum.-Comput. Interact."],"published-print":{"date-parts":[[2025,10,18]]},"abstract":"<jats:p>Research into community content moderation often assumes that moderation teams govern with a single, unified voice. However, recent work has found that moderators disagree with one another at modest, but concerning rates. The problem is not the root disagreements themselves. Subjectivity in moderation is unavoidable, and there are clear benefits to including diverse perspectives within a moderation team. Instead, the crux of the issue is that, due to resource constraints, moderation decisions end up being made by individual decision-makers. The result is decision-making that is inconsistent, which is frustrating for community members. To address this, we develop Venire, an ML-backed system for panel review on Reddit. Venire uses a machine learning model trained on log data to identify the cases where moderators are most likely to disagree. Venire fast-tracks these cases for multi-person review. Ideally, Venire allows moderators to surface and resolve disagreements that would have otherwise gone unnoticed. We conduct three studies through which we design and evaluate Venire: a set of formative interviews with moderators, technical evaluations on two datasets, and a think-aloud study in which moderators used Venire to make decisions on real moderation cases. Quantitatively, we demonstrate that Venire is able to improve decision consistency and surface latent disagreements. Qualitatively, we find that Venire helps moderators resolve difficult moderation cases more confidently. 
Venire represents a novel paradigm for human-AI content moderation, and shifts the conversation from replacing human decision-making to supporting it.<\/jats:p>","DOI":"10.1145\/3757699","type":"journal-article","created":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T17:32:00Z","timestamp":1760635920000},"page":"1-35","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Venire: A Machine Learning-Guided Panel Review System for Community Content Moderation"],"prefix":"10.1145","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1410-3911","authenticated-orcid":false,"given":"Vinay","family":"Koshy","sequence":"first","affiliation":[{"name":"University of Illinois at Urbana-Champaign, Urbana, Illinois, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8818-2456","authenticated-orcid":false,"given":"Frederick","family":"Choi","sequence":"additional","affiliation":[{"name":"University of Illinois at Urbana-Champaign, Urbana, Illinois, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9326-1135","authenticated-orcid":false,"given":"Yi-Shyuan","family":"Chiang","sequence":"additional","affiliation":[{"name":"University of Illinois at Urbana-Champaign, Urbana, Illinois, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3315-6055","authenticated-orcid":false,"given":"Hari","family":"Sundaram","sequence":"additional","affiliation":[{"name":"University of Illinois at Urbana-Champaign, Urbana, Illinois, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7473-1418","authenticated-orcid":false,"given":"Eshwar","family":"Chandrasekharan","sequence":"additional","affiliation":[{"name":"University of Illinois at Urbana-Champaign, Urbana, Illinois, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8788-3405","authenticated-orcid":false,"given":"Karrie","family":"Karahalios","sequence":"additional","affiliation":[{"name":"University of Illinois at Urbana-Champaign, Urbana, Illinois, 
USA"}]}],"member":"320","published-online":{"date-parts":[[2025,10,16]]},"reference":[{"key":"e_1_2_1_1_1","volume-title":"Information cascades in the laboratory. The American economic review","author":"Anderson Lisa R","year":"1997","unstructured":"Lisa R Anderson and Charles A Holt. 1997. Information cascades in the laboratory. The American economic review (1997), 847-862."},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3134659"},{"key":"e_1_2_1_3_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Bozarth Lia","year":"2023","unstructured":"Lia Bozarth, Jane Im, Christopher Quarles, and Ceren Budak. 2023. Wisdom of Two Crowds: Misinformation Moderation on Reddit and How to Improve this Process-A Case Study of COVID-19. Proceedings of the ACM on Human-Computer Interaction, Vol. 7, CSCW1 (2023), 1-33."},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i6.25840"},{"key":"e_1_2_1_5_1","volume-title":"Proceedings of the ACM on human-computer interaction","author":"Chandrasekharan Eshwar","year":"2019","unstructured":"Eshwar Chandrasekharan, Chaitrali Gandhi, Matthew Wortley Mustelier, and Eric Gilbert. 2019. Crossmod: A cross-community learning-based system to assist reddit moderators. Proceedings of the ACM on human-computer interaction, Vol. 3, CSCW (2019), 1-30."},{"key":"e_1_2_1_6_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Chen Quan Ze","year":"2023","unstructured":"Quan Ze Chen and Amy X Zhang. 2023. Judgment Sieve: Reducing uncertainty in group judgments through interventions targeting ambiguity versus disagreement. Proceedings of the ACM on Human-Computer Interaction, Vol. 
7, CSCW2 (2023), 1-26."},{"key":"e_1_2_1_7_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Choi Frederick","year":"2023","unstructured":"Frederick Choi, Tanvi Bajpai, Sowmya Pratipati, and Eshwar Chandrasekharan. 2023. ConvEx: A Visual Conversation Exploration System for Discord Moderators. Proceedings of the ACM on Human-Computer Interaction, Vol. 7, CSCW2 (2023), 1-30."},{"key":"e_1_2_1_8_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-computer Interaction","volume":"6","author":"Cullen Amanda LL","year":"2022","unstructured":"Amanda LL Cullen and Sanjay R Kairam. 2022. Practicing moderation: Community moderation as reflective practice. Proceedings of the ACM on Human-computer Interaction, Vol. 6, CSCW1 (2022), 1-32."},{"key":"e_1_2_1_9_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3290605.3300372"},{"key":"e_1_2_1_11_1","volume-title":"When the majority is wrong: Modeling annotator disagreement for subjective tasks. arXiv preprint arXiv:2305.06626","author":"Fleisig Eve","year":"2023","unstructured":"Eve Fleisig, Rediet Abebe, and Dan Klein. 2023. When the majority is wrong: Modeling annotator disagreement for subjective tasks. arXiv preprint arXiv:2305.06626 (2023)."},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.naacl-long.126"},{"key":"e_1_2_1_13_1","volume-title":"Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media","author":"Gillespie Tarleton","unstructured":"Tarleton Gillespie. 2018. 
Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press."},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3502004"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025781"},{"key":"e_1_2_1_16_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"4","author":"Halfaker Aaron","year":"2020","unstructured":"Aaron Halfaker and R Stuart Geiger. 2020. Ores: Lowering barriers with participatory machine learning in wikipedia. Proceedings of the ACM on Human-Computer Interaction, Vol. 4, CSCW2 (2020), 1-37."},{"key":"e_1_2_1_17_1","first-page":"2666","volume-title":"Proceedings of the ACM web conference","author":"Ribeiro Manoel Horta","year":"2023","unstructured":"Manoel Horta Ribeiro, Justin Cheng, and Robert West. 2023. Automated content moderation increases adherence to community guidelines. In Proceedings of the ACM web conference 2023. 2666-2676."},{"key":"e_1_2_1_18_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Hsieh Jane","year":"2023","unstructured":"Jane Hsieh, Joselyn Kim, Laura Dabbish, and Haiyi Zhu. 2023. “Nip it in the Bud”: Moderation Strategies in Open Source Software Projects and the Role of Bots. Proceedings of the ACM on Human-Computer Interaction, Vol. 7, CSCW2 (2023), 1-29."},{"key":"e_1_2_1_19_1","volume-title":"Proceedings of the 2024 ACM Designing Interactive Systems Conference. 1483-1498","author":"Huang Evey Jiaxin","year":"2024","unstructured":"Evey Jiaxin Huang, Abhraneel Sarma, Sohyeon Hwang, Eshwar Chandrasekharan, and Stevie Chancellor. 2024. Opportunities, tensions, and challenges in computational approaches to addressing online harassment. In Proceedings of the 2024 ACM Designing Interactive Systems Conference. 
1483-1498."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3313831.3376383"},{"key":"e_1_2_1_21_1","first-page":"1","volume-title":"Proceedings of the ACM on human-computer interaction","volume":"3","author":"Jhaver Shagun","year":"2019","unstructured":"Shagun Jhaver, Darren Scott Appling, Eric Gilbert, and Amy Bruckman. 2019a. “Did you suspect the post would be removed?”: Understanding user reactions to content removals on Reddit. Proceedings of the ACM on human-computer interaction, Vol. 3, CSCW (2019), 1-33."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3338243"},{"key":"e_1_2_1_23_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Jhaver Shagun","year":"2023","unstructured":"Shagun Jhaver, Alice Qian Zhang, Quan Ze Chen, Nikhila Natarajan, Ruotong Wang, and Amy X Zhang. 2023. Personalizing content moderation on social media: User perspectives on moderation choices, interface design, and labor. Proceedings of the ACM on Human-Computer Interaction, Vol. 7, CSCW2 (2023), 1-33."},{"key":"e_1_2_1_24_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"4","author":"Juneja Prerna","year":"2020","unstructured":"Prerna Juneja, Deepika Rama Subramanian, and Tanushree Mitra. 2020. Through the looking glass: Study of transparency in Reddit's moderation practices. Proceedings of the ACM on Human-Computer Interaction, Vol. 4, GROUP (2020), 1-35."},{"key":"e_1_2_1_25_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Koshy Vinay","year":"2023","unstructured":"Vinay Koshy, Tanvi Bajpai, Eshwar Chandrasekharan, Hari Sundaram, and Karrie Karahalios. 2023. Measuring User-Moderator Alignment on r\/ChangeMyView. Proceedings of the ACM on Human-Computer Interaction, Vol. 
7, CSCW2 (2023), 1-36."},{"key":"e_1_2_1_26_1","first-page":"299","volume-title":"Seventeenth Symposium on Usable Privacy and Security (SOUPS","author":"Kumar Deepak","year":"2021","unstructured":"Deepak Kumar, Patrick Gage Kelley, Sunny Consolvo, Joshua Mason, Elie Bursztein, Zakir Durumeric, Kurt Thomas, and Michael Bailey. 2021. Designing toxic content classification for a diversity of perspectives. In Seventeenth Symposium on Usable Privacy and Security (SOUPS 2021). 299-318."},{"key":"e_1_2_1_27_1","volume-title":"Proceedings of the CHI Conference on Human Factors in Computing Systems. 1-24","author":"Kuo Tzu-Sheng","year":"2024","unstructured":"Tzu-Sheng Kuo, Aaron Lee Halfaker, Zirui Cheng, Jiwoo Kim, Meng-Hsin Wu, Tongshuang Wu, Kenneth Holstein, and Haiyi Zhu. 2024. Wikibench: Community-driven data curation for ai evaluation on wikipedia. In Proceedings of the CHI Conference on Human Factors in Computing Systems. 1-24."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1145\/1352793.1352837"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/985692.985761"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2702123.2702416"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1609\/icwsm.v16i1.19318"},{"key":"e_1_2_1_32_1","volume-title":"2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW). IEEE, 1-5.","author":"Lucas Elizabeth","year":"2019","unstructured":"Elizabeth Lucas, Cecilia O Alm, and Reynold Bailey. 2019. Understanding human and predictive moderation of online science discourse. In 2019 IEEE Western New York Image and Signal Processing Workshop (WNYISPW). IEEE, 1-5."},{"key":"e_1_2_1_33_1","volume-title":"Learning to defer in content moderation: The human-ai interplay. arXiv preprint arXiv:2402.12237","author":"Lykouris Thodoris","year":"2024","unstructured":"Thodoris Lykouris and Wentao Weng. 2024. Learning to defer in content moderation: The human-ai interplay. 
arXiv preprint arXiv:2402.12237 (2024)."},{"key":"e_1_2_1_34_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"6","author":"Ma Renkai","year":"2022","unstructured":"Renkai Ma and Yubo Kou. 2022. “I'm not sure what difference is between their content and mine, other than the person itself”: A Study of Fairness Perception of Content Moderation on YouTube. Proceedings of the ACM on Human-Computer Interaction, Vol. 6, CSCW2 (2022), 1-28."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2024.naacl-long.59"},{"key":"e_1_2_1_36_1","volume-title":"International Conference on Machine Learning. PMLR, 5281-5290","author":"Raghu Maithra","year":"2019","unstructured":"Maithra Raghu, Katy Blumer, Rory Sayres, Ziad Obermeyer, Bobby Kleinberg, Sendhil Mullainathan, and Jon Kleinberg. 2019. Direct uncertainty prediction for medical second opinions. In International Conference on Machine Learning. PMLR, 5281-5290."},{"key":"e_1_2_1_37_1","volume-title":"Survey equivalence: A procedure for measuring classifier accuracy against human labels. arXiv preprint arXiv:2106.01254","author":"Resnick Paul","year":"2021","unstructured":"Paul Resnick, Yuqing Kong, Grant Schoenebeck, and Tim Weninger. 2021. Survey equivalence: A procedure for measuring classifier accuracy against human labels. arXiv preprint arXiv:2106.01254 (2021)."},{"key":"e_1_2_1_38_1","volume-title":"Two contrasting data annotation paradigms for subjective NLP tasks. arXiv preprint arXiv:2112.07475","author":"R\u00f6ttger Paul","year":"2021","unstructured":"Paul R\u00f6ttger, Bertie Vidgen, Dirk Hovy, and Janet B Pierrehumbert. 2021. Two contrasting data annotation paradigms for subjective NLP tasks. 
arXiv preprint arXiv:2112.07475 (2021)."},{"key":"e_1_2_1_39_1","first-page":"585","volume-title":"Proceedings of the International AAAI Conference on Web and Social Media","volume":"15","author":"Samory Mattia","year":"2021","unstructured":"Mattia Samory. 2021. On positive moderation decisions. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 15. 585-596."},{"key":"e_1_2_1_40_1","first-page":"1","volume-title":"Proceedings of the ACM on Human-Computer Interaction","volume":"7","author":"Seering Joseph","year":"2023","unstructured":"Joseph Seering and Sanjay R Kairam. 2023. Who moderates on Twitch and what do they do? Quantifying practices in community moderation on Twitch. Proceedings of the ACM on Human-Computer Interaction, Vol. 7, GROUP (2023), 1-18."},{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1177\/1461444818821316"},{"key":"e_1_2_1_42_1","volume-title":"'One Style Does Not Regulate All': Moderation Practices in Public and Private WhatsApp Groups. arXiv preprint arXiv:2401.08091","author":"Shahid Farhana","year":"2024","unstructured":"Farhana Shahid, Dhruv Agarwal, and Aditya Vashistha. 2024. 'One Style Does Not Regulate All': Moderation Practices in Public and Private WhatsApp Groups. arXiv preprint arXiv:2401.08091 (2024)."},{"key":"e_1_2_1_43_1","volume-title":"The Free Encyclopedia. https:\/\/en.wikipedia.org\/w\/index.php?title=Venire_facias&oldid=1095751539. [Online","author":"Wikipedia","year":"2024","unstructured":"Wikipedia contributors. 2022. Venire facias - Wikipedia, The Free Encyclopedia. https:\/\/en.wikipedia.org\/w\/index.php?title=Venire_facias&oldid=1095751539. [Online; accessed 23-September-2024]."},{"key":"e_1_2_1_44_1","first-page":"902","volume-title":"Proceedings of the International AAAI Conference on Web and Social Media","volume":"17","author":"Yin Wenjie","year":"2023","unstructured":"Wenjie Yin, Vibhor Agarwal, Aiqi Jiang, Arkaitz Zubiaga, and Nishanth Sastry. 2023. 
Annobert: Effectively representing multiple annotators' label choices to improve hate speech detection. In Proceedings of the International AAAI Conference on Web and Social Media, Vol. 17. 902-913."}],"container-title":["Proceedings of the ACM on Human-Computer Interaction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3757699","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T17:32:09Z","timestamp":1760635929000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3757699"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,16]]},"references-count":44,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2025,10,18]]}},"alternative-id":["10.1145\/3757699"],"URL":"https:\/\/doi.org\/10.1145\/3757699","relation":{},"ISSN":["2573-0142"],"issn-type":[{"value":"2573-0142","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,16]]},"assertion":[{"value":"2025-10-16","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}