{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T21:00:24Z","timestamp":1776114024859,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":46,"publisher":"ACM","license":[{"start":{"date-parts":[[2024,6,5]],"date-time":"2024-06-05T00:00:00Z","timestamp":1717545600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"UKRI","award":["EP\/S022937\/1"],"award-info":[{"award-number":["EP\/S022937\/1"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2024,6,5]]},"DOI":"10.1145\/3655693.3655694","type":"proceedings-article","created":{"date-parts":[[2024,6,4]],"date-time":"2024-06-04T18:22:10Z","timestamp":1717525330000},"page":"1-10","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":6,"title":["Helpful or Harmful? Exploring the Efficacy of Large Language Models for Online Grooming Prevention"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4414-1134","authenticated-orcid":false,"given":"Ellie","family":"Prosser","sequence":"first","affiliation":[{"name":"University of Bristol, United Kingdom"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8099-0646","authenticated-orcid":false,"given":"Matthew","family":"Edwards","sequence":"additional","affiliation":[{"name":"University of Bristol, United Kingdom"}]}],"member":"320","published-online":{"date-parts":[[2024,6,5]]},"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"International Journal of Artificial Intelligence in Education","author":"Abdelghani Rania","year":"2023","unstructured":"Rania Abdelghani, Yen-Hsiang Wang, Xingdi Yuan, Tong Wang, Pauline Lucas, H\u00e9l\u00e8ne Sauz\u00e9on, and Pierre-Yves Oudeyer. 2023. 
GPT-3-driven pedagogical agents to train children\u2019s curious question-asking skills. International Journal of Artificial Intelligence in Education (2023), 1\u201336."},{"key":"e_1_3_2_1_2_1","volume-title":"A neural probabilistic language model. Advances in neural information processing systems 13","author":"Bengio Yoshua","year":"2000","unstructured":"Yoshua Bengio, R\u00e9jean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. Advances in neural information processing systems 13 (2000)."},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3586183.3606725"},{"key":"e_1_3_2_1_4_1","volume-title":"Language models are few-shot learners. Advances in neural information processing systems 33","author":"Brown Tom","year":"2020","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared\u00a0D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877\u20131901."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1109\/CCWC57344.2023.10099179"},{"key":"e_1_3_2_1_6_1","volume-title":"Deep reinforcement learning from human preferences. Advances in neural information processing systems 30","author":"Christiano F","year":"2017","unstructured":"Paul\u00a0F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Advances in neural information processing systems 30 (2017)."},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.ijcci.2021.100403"},{"key":"e_1_3_2_1_8_1","volume-title":"How to prompt? Opportunities and challenges of zero-and few-shot learning for human-AI interaction in creative applications of generative models. 
arXiv preprint arXiv:2209.01390","author":"Dang Hai","year":"2022","unstructured":"Hai Dang, Lukas Mecke, Florian Lehmann, Sven Goller, and Daniel Buschek. 2022. How to prompt? Opportunities and challenges of zero- and few-shot learning for human-AI interaction in creative applications of generative models. arXiv preprint arXiv:2209.01390 (2022)."},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3545945.3569823"},{"key":"e_1_3_2_1_10_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},{"key":"e_1_3_2_1_11_1","volume-title":"Number of ChatGPT Users (Mon","author":"Duarte Fabio","year":"2024","unstructured":"Fabio Duarte. 2024. Number of ChatGPT Users (Mon 2024). https:\/\/explodingtopics.com\/blog\/chatgpt-users Accessed 7\/1\/2024."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/2930674.2930680"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491101.3503564"},{"key":"e_1_3_2_1_14_1","volume-title":"Pretrained language models for text generation: A survey. arXiv preprint arXiv:2201.05273","author":"Li Junyi","year":"2022","unstructured":"Junyi Li, Tianyi Tang, Wayne\u00a0Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2022. Pretrained language models for text generation: A survey. arXiv preprint arXiv:2201.05273 (2022)."},{"key":"e_1_3_2_1_15_1","volume-title":"What Makes Good In-Context Examples for GPT-3? arXiv preprint arXiv:2101.06804","author":"Liu Jiachang","year":"2021","unstructured":"Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2021. 
What Makes Good In-Context Examples for GPT-3? arXiv preprint arXiv:2101.06804 (2021)."},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3560815"},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3501825"},{"key":"e_1_3_2_1_18_1","volume-title":"DirectGPT: A Direct Manipulation Interface to Interact with Large Language Models. arXiv preprint arXiv:2310.03691","author":"Masson Damien","year":"2023","unstructured":"Damien Masson, Sylvain Malacria, G\u00e9ry Casiez, and Daniel Vogel. 2023. DirectGPT: A Direct Manipulation Interface to Interact with Large Language Models. arXiv preprint arXiv:2310.03691 (2023)."},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.2753\/JEC1086-4415150305"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.2196\/50638"},{"key":"e_1_3_2_1_21_1","unstructured":"Stefan Milne. 2023. Learning from superheroes and AI: UW researchers study how a chatbot can teach kids supportive self-talk. https:\/\/www.technologyreview.com\/2023\/09\/05\/1079009\/you-need-to-talk-to-your-kid-about-ai-here-are-6-things-you-should-say\/ Accessed 7\/1\/2024."},{"key":"e_1_3_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3605943"},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.4073\/csr.2009.2"},{"key":"e_1_3_2_1_24_1","volume-title":"Perturbation, Testing and Iteration using Visual Analytics for Large Language Models. arXiv preprint arXiv:2304.01964","author":"Mishra Aditi","year":"2023","unstructured":"Aditi Mishra, Utkarsh Soni, Anjana Arunkumar, Jinbin Huang, Bum\u00a0Chul Kwon, and Chris Bryan. 2023. PromptAid: Prompt Exploration, Perturbation, Testing and Iteration using Visual Analytics for Large Language Models. arXiv preprint arXiv:2304.01964 (2023)."},{"key":"e_1_3_2_1_25_1","unstructured":"Stuart O\u2019Brien. 2023. AI-Generated Homework Now a Key Issue for Schools. 
https:\/\/education-forum.co.uk\/briefing\/ai-generated-homework-now-a-key-issue-for-schools\/ Accessed 7\/1\/2024."},{"key":"e_1_3_2_1_26_1","unstructured":"OpenAI. 2023. GPT-4 Technical Report. arxiv:2303.08774\u00a0[cs.CL]"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/s40653-022-00440-x"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11431-020-1647-3"},{"key":"e_1_3_2_1_29_1","unstructured":"Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, 2018. Improving language understanding by generative pre-training. (2018)."},{"key":"e_1_3_2_1_30_1","volume-title":"Language models are unsupervised multitask learners. OpenAI blog 1, 8","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9."},{"key":"e_1_3_2_1_31_1","volume-title":"ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models. arXiv preprint arXiv:2310.00117","author":"Reza Mohi","year":"2023","unstructured":"Mohi Reza, Nathan Laundry, Ilya Musabirov, Peter Dushniku, Zhi\u00a0Yuan Yu, Kashish Mittal, Tovi Grossman, Michael Liut, Anastasia Kuzminykh, Joseph\u00a0Jay Williams, 2023. ABScribe: Rapid Exploration of Multiple Writing Variations in Human-AI Co-Writing Tasks using Large Language Models. arXiv preprint arXiv:2310.00117 (2023)."},{"key":"e_1_3_2_1_32_1","volume-title":"ChaCha: Leveraging Large Language Models to Prompt Children to Share Their Emotions about Personal Events. arXiv preprint arXiv:2309.12244","author":"Seo Woosuk","year":"2023","unstructured":"Woosuk Seo, Chanmo Yang, and Young-Ho Kim. 2023. ChaCha: Leveraging Large Language Models to Prompt Children to Share Their Emotions about Personal Events. 
arXiv preprint arXiv:2309.12244 (2023)."},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/319950.320022"},{"key":"e_1_3_2_1_34_1","unstructured":"Joe Tidy. 2024. Character.ai: Young people turning to AI therapist bots. https:\/\/www.bbc.co.uk\/news\/technology-67872693 Accessed 10\/1\/2024."},{"key":"e_1_3_2_1_35_1","volume-title":"Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288","author":"Touvron Hugo","year":"2023","unstructured":"Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023)."},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.chb.2016.09.040"},{"key":"e_1_3_2_1_37_1","volume-title":"Attention is all you need. Advances in neural information processing systems 30","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan\u00a0N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information processing systems 30 (2017)."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3491102.3502057"},{"key":"e_1_3_2_1_39_1","volume-title":"Prompt engineering for healthcare: Methodologies and applications. arXiv preprint arXiv:2304.14670","author":"Wang Jiaqi","year":"2023","unstructured":"Jiaqi Wang, Enze Shi, Sigang Yu, Zihao Wu, Chong Ma, Haixing Dai, Qiushi Yang, Yanqing Kang, Jinru Wu, Huawen Hu, 2023. Prompt engineering for healthcare: Methodologies and applications. arXiv preprint arXiv:2304.14670 (2023)."},{"key":"e_1_3_2_1_40_1","volume-title":"Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. 
arXiv preprint arXiv:2302.03668","author":"Wen Yuxin","year":"2023","unstructured":"Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, and Tom Goldstein. 2023. Hard prompts made easy: Gradient-based discrete optimization for prompt tuning and discovery. arXiv preprint arXiv:2302.03668 (2023)."},{"key":"e_1_3_2_1_41_1","unstructured":"Rhiannon Williams and Melissa Heikkil\u00e4. 2023. You need to talk to your kid about AI. Here are 6 things you should say. https:\/\/www.technologyreview.com\/2023\/09\/05\/1079009\/you-need-to-talk-to-your-kid-about-ai-here-are-6-things-you-should-say\/ Accessed 7\/1\/2024."},{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2998181.2998352"},{"key":"e_1_3_2_1_43_1","unstructured":"Chloe Xiang. 2023. \u2018He Would Still Be Here\u2019: Man Dies by Suicide After Talking with AI Chatbot, Widow Says. https:\/\/www.vice.com\/en\/article\/pkadgm\/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says Accessed 7\/1\/2024."},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.bea-1.52"},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1145\/3544548.3581388"},{"key":"e_1_3_2_1_46_1","volume-title":"Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593","author":"Ziegler M","year":"2019","unstructured":"Daniel\u00a0M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom\u00a0B Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. 2019. Fine-tuning language models from human preferences. 
arXiv preprint arXiv:1909.08593 (2019)."}],"event":{"name":"EICC 2024: European Interdisciplinary Cybersecurity Conference","location":"Xanthi Greece","acronym":"EICC 2024"},"container-title":["European Interdisciplinary Cybersecurity Conference"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3655693.3655694","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3655693.3655694","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,8,29]],"date-time":"2025-08-29T16:20:31Z","timestamp":1756484431000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3655693.3655694"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,6,5]]},"references-count":46,"alternative-id":["10.1145\/3655693.3655694","10.1145\/3655693"],"URL":"https:\/\/doi.org\/10.1145\/3655693.3655694","relation":{},"subject":[],"published":{"date-parts":[[2024,6,5]]},"assertion":[{"value":"2024-06-05","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}