{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,6]],"date-time":"2026-04-06T10:06:21Z","timestamp":1775469981295,"version":"3.50.1"},"publisher-location":"Cham","reference-count":15,"publisher":"Springer Nature Switzerland","isbn-type":[{"value":"9783031426810","type":"print"},{"value":"9783031426827","type":"electronic"}],"license":[{"start":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T00:00:00Z","timestamp":1672531200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,8,28]],"date-time":"2023-08-28T00:00:00Z","timestamp":1693180800000},"content-version":"vor","delay-in-days":239,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    The emerging research area of large language models (LLMs) has far-reaching implications for various aspects of our daily lives. In education, in particular, LLMs hold enormous potential for enabling personalized learning and equal opportunities for all students. In a traditional classroom environment, students often struggle to develop individual writing skills because the workload of the teachers limits their ability to provide detailed feedback on each student\u2019s essay. To bridge this gap, we have developed a tool called PEER (Paper Evaluation and Empowerment Resource) which exploits the power of LLMs and provides students with comprehensive and engaging feedback on their essays. Our goal is to motivate each student to enhance their writing skills through positive feedback and specific suggestions for improvement. Since its launch in February 2023, PEER has received high levels of interest and demand, resulting in more than 4000 essays uploaded to the platform to date. Moreover, there has been an overwhelming response from teachers who are interested in the project since it has the potential to alleviate their workload by making the task of grading essays less tedious. By collecting a real-world data set incorporating essays of students and feedback from teachers, we will be able to refine and enhance PEER through model fine-tuning in the next steps. Our goal is to leverage LLMs to enhance personalized learning, reduce teacher workload, and ensure that every student has an equal opportunity to excel in writing. 
The code is available at\n                    <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/Kasneci-Lab\/AI-assisted-writing\" ext-link-type=\"uri\">https:\/\/github.com\/Kasneci-Lab\/AI-assisted-writing<\/jats:ext-link>\n                    .\n                  <\/jats:p>","DOI":"10.1007\/978-3-031-42682-7_73","type":"book-chapter","created":{"date-parts":[[2023,8,29]],"date-time":"2023-08-29T15:01:46Z","timestamp":1693321306000},"page":"755-761","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":13,"title":["PEER: Empowering Writing with\u00a0Large Language Models"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3380-4641","authenticated-orcid":false,"given":"Kathrin","family":"Se\u00dfler","sequence":"first","affiliation":[]},{"given":"Tao","family":"Xiang","sequence":"additional","affiliation":[]},{"given":"Lukas","family":"Bogenrieder","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3146-4484","authenticated-orcid":false,"given":"Enkelejda","family":"Kasneci","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,8,28]]},"reference":[{"key":"73_CR1","unstructured":"Brown, T., et al.: Language models are few-shot learners. In: Advances in Neural Information Processing Systems, vol. 33, pp. 1877\u20131901 (2020)"},{"key":"73_CR2","volume-title":"The Rating of Chessplayers, Past and Present","author":"AE Elo","year":"1978","unstructured":"Elo, A.E.: The Rating of Chessplayers, Past and Present. Arco Pub., New York (1978)"},{"issue":"1","key":"73_CR3","doi-asserted-by":"publisher","first-page":"81","DOI":"10.3102\/003465430298487","volume":"77","author":"J Hattie","year":"2007","unstructured":"Hattie, J., Timperley, H.: The power of feedback. Rev. Educ. Res. 77(1), 81\u2013112 (2007)","journal-title":"Rev. Educ. Res."},{"key":"73_CR4","doi-asserted-by":"publisher","first-page":"e208","DOI":"10.7717\/peerj-cs.208","volume":"5","author":"MA Hussein","year":"2019","unstructured":"Hussein, M.A., Hassan, H., Nassef, M.: Automated language essay scoring systems: a literature review. PeerJ Comput. Sci. 5, e208 (2019)","journal-title":"PeerJ Comput. Sci."},{"key":"73_CR5","doi-asserted-by":"publisher","DOI":"10.1016\/j.lindif.2023.102274","volume":"103","author":"E Kasneci","year":"2023","unstructured":"Kasneci, E., et al.: ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Diff. 103, 102274 (2023)","journal-title":"Learn. Individ. Diff."},{"key":"73_CR6","doi-asserted-by":"crossref","unstructured":"Liu, J., Shen, D., Zhang, Y., Dolan, W.B., Carin, L., Chen, W.: What makes good in-context examples for GPT-3? In: Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pp. 100\u2013114 (2022)","DOI":"10.18653\/v1\/2022.deelio-1.10"},{"key":"73_CR7","doi-asserted-by":"crossref","unstructured":"Molloy, E.K., Boud, D.: Feedback models for learning, teaching and performance. In: Handbook of Research on Educational Communications and Technology, pp. 413\u2013424 (2014)","DOI":"10.1007\/978-1-4614-3185-5_33"},{"key":"73_CR8","unstructured":"OpenAI Team: ChatGPT: optimizing language models for dialogue (2022)"},{"key":"73_CR9","unstructured":"Ouyang, L., et al.: Training language models to follow instructions with human feedback. 
In: Advances in Neural Information Processing Systems, vol. 35, pp. 27730\u201327744 (2022)"},{"issue":"8","key":"73_CR10","first-page":"9","volume":"1","author":"A Radford","year":"2019","unstructured":"Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI Blog 1(8), 9 (2019)","journal-title":"OpenAI Blog"},{"issue":"3","key":"73_CR11","doi-asserted-by":"publisher","first-page":"2495","DOI":"10.1007\/s10462-021-10068-2","volume":"55","author":"D Ramesh","year":"2022","unstructured":"Ramesh, D., Sanampudi, S.K.: An automated essay scoring systems: a systematic literature review. Artif. Intell. Rev. 55(3), 2495\u20132527 (2022)","journal-title":"Artif. Intell. Rev."},{"key":"73_CR12","unstructured":"Schick, T., et al.: PEER: a collaborative language model. arXiv preprint arXiv:2208.11663 (2022)"},{"key":"73_CR13","unstructured":"Vaswani, A., et al.: Attention is all you need. In: Advances in Neural Information Processing Systems, vol. 30 (2017)"},{"key":"73_CR14","unstructured":"Wei, J., et al.: Chain-of-thought prompting elicits reasoning in large language models. In: Advances in Neural Information Processing Systems (2022)"},{"key":"73_CR15","doi-asserted-by":"crossref","unstructured":"Yuan, A., Coenen, A., Reif, E., Ippolito, D.: Wordcraft: story writing with large language models. In: 27th International Conference on Intelligent User Interfaces, pp. 841\u2013852 (2022)","DOI":"10.1145\/3490099.3511105"}],"container-title":["Lecture Notes in Computer Science","Responsive and Sustainable Educational Futures"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-42682-7_73","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,2]],"date-time":"2025-11-02T13:58:26Z","timestamp":1762091906000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-42682-7_73"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023]]},"ISBN":["9783031426810","9783031426827"],"references-count":15,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-42682-7_73","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"value":"0302-9743","type":"print"},{"value":"1611-3349","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023]]},"assertion":[{"value":"28 August 2023","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"EC-TEL","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"European Conference on Technology Enhanced Learning","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Aveiro","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Portugal","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2023","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"4 September 2023","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"8 
September 2023","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"18","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"ectel2023","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/ea-tel.eu\/ectel2023","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}}]}}