{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T01:05:35Z","timestamp":1773795935380,"version":"3.50.1"},"reference-count":54,"publisher":"MIT Press","license":[{"start":{"date-parts":[[2024,10,10]],"date-time":"2024-10-10T00:00:00Z","timestamp":1728518400000},"content-version":"vor","delay-in-days":283,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["direct.mit.edu"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2024,10,2]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>We evaluate recent Large Language Models (LLMs) on the challenging task of summarizing short stories, which can be lengthy, and include nuanced subtext or scrambled timelines. Importantly, we work directly with authors to ensure that the stories have not been shared online (and therefore are unseen by the models), and to obtain informed evaluations of summary quality using judgments from the authors themselves. Through quantitative and qualitative analysis grounded in narrative theory, we compare GPT-4, Claude-2.1, and LLama-2-70B. We find that all three models make faithfulness mistakes in over 50% of summaries and struggle with specificity and interpretation of difficult subtext. 
We additionally demonstrate that LLM ratings and other automatic metrics for summary quality do not correlate well with the quality ratings from the writers.<\/jats:p>","DOI":"10.1162\/tacl_a_00702","type":"journal-article","created":{"date-parts":[[2024,10,10]],"date-time":"2024-10-10T16:22:38Z","timestamp":1728577358000},"page":"1290-1310","update-policy":"https:\/\/doi.org\/10.1162\/mitpressjournals.corrections.policy","source":"Crossref","is-referenced-by-count":11,"title":["Reading Subtext: Evaluating Large Language Models on Short Story Summarization with Writers"],"prefix":"10.1162","volume":"12","author":[{"given":"Melanie","family":"Subbiah","sequence":"first","affiliation":[{"name":"Department of Computer Science, Columbia University, USA. m.subbiah@columbia.edu"}]},{"given":"Sean","family":"Zhang","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Columbia University, USA. srz2116@columbia.edu"}]},{"given":"Lydia B.","family":"Chilton","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Columbia University, USA. chilton@cs.columbia.edu"}]},{"given":"Kathleen","family":"McKeown","sequence":"additional","affiliation":[{"name":"Department of Computer Science, Columbia University, USA. kathy@cs.columbia.edu"}]}],"member":"281","published-online":{"date-parts":[[2024,10,2]]},"reference":[{"key":"2024101016221343900_bib1","article-title":"Experimental narratives: A Comparison of human crowdsourced storytelling and AI storytelling","author":"Begus","year":"2023","journal-title":"arXiv preprint arXiv: 2310.12902"},{"key":"2024101016221343900_bib2","doi-asserted-by":"publisher","DOI":"10.7208\/chicago\/9780226065595.001.0001","volume-title":"The Rhetoric of Fiction","author":"Booth","year":"1983"},{"key":"2024101016221343900_bib3","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3613904.3642731","article-title":"Art or artifice? 
Large language models and the false promise of creativity","volume-title":"Proceedings of the CHI Conference on Human Factors in Computing Systems","author":"Chakrabarty","year":"2024"},{"key":"2024101016221343900_bib4","article-title":"Creativity support in the age of large language models: An empirical study involving emerging writers","author":"Chakrabarty","year":"2023","journal-title":"arXiv preprint arXiv:2309.12570"},{"key":"2024101016221343900_bib5","doi-asserted-by":"publisher","first-page":"6848","DOI":"10.18653\/v1\/2022.emnlp-main.460","article-title":"Help me write a poem - instruction tuning as a vehicle for collaborative poetry writing","volume-title":"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing","author":"Chakrabarty","year":"2022"},{"key":"2024101016221343900_bib6","article-title":"Booookscore: A systematic exploration of book-length summarization in the era of LLMs","volume-title":"The Twelfth International Conference on Learning Representations","author":"Chang","year":"2024"},{"key":"2024101016221343900_bib7","doi-asserted-by":"publisher","first-page":"8602","DOI":"10.18653\/v1\/2022.acl-long.589","article-title":"SummScreen: A dataset for abstractive screenplay summarization","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics","author":"Chen","year":"2022"},{"key":"2024101016221343900_bib8","first-page":"7053","article-title":"A web-based collaborative annotation and consolidation tool","volume-title":"Proceedings of the Twelfth Language Resources and Evaluation Conference","author":"Daudert","year":"2020"},{"key":"2024101016221343900_bib9","doi-asserted-by":"publisher","first-page":"6805","DOI":"10.18653\/v1\/2023.emnlp-main.421","article-title":"Evaluation of African American language bias in natural language generation","volume-title":"Proceedings of the 2023 Conference on Empirical Methods in Natural Language 
Processing","author":"Deas","year":"2023"},{"key":"2024101016221343900_bib10","doi-asserted-by":"publisher","first-page":"2587","DOI":"10.18653\/v1\/2022.naacl-main.187","article-title":"QAFactEval: Improved QA-based factual consistency evaluation for summarization","volume-title":"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies","author":"Fabbri","year":"2022"},{"key":"2024101016221343900_bib11","doi-asserted-by":"publisher","first-page":"391","DOI":"10.1162\/tacl_a_00373","article-title":"SummEval: Re-evaluating summarization evaluation","volume":"9","author":"Fabbri","year":"2021","journal-title":"Transactions of the Association for Computational Linguistics"},{"key":"2024101016221343900_bib12","volume-title":"Narrative Discourse: An Essay in Method","author":"Genette","year":"1980"},{"key":"2024101016221343900_bib13","doi-asserted-by":"publisher","first-page":"351","DOI":"10.18653\/v1\/2022.emnlp-demos.35","article-title":"FALTE: A toolkit for fine-grained annotation for long text evaluation","volume-title":"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations","author":"Goyal","year":"2022"},{"key":"2024101016221343900_bib14","article-title":"News summarization and evaluation in the era of GPT-3","author":"Goyal","year":"2022","journal-title":"arXiv preprint arXiv:2209.
12356"},{"key":"2024101016221343900_bib15","doi-asserted-by":"publisher","first-page":"444","DOI":"10.18653\/v1\/2022.emnlp-main.29","article-title":"SNaC: Coherence error detection for narrative summarization","volume-title":"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing","author":"Goyal","year":"2022"},{"key":"2024101016221343900_bib16","volume-title":"The Black Side of the River: Race, Language, and Belonging in Washington, DC","author":"Grieser","year":"2022"},{"key":"2024101016221343900_bib17","doi-asserted-by":"publisher","first-page":"708","DOI":"10.18653\/v1\/N18-1065","article-title":"Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies","volume-title":"Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)","author":"Grusky","year":"2018"},{"key":"2024101016221343900_bib18","doi-asserted-by":"publisher","DOI":"10.1002\/9781444305920","volume-title":"Basic Elements of Narrative","author":"Herman","year":"2009"},{"key":"2024101016221343900_bib19","article-title":"Teaching machines to read and comprehend","volume":"28","author":"Hermann","year":"2015","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2024101016221343900_bib20","article-title":"Inspo: Writing stories with a flock of AIs and humans","author":"Huang","year":"2023","journal-title":"arXiv preprint arXiv:2311.16521"},{"key":"2024101016221343900_bib21","article-title":"Creative writing with an AI-powered writing assistant: Perspectives from professional writers","author":"Ippolito","year":"2022","journal-title":"arXiv preprint arXiv:2211.05030"},{"key":"2024101016221343900_bib22","doi-asserted-by":"publisher","first-page":"108189","DOI":"10.1016\/j.compbiomed.2024.108189","article-title":"A comprehensive evaluation of large language models on benchmark biomedical text processing 
tasks","author":"Jahan","year":"2024","journal-title":"Computers in Biology and Medicine"},{"key":"2024101016221343900_bib23","article-title":"Fables: Evaluating faithfulness and content selection in book-length summarization","author":"Kim","year":"2024","journal-title":"arXiv preprint arXiv:2404.01261"},{"key":"2024101016221343900_bib24","doi-asserted-by":"publisher","DOI":"10.21236\/ADA006655","volume-title":"Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel","author":"Kincaid","year":"1975"},{"key":"2024101016221343900_bib25","doi-asserted-by":"publisher","first-page":"1650","DOI":"10.18653\/v1\/2023.eacl-main.121","article-title":"LongEval: Guidelines for human evaluation of faithfulness in long-form summarization","volume-title":"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics","author":"Krishna","year":"2023"},{"key":"2024101016221343900_bib26","doi-asserted-by":"publisher","first-page":"6536","DOI":"10.18653\/v1\/2022.findings-emnlp.488","article-title":"BOOKSUM: A collection of datasets for long-form narrative summarization","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2022","author":"Kryscinski","year":"2022"},{"key":"2024101016221343900_bib27","doi-asserted-by":"publisher","first-page":"5043","DOI":"10.18653\/v1\/2020.acl-main.453","article-title":"Exploring content selection in summarization of novel chapters","volume-title":"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics","author":"Ladhak","year":"2020"},{"key":"2024101016221343900_bib28","first-page":"74","article-title":"Rouge: A package for automatic evaluation of summaries","volume-title":"Text summarization branches 
out","author":"Lin","year":"2004"},{"key":"2024101016221343900_bib29","doi-asserted-by":"publisher","first-page":"4481","DOI":"10.18653\/v1\/2024.findings-naacl.280","article-title":"Benchmarking generation and evaluation capabilities of large language models for instruction controllable summarization","volume-title":"Findings of the Association for Computational Linguistics: NAACL 2024","author":"Liu","year":"2024"},{"key":"2024101016221343900_bib30","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2023.emnlp-main.920","article-title":"Unveiling the essence of poetry: Introducing a comprehensive dataset and benchmark for poem summarization","volume-title":"The 2023 Conference on Empirical Methods in Natural Language Processing","author":"Mahbub","year":"2023"},{"issue":"1","key":"2024101016221343900_bib31","doi-asserted-by":"publisher","first-page":"111","DOI":"10.1016\/0010-0285(77)90006-8","article-title":"Remembrance of things parsed: Story structure and recall","volume":"9","author":"Mandler","year":"1977","journal-title":"Cognitive Psychology"},{"key":"2024101016221343900_bib32","doi-asserted-by":"publisher","first-page":"12076","DOI":"10.18653\/v1\/2023.emnlp-main.741","article-title":"FActScore: Fine-grained atomic evaluation of factual precision in long form text generation","volume-title":"Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing","author":"Min","year":"2023"},{"key":"2024101016221343900_bib33","article-title":"Beloved. 1987","author":"Morrison","year":"2004","journal-title":"New York: Vintage"},{"key":"2024101016221343900_bib34","unstructured":"OpenAI. 2023. 
GPT-4 Technical Report."},{"key":"2024101016221343900_bib35","article-title":"Does writing with language models reduce content diversity?","volume-title":"The Twelfth International Conference on Learning Representations","author":"Padmakumar","year":"2024"},{"issue":"1","key":"2024101016221343900_bib36","doi-asserted-by":"publisher","first-page":"36","DOI":"10.1598\/RRQ.38.1.3","article-title":"Assessing narrative comprehension in young children","volume":"38","author":"Paris","year":"2003","journal-title":"Reading Research Quarterly"},{"key":"2024101016221343900_bib37","doi-asserted-by":"publisher","first-page":"298","DOI":"10.18653\/v1\/2021.emnlp-main.26","article-title":"Narrative theory for computational narrative understanding","volume-title":"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing","author":"Piper","year":"2021"},{"key":"2024101016221343900_bib38","article-title":"Summarization is (almost) dead","author":"Pu","year":"2023","journal-title":"arXiv preprint arXiv:2309.09558"},{"key":"2024101016221343900_bib39","doi-asserted-by":"publisher","first-page":"11626","DOI":"10.18653\/v1\/2023.acl-long.650","article-title":"Understanding factual errors in summarization: Errors, summarizers, datasets, error detectors","volume-title":"Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Tang","year":"2023"},{"key":"2024101016221343900_bib40","article-title":"MiniCheck: Efficient fact-checking of LLMs on grounding documents","author":"Tang","year":"2024","journal-title":"arXiv preprint arXiv:2404.10774"},{"key":"2024101016221343900_bib41","article-title":"TofuEval: Evaluating hallucinations of LLMs on topic-focused dialogue summarization","author":"Tang","year":"2024","journal-title":"arXiv preprint arXiv:2402.13249"},{"key":"2024101016221343900_bib42","article-title":"Llama 2: Open foundation and fine-tuned chat 
models","author":"Touvron","year":"2023","journal-title":"arXiv preprint arXiv:2307.09288"},{"key":"2024101016221343900_bib43","doi-asserted-by":"publisher","first-page":"1139","DOI":"10.18653\/v1\/2022.emnlp-main.75","article-title":"SQuALITY: Building a long-document summarization dataset the hard way","volume-title":"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing","author":"Wang","year":"2022"},{"key":"2024101016221343900_bib44","article-title":"Recursively summarizing books with human feedback","author":"Wu","year":"2021","journal-title":"arXiv preprint arXiv:2109.10862"},{"key":"2024101016221343900_bib45","article-title":"OpenToM: A comprehensive benchmark for evaluating theory-of-mind reasoning capabilities of large language models","author":"Xu","year":"2024","journal-title":"arXiv preprint arXiv:2402.06044"},{"key":"2024101016221343900_bib46","first-page":"447","article-title":"Fantastic questions and where to find them: FairytaleQA \u2013 an authentic dataset for narrative comprehension","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Xu","year":"2022"},{"key":"2024101016221343900_bib47","article-title":"GhostWriter: Augmenting collaborative human-AI writing experiences through personalization and agency","author":"Yeh","year":"2024","journal-title":"arXiv preprint arXiv:2402.08855"},{"key":"2024101016221343900_bib48","doi-asserted-by":"publisher","first-page":"841","DOI":"10.1145\/3490099.3511105","article-title":"Wordcraft: Story writing with large language models","volume-title":"27th International Conference on Intelligent User Interfaces","author":"Yuan","year":"2022"},{"key":"2024101016221343900_bib49","doi-asserted-by":"crossref","first-page":"11328","DOI":"10.18653\/v1\/2023.acl-long.634","article-title":"AlignScore: Evaluating factual consistency with a unified alignment function","volume-title":"Proceedings of 
the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Zha","year":"2023"},{"key":"2024101016221343900_bib50","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/ICASSP49357.2023.10097149","article-title":"Mug: A general meeting understanding and generation benchmark","volume-title":"ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"Zhang","year":"2023"},{"key":"2024101016221343900_bib51","article-title":"BERTScore: Evaluating text generation with BERT","volume-title":"International Conference on Learning Representations","author":"Zhang","year":"2020"},{"key":"2024101016221343900_bib52","doi-asserted-by":"publisher","first-page":"39","DOI":"10.1162\/tacl_a_00632","article-title":"Benchmarking large language models for news summarization","volume":"12","author":"Zhang","year":"2024","journal-title":"Transactions of the Association for Computational Linguistics"},{"key":"2024101016221343900_bib53","doi-asserted-by":"publisher","first-page":"2023","DOI":"10.18653\/v1\/2022.emnlp-main.131","article-title":"Towards a unified multi-dimensional evaluator for text generation","volume-title":"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing","author":"Zhong","year":"2022"},{"key":"2024101016221343900_bib54","doi-asserted-by":"publisher","first-page":"1744","DOI":"10.18653\/v1\/2023.eacl-main.128","article-title":"Fiction-writing mode: An effective control for human-machine collaborative writing","volume-title":"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics","author":"Zhong","year":"2023"}],"container-title":["Transactions of the Association for Computational 
Linguistics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00702\/2474814\/tacl_a_00702.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00702\/2474814\/tacl_a_00702.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,10]],"date-time":"2024-10-10T16:22:47Z","timestamp":1728577367000},"score":1,"resource":{"primary":{"URL":"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/tacl_a_00702\/124837\/Reading-Subtext-Evaluating-Large-Language-Models"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024]]},"references-count":54,"URL":"https:\/\/doi.org\/10.1162\/tacl_a_00702","relation":{},"ISSN":["2307-387X"],"issn-type":[{"value":"2307-387X","type":"electronic"}],"subject":[],"published-other":{"date-parts":[[2024]]},"published":{"date-parts":[[2024]]}}}