{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,18]],"date-time":"2026-04-18T05:07:28Z","timestamp":1776488848080,"version":"3.51.2"},"publisher-location":"New York, NY, USA","reference-count":14,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,6,19]],"date-time":"2023-06-19T00:00:00Z","timestamp":1687132800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,6,19]]},"DOI":"10.1145\/3591196.3596827","type":"proceedings-article","created":{"date-parts":[[2023,6,18]],"date-time":"2023-06-18T16:17:26Z","timestamp":1687105046000},"page":"282-287","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["Investigating the Perception of the Future in GPT-3, -3.5 and GPT-4"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8302-1007","authenticated-orcid":false,"given":"Diana","family":"Kozachek","sequence":"first","affiliation":[{"name":"Freie University of Berlin, Institut Futur, Germany"}]}],"member":"320","published-online":{"date-parts":[[2023,6,19]]},
"reference":[{"key":"e_1_3_2_1_1_1","volume-title":"Language Models are Few-Shot Learners, from http:\/\/arxiv.org\/pdf\/2005.14165v4","author":"Brown T. B.","year":"2020","unstructured":"Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P. (2020). Language Models are Few-Shot Learners, from http:\/\/arxiv.org\/pdf\/2005.14165v4."},
{"key":"e_1_3_2_1_2_1","volume-title":"GPT Takes the Bar Exam, from https:\/\/arxiv.org\/pdf\/2212.14402","author":"Bommarito M., II","year":"2022","unstructured":"Bommarito, M., II, & Katz, D. M. (2022). GPT Takes the Bar Exam, from https:\/\/arxiv.org\/pdf\/2212.14402."},
{"key":"e_1_3_2_1_3_1","volume-title":"Evidence of a predictive coding hierarchy in the human brain listening to speech. Nature human behaviour, 7(3), 430\u2013441","author":"Caucheteux C.","year":"2023","unstructured":"Caucheteux, C., Gramfort, A., & King, J.-R. (2023). Evidence of a predictive coding hierarchy in the human brain listening to speech. Nature human behaviour, 7(3), 430\u2013441."},
{"issue":"1","key":"e_1_3_2_1_4_1","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1108\/14636680810855991","article-title":"Six pillars: futures thinking for transforming","volume":"10","author":"Inayatullah S.","year":"2008","unstructured":"Inayatullah, S. (2008). Six pillars: futures thinking for transforming. Foresight, 10(1), 4\u201321.","journal-title":"Foresight"},
{"key":"e_1_3_2_1_5_1","volume-title":"Reconstructing ChatGPT. Retrieved on February 3rd 2023 from https:\/\/jfsdigital.org\/2023\/01\/24\/reconstructing-chatgpt-part-2\/","author":"Inayatullah S.","year":"2023","unstructured":"Inayatullah, S. (2023). Reconstructing ChatGPT. Retrieved on February 3rd 2023 from https:\/\/jfsdigital.org\/2023\/01\/24\/reconstructing-chatgpt-part-2\/."},
{"key":"e_1_3_2_1_6_1","volume-title":"Deconstructing ChatGPT. Retrieved on February 3rd 2023 from https:\/\/jfsdigital.org\/2023\/01\/23\/deconstructing-chatgpt-part-1\/","author":"Inayatullah S.","year":"2023","unstructured":"Inayatullah, S. (2023). Deconstructing ChatGPT. Retrieved on February 3rd 2023 from https:\/\/jfsdigital.org\/2023\/01\/23\/deconstructing-chatgpt-part-1\/."},
{"key":"e_1_3_2_1_7_1","volume-title":"Human-made Scenario Processing. [Source Code]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/code","author":"Kozachek D.","year":"2023","unstructured":"Kozachek, D. (2023). Human-made Scenario Processing. [Source Code]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/code."},
{"key":"e_1_3_2_1_8_1","volume-title":"Prompt-Tuning. [Source Code]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/prompttune","author":"Kozachek D.","year":"2023","unstructured":"Kozachek, D. (2023). Prompt-Tuning. [Source Code]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/prompttune."},
{"key":"e_1_3_2_1_9_1","volume-title":"Fine-Tuning. [Source Code]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/finetune","author":"Kozachek D.","year":"2023","unstructured":"Kozachek, D. (2023). Fine-Tuning. [Source Code]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/finetune."},
{"key":"e_1_3_2_1_10_1","volume-title":"30 Prompts. [Source Data]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/prompttune","author":"Kozachek D.","year":"2023","unstructured":"Kozachek, D. (2023). 30 Prompts. [Source Data]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/prompttune."},
{"key":"e_1_3_2_1_11_1","volume-title":"GPT-3.5, or GPT-4 generate human-like future scenarios? [Survey Data]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/survey","author":"Kozachek D.","year":"2023","unstructured":"Kozachek, D. (2023). Delphi Survey Results: Can GPT-3, GPT-3.5, or GPT-4 generate human-like future scenarios? [Survey Data]. From https:\/\/github.com\/koizachek\/VisionaryMachine\/tree\/main\/survey."},
{"key":"e_1_3_2_1_12_1","unstructured":"Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., & Radford, A. Zero-Shot Text-to-Image Generation, from http:\/\/arxiv.org\/pdf\/2102.12092v2."},
{"key":"e_1_3_2_1_13_1","volume-title":"AI\u2010assisted scenario generation for strategic planning. FUTURES & FORESIGHT SCIENCE","author":"Spaniol M. J.","year":"2023","unstructured":"Spaniol, M. J., & Rowland, N. J. (2023). AI\u2010assisted scenario generation for strategic planning. Futures & Foresight Science."},
{"key":"e_1_3_2_1_14_1","first-page":"68","volume-title":"Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)","author":"Xiao Liu","year":"2022","unstructured":"Xiao Liu, Kaixuan Ji, Yicheng Fu, Weng Tam, Zhengxiao Du, Zhilin Yang, and Jie Tang. (2022). P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 61\u201368, Dublin, Ireland. Association for Computational Linguistics."}],
"event":{"name":"C&C '23: Creativity and Cognition","location":"Virtual Event USA","acronym":"C&C '23","sponsor":["SIGCHI ACM Special Interest Group on Computer-Human Interaction"]},"container-title":["Creativity and Cognition"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3591196.3596827","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:47:46Z","timestamp":1750178866000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3591196.3596827"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,19]]},"references-count":14,"alternative-id":["10.1145\/3591196.3596827","10.1145\/3591196"],"URL":"https:\/\/doi.org\/10.1145\/3591196.3596827","relation":{},"subject":[],"published":{"date-parts":[[2023,6,19]]},"assertion":[{"value":"2023-06-19","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}