{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T01:38:03Z","timestamp":1760060283304,"version":"build-2065373602"},"reference-count":23,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2025,8,25]],"date-time":"2025-08-25T00:00:00Z","timestamp":1756080000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"JSPS KAKENHI","award":["25K15242"],"award-info":[{"award-number":["25K15242"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>Metaphor detection is challenging in natural language processing (NLP) because it requires recognizing nuanced semantic shifts beyond literal meaning, and conventional models often falter when contextual cues are limited. We propose a method to enhance metaphor detection by augmenting input sentences with auxiliary context generated by ChatGPT. In our approach, ChatGPT produces semantically relevant sentences that are inserted before, after, or on both sides of a target sentence, allowing us to analyze the impact of context position and length on classification. Experiments on three benchmark datasets (MOH-X, VUA_All, VUA_Verb) show that this context-enriched input consistently outperforms the no-context baseline across accuracy, precision, recall, and F1-score, with the MOH-X dataset achieving the largest F1 gain. These improvements are statistically significant based on two-tailed t-tests. Our findings demonstrate that generative models can effectively enrich context for metaphor understanding, highlighting context placement and quantity as critical factors. 
Finally, we outline future directions, including advanced prompt engineering, optimizing context lengths, and extending this approach to multilingual metaphor detection.<\/jats:p>","DOI":"10.3390\/bdcc9090218","type":"journal-article","created":{"date-parts":[[2025,8,26]],"date-time":"2025-08-26T06:25:57Z","timestamp":1756189557000},"page":"218","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Applying Additional Auxiliary Context Using Large Language Model for Metaphor Detection"],"prefix":"10.3390","volume":"9","author":[{"ORCID":"https:\/\/orcid.org\/0009-0002-5226-3008","authenticated-orcid":false,"given":"Takuya","family":"Hayashi","sequence":"first","affiliation":[{"name":"Major in Computer and Information Sciences, Graduate School of Science and Engineering, Ibaraki University, 4-12-1, Nakanarusawa, Hitachi 316-8511, Ibaraki, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8101-2796","authenticated-orcid":false,"given":"Minoru","family":"Sasaki","sequence":"additional","affiliation":[{"name":"Department of Computer and Information Sciences, College of Engineering, Ibaraki University, 4-12-1, Nakanarusawa, Hitachi 316-8511, Ibaraki, Japan"}]}],"member":"1968","published-online":{"date-parts":[[2025,8,25]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Lakoff, G., and Johnson, M. (2003). Metaphors We Live By, with a New Afterword, University of Chicago Press.","DOI":"10.7208\/chicago\/9780226470993.001.0001"},{"key":"ref_2","unstructured":"Zhang, S., and Liu, Y. (2022, January 12\u201317). Metaphor Detection via Linguistics Enhanced Siamese Network. Proceedings of the 29th International Conference on Computational Linguistics (COLING 2022), Gyeongju, Republic of Korea. Available online: https:\/\/aclanthology.org\/2022.coling-1.364\/."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Hayashi, T., and Sasaki, M. (2024). 
Metaphor Detection with Additional Auxiliary Context. Proceedings of the 2024 16th IIAI International Congress on Advanced Applied Informatics (IIAI-AAI 2024), Takamatsu, Japan, 6\u201312 July 2024, IEEE.","DOI":"10.1109\/IIAI-AAI63651.2024.00032"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Pragglejaz Group (2007). MIP: A Method for Identifying Metaphorically Used Words in Discourse. Metaphor. Symb., 22, 1\u201339.","DOI":"10.1080\/10926480709336752"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"197","DOI":"10.1016\/0004-3702(78)90001-2","article-title":"Making Preferences More Active","volume":"11","author":"Wilks","year":"1978","journal-title":"Artif. Intell."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Choi, M., Lee, S., Choi, E., Park, H., Lee, J., Lee, D., and Lee, J. (2021). MelBERT: Metaphor Detection via Contextualized Late Interaction using Metaphorical Identification Theories. Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2021), Online, 6\u201311 June 2021, Association for Computational Linguistics.","DOI":"10.18653\/v1\/2021.naacl-main.141"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Li, Y., Wang, S., Lin, C., Guerin, F., and Barrault, L. (2023, January 2\u20136). FrameBERT: Conceptual Metaphor Detection with Frame Embedding Learning. Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2023), Dubrovnik, Croatia.","DOI":"10.18653\/v1\/2023.eacl-main.114"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Zhang, S., and Liu, Y. (2023). Adversarial Multi-task Learning for End-to-end Metaphor Detection. Findings of the Association for Computational Linguistics: ACL 2023, Association for Computational Linguistics.","DOI":"10.18653\/v1\/2023.findings-acl.96"},{"key":"ref_9","unstructured":"Jia, K., and Li, R. (2024). 
Enhancing Metaphor Detection through Soft Labels and Target Word Prediction. arXiv, Available online: https:\/\/arxiv.org\/abs\/2403.18253."},{"key":"ref_10","unstructured":"Calzolari, N., Kan, M.-Y., Hoste, V., Lenci, A., Sakti, S., and Xue, N. (2024). ContrastWSD: Enhancing Metaphor Detection with Word Sense Disambiguation Following the Metaphor Identification Procedure. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), ELRA and ICCL. Available online: https:\/\/aclanthology.org\/2024.lrec-main.346\/."},{"key":"ref_11","unstructured":"Jia, K., Wu, Y., Liu, M., and Li, R. (2024). Curriculum-style Data Augmentation for LLM-based Metaphor Detection. arXiv, Available online: https:\/\/arxiv.org\/abs\/2412.02956."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Liu, H., He, C., Meng, F., Niu, C., and Jia, Y. (2024). LaiDA: Linguistics-aware In-context Learning with Data Augmentation for Metaphor Components Identification. arXiv, Available online: https:\/\/arxiv.org\/abs\/2408.05404.","DOI":"10.1007\/978-981-97-9443-0_25"},{"key":"ref_13","unstructured":"Ghosh, D., Muresan, S., Feldman, A., Chakrabarty, T., and Liu, E. (2024). A Hard Nut to Crack: Idiom Detection with Conversational Large Language Models. Proceedings of the 4th Workshop on Figurative Language Processing (FigLang 2024), Mexico City, Mexico, 21 June 2024, Association for Computational Linguistics."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Liu, E., Cui, C., Zheng, K., and Neubig, G. (2022). Testing the Ability of Language Models to Interpret Figurative Language. 
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2022), Seattle, WA, USA, 10\u201315 July 2022, Association for Computational Linguistics.","DOI":"10.18653\/v1\/2022.naacl-main.330"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"589","DOI":"10.1162\/tacl_a_00478","article-title":"It\u2019s Not Rocket Science: Interpreting Figurative Language in Narratives","volume":"10","author":"Chakrabarty","year":"2022","journal-title":"Trans. Assoc. Comput. Linguist."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Bollegala, D., and Shutova, E. (2013). Metaphor Interpretation Using Paraphrases Extracted from the Web. PLoS ONE, 8.","DOI":"10.1371\/journal.pone.0074304"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Mohammad, S., Shutova, E., and Turney, P. (2016, January 11\u201312). Metaphor as a Medium for Emotion: An Empirical Study. Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics (*SEM 2016), Berlin, Germany.","DOI":"10.18653\/v1\/S16-2003"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Steen, G.J., Dorst, A.G., Herrmann, J.B., Kaal, A.A., Krennmayr, T., and Pasma, T. (2010). A Method for Linguistic Metaphor Identification: From MIP to MIPVU, John Benjamins Publishing.","DOI":"10.1075\/celcr.14"},{"key":"ref_19","unstructured":"Oh, S., Huang, X., Pink, M., Hahn, M., and Demberg, V. (2025). A Tug-of-war between an Idiom\u2019s Figurative and Literal Meanings in Large Language Models. arXiv, Available online: https:\/\/arxiv.org\/abs\/2506.01723."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Lin, Z., Ma, Q., Yan, J., and Chen, J. (2021). CATE: A Contrastive Pre-trained Model for Metaphor Detection with Semi-supervised Learning. 
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021), Punta Cana, Dominican Republic, 7\u201311 November 2021, Association for Computational Linguistics. Available online: https:\/\/aclanthology.org\/2021.emnlp-main.316\/.","DOI":"10.18653\/v1\/2021.emnlp-main.316"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Song, W., Zhou, S., Fu, R., Liu, T., and Liu, L. (2021). Verb Metaphor Detection via Contextual Relation Learning. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL 2021), Online, 1\u20136 August 2021, Association for Computational Linguistics.","DOI":"10.18653\/v1\/2021.acl-long.327"},{"key":"ref_22","first-page":"8139","article-title":"Multi-task Learning for Metaphor Detection with Graph Convolutional Neural Networks and Word Sense Disambiguation","volume":"34","author":"Le","year":"2020","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"key":"ref_23","unstructured":"Yang, C., Li, Z., Liu, Z., and Huang, Q. (2023). Deep Learning-Based Knowledge Injection for Metaphor Detection: A Comprehensive Review. 
arXiv, Available online: https:\/\/arxiv.org\/abs\/2308.04306."}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/9\/218\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:32:40Z","timestamp":1760034760000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/9\/9\/218"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,8,25]]},"references-count":23,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2025,9]]}},"alternative-id":["bdcc9090218"],"URL":"https:\/\/doi.org\/10.3390\/bdcc9090218","relation":{},"ISSN":["2504-2289"],"issn-type":[{"type":"electronic","value":"2504-2289"}],"subject":[],"published":{"date-parts":[[2025,8,25]]}}}