{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,28]],"date-time":"2026-02-28T09:42:57Z","timestamp":1772271777385,"version":"3.50.1"},"reference-count":61,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2023,9,29]],"date-time":"2023-09-29T00:00:00Z","timestamp":1695945600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,9,29]],"date-time":"2023-09-29T00:00:00Z","timestamp":1695945600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100018542","name":"Natural Science Foundation of Sichuan Province","doi-asserted-by":"publisher","award":["2022NSFSC0503"],"award-info":[{"award-number":["2022NSFSC0503"]}],"id":[{"id":"10.13039\/501100018542","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100012542","name":"Sichuan Province Science and Technology Support Program","doi-asserted-by":"publisher","award":["2022ZHCG0007"],"award-info":[{"award-number":["2022ZHCG0007"]}],"id":[{"id":"10.13039\/100012542","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In this study, we primarily aim to address the exposure bias issue in long text generation intrinsic to statistical language models. We propose a sentence-level heuristic tree search algorithm, specially tailored for long text generation, to mitigate the problem by managing generated texts in a tree structure and curbing the compounding of biases. 
Our algorithm utilizes two pre-trained language models, an auto-regressive model for generating new sentences and an auto-encoder model for evaluating sentence quality. These models work in tandem to perform four critical operations: expanding the text tree with new sentences, evaluating the quality of the additions, sampling potential unfinished text fragments for further generation, and pruning leaf nodes deemed unpromising. This iterative process continues until a pre-defined number of [EOS] tokens are produced, at which point we select the highest-scoring completed text as our final output. Moreover, we pioneer two novel token-level decoding techniques\u2014nucleus sampling with temperature and diverse beam search with sampling. These methods, integrated with our sentence-level search algorithm, aim to improve the consistency and diversity of text generation. Experimental results, both automated measures (including Jaccard similarity, Word2vec similarity, and unique word ratio) and human evaluations (assessing consistency, fluency, and rhetorical skills), conclusively demonstrate that our approach considerably enhances the quality of machine-generated long-form text. 
Through this research, we aim to inspire further innovations in sentence-level search-based text generation algorithms.<\/jats:p>","DOI":"10.1007\/s40747-023-01244-8","type":"journal-article","created":{"date-parts":[[2023,9,29]],"date-time":"2023-09-29T05:02:01Z","timestamp":1695963721000},"page":"3153-3167","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Sentence-level heuristic tree search for long text generation"],"prefix":"10.1007","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4013-3492","authenticated-orcid":false,"given":"Zheng","family":"Chen","sequence":"first","affiliation":[]},{"given":"Zhejun","family":"Liu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,9,29]]},"reference":[{"key":"1244_CR1","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. In: Advances in neural information processing systems, pp 5998\u20136008"},{"key":"1244_CR2","unstructured":"Radford A, Narasimhan K, Salimans T, Sutskever I (2018) Improving language understanding by generative pre-training. https:\/\/s3-us-west-2.amazonaws.com\/openai-assets\/researchcovers\/languageunsupervised\/languageunderstandingpaper.pdf"},{"issue":"8","key":"1244_CR3","first-page":"9","volume":"1","author":"A Radford","year":"2019","unstructured":"Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I (2019) Language models are unsupervised multitask learners. OpenAI Blog 1(8):9","journal-title":"OpenAI Blog"},{"key":"1244_CR4","unstructured":"Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S (2020) Language models are few-shot learners. In: Larochelle H, Ranzato M, Hadsell R, Balcan MF, Lin H (eds) Advances in neural information processing systems, vol 33. 
Curran Associates, Inc., pp 1877\u20131901"},{"key":"1244_CR5","doi-asserted-by":"crossref","unstructured":"Ippolito D, Duckworth D, Callison-Burch C, Eck D (2020) Automatic detection of generated text is easiest when humans are fooled. In: Proceedings of the 58th annual meeting of the association for computational linguistics, pp 1808\u20131822","DOI":"10.18653\/v1\/2020.acl-main.164"},{"key":"1244_CR6","doi-asserted-by":"crossref","unstructured":"Zhong W, Tang D, Xu Z, Wang R, Duan N, Zhou M, Wang J, Yin J (2020) Neural deepfake detection with factual structure of text. In: Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP), pp 2461\u20132470","DOI":"10.18653\/v1\/2020.emnlp-main.193"},{"key":"1244_CR7","doi-asserted-by":"crossref","unstructured":"Jawahar G, Abdul-Mageed M, Laks\u00a0Lakshmanan V (2020) Automatic detection of machine generated text: a critical survey. In: Proceedings of the 28th international conference on computational linguistics, pp 2296\u20132309","DOI":"10.18653\/v1\/2020.coling-main.208"},{"key":"1244_CR8","doi-asserted-by":"crossref","unstructured":"Yao L, Peng N, Weischedel R, Knight K, Zhao D, Yan R (2019) Plan-and-write: towards better automatic storytelling. In: Proceedings of the AAAI conference on artificial intelligence, vol 33, pp 7378\u20137385","DOI":"10.1609\/aaai.v33i01.33017378"},{"key":"1244_CR9","doi-asserted-by":"crossref","unstructured":"Shao Z, Huang M, Wen J, Xu W, Zhu X (2019) Long and diverse text generation with planning-based hierarchical variational model. In: Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing, pp 3257\u20133268","DOI":"10.18653\/v1\/D19-1321"},{"key":"1244_CR10","doi-asserted-by":"crossref","unstructured":"Rashkin H, Celikyilmaz A, Choi Y, Gao J (2020) Plotmachines: outline-conditioned generation with dynamic plot state tracking. 
In: Proceedings of the 2020 conference on empirical methods in natural language processing, pp 4274\u20134295","DOI":"10.18653\/v1\/2020.emnlp-main.349"},{"issue":"2","key":"1244_CR11","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10458-021-09501-1","volume":"35","author":"J Porteous","year":"2021","unstructured":"Porteous J, Ferreira JF, Lindsay A, Cavazza M (2021) Automated narrative planning model extension. Auton Agents Multi-Agent Syst 35(2):1\u201329","journal-title":"Auton Agents Multi-Agent Syst"},{"key":"1244_CR12","unstructured":"Jin K, Zhuo HH (2022) Integrating AI planning with natural language processing: a combination of explicit and tacit knowledge. arXiv preprint arXiv:2202.07138"},{"key":"1244_CR13","doi-asserted-by":"publisher","first-page":"343","DOI":"10.1613\/jair.1.12007","volume":"69","author":"F Stahlberg","year":"2020","unstructured":"Stahlberg F (2020) Neural machine translation: a review. J Artif Intell Res 69:343\u2013418","journal-title":"J Artif Intell Res"},{"key":"1244_CR14","doi-asserted-by":"crossref","unstructured":"Wang H, Zhang Y, Yu X (2020) An overview of image caption generation methods. Comput Intell Neurosci. 2020: 3062706","DOI":"10.1155\/2020\/3062706"},{"key":"1244_CR15","unstructured":"Holtzman A, Buys J, Du L, Forbes M, Choi Y (2019) The curious case of neural text degeneration. In: International conference on learning representations"},{"key":"1244_CR16","unstructured":"Nadeem M, He T, Cho K, Glass J (2020) A systematic characterization of sampling algorithms for open-ended language generation. In: Proceedings of the 1st conference of the Asia-Pacific chapter of the association for computational linguistics and the 10th international joint conference on natural language processing, pp 334\u2013346"},{"key":"1244_CR17","unstructured":"Calderwood A, Qiu V, Gero KI, Chilton LB (2020) How novelists use generative language models: an exploratory user study. 
In: HAI-GEN+ User2agent@ IUI"},{"key":"1244_CR18","unstructured":"Zhang H, Duckworth D, Ippolito D, Neelakantan A (2021) Trading off diversity and quality in natural language generation. In: Proceedings of the workshop on human evaluation of NLP systems, pp 25\u201333. Association for Computational Linguistics, Online"},{"key":"1244_CR19","unstructured":"Lan Z, Chen M, Goodman S, Gimpel K, Sharma P, Soricut R (2019) Albert: a lite bert for self-supervised learning of language representations. In: International conference on learning representations"},{"key":"1244_CR20","doi-asserted-by":"crossref","unstructured":"Dai Z, Yang Z, Yang Y, Carbonell JG, Le Q, Salakhutdinov R (2019) Transformer-xl: attentive language models beyond a fixed-length context. In: Proceedings of the 57th annual meeting of the association for computational linguistics, pp 2978\u20132988","DOI":"10.18653\/v1\/P19-1285"},{"key":"1244_CR21","doi-asserted-by":"publisher","DOI":"10.5281\/zenodo.3402023","author":"B Xu","year":"2019","unstructured":"Xu B (2019) NLP Chinese Corpus: large scale Chinese Corpus for NLP. Zenodo. https:\/\/doi.org\/10.5281\/zenodo.3402023","journal-title":"Zenodo"},{"key":"1244_CR22","doi-asserted-by":"crossref","unstructured":"Papineni K, Roukos S, Ward T, Zhu W-J (2002) Bleu: a method for automatic evaluation of machine translation. In: Proceedings of the 40th annual meeting of the association for computational linguistics, pp 311\u2013318","DOI":"10.3115\/1073083.1073135"},{"key":"1244_CR23","unstructured":"Lin C-Y (2004) Rouge: a package for automatic evaluation of summaries. In: Text summarization branches out, pp 74\u201381"},{"key":"1244_CR24","unstructured":"Celikyilmaz A, Clark E, Gao J (2020) Evaluation of text generation: a survey. 
arXiv preprint arXiv:2006.14799"},{"key":"1244_CR25","unstructured":"Sun J (2012) Jieba Chinese word segmentation tool"},{"key":"1244_CR26","doi-asserted-by":"crossref","unstructured":"Li S, Zhao Z, Hu R, Li W, Liu T, Du X (2018) Analogical reasoning on Chinese morphological and semantic relations. In: Proceedings of the 56th annual meeting of the association for computational linguistics, pp 138\u2013143","DOI":"10.18653\/v1\/P18-2023"},{"key":"1244_CR27","unstructured":"Roemmele M, Gordon AS, Swanson R (2017) Evaluating story generation systems using automated linguistic analyses. In: SIGKDD 2017 workshop on machine learning for creativity, pp 13\u201317"},{"key":"1244_CR28","doi-asserted-by":"crossref","unstructured":"Feng X, Liu M, Liu J, Qin B, Sun Y, Liu T (2018) Topic-to-essay generation with neural networks. In: Proceedings of the twenty-seventh international joint conference on artificial intelligence. International Joint Conferences on Artificial Intelligence Organization, pp 4078\u20134084","DOI":"10.24963\/ijcai.2018\/567"},{"key":"1244_CR29","doi-asserted-by":"crossref","unstructured":"Wang W, Zheng H-T, Lin Z (2020) Self-attention and retrieval enhanced neural networks for essay generation. In: 2020 IEEE international conference on acoustics, speech and signal processing, pp 8199\u20138203","DOI":"10.1109\/ICASSP40776.2020.9052954"},{"key":"1244_CR30","doi-asserted-by":"crossref","unstructured":"Yang P, Li L, Luo F, Liu T, Sun X (2019) Enhancing topic-to-essay generation with external commonsense knowledge. In: Proceedings of the 57th annual meeting of the association for computational linguistics. Association for Computational Linguistics, pp 2002\u20132012","DOI":"10.18653\/v1\/P19-1193"},{"key":"1244_CR31","doi-asserted-by":"crossref","unstructured":"Qiao L, Yan J, Meng F, Yang Z, Zhou J (2020) A sentiment-controllable topic-to-essay generator with topic knowledge graph. In: Findings of the association for computational linguistics: EMNLP 2020. 
Association for Computational Linguistics, Online, pp 3336\u20133344","DOI":"10.18653\/v1\/2020.findings-emnlp.299"},{"key":"1244_CR32","unstructured":"Yuan C, Huang Y-C, Tsai C-H (2019) Efficient text generation of user-defined topic using generative adversarial networks. In: Proceedings of the 4th workshop on computational creativity in language generation. Association for Computational Linguistics, Tokyo, pp 13\u201321"},{"key":"1244_CR33","unstructured":"Du N, Huang Y, Dai AM, Tong S, Lepikhin D, Xu Y, Krikun M, Zhou Y, Yu AW, Firat O et al (2022) Glam: efficient scaling of language models with mixture-of-experts. In: International conference on machine learning. PMLR, pp 5547\u20135569"},{"key":"1244_CR34","unstructured":"Rae JW, Borgeaud S, Cai T, Millican K, Hoffmann J, Song HF, Aslanides J, Henderson S, Ring R, Young S, Rutherford E, Hennigan T, Menick J, Cassirer A, Powell R, van\u00a0den Driessche G, Hendricks LA, Rauh M, Huang, P-S, Glaese A, Welbl J, Dathathri S, Huang S, Uesato J, Mellor J, Higgins I, Creswell A, McAleese N, Wu A, Elsen E, Jayakumar SM, Buchatskaya E, Budden D, Sutherland E, Simonyan K, Paganini M, Sifre L, Martens L, Li XL, Kuncoro A, Nematzadeh A, Gribovskaya E, Donato D, Lazaridou A, Mensch A, Lespiau J-B, Tsimpoukelli M, Grigorev N, Fritz D, Sottiaux T, Pajarskas M, Pohlen T, Gong Z, Toyama D, de Masson\u00a0d\u2019Autume C, Li Y, Terzi T, Mikulik V, Babuschkin I, Clark A, de Las\u00a0Casas D, Guy A, Jones C, Bradbury J, Johnson M, Hechtman BA, Weidinger L, Gabriel I, Isaac WS, Lockhart E, Osindero S, Rimell L, Dyer C, Vinyals O, Ayoub K, Stanway J, Bennett L, Hassabis D, Kavukcuoglu K, Irving G (2021) Scaling language models: methods, analysis and insights from training gopher. 
arXiv:2112.11446"},{"key":"1244_CR35","unstructured":"Hoffmann J, Borgeaud S, Mensch A, Buchatskaya E, Cai T, Rutherford E, de Las\u00a0Casas D, Hendricks LA, Welbl J, Clark A, Hennigan T, Noland E, Millican K, van\u00a0den Driessche G, Damoc B, Guy A, Osindero S, Simonyan K, Elsen E, Rae JW, Vinyals O, Sifre L (2022) Training compute-optimal large language models. arXiv:2203.15556"},{"key":"1244_CR36","unstructured":"Smith S, Patwary M, Norick B, LeGresley P, Rajbhandari S, Casper J, Liu Z, Prabhumoye S, Zerveas G, Korthikanti V, Zheng E, Child R, Aminabadi RY, Bernauer J, Song X, Shoeybi M, He Y, Houston M, Tiwary S, Catanzaro B (2022) Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model. arXiv:2201.11990"},{"key":"1244_CR37","unstructured":"Song K, Tan X, Qin T, Lu J, Liu T-Y (2019) Mass: masked sequence to sequence pre-training for language generation. In: International conference on machine learning. PMLR, pp 5926\u20135936"},{"key":"1244_CR38","first-page":"1","volume":"21","author":"C Raffel","year":"2020","unstructured":"Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu PJ (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res 21:1\u201367","journal-title":"J Mach Learn Res"},{"key":"1244_CR39","doi-asserted-by":"crossref","unstructured":"Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L (2020) Bart: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In: Proceedings of the 58th annual meeting of the association for computational linguistics, pp 7871\u20137880","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"1244_CR40","unstructured":"Graves A (2012) Sequence transduction with recurrent neural networks. 
In: Proceedings of the 29th international conference on machine learning (ICML) 2012 workshop on representation learning, pp 1\u20139"},{"key":"1244_CR41","unstructured":"Li J, Monroe W, Jurafsky D (2016) A simple, fast diverse decoding algorithm for neural generation. arXiv preprint arXiv:1611.08562"},{"key":"1244_CR42","unstructured":"Vijayakumar AK, Cogswell M, Selvaraju RR, Sun Q, Lee S, Crandall D, Batra D (2016) Diverse beam search: decoding diverse solutions from neural sequence models. arXiv preprint arXiv:1610.02424"},{"key":"1244_CR43","doi-asserted-by":"crossref","unstructured":"Huang L, Zhao K, Ma M (2017) When to finish? optimal beam search for neural text generation (modulo beam size). In: Proceedings of the 2017 conference on empirical methods in natural language processing, pp 2134\u20132139","DOI":"10.18653\/v1\/D17-1227"},{"key":"1244_CR44","doi-asserted-by":"crossref","unstructured":"Meister C, Forster M, Cotterell R (2021) Determinantal beam search. In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing, vol 1. Association for Computational Linguistics, pp 6551\u20136562","DOI":"10.18653\/v1\/2021.acl-long.512"},{"key":"1244_CR45","doi-asserted-by":"crossref","unstructured":"Weir N, Sedoc J, Van\u00a0Durme B (2020) COD3S: diverse generation with discrete semantic signatures. In: Proceedings of the 2020 conference on empirical methods in natural language processing. Association for Computational Linguistics, Online, pp 5199\u20135211","DOI":"10.18653\/v1\/2020.emnlp-main.421"},{"key":"1244_CR46","unstructured":"Welleck S, Kulikov I, Roller S, Dinan E, Cho K, Weston J (2020) Neural text generation with unlikelihood training. In: International conference on learning representations"},{"key":"1244_CR47","doi-asserted-by":"crossref","unstructured":"Fan A, Lewis M, Dauphin Y (2018) Hierarchical neural story generation. 
In: Proceedings of the 56th annual meeting of the association for computational linguistics (volume 1: long papers), pp 889\u2013898","DOI":"10.18653\/v1\/P18-1082"},{"key":"1244_CR48","unstructured":"Caccia M, Caccia L, Fedus W, Larochelle H, Pineau J, Charlin L (2020) Language GANs falling short. In: International conference on learning representations"},{"key":"1244_CR49","unstructured":"Kool W, Van\u00a0Hoof H, Welling M (2019) Stochastic beams and where to find them: the gumbel-top-k trick for sampling sequences without replacement. In: Proceedings of the 36th international conference on machine learning, Long Beach, California, USA. PMLR, pp 3499\u20133508"},{"key":"1244_CR50","doi-asserted-by":"crossref","unstructured":"Holtzman A, Buys J, Forbes M, Bosselut A, Golub D, Choi Y (2018) Learning to write with cooperative discriminators. In: Proceedings of the 56th annual meeting of the association for computational linguistics (volume 1: long papers), pp 1638\u20131649","DOI":"10.18653\/v1\/P18-1152"},{"key":"1244_CR51","doi-asserted-by":"crossref","unstructured":"Cowling PI, Powley EJ, Whitehouse D (2012) Information set Monte Carlo tree search. IEEE Trans Comput Intell AI Games 4(2):120\u2013143","DOI":"10.1109\/TCIAIG.2012.2200894"},{"key":"1244_CR52","unstructured":"Lamprier S, Scialom T, Chaffin A, Claveau V, Kijak E, Staiano J, Piwowarski B (2022) Generative cooperative networks for natural language generation. In: International conference on machine learning. PMLR, pp 11891\u201311905"},{"key":"1244_CR53","unstructured":"Scialom T, Dray P-A, Staiano J, Lamprier S, Piwowarski B (2021) To beam or not to beam: that is a question of cooperation for language GANs. Adv Neural Inf Process Syst 34:26585\u201326597"},{"key":"1244_CR54","doi-asserted-by":"publisher","unstructured":"Leblond R, Alayrac J-B, Sifre L, Pislar M, Jean-Baptiste L, Antonoglou I, Simonyan K, Vinyals O (2021) Machine translation decoding beyond beam search. 
In: Proceedings of the 2021 conference on empirical methods in natural language processing. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, pp 8410\u20138434. https:\/\/doi.org\/10.18653\/v1\/2021.emnlp-main.662","DOI":"10.18653\/v1\/2021.emnlp-main.662"},{"key":"1244_CR55","doi-asserted-by":"crossref","unstructured":"Chaffin A, Claveau V, Kijak E (2022) PPL-MCTS: constrained textual generation through discriminator-guided MCTS decoding. In: Proceedings of the 2022 conference of the North American chapter of the association for computational linguistics: human language technologies, pp 2953\u20132967","DOI":"10.18653\/v1\/2022.naacl-main.215"},{"key":"1244_CR56","doi-asserted-by":"publisher","unstructured":"Chaffin A, Scialom T, Lamprier S, Staiano J, Piwowarski B, Kijak E, Claveau, V (2022) Which discriminator for cooperative text generation? In: Proceedings of the 45th international ACM SIGIR conference on research and development in information retrieval. SIGIR \u201922. Association for Computing Machinery, New York, pp 2360\u20132365. https:\/\/doi.org\/10.1145\/3477495.3531858","DOI":"10.1145\/3477495.3531858"},{"key":"1244_CR57","doi-asserted-by":"crossref","unstructured":"Shen C, Cheng L, Bing L, You Y, Si L (2022) SentBS: sentence-level beam search for controllable summarization. arXiv preprint arXiv:2210.14502","DOI":"10.18653\/v1\/2022.emnlp-main.699"},{"key":"1244_CR58","unstructured":"Scialom T, Dray P-A, Lamprier S, Piwowarski B, Staiano J (2020) Discriminative adversarial search for abstractive summarization. In: Proceedings of the 37th international conference on machine learning. PMLR, pp 8555\u20138564"},{"key":"1244_CR59","doi-asserted-by":"crossref","unstructured":"Krause B, Gotmare AD, McCann B, Keskar NS, Joty S, Socher R, Rajani NF (2021) GEDI: generative discriminator guided sequence generation. 
In: Findings of the association for computational linguistics: EMNLP 2021, pp 4929\u20134952","DOI":"10.18653\/v1\/2021.findings-emnlp.424"},{"key":"1244_CR60","doi-asserted-by":"crossref","unstructured":"Liu A, Sap M, Lu X, Swayamdipta S, Bhagavatula C, Smith NA, Choi Y (2021) Dexperts: decoding-time controlled text generation with experts and anti-experts. In: Proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing, pp 6691\u20136706","DOI":"10.18653\/v1\/2021.acl-long.522"},{"key":"1244_CR61","doi-asserted-by":"crossref","unstructured":"Yang K, Klein D (2021) Fudge: controlled text generation with future discriminators. In: Proceedings of the 2021 conference of the North American chapter of the association for computational linguistics: human language technologies, pp 3511\u20133535","DOI":"10.18653\/v1\/2021.naacl-main.276"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01244-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01244-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01244-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,3,30]],"date-time":"2024-03-30T15:15:08Z","timestamp":1711811708000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01244-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,29]]},"references-count":61,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2024,4]]}},"alternative-id":["1244"],
"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01244-8","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,29]]},"assertion":[{"value":"21 September 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 September 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 September 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors hereby declare that there are no conflicts of interest associated with this study.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}