{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,12,24]],"date-time":"2024-12-24T05:06:52Z","timestamp":1735016812896,"version":"3.32.0"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643685694","type":"electronic"}],"license":[{"start":{"date-parts":[[2024,12,20]],"date-time":"2024-12-20T00:00:00Z","timestamp":1734652800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2024,12,20]]},"abstract":"<jats:p>To address the limitations in flexibility and efficiency of existing prompting paradigms for generating intermediate reasoning steps, this paper proposes a reasoning framework, LLM-AS, which combines the A* search algorithm with the reasoning process of large language models (LLMs). LLM-AS exploits the efficient exploration capability of the A* algorithm and avoids redundant exploration of high-cost nodes, which significantly improves search efficiency and reduces the cost of invoking the LLM. Meanwhile, through the self-improvement mechanism of LLMs, LLM-AS ensures the quality of the generated solutions while minimizing model interactions. In addition, the flexibility of the A* search algorithm makes LLM-AS applicable to diverse thought organization structures, providing more possibilities for handling various tasks. We conducted experiments on two complex tasks, Game of 24 and 8-puzzle, comparing the accuracy of existing prompting paradigms and LLM-AS on both gpt-3.5-turbo and gpt-4.0.
The experimental results show that LLM-AS effectively improves the ability of LLMs to solve complex tasks.<\/jats:p>","DOI":"10.3233\/faia241464","type":"book-chapter","created":{"date-parts":[[2024,12,23]],"date-time":"2024-12-23T09:49:31Z","timestamp":1734947371000},"source":"Crossref","is-referenced-by-count":0,"title":["LLM-AS: A Self-Improve LLM Reasoning Framework Integrated with A* Heuristics Algorithm"],"prefix":"10.3233","author":[{"ORCID":"https:\/\/orcid.org\/0009-0006-1003-3684","authenticated-orcid":false,"given":"Xueqi","family":"Meng","sequence":"first","affiliation":[{"name":"School of Computer Science, Beijing University of Posts and Telecommunications, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1877-5982","authenticated-orcid":false,"given":"Kun","family":"Niu","sequence":"additional","affiliation":[{"name":"School of Computer Science, Beijing University of Posts and Telecommunications, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-2181-0084","authenticated-orcid":false,"given":"Xiao","family":"Chen","sequence":"additional","affiliation":[{"name":"School of Computer Science, Beijing University of Posts and Telecommunications, China"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","Fuzzy Systems and Data Mining X"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA241464","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,12,23]],"date-time":"2024-12-23T09:49:32Z","timestamp":1734947372000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA241464"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,20]]},"ISBN":["9781643685694"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia241464","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,12,20]]}}}