{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T10:25:57Z","timestamp":1775816757260,"version":"3.50.1"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643684369","type":"print"},{"value":"9781643684376","type":"electronic"}],"license":[{"start":{"date-parts":[[2023,9,28]],"date-time":"2023-09-28T00:00:00Z","timestamp":1695859200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023,9,28]]},"abstract":"<jats:p>Model-based methods are a promising way to improve the sample efficiency of reinforcement learning, as many explorations and evaluations can happen in the learned models, saving real-world samples. However, when the learned model has non-negligible error, sequential steps in the model are hard to evaluate accurately, limiting the model\u2019s utilization. This paper proposes to alleviate this issue by introducing multi-step plans into policy optimization for model-based RL. We employ multi-step plan value estimation, which evaluates the expected discounted return after executing a sequence of action plans at a given state, and update the policy by directly computing the multi-step policy gradient via plan value estimation. The new model-based reinforcement learning algorithm MPPVE (Model-based Planning Policy Learning with Multi-step Plan Value Estimation) makes better use of the learned model and achieves better sample efficiency than state-of-the-art model-based RL approaches. The code is available at https:\/\/github.com\/HxLyn3\/MPPVE.<\/jats:p>","DOI":"10.3233\/faia230427","type":"book-chapter","created":{"date-parts":[[2023,9,29]],"date-time":"2023-09-29T09:13:02Z","timestamp":1695978782000},"source":"Crossref","is-referenced-by-count":6,"title":["Model-Based Reinforcement Learning with Multi-Step Plan Value Estimation"],"prefix":"10.3233","author":[{"given":"Haoxin","family":"Lin","sequence":"first","affiliation":[{"name":"National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China"},{"name":"Polixir Technologies, Nanjing, Jiangsu, China"}]},{"given":"Yihao","family":"Sun","sequence":"additional","affiliation":[{"name":"National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China"}]},{"given":"Jiaji","family":"Zhang","sequence":"additional","affiliation":[{"name":"National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China"}]},{"given":"Yang","family":"Yu","sequence":"additional","affiliation":[{"name":"National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, Jiangsu, China"},{"name":"Polixir Technologies, Nanjing, Jiangsu, China"},{"name":"Peng Cheng Laboratory, Shenzhen, Guangdong, China"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2023"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA230427","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,9,29]],"date-time":"2023-09-29T09:13:03Z","timestamp":1695978783000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA230427"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,28]]},"ISBN":["9781643684369","9781643684376"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia230427","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,28]]}}}