{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T07:20:05Z","timestamp":1770967205246,"version":"3.50.1"},"reference-count":34,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,1,24]],"date-time":"2024-01-24T00:00:00Z","timestamp":1706054400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,1,24]],"date-time":"2024-01-24T00:00:00Z","timestamp":1706054400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61473155"],"award-info":[{"award-number":["61473155"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Intell Robot Syst"],"published-print":{"date-parts":[[2024,3]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Task-oriented robot learning has shown significant potential with the development of Reinforcement Learning (RL) algorithms. However, the learning of long-horizon tasks for robots remains a formidable challenge due to the inherent complexity of tasks, typically comprising multiple diverse stages. Universal RL algorithms commonly encounter issues such as slow convergence or even failure to converge altogether when applied to such tasks. The reasons behind these challenges lie in the local optima trap or redundant exploration during the new stages or the junction of two continuous stages. To address these challenges, we propose a novel state-dependent maximum entropy (SDME) reinforcement learning algorithm. 
This algorithm effectively balances the trade-off between exploration and exploitation around three kinds of critical states arising from the unique nature of long-horizon tasks. We conducted experiments within an open-source simulation environment, focusing on two representative long-horizon tasks. The proposed SDME algorithm exhibits faster and more stable learning capabilities, requiring merely one-third of the number of learning samples necessary for baseline approaches. Furthermore, we assess the generalization ability of our method under randomly initialized conditions, and the results show that the success rate of the SDME algorithm is nearly twice that of the baselines. Our code will be available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/Peter-zds\/SDME\">https:\/\/github.com\/Peter-zds\/SDME<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s10846-024-02049-8","type":"journal-article","created":{"date-parts":[[2024,1,24]],"date-time":"2024-01-24T07:02:07Z","timestamp":1706079727000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["State-Dependent Maximum Entropy Reinforcement Learning for Robot Long-Horizon Task Learning"],"prefix":"10.1007","volume":"110","author":[{"given":"Deshuai","family":"Zheng","sequence":"first","affiliation":[]},{"given":"Jin","family":"Yan","sequence":"additional","affiliation":[]},{"given":"Tao","family":"Xue","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0138-3720","authenticated-orcid":false,"given":"Yong","family":"Liu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,1,24]]},"reference":[{"key":"2049_CR1","first-page":"26183","volume":"34","author":"Y Fang","year":"2021","unstructured":"Fang, Y., Liao, B., Wang, X., Fang, J., Qi, J., Wu, R., Niu, J., Liu, W.: You only look at one 
sequence: Rethinking transformer in vision through object detection. Adv. Neural. Inf. Process. Syst. 34, 26183\u201326197 (2021)","journal-title":"Adv. Neural. Inf. Process. Syst."},{"key":"2049_CR2","volume-title":"Data-driven control of hydraulic servo actuator: An event-triggered adaptive dynamic programming approach","author":"V Djordjevic","year":"2023","unstructured":"Djordjevic, V., Tao, H., Song, X., He, S., Gao, W., Stojanovi\u0107, V.: Data-driven control of hydraulic servo actuator: An event-triggered adaptive dynamic programming approach. MBE, Mathematical biosciences and engineering (2023)"},{"key":"2049_CR3","doi-asserted-by":"crossref","unstructured":"Wang, X., Girdhar, R., Yu, S.X., Misra, I.: Cut and learn for unsupervised object detection and instance segmentation. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 3124\u20133134 (2023)","DOI":"10.1109\/CVPR52729.2023.00305"},{"key":"2049_CR4","doi-asserted-by":"crossref","unstructured":"Stojanovi\u0107, V.: Fault-tolerant control of a hydraulic servo actuator via adaptive dynamic programming. Mathematical Modelling and Control (2023)","DOI":"10.3934\/mmc.2023016"},{"issue":"1","key":"2049_CR5","doi-asserted-by":"publisher","first-page":"329","DOI":"10.1109\/TCYB.2021.3091680","volume":"53","author":"O Tutsoy","year":"2023","unstructured":"Tutsoy, O., Barkana, D.E., Balikci, K.: A novel exploration-exploitation-based adaptive law for intelligent model-free control approaches. IEEE Trans. Cybernet. 53(1), 329\u2013337 (2023). https:\/\/doi.org\/10.1109\/TCYB.2021.3091680","journal-title":"IEEE Trans. Cybernet."},{"key":"2049_CR6","doi-asserted-by":"crossref","unstructured":"Zhuang, Z., Tao, H., Chen, Y., Stojanovic, V., Paszke, W.: An optimal iterative learning control approach for linear systems with nonuniform trial lengths under input constraints. IEEE Trans. Syst. Man. Cybernet. Syst. 
(2022)","DOI":"10.1109\/TSMC.2022.3225381"},{"key":"2049_CR7","doi-asserted-by":"crossref","unstructured":"Quillen, D., Jang, E., Nachum, O., Finn, C., Ibarz, J., Levine, S.: Deep reinforcement learning for vision-based robotic grasping: A simulated comparative evaluation of off-policy methods. In: 2018 IEEE International conference on robotics and automation (ICRA), IEEE, pp. 6284\u20136291 (2018)","DOI":"10.1109\/ICRA.2018.8461039"},{"key":"2049_CR8","unstructured":"Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., Quillen, D., Holly, E., Kalakrishnan, M., Vanhoucke, V., et al.: Qt-opt: Scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293 (2018)"},{"issue":"2\u20133","key":"2049_CR9","doi-asserted-by":"publisher","first-page":"202","DOI":"10.1177\/0278364919872545","volume":"39","author":"K Fang","year":"2020","unstructured":"Fang, K., Zhu, Y., Garg, A., Kurenkov, A., Mehta, V., Fei-Fei, L., Savarese, S.: Learning task-oriented grasping for tool manipulation from simulated self-supervision. The International Journal of Robotics Research 39(2\u20133), 202\u2013216 (2020)","journal-title":"The International Journal of Robotics Research"},{"key":"2049_CR10","unstructured":"Nair, A., Pong, V., Dalal, M., Bahl, S., Lin, S., Levine, S.: Visual reinforcement learning with imagined goals. arXiv preprint arXiv:1807.04742 (2018)"},{"key":"2049_CR11","doi-asserted-by":"crossref","unstructured":"Xu, D., Nair, S., Zhu, Y., Gao, J., Garg, A., Fei-Fei, L., Savarese, S.: Neural task programming: Learning to generalize across hierarchical tasks. In: 2018 IEEE International conference on robotics and automation (ICRA) (2017)","DOI":"10.1109\/ICRA.2018.8460689"},{"key":"2049_CR12","doi-asserted-by":"crossref","unstructured":"Tremblay, J., To, T., Molchanov, A., Tyree, S., Kautz, J., Birchfield, S.: Synthetically trained neural networks for learning human-readable plans from real-world demonstrations. 
In: 2018 IEEE International conference on robotics and automation (ICRA), IEEE, pp. 5659\u20135666 (2018)","DOI":"10.1109\/ICRA.2018.8460642"},{"key":"2049_CR13","doi-asserted-by":"crossref","unstructured":"Huang, D.-A., Nair, S., Xu, D., Zhu, Y., Garg, A., Fei-Fei, L., Savarese, S., Niebles, J.C.: Neural task graphs: Generalizing to unseen tasks from a single video demonstration. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 8565\u20138574 (2019)","DOI":"10.1109\/CVPR.2019.00876"},{"key":"2049_CR14","unstructured":"Yu, T., Quillen, D., He, Z., Julian, R., Hausman, K., Finn, C., Levine, S.: Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In: Conference on robot learning (CoRL) (2019). arXiv:1910.10897"},{"key":"2049_CR15","unstructured":"Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing atari with deep reinforcement learning. Comput. Sci. (2013)"},{"key":"2049_CR16","unstructured":"Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust region policy optimization. In: International conference on machine learning, PMLR, pp. 1889\u20131897 (2015)"},{"key":"2049_CR17","unstructured":"Lillicrap, T.P., Hunt, J.J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., Wierstra, D.: Continuous control with deep reinforcement learning, (2015). arXiv:1509.02971"},{"issue":"2","key":"2049_CR18","first-page":"41","volume":"16","author":"B Abed-Alguni","year":"2018","unstructured":"Abed-Alguni, B., Ottom, M.A.: Double delayed q-learning. International Journal of Artificial Intelligence 16(2), 41\u201359 (2018)","journal-title":"International Journal of Artificial Intelligence"},{"key":"2049_CR19","unstructured":"Haarnoja, T., Zhou, A., Abbeel, P., Levine, S.: Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In: International Conference on Machine Learning, PMLR, pp. 
1861\u20131870 (2018)"},{"key":"2049_CR20","unstructured":"Ho, J., Ermon, S.: Generative adversarial imitation learning. Adv. Neural Inform. Process. Syst. 29 (2016)"},{"key":"2049_CR21","unstructured":"Ng, A.Y., Russell, S., et al: Algorithms for inverse reinforcement learning. In: Icml, vol.1, p. 2 (2000)"},{"key":"2049_CR22","doi-asserted-by":"crossref","unstructured":"Abolghasemi, P., Mazaheri, A., Shah, M., Boloni, L.: Pay attention!-robustifying a deep visuomotor policy through task-focused visual attention. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp. 4254\u20134262 (2019)","DOI":"10.1109\/CVPR.2019.00438"},{"key":"2049_CR23","doi-asserted-by":"crossref","unstructured":"Mohseni-Kabir, A., Rich, C., Chernova, S., Sidner, C.L., Miller, D.: Interactive hierarchical task learning from a single demonstration. In: Proceedings of the tenth annual ACM\/IEEE international conference on human-robot interaction, pp. 205\u2013212 (2015)","DOI":"10.1145\/2696454.2696474"},{"issue":"8","key":"2049_CR24","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9(8), 1735\u20131780 (1997). https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735","journal-title":"Neural Comput."},{"key":"2049_CR25","doi-asserted-by":"crossref","unstructured":"Hundt, A., Killeen, B., Greene, N., Wu, H., Kwon, H., Paxton, C., Hager, G.D.: good robot! : Efficient reinforcement learning for multi-step visual tasks with sim to real transfer. IEEE Robot. Autom. Lett. 5(4), 6724\u20136731 (2020)","DOI":"10.1109\/LRA.2020.3015448"},{"key":"2049_CR26","doi-asserted-by":"crossref","unstructured":"Li, Z., Sun, Z., Su, J., Zhang, J.: Learning a skill-sequence-dependent policy for long-horizon manipulation tasks. 
In: 2021 IEEE 17th International conference on automation science and engineering (CASE), IEEE, pp. 1229\u20131234 (2021)","DOI":"10.1109\/CASE49439.2021.9551399"},{"key":"2049_CR27","doi-asserted-by":"crossref","unstructured":"Strudel, R., Pashevich, A., Kalevatykh, I., Laptev, I., Sivic, J., Schmid, C.: Learning to combine primitive skills: A step towards versatile robotic manipulation. In: 2020 IEEE International conference on robotics and automation (ICRA), IEEE, pp. 4637\u20134643 (2020)","DOI":"10.1109\/ICRA40945.2020.9196619"},{"key":"2049_CR28","doi-asserted-by":"crossref","unstructured":"Wu, B., Xu, F., He, Z., Gupta, A., Allen, P.K.: Squirl: Robust and efficient learning from video demonstration of long-horizon robotic manipulation tasks. In: 2020 IEEE\/RSJ International conference on intelligent robots and systems (IROS), IEEE, pp. 9720\u20139727 (2020)","DOI":"10.1109\/IROS45743.2020.9340915"},{"issue":"6","key":"2049_CR29","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3272127.3275048","volume":"37","author":"A Clegg","year":"2018","unstructured":"Clegg, A., Yu, W., Tan, J., Liu, C.K., Turk, G.: Learning to dress: Synthesizing human dressing motion via deep reinforcement learning. ACM Transactions on Graphics (TOG) 37(6), 1\u201310 (2018)","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"2049_CR30","unstructured":"Lee, Y., Sun, S.-H., Somasundaram, S., Hu, E.S., Lim, J.J.: Composing complex skills by learning transition policies. In: International conference on learning representations (2018)"},{"key":"2049_CR31","unstructured":"Lee, Y., Lim, J.J., Anandkumar, A., Zhu, Y.: Adversarial skill chaining for long-horizon robot manipulation via terminal state regularization, (2021). arXiv:2111.07999"},{"key":"2049_CR32","unstructured":"Schulman, J., Chen, X., Abbeel, P.: Equivalence between policy gradients and soft q-learning, (2017). 
arXiv:1704.06440"},{"key":"2049_CR33","unstructured":"Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. (2017). arXiv:1707.06347"},{"key":"2049_CR34","doi-asserted-by":"crossref","unstructured":"Zheng, D., Yan, J., Xue, T., Liu, Y.: A knowledge-based task planning approach for robot multi-task manipulation. Complex & Intell. Syst. pp. 1\u201314 (2023)","DOI":"10.1007\/s40747-023-01155-8"}],"container-title":["Journal of Intelligent &amp; Robotic Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10846-024-02049-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10846-024-02049-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10846-024-02049-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,4,1]],"date-time":"2024-04-01T02:18:12Z","timestamp":1711937892000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10846-024-02049-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,24]]},"references-count":34,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,3]]}},"alternative-id":["2049"],"URL":"https:\/\/doi.org\/10.1007\/s10846-024-02049-8","relation":{},"ISSN":["0921-0296","1573-0409"],"issn-type":[{"value":"0921-0296","type":"print"},{"value":"1573-0409","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,24]]},"assertion":[{"value":"14 February 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 December 
2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"24 January 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Informed consent was obtained from all individual participants included in the study.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to participate"}},{"value":"The participant has consented to the submission of the case report to the journal.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent to Publish"}},{"value":"The authors have no relevant financial or non-financial interests to disclose.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing Interests"}}],"article-number":"19"}}