{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T13:21:03Z","timestamp":1773926463181,"version":"3.50.1"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,7,18]],"date-time":"2023-07-18T00:00:00Z","timestamp":1689638400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,18]],"date-time":"2023-07-18T00:00:00Z","timestamp":1689638400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61473155"],"award-info":[{"award-number":["61473155"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,2]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Task planning is a crucial component in facilitating robot multi-task manipulations. Language-based task planning methods offer practicality in receiving commands from humans in real-life scenarios and require only low-cost labeled data. However, existing methods often rely on sequence models for planning, which primarily focus on mapping language to sequences of sub-tasks while neglecting the knowledge about tasks and objects. To overcome these limitations, we propose a knowledge-based task planning approach called Recurrent Graph Convolutional Network (RGCN). It is devised with a novel structure that combines GCN (Kipf and Welling in International Conference on Learning Representations (ICLR), 2017) and LSTM (Hochreiter and Schmidhuber in Neural Comput 9 (8): 1735-1780, 1997. 
<jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735\">https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735<\/jats:ext-link>), which enables it to leverage knowledge graph data and historical predictions. The experimental results demonstrate that our approach achieves an impressive task planning success rate of <jats:inline-formula><jats:alternatives><jats:tex-math>$${95.7\\%}$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mrow>\n                    <mml:mn>95.7<\/mml:mn>\n                    <mml:mo>%<\/mml:mo>\n                  <\/mml:mrow>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula>, significantly surpassing the best baseline method, which achieves <jats:inline-formula><jats:alternatives><jats:tex-math>$${78.7\\%}$$<\/jats:tex-math><mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\">\n                  <mml:mrow>\n                    <mml:mn>78.7<\/mml:mn>\n                    <mml:mo>%<\/mml:mo>\n                  <\/mml:mrow>\n                <\/mml:math><\/jats:alternatives><\/jats:inline-formula>. Furthermore, we evaluate the performance of multi-task manipulation across a specific set of 20 tasks within a simulated environment. Notably, RGCN combined with pre-trained primitive tasks exhibits the highest success rate compared with state-of-the-art multi-task learning methods. 
Our method proves effective for language-conditioned task planning and is well suited to instructing robots in multi-task manipulation.<\/jats:p>","DOI":"10.1007\/s40747-023-01155-8","type":"journal-article","created":{"date-parts":[[2023,7,18]],"date-time":"2023-07-18T06:02:09Z","timestamp":1689660129000},"page":"193-206","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["A knowledge-based task planning approach for robot multi-task manipulation"],"prefix":"10.1007","volume":"10","author":[{"given":"Deshuai","family":"Zheng","sequence":"first","affiliation":[]},{"given":"Jin","family":"Yan","sequence":"additional","affiliation":[]},{"given":"Tao","family":"Xue","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4098-2339","authenticated-orcid":false,"given":"Yong","family":"Liu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,18]]},"reference":[{"key":"1155_CR1","unstructured":"Kipf TN, Welling M (2017) Semi-supervised classification with graph convolutional networks. In: International conference on learning representations (ICLR)"},{"key":"1155_CR2","doi-asserted-by":"publisher","unstructured":"Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Comput 9 (8): 1735\u20131780. https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735","DOI":"10.1162\/neco.1997.9.8.1735"},{"key":"1155_CR3","first-page":"26183","volume":"34","author":"Y Fang","year":"2021","unstructured":"Fang Y, Liao B, Wang X, Fang J, Qi J, Wu R, Niu J, Liu W (2021) You only look at one sequence: rethinking transformer in vision through object detection. 
Adv Neural Inf Process Syst 34:26183\u201326197","journal-title":"Adv Neural Inf Process Syst"},{"key":"1155_CR4","doi-asserted-by":"publisher","unstructured":"Cheng P, Wang H, Stojanovic V, Liu F, He S, Shi K (2022) Dissipativity-based finite-time asynchronous output feedback control for wind turbine system via a hidden Markov model. Int J Syst Sci 53(15):3177\u20133189. https:\/\/doi.org\/10.1080\/00207721.2022.2076171","DOI":"10.1080\/00207721.2022.2076171"},{"issue":"18","key":"1155_CR5","doi-asserted-by":"publisher","first-page":"10139","DOI":"10.1002\/rnc.6354","volume":"32","author":"C Zhou","year":"2022","unstructured":"Zhou C, Tao H, Chen Y, Stojanovic V, Paszke W (2022) Robust point-to-point iterative learning control for constrained systems: a minimum energy approach. Int J Robust Nonlinear Control 32(18):10139\u201310161","journal-title":"Int J Robust Nonlinear Control"},{"key":"1155_CR6","doi-asserted-by":"crossref","unstructured":"Wang X, Girdhar R, Yu SX, Misra I (2023) Cut and learn for unsupervised object detection and instance segmentation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 3124\u20133134","DOI":"10.1109\/CVPR52729.2023.00305"},{"issue":"5","key":"1155_CR7","doi-asserted-by":"publisher","first-page":"8561","DOI":"10.3934\/mbe.2023376","volume":"20","author":"V Djordjevic","year":"2023","unstructured":"Djordjevic V, Tao H, Song X, He S, Gao W, Stojanovic V (2023) Data-driven control of hydraulic servo actuator: an event-triggered adaptive dynamic programming approach. Math Biosci Eng 20(5):8561\u20138582","journal-title":"Math Biosci Eng"},{"key":"1155_CR8","doi-asserted-by":"crossref","unstructured":"Abolghasemi P, Boloni L (2020) Accept synthetic objects as real: end-to-end training of attentive deep visuomotor policies for manipulation in clutter. 
In: 2020 IEEE International conference on robotics and automation (ICRA)","DOI":"10.1109\/ICRA40945.2020.9197552"},{"issue":"2","key":"1155_CR9","doi-asserted-by":"publisher","first-page":"1454","DOI":"10.1016\/j.jfranklin.2022.11.004","volume":"360","author":"H Tao","year":"2023","unstructured":"Tao H, Qiu J, Chen Y, Stojanovic V, Cheng L (2023) Unsupervised cross-domain rolling bearing fault diagnosis based on time-frequency information fusion. J Frankl Inst 360(2):1454\u20131477. https:\/\/doi.org\/10.1016\/j.jfranklin.2022.11.004","journal-title":"J Frankl Inst"},{"key":"1155_CR10","doi-asserted-by":"crossref","unstructured":"Abolghasemi P, Mazaheri A, Shah M, Boloni L (2019) Pay attention!\u2014Robustifying a deep visuomotor policy through task-focused visual attention. In: 2019 IEEE\/CVF conference on computer vision and pattern recognition (CVPR)","DOI":"10.1109\/CVPR.2019.00438"},{"issue":"1","key":"1155_CR11","doi-asserted-by":"publisher","first-page":"329","DOI":"10.1109\/TCYB.2021.3091680","volume":"53","author":"O Tutsoy","year":"2023","unstructured":"Tutsoy O, Barkana DE, Balikci K (2023) A novel exploration-exploitation-based adaptive law for intelligent model-free control approaches. IEEE Trans Cybern 53(1):329\u2013337. https:\/\/doi.org\/10.1109\/TCYB.2021.3091680","journal-title":"IEEE Trans Cybern"},{"key":"1155_CR12","doi-asserted-by":"crossref","unstructured":"Gu S, Holly E, Lillicrap T, Levine S (2017) Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. In: 2017 IEEE International conference on robotics and automation (ICRA). IEEE, pp. 3389\u20133396","DOI":"10.1109\/ICRA.2017.7989385"},{"issue":"1","key":"1155_CR13","first-page":"1334","volume":"17","author":"S Levine","year":"2016","unstructured":"Levine S, Finn C, Darrell T, Abbeel P (2016) End-to-end training of deep visuomotor policies. 
J Mach Learn Res 17(1):1334\u20131373","journal-title":"J Mach Learn Res"},{"key":"1155_CR14","doi-asserted-by":"crossref","unstructured":"Singh A, Yang L, Hartikainen K, Finn C, Levine S (2019) End-to-end robotic reinforcement learning without reward engineering. arXiv preprint arXiv:1904.07854","DOI":"10.15607\/RSS.2019.XV.073"},{"key":"1155_CR15","doi-asserted-by":"crossref","unstructured":"Quillen D, Jang E, Nachum O, Finn C, Ibarz J, Levine S (2018) Deep reinforcement learning for vision-based robotic grasping: a simulated comparative evaluation of off-policy methods. In: 2018 IEEE international conference on robotics and automation (ICRA). IEEE, pp. 6284\u20136291","DOI":"10.1109\/ICRA.2018.8461039"},{"key":"1155_CR16","unstructured":"Kalashnikov D, Irpan A, Pastor P, Ibarz J, Herzog A, Jang E, Quillen D, Holly E, Kalakrishnan M, Vanhoucke V, et\u00a0al (2018) Qt-opt: scalable deep reinforcement learning for vision-based robotic manipulation. arXiv preprint arXiv:1806.10293"},{"issue":"2\u20133","key":"1155_CR17","doi-asserted-by":"publisher","first-page":"202","DOI":"10.1177\/0278364919872545","volume":"39","author":"K Fang","year":"2020","unstructured":"Fang K, Zhu Y, Garg A, Kurenkov A, Mehta V, Fei-Fei L, Savarese S (2020) Learning task-oriented grasping for tool manipulation from simulated self-supervision. Int J Robot Res 39(2\u20133):202\u2013216","journal-title":"Int J Robot Res"},{"key":"1155_CR18","unstructured":"Nair A, Pong V, Dalal M, Bahl S, Lin S, Levine S (2018) Visual reinforcement learning with imagined goals. arXiv preprint arXiv:1807.04742"},{"key":"1155_CR19","doi-asserted-by":"crossref","unstructured":"Jansen PA (2020) Visually-grounded planning without vision: language models infer detailed plans from high-level instructions. 
arXiv preprint arXiv:2009.14259","DOI":"10.18653\/v1\/2020.findings-emnlp.395"},{"key":"1155_CR20","unstructured":"Min SY, Chaplot DS, Ravikumar PK, Bisk Y, Salakhutdinov R (2022) FILM: following instructions in language with modular methods. In: International conference on learning representations. https:\/\/openreview.net\/forum?id=qI4542Y2s1D"},{"key":"1155_CR21","doi-asserted-by":"crossref","unstructured":"Zhang Y, Chai J (2021) Hierarchical task learning from language instructions with unified transformers and self-monitoring. In: Findings of the association for computational linguistics: ACL-IJCNLP 2021, pp. 4202\u20134213","DOI":"10.18653\/v1\/2021.findings-acl.368"},{"key":"1155_CR22","unstructured":"Blukis V, Paxton C, Fox D, Garg A, Artzi Y (2022) A persistent spatial semantic representation for high-level natural language instruction execution. In: Conference on robot learning. PMLR, pp. 706\u2013717"},{"key":"1155_CR23","doi-asserted-by":"crossref","unstructured":"Shridhar M, Thomason J, Gordon D, Bisk Y, Han W, Mottaghi R, Zettlemoyer L, Fox D (2020) Alfred: a benchmark for interpreting grounded instructions for everyday tasks. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 10740\u201310749","DOI":"10.1109\/CVPR42600.2020.01075"},{"key":"1155_CR24","unstructured":"Kenton JDM-WC, Toutanova LK (2019) Bert: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of naacL-HLT, vol 1, p 2"},{"key":"1155_CR25","unstructured":"Radford A, Narasimhan K, Salimans T, Sutskever I, et\u00a0al (2018) Improving language understanding by generative pre-training"},{"key":"1155_CR26","doi-asserted-by":"crossref","unstructured":"Lewis M, Liu Y, Goyal N, Ghazvininejad M, Mohamed A, Levy O, Stoyanov V, Zettlemoyer L (2020) Bart: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. 
In: Proceedings of the 58th annual meeting of the association for computational linguistics, pp 7871\u20137880","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"1155_CR27","unstructured":"Brohan A, Chebotar Y, Finn C, Hausman K, Herzog A, Ho D, Ibarz J, Irpan A, Jang E, Julian R (2023) Do as i can, not as i say: grounding language in robotic affordances. In: Conference on robot learning. PMLR, pp. 287\u2013318"},{"key":"1155_CR28","unstructured":"Huang W, Abbeel P, Pathak D, Mordatch I (2022) Language models as zero-shot planners: extracting actionable knowledge for embodied agents. In: International conference on machine learning. PMLR, pp 9118\u20139147"},{"key":"1155_CR29","unstructured":"Huang W, Xia F, Xiao T, Chan H, Liang J, Florence P, Zeng A, Tompson J, Mordatch I, Chebotar Y, et\u00a0al (2022) Inner monologue: embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608"},{"key":"1155_CR30","unstructured":"Zeng A, Wong A, Welker S, Choromanski K, Tombari F, Purohit A, Ryoo M, Sindhwani V, Lee J, Vanhoucke V, et\u00a0al (2022) Socratic models: composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598"},{"key":"1155_CR31","doi-asserted-by":"crossref","unstructured":"Lin K, Agia C, Migimatsu T, Pavone M, Bohg J (2023) Text2motion: From natural language instructions to feasible plans. arXiv preprint arXiv:2303.12153","DOI":"10.1007\/s10514-023-10131-7"},{"key":"1155_CR32","first-page":"5998","volume":"30","author":"A Vaswani","year":"2017","unstructured":"Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser \u0141, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30:5998\u20136008","journal-title":"Adv Neural Inf Process Syst"},{"key":"1155_CR33","unstructured":"Yu T, Quillen D, He Z, Julian R, Hausman K, Finn C, Levine S (2019) Meta-world: a benchmark and evaluation for multi-task and meta reinforcement learning. 
In: Conference on robot learning (CoRL)"},{"key":"1155_CR34","first-page":"4767","volume":"33","author":"R Yang","year":"2020","unstructured":"Yang R, Xu H, Wu Y, Wang X (2020) Multi-task reinforcement learning with soft modularization. Adv Neural Inf Process Syst 33:4767\u20134777","journal-title":"Adv Neural Inf Process Syst"},{"key":"1155_CR35","doi-asserted-by":"crossref","unstructured":"Kumra S, Joshi S, Sahin F (2020) Antipodal robotic grasping using generative residual convolutional neural network. In: 2020 IEEE\/RSJ international conference on intelligent robots and systems (IROS). IEEE, pp 9626\u20139633","DOI":"10.1109\/IROS45743.2020.9340777"},{"key":"1155_CR36","unstructured":"Murali A, Liu W, Marino K, Chernova S, Gupta A (2020) Same object, different grasps: data and semantic knowledge for task-oriented grasping. arXiv preprint arXiv:2011.06431"},{"key":"1155_CR37","doi-asserted-by":"crossref","unstructured":"Ni P, Zhang W, Zhu X, Cao Q (2020) Pointnet++ grasping: Learning an end-to-end spatial grasp generation algorithm from sparse point clouds. In: 2020 IEEE international conference on robotics and automation (ICRA). IEEE, pp 3619\u20133625","DOI":"10.1109\/ICRA40945.2020.9196740"},{"key":"1155_CR38","unstructured":"Mnih V, Kavukcuoglu K, Silver D, Graves A, Antonoglou I, Wierstra D, Riedmiller M (2013) Playing atari with deep reinforcement learning. In: Computer science"},{"key":"1155_CR39","unstructured":"Lillicrap TP, Hunt JJ, Pritzel A, Heess N, Erez T, Tassa Y, Silver D, Wierstra D (2015) Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971"},{"key":"1155_CR40","unstructured":"Schulman J, Wolski F, Dhariwal P, Radford A, Klimov O (2017) Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347"},{"key":"1155_CR41","unstructured":"Haarnoja T, Zhou A, Abbeel P, Levine S (2018) Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. 
In: International conference on machine learning. PMLR, pp 1861\u20131870"},{"key":"1155_CR42","doi-asserted-by":"crossref","unstructured":"Pinto L, Gupta A (2017) Learning to push by grasping: Using multiple tasks for effective learning. In: 2017 IEEE international conference on robotics and automation (ICRA). IEEE, pp. 2161\u20132168","DOI":"10.1109\/ICRA.2017.7989249"},{"key":"1155_CR43","doi-asserted-by":"crossref","unstructured":"Huang D-A, Nair S, Xu D, Zhu Y, Garg A, Fei-Fei L, Savarese S, Niebles JC (2019) Neural task graphs: Generalizing to unseen tasks from a single video demonstration. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 8565\u20138574","DOI":"10.1109\/CVPR.2019.00876"},{"key":"1155_CR44","doi-asserted-by":"crossref","unstructured":"Xu D, Nair S, Zhu Y, Gao J, Garg A, Fei-Fei L, Savarese S (2017) Neural task programming: learning to generalize across hierarchical tasks. In: 2018 IEEE international conference on robotics and automation (ICRA)","DOI":"10.1109\/ICRA.2018.8460689"},{"key":"1155_CR45","doi-asserted-by":"crossref","unstructured":"Tremblay J, To T, Molchanov A, Tyree S, Kautz J, Birchfield S (2018) Synthetically trained neural networks for learning human-readable plans from real-world demonstrations. In: 2018 IEEE International conference on robotics and automation (ICRA). IEEE, pp 5659\u20135666","DOI":"10.1109\/ICRA.2018.8460642"},{"key":"1155_CR46","doi-asserted-by":"crossref","unstructured":"Strudel R, Pashevich A, Kalevatykh I, Laptev I, Sivic J, Schmid C (2020) Learning to combine primitive skills: a step towards versatile robotic manipulation. In: 2020 IEEE international conference on robotics and automation (ICRA). 
IEEE, pp 4637\u20134643","DOI":"10.1109\/ICRA40945.2020.9196619"},{"issue":"4","key":"1155_CR47","doi-asserted-by":"publisher","first-page":"6724","DOI":"10.1109\/LRA.2020.3015448","volume":"5","author":"A Hundt","year":"2020","unstructured":"Hundt A, Killeen B, Greene N, Wu H, Kwon H, Paxton C, Hager GD (2020) \u201cgood robot!\u2019\u2019: efficient reinforcement learning for multi-step visual tasks with sim to real transfer. IEEE Robot Autom Lett 5(4):6724\u20136731","journal-title":"IEEE Robot Autom Lett"},{"key":"1155_CR48","doi-asserted-by":"crossref","unstructured":"Li Z, Sun Z, Su J, Zhang J (2021) Learning a skill-sequence-dependent policy for long-horizon manipulation tasks. In: 2021 IEEE 17th International conference on automation science and engineering (CASE). IEEE, pp 1229\u20131234","DOI":"10.1109\/CASE49439.2021.9551399"},{"key":"1155_CR49","unstructured":"Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I, et\u00a0al. Language models are unsupervised multitask learners"},{"key":"1155_CR50","unstructured":"Chowdhery A, Narang S, Devlin J, Bosma M, Mishra G, Roberts A, Barham P, Chung HW, Sutton C, Gehrmann S, et\u00a0al (2022) Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311"},{"key":"1155_CR51","first-page":"27730","volume":"35","author":"L Ouyang","year":"2022","unstructured":"Ouyang L, Wu J, Jiang X, Almeida D, Wainwright C, Mishkin P, Zhang C, Agarwal S, Slama K, Ray A (2022) Training language models to follow instructions with human feedback. Adv Neural Inf Process Syst 35:27730\u201327744","journal-title":"Adv Neural Inf Process Syst"},{"key":"1155_CR52","doi-asserted-by":"crossref","unstructured":"Speer R, Chin J, Havasi C (2017) Conceptnet 5.5: An open multilingual graph of general knowledge. 
In: Thirty-first AAAI conference on artificial intelligence","DOI":"10.1609\/aaai.v31i1.11164"},{"issue":"11","key":"1155_CR53","doi-asserted-by":"publisher","first-page":"39","DOI":"10.1145\/219717.219748","volume":"38","author":"A Miller George","year":"1995","unstructured":"Miller George A (1995) Wordnet: a lexical database for English. Commun ACM 38(11):39\u201341","journal-title":"Commun ACM"},{"key":"1155_CR54","unstructured":"Clark K, Luong M-T, Le QV, Manning CD (2020) Electra: pre-training text encoders as discriminators rather than generators. arXiv preprint arXiv:2003.10555"},{"key":"1155_CR55","unstructured":"Zhang S, Roller S, Goyal N, Artetxe M, Chen M, Chen S, Dewan C, Diab M, Li X, Lin XV, et\u00a0al (2022) Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068"},{"key":"1155_CR56","unstructured":"Holtzman A, Buys J, Du L, Forbes M, Choi Y (2019) The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751"}],"container-title":["Complex &amp; Intelligent 
Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01155-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-023-01155-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-023-01155-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,10]],"date-time":"2024-02-10T22:13:06Z","timestamp":1707603186000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-023-01155-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,18]]},"references-count":56,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["1155"],"URL":"https:\/\/doi.org\/10.1007\/s40747-023-01155-8","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,7,18]]},"assertion":[{"value":"17 April 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 June 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"18 July 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}