{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,11]],"date-time":"2026-03-11T11:49:18Z","timestamp":1773229758182,"version":"3.50.1"},"reference-count":50,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2025,12,26]],"date-time":"2025-12-26T00:00:00Z","timestamp":1766707200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,1,29]],"date-time":"2026-01-29T00:00:00Z","timestamp":1769644800000},"content-version":"vor","delay-in-days":34,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"The National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["No.52065010"],"award-info":[{"award-number":["No.52065010"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100001809","name":"The National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["No.52165063"],"award-info":[{"award-number":["No.52165063"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J. King Saud Univ. Comput. Inf. Sci."],"published-print":{"date-parts":[[2026,3]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>To address the challenges of high-dimensional control in dual-arm collaborative tasks, the complexity of multi-stage decision-making, and the limitations of traditional Proximal Policy Optimization (PPO) algorithms due to their single constraint mechanism, which results in policy bias and insufficient convergence efficiency, this paper proposes a dual-arm collaborative control method based on an improved Proximal Policy Optimization algorithm. 
Based on deep reinforcement learning, the state space and action space of the dual-arm system are first defined, and a perception-decision-update closed-loop interaction mechanism is constructed. Subsequently, a Hierarchical Constrained Hybrid Proximal Policy Optimization algorithm (HCH-PPO) is proposed, which designs a dual-timescale hierarchical policy, establishes dynamic hybrid constraints, and incorporates an adaptive parameter adjustment mechanism. While maintaining the efficiency of Proximal Policy Optimization (PPO), the algorithm introduces Trust Region Policy Optimization (TRPO) to enhance the stability of the optimization process and the policy exploration capability. This hierarchical optimization framework effectively enables efficient state-to-action mapping learning. Finally, experimental results demonstrate that, compared to traditional PPO, the proposed method achieves a 56.82% improvement in convergence speed and a 12% increase in task success rate in dual-arm collaborative grasping and placing tasks, indicating significant performance enhancement.<\/jats:p>","DOI":"10.1007\/s44443-025-00371-1","type":"journal-article","created":{"date-parts":[[2025,12,26]],"date-time":"2025-12-26T10:35:13Z","timestamp":1766745313000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["A dual-arm cooperative control method based on improved proximal policy 
optimization"],"prefix":"10.1007","volume":"38","author":[{"given":"Man","family":"Su","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6747-7864","authenticated-orcid":false,"given":"Qingni","family":"Yuan","sequence":"additional","affiliation":[]},{"given":"Pengju","family":"Qu","sequence":"additional","affiliation":[]},{"given":"Chao","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Yinjiang","family":"Zhou","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,12,26]]},"reference":[{"key":"371_CR1","doi-asserted-by":"crossref","unstructured":"Adkins J, Bowling M, White A (2024) A method for evaluating hyperparameter sensitivity in reinforcement learning. arXiv - CS - Machine Learning, arxiv-2412.07165","DOI":"10.52202\/079017-3964"},{"key":"371_CR2","doi-asserted-by":"publisher","DOI":"10.1109\/msp.2017.2743240","author":"K Arulkumaran","year":"2017","unstructured":"Arulkumaran K, Deisenroth MP, Brundage M et al (2017) Deep reinforcement learning: a brief survey. IEEE Signal Process Mag. https:\/\/doi.org\/10.1109\/msp.2017.2743240","journal-title":"IEEE Signal Process Mag"},{"key":"371_CR3","doi-asserted-by":"publisher","unstructured":"Calderon-Cordova C, Sarango R, Castillo D et al (2024) A deep reinforcement learning framework for control of robotic manipulators in simulated environments. IEEE Access. https:\/\/doi.org\/10.1109\/access.2024.3432741","DOI":"10.1109\/access.2024.3432741"},{"key":"371_CR4","doi-asserted-by":"publisher","DOI":"10.1109\/tcyb.2021.3051456","author":"Y Cheng","year":"2021","unstructured":"Cheng Y, Huang L, Wang X (2021) Authentic boundary proximal policy optimization. IEEE Trans Cybern. 
https:\/\/doi.org\/10.1109\/tcyb.2021.3051456","journal-title":"IEEE Trans Cybern"},{"key":"371_CR5","doi-asserted-by":"publisher","DOI":"10.1016\/j.jksuci.2024.102014","author":"Y-J Chiu","year":"2024","unstructured":"Chiu Y-J, Yuan Y-Y, Jian S-R (2024) Design of and research on the robot arm recovery grasping system based on machine vision. J King Saud Univ-Comput Inf Sci. https:\/\/doi.org\/10.1016\/j.jksuci.2024.102014","journal-title":"J King Saud Univ-Comput Inf Sci"},{"key":"371_CR6","doi-asserted-by":"publisher","DOI":"10.1109\/lra.2022.3202358","author":"M Costanzo","year":"2022","unstructured":"Costanzo M, De Maria G, Natale C (2022) Tactile feedback enabling in-hand pivoting and internal force control for dual-arm cooperative object carrying. IEEE Robot Automation Lett. https:\/\/doi.org\/10.1109\/lra.2022.3202358","journal-title":"IEEE Robot Automation Lett"},{"key":"371_CR7","doi-asserted-by":"publisher","unstructured":"Cui Y, Xu Z, Zhong L et al (2024) A task-adaptive deep reinforcement learning framework for dual-arm robot manipulation. IEEE Transactions on Automation Science and Engineering 1\u201314. https:\/\/doi.org\/10.1109\/TASE.2024.3352584","DOI":"10.1109\/TASE.2024.3352584"},{"key":"371_CR8","doi-asserted-by":"publisher","DOI":"10.1109\/tte.2021.3088853","author":"G Du","year":"2021","unstructured":"Du G, Zou Y, Zhang X et al (2021) Heuristic energy management strategy of hybrid electric vehicle based on deep reinforcement learning with accelerated gradient optimization. IEEE Trans Transp Electrif. https:\/\/doi.org\/10.1109\/tte.2021.3088853","journal-title":"IEEE Trans Transp Electrif"},{"key":"371_CR9","unstructured":"Engstrom L, Ilyas A, Santurkar S, et al (2020) Implementation matters in deep policy gradients: a case study on PPO and TRPO. 
arXiv - CS - Machine Learning, arxiv-2005.12729"},{"key":"371_CR10","doi-asserted-by":"publisher","DOI":"10.3390\/biomimetics9090577","author":"JN-C Francisco","year":"2024","unstructured":"Francisco JN-C, Juan GV, Carlos B (2024) Method for bottle opening with a dual-arm robot. Biomimetics. https:\/\/doi.org\/10.3390\/biomimetics9090577","journal-title":"Biomimetics"},{"key":"371_CR11","doi-asserted-by":"publisher","DOI":"10.1016\/j.rcim.2025.102991","author":"D Ge","year":"2025","unstructured":"Ge D, Zhao H, Li D et al (2025) A dual-arm robotic cooperative framework for multiple peg-in-hole assembly of large objects. Robot Computer-Integrated Manufacturing. https:\/\/doi.org\/10.1016\/j.rcim.2025.102991","journal-title":"Robot Computer-Integrated Manufacturing"},{"key":"371_CR12","unstructured":"Guizhe J, Zhuoren L, Bo L, et al (2025) Multi-timescale hierarchical reinforcement learning for unified behavior and control of autonomous driving. arXiv - CS - Robotics, arxiv-2506.23771"},{"key":"371_CR13","doi-asserted-by":"publisher","DOI":"10.1155\/2020\/8896610","author":"B Hu","year":"2020","unstructured":"Hu B, Chen H, Han L et al (2020) Research and ground verification of the force compliance control method for space station manipulator. Int J Aerosp Eng. https:\/\/doi.org\/10.1155\/2020\/8896610","journal-title":"Int J Aerosp Eng"},{"key":"371_CR14","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-023-15361-6","author":"Z Hu","year":"2023","unstructured":"Hu Z, Liu H, Xiong Y et al (2023) Promoting human-AI interaction makes a better adoption of deep reinforcement learning: a real-world application in game industry. Multimedia Tools Appl. 
https:\/\/doi.org\/10.1007\/s11042-023-15361-6","journal-title":"Multimedia Tools Appl"},{"key":"371_CR15","doi-asserted-by":"publisher","DOI":"10.1016\/j.engappai.2023.107130","author":"Z Huang","year":"2023","unstructured":"Huang Z, Liu Q, Zhu F (2023) Hierarchical reinforcement learning with adaptive scheduling for robot control. Eng Appl Artif Intell. https:\/\/doi.org\/10.1016\/j.engappai.2023.107130","journal-title":"Eng Appl Artif Intell"},{"key":"371_CR16","doi-asserted-by":"publisher","DOI":"10.1109\/mra.2023.3262461","author":"D Jiang","year":"2023","unstructured":"Jiang D, Wang H, Lu Y (2023) Mastering the complex assembly task with a dual-arm robot: a novel reinforcement learning method. IEEE Robot Autom Mag. https:\/\/doi.org\/10.1109\/mra.2023.3262461","journal-title":"IEEE Robot Autom Mag"},{"key":"371_CR17","doi-asserted-by":"publisher","DOI":"10.1109\/tmm.2022.3177942","author":"Y-L Jin","year":"2022","unstructured":"Jin Y-L, Ji Z-Y, Zeng D et al (2022) VWP:An efficient DRL-based autonomous driving model. IEEE Trans Multimedia. https:\/\/doi.org\/10.1109\/tmm.2022.3177942","journal-title":"IEEE Trans Multimedia"},{"key":"371_CR18","doi-asserted-by":"publisher","DOI":"10.1109\/access.2020.3018470","author":"X Jing","year":"2020","unstructured":"Jing X, Gao H, Chen Z et al (2020) A recursive dynamic modeling and control for dual-arm manipulator with elastic joints. IEEE Access. https:\/\/doi.org\/10.1109\/access.2020.3018470","journal-title":"IEEE Access"},{"key":"371_CR19","doi-asserted-by":"publisher","DOI":"10.1016\/j.actaastro.2025.01.017","author":"B Li","year":"2025","unstructured":"Li B, Wang Z (2025) Two-stage DRL with hybrid perception of vision and force feedback for lunar construction robotic assembly control. Acta Astronaut. 
https:\/\/doi.org\/10.1016\/j.actaastro.2025.01.017","journal-title":"Acta Astronaut"},{"key":"371_CR20","doi-asserted-by":"publisher","DOI":"10.1109\/tii.2021.3125447","author":"X Li","year":"2021","unstructured":"Li X, Liu H, Dong M (2021) A general framework of motion planning for redundant robot manipulator based on deep reinforcement learning. IEEE Trans Ind Inform. https:\/\/doi.org\/10.1109\/tii.2021.3125447","journal-title":"IEEE Trans Ind Inform"},{"key":"371_CR21","doi-asserted-by":"publisher","DOI":"10.1007\/s41315-024-00413-3","author":"S Li","year":"2025","unstructured":"Li S, Chu Z, Hu Z et al (2025) Deep reinforcement learning with hindsight experience replay for dual-arm robot trajectory planning. Int J Intell Robot Appl. https:\/\/doi.org\/10.1007\/s41315-024-00413-3","journal-title":"Int J Intell Robot Appl"},{"key":"371_CR22","doi-asserted-by":"publisher","DOI":"10.3389\/fnbot.2024.1362359","author":"K Liang","year":"2024","unstructured":"Liang K, Zha F, Guo W et al (2024) Motion planning framework based on dual-agent DDPG method for dual-arm robots guided by human joint angle constraints. Front Neurorobot. https:\/\/doi.org\/10.3389\/fnbot.2024.1362359","journal-title":"Front Neurorobot"},{"key":"371_CR23","doi-asserted-by":"publisher","DOI":"10.3390\/app11041816","author":"L Liu","year":"2021","unstructured":"Liu L, Liu Q, Song Y et al (2021a) A collaborative control method of dual-arm robots based on deep reinforcement learning. Appl Sci. https:\/\/doi.org\/10.3390\/app11041816","journal-title":"Appl Sci"},{"key":"371_CR24","doi-asserted-by":"publisher","DOI":"10.1109\/access.2021.3056670","author":"S Liu","year":"2021","unstructured":"Liu S, Wu J, He J (2021b) Dynamic multichannel sensing in cognitive radio: hierarchical reinforcement learning. IEEE Access. 
https:\/\/doi.org\/10.1109\/access.2021.3056670","journal-title":"IEEE Access"},{"key":"371_CR25","doi-asserted-by":"publisher","DOI":"10.1109\/tase.2023.3288037","author":"X Liu","year":"2023","unstructured":"Liu X, Wang G, Liu Z et al (2023) Hierarchical reinforcement learning integrating with human knowledge for practical robot skill learning in complex multi-stage manipulation. IEEE Trans Autom Sci Eng. https:\/\/doi.org\/10.1109\/tase.2023.3288037","journal-title":"IEEE Trans Autom Sci Eng"},{"key":"371_CR26","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2025.129727","author":"Y Luo","year":"2025","unstructured":"Luo Y, Zhang M, Liu Y et al (2025) Dynamic neural learning for obstacle avoidance of humanoid robot performing cooperative tasks. Neurocomputing. https:\/\/doi.org\/10.1016\/j.neucom.2025.129727","journal-title":"Neurocomputing"},{"key":"371_CR27","doi-asserted-by":"publisher","first-page":"102130","DOI":"10.1016\/j.rcim.2021.102130","volume":"71","author":"A Maldonado-Ramirez","year":"2021","unstructured":"Maldonado-Ramirez A, Rios-Cabrera R, Lopez-Juarez I (2021) A visual path-following learning approach for industrial robots using DRL. Robot Computer-Integrated Manuf 71:102130. https:\/\/doi.org\/10.1016\/j.rcim.2021.102130","journal-title":"Robot Computer-Integrated Manuf"},{"key":"371_CR28","doi-asserted-by":"publisher","DOI":"10.1109\/tnnls.2020.3044196","author":"W Meng","year":"2021","unstructured":"Meng W, Zheng Q, Shi Y et al (2021) An off-policy trust region policy optimization method with monotonic improvement guarantee for deep reinforcement learning. IEEE Transact Neural Networks Learn Syst. https:\/\/doi.org\/10.1109\/tnnls.2020.3044196","journal-title":"IEEE Transact Neural Networks Learn Syst"},{"key":"371_CR29","unstructured":"Mnih V, Kavukcuoglu K, Silver D, et al (2013) Playing atari with deep reinforcement learning. 
CoRR, abs\/1312.5602"},{"key":"371_CR30","doi-asserted-by":"publisher","DOI":"10.1145\/3453160","author":"S Pateria","year":"2021","unstructured":"Pateria S, Subagdja B, Tan A-h et al (2021) Hierarchical reinforcement learning. ACM Comput Surv. https:\/\/doi.org\/10.1145\/3453160","journal-title":"ACM Comput Surv"},{"key":"371_CR31","doi-asserted-by":"publisher","DOI":"10.1007\/s12293-024-00419-1","author":"Y Peng","year":"2024","unstructured":"Peng Y, Chen G, Zhang M et al (2024) Proximal evolutionary strategy: improving deep reinforcement learning through evolutionary policy optimization. Memetic Computing. https:\/\/doi.org\/10.1007\/s12293-024-00419-1","journal-title":"Memetic Computing"},{"key":"371_CR32","doi-asserted-by":"publisher","DOI":"10.1109\/tsg.2022.3155455","author":"T Qian","year":"2022","unstructured":"Qian T, Shao C, Wang X et al (2022) Shadow-price DRL: a framework for online scheduling of shared autonomous EVs fleets. IEEE Trans Smart Grid. https:\/\/doi.org\/10.1109\/tsg.2022.3155455","journal-title":"IEEE Trans Smart Grid"},{"key":"371_CR33","doi-asserted-by":"publisher","DOI":"10.3390\/mi15010112","author":"A Salehi","year":"2024","unstructured":"Salehi A, Hosseinpour S, Tabatabaei N et al (2024) Intelligent navigation of a magnetic microrobot with model-free deep reinforcement learning in a real-world environment. Micromachines. https:\/\/doi.org\/10.3390\/mi15010112","journal-title":"Micromachines"},{"key":"371_CR34","doi-asserted-by":"publisher","DOI":"10.1016\/j.isatra.2024.01.007","author":"E Sayar","year":"2024","unstructured":"Sayar E, Gao X, Hu Y et al (2024) Toward coordinated planning and hierarchical optimization control for highly redundant mobile manipulator. ISA Trans. 
https:\/\/doi.org\/10.1016\/j.isatra.2024.01.007","journal-title":"ISA Trans"},{"key":"371_CR35","doi-asserted-by":"publisher","DOI":"10.1016\/s0004-3702(99)00052-1","author":"RS Sutton","year":"1999","unstructured":"Sutton RS, Precup D, Singh S (1999) Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artif Intell. https:\/\/doi.org\/10.1016\/s0004-3702(99)00052-1","journal-title":"Artif Intell"},{"key":"371_CR36","doi-asserted-by":"publisher","unstructured":"Wang D, Qiu C, Lian J et al (2024) Cooperative control for dual-arm robots based on improved dynamic movement primitives. IEEE Transactions on Industrial Electronics. https:\/\/doi.org\/10.1109\/tie.2024.3406866","DOI":"10.1109\/tie.2024.3406866"},{"key":"371_CR37","doi-asserted-by":"publisher","unstructured":"Wang X, Luo Y, Qin B et al (2022) Power allocation strategy for urban rail HESS based on deep reinforcement learning sequential decision optimization. IEEE Transactions on Transportation Electrification. https:\/\/doi.org\/10.1109\/tte.2022.3227900","DOI":"10.1109\/tte.2022.3227900"},{"key":"371_CR38","doi-asserted-by":"publisher","DOI":"10.1007\/bf00992698","author":"CJCH Watkins","year":"1992","unstructured":"Watkins CJCH, Dayan P (1992) Q-learning. Mach Learn. https:\/\/doi.org\/10.1007\/bf00992698","journal-title":"Mach Learn"},{"key":"371_CR39","doi-asserted-by":"publisher","DOI":"10.1016\/0377-2217(89)90348-2","author":"CC White","year":"1989","unstructured":"White CC, White DJ (1989) Markov decision processes. Eur J Oper Res. https:\/\/doi.org\/10.1016\/0377-2217(89)90348-2","journal-title":"Eur J Oper Res"},{"key":"371_CR40","doi-asserted-by":"publisher","first-page":"26871","DOI":"10.1109\/ACCESS.2021.3056903","volume":"9","author":"CC Wong","year":"2021","unstructured":"Wong CC, Chien SY, Feng HM et al (2021) Motion planning for dual-arm robot based on soft actor-critic. IEEE Access 9:26871\u201326885. 
https:\/\/doi.org\/10.1109\/ACCESS.2021.3056903","journal-title":"IEEE Access"},{"key":"371_CR41","doi-asserted-by":"publisher","DOI":"10.3390\/s23010523","author":"D Wu","year":"2023","unstructured":"Wu D, Yu Z, Adili A et al (2023) A self-collision detection algorithm of a dual-manipulator system based on GJK and deep learning. Sensors. https:\/\/doi.org\/10.3390\/s23010523","journal-title":"Sensors"},{"key":"371_CR42","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2023.02.008","author":"H Xu","year":"2023","unstructured":"Xu H, Yan Z, Xuan J et al (2023) Improving proximal policy optimization with alpha divergence. Neurocomputing. https:\/\/doi.org\/10.1016\/j.neucom.2023.02.008","journal-title":"Neurocomputing"},{"key":"371_CR43","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2024.127716","author":"H Xu","year":"2024","unstructured":"Xu H, Xuan J, Zhang G et al (2024) Trust region policy optimization via entropy regularization for Kullback-Leibler divergence constraint. Neurocomputing. https:\/\/doi.org\/10.1016\/j.neucom.2024.127716","journal-title":"Neurocomputing"},{"key":"371_CR44","doi-asserted-by":"publisher","DOI":"10.1109\/tnnls.2021.3059912","author":"X Yang","year":"2021","unstructured":"Yang X, Ji Z, Wu J et al (2021) Hierarchical reinforcement learning with universal policies for multistep robotic manipulation. IEEE Transact Neural Networks Learn Syst. https:\/\/doi.org\/10.1109\/tnnls.2021.3059912","journal-title":"IEEE Transact Neural Networks Learn Syst"},{"key":"371_CR45","doi-asserted-by":"publisher","DOI":"10.3390\/aerospace9120819","author":"S Yang","year":"2022","unstructured":"Yang S, Zhang Y, Chen T et al (2022) Assembly strategy for modular components using a dual-arm space robot with flexible appendages. Aerospace. 
https:\/\/doi.org\/10.3390\/aerospace9120819","journal-title":"Aerospace"},{"key":"371_CR46","doi-asserted-by":"publisher","DOI":"10.3389\/fnbot.2020.00063","author":"J Yu","year":"2020","unstructured":"Yu J, Su Y, Liao Y (2020) The path planning of mobile robot by neural networks and hierarchical reinforcement learning. Front Neurorobot. https:\/\/doi.org\/10.3389\/fnbot.2020.00063","journal-title":"Front Neurorobot"},{"key":"371_CR47","first-page":"24611","volume":"35","author":"C Yu","year":"2022","unstructured":"Yu C, Velu A, Vinitsky E et al (2022) The surprising effectiveness of ppo in cooperative multi-agent games. Adv Neural Inform Process Syst 35:24611\u201324624","journal-title":"Adv Neural Inform Process Syst"},{"key":"371_CR48","doi-asserted-by":"publisher","DOI":"10.1109\/lra.2025.3543137","author":"Z Zhang","year":"2025","unstructured":"Zhang Z, Yang Y, Zuo W et al (2025) Image-based visual servoing for enhanced cooperation of dual-arm manipulation. IEEE Robot Autom Lett. https:\/\/doi.org\/10.1109\/lra.2025.3543137","journal-title":"IEEE Robot Autom Lett"},{"key":"371_CR49","unstructured":"Zhang Q, Guo Z, J\u00f8sang A, et al. (2022) PPO-UE: proximal policy optimization via uncertainty-aware exploration. arXiv - CS - Machine Learning, arxiv-2212.06343"},{"key":"371_CR50","doi-asserted-by":"publisher","DOI":"10.1088\/1742-6596\/2203\/1\/012065","author":"Z Zhizhuo","year":"2022","unstructured":"Zhizhuo Z, Change Z (2022) Simulation of robotic arm grasping control based on proximal policy optimization algorithm. J Phys: Conf Ser. 
https:\/\/doi.org\/10.1088\/1742-6596\/2203\/1\/012065","journal-title":"J Phys: Conf Ser"}],"container-title":["Journal of King Saud University Computer and Information Sciences"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s44443-025-00371-1","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44443-025-00371-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s44443-025-00371-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,10]],"date-time":"2026-03-10T14:39:30Z","timestamp":1773153570000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s44443-025-00371-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,26]]},"references-count":50,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2026,3]]}},"alternative-id":["371"],"URL":"https:\/\/doi.org\/10.1007\/s44443-025-00371-1","relation":{},"ISSN":["1319-1578","2213-1248"],"issn-type":[{"value":"1319-1578","type":"print"},{"value":"2213-1248","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,12,26]]},"assertion":[{"value":"30 July 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 October 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 December 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"All authors confirm that no conflict of interest exits in the submission of 
this manuscript.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"45"}}