{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,6]],"date-time":"2026-04-06T16:53:28Z","timestamp":1775494408302,"version":"3.50.1"},"reference-count":48,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,7,9]],"date-time":"2020-07-09T00:00:00Z","timestamp":1594252800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springer.com\/tdm"},{"start":{"date-parts":[[2020,7,9]],"date-time":"2020-07-09T00:00:00Z","timestamp":1594252800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springer.com\/tdm"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int. J. Mach. Learn. &amp; Cyber."],"published-print":{"date-parts":[[2021,1]]},"DOI":"10.1007\/s13042-020-01167-7","type":"journal-article","created":{"date-parts":[[2020,7,9]],"date-time":"2020-07-09T22:16:32Z","timestamp":1594332992000},"page":"231-241","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":73,"title":["Multi-agent reinforcement learning for redundant robot control in task-space"],"prefix":"10.1007","volume":"12","author":[{"given":"Adolfo","family":"Perrusqu\u00eda","sequence":"first","affiliation":[]},{"given":"Wen","family":"Yu","sequence":"additional","affiliation":[]},{"given":"Xiaoou","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,7,9]]},"reference":[{"issue":"23","key":"1167_CR1","doi-asserted-by":"publisher","first-page":"5570","DOI":"10.1177\/1077546318758800","volume":"24","author":"S Ahmadi","year":"2018","unstructured":"Ahmadi S, Fateh M (2018) Task-space asymptotic tracking control of robots using a direct adaptive Taylor series controller. J Vib Control 24(23):5570\u20135584. 
https:\/\/doi.org\/10.1177\/1077546318758800","journal-title":"J Vib Control"},{"key":"1167_CR2","doi-asserted-by":"publisher","unstructured":"Ansari Y, Falotico E (2016) A multiagent reinforcement learning approach for inverse kinematics of high dimensional manipulators with precision positioning. In: 6th IEEE RAS\/EMBS international conference on biomedical robotics and biomechatronics (BioRob). https:\/\/doi.org\/10.1109\/BIOROB.2016.7523669","DOI":"10.1109\/BIOROB.2016.7523669"},{"issue":"2","key":"1167_CR3","doi-asserted-by":"publisher","first-page":"562","DOI":"10.1109\/TMECH.2018.2806918","volume":"23","author":"S Atashzar","year":"2018","unstructured":"Atashzar S, Tavakoli M, Patel R (2018) A computational-model-based study of supervised haptics-enabled therapist-in-the-loop training for upper-limb poststroke robotic rehabilitation. IEEE\/ASME Trans Mechatron 23(2):562\u2013574. https:\/\/doi.org\/10.1109\/TMECH.2018.2806918","journal-title":"IEEE\/ASME Trans Mechatron"},{"issue":"2","key":"1167_CR4","doi-asserted-by":"publisher","first-page":"978","DOI":"10.1109\/TMECH.2018.2800285","volume":"23","author":"D Axinte","year":"2018","unstructured":"Axinte D, Dong X, Palmer D, Rushworth A, Guzman S, Olarra A (2018) Miror-miniaturized robotic systems for holistic in-situ repair and maintenance works in restrained and hazardous environments. IEEE\/ASME Trans Mechatron 23(2):978\u2013981. https:\/\/doi.org\/10.1109\/TMECH.2018.2800285","journal-title":"IEEE\/ASME Trans Mechatron"},{"key":"1167_CR5","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2011.6094666","author":"B B\u00f3csi","year":"2011","unstructured":"B\u00f3csi B, Nguyen-Tuong D, Csat\u00f3 L, Sch\u00f6lkopf B, Peters J (2011) Learning inverse kinematics with structured prediction. IEEE\/RSJ Int Conf Intell Robots Syst. 
https:\/\/doi.org\/10.1109\/IROS.2011.6094666","journal-title":"IEEE\/RSJ Int Conf Intell Robots Syst"},{"key":"1167_CR6","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2010.5650243","author":"S Bitzer","year":"2010","unstructured":"Bitzer S, Howard M, Vijayakumar S (2010) Using dimensionality reduction to exploit constraints in reinforcement learning. IEEE\/RSJ Int Conf Intell Robots Syst (IROS). https:\/\/doi.org\/10.1109\/IROS.2010.5650243","journal-title":"IEEE\/RSJ Int Conf Intell Robots Syst (IROS)"},{"key":"1167_CR7","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-14435-6_7","volume-title":"Innovations in multi-agent systems and applications\u20141. Studies in computational intelligence. Lecture notes in computer science","author":"L Bu\u015foniu","year":"2010","unstructured":"Bu\u015foniu L, Bab\u00fbska R, De Schutter B (2010) Multi-agent reinforcement learning: an overview. In: Srinivasan D, Jain L (eds) Innovations in multi-agent systems and applications\u20141. Studies in computational intelligence. Lecture notes in computer science, vol 310. Springer, Berlin. https:\/\/doi.org\/10.1007\/978-3-642-14435-6_7"},{"key":"1167_CR8","volume-title":"Reinforcement learning and dynamic programming using function approximators. Automation and Control Engineering Series","author":"L Bu\u015foniu","year":"2010","unstructured":"Bu\u015foniu L, Bab\u00fbska R, De Schutter B, Ernst D (2010) Reinforcement learning and dynamic programming using function approximators. Automation and Control Engineering Series. CRC Press, Boca Raton"},{"key":"1167_CR9","doi-asserted-by":"publisher","DOI":"10.1109\/ICRA.2011.5979932","author":"C Cheah","year":"2011","unstructured":"Cheah C, Li X (2011) Singularity-robust task-space tracking control of robot. IEEE Int Conf Robot Autom. 
https:\/\/doi.org\/10.1109\/ICRA.2011.5979932","journal-title":"IEEE Int Conf Robot Autom"},{"key":"1167_CR10","doi-asserted-by":"publisher","unstructured":"Csiszar A, Eilers J, Verl A (2017) On solving the inverse kinematics problem using neural networks. In: 24th international conference on mechatronics and machine vision in practice. https:\/\/doi.org\/10.1109\/M2VIP.2017.8211457","DOI":"10.1109\/M2VIP.2017.8211457"},{"key":"1167_CR11","unstructured":"Deisenroth M, Rasmussen C (2011) PILCO: A model-based and data-efficient approach to policy search. In: Proceedings of the 28th international conference on machine learning, Bellevue, WA, USA"},{"issue":"1\u20132","key":"1167_CR12","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1561\/2300000021","volume":"2","author":"MP Deisenroth","year":"2011","unstructured":"Deisenroth MP, Neumann G, Peters J (2011) A survey on policy search for robotics. Found Trends Robot 2(1\u20132):1\u2013142. https:\/\/doi.org\/10.1561\/2300000021","journal-title":"Found Trends Robot"},{"key":"1167_CR13","doi-asserted-by":"publisher","unstructured":"Duka A (2014) Neural network based inverse kinematics solution for trajectory tracking of a robotic arm. In: Procedia technology, the 7th international conference interdisciplinarity in engineering, INTER-ENG 2013. Petru Maior University of Tirgu Mures, Romania. https:\/\/doi.org\/10.1016\/j.protcy.2013.12.451","DOI":"10.1016\/j.protcy.2013.12.451"},{"issue":"3","key":"1167_CR14","doi-asserted-by":"publisher","first-page":"459","DOI":"10.15837\/ijccc.2012.3.1387","volume":"7","author":"Y Feng","year":"2012","unstructured":"Feng Y, Yao-nan W, Yi-min Y (2012) Inverse kinematics solution for robot manipulator based on neural network under joint subspace. Int J Comput Commun Control 7(3):459\u2013472. 
https:\/\/doi.org\/10.15837\/ijccc.2012.3.1387","journal-title":"Int J Comput Commun Control"},{"key":"1167_CR15","doi-asserted-by":"publisher","first-page":"165","DOI":"10.1016\/j.automatica.2016.01.025","volume":"67","author":"M Galicki","year":"2016","unstructured":"Galicki M (2016) Finite-time trajectory tracking control in task space of robotic manipulators. Automatica 67:165\u2013170. https:\/\/doi.org\/10.1016\/j.automatica.2016.01.025","journal-title":"Automatica"},{"issue":"3","key":"1167_CR16","doi-asserted-by":"publisher","first-page":"547","DOI":"10.1515\/ijame-2016-0033","volume":"21","author":"M Galicki","year":"2016","unstructured":"Galicki M (2016) Robust task space trajectory tracking control of robotic manipulators. Int J Appl Mech Eng 21(3):547\u2013568. https:\/\/doi.org\/10.1515\/ijame-2016-0033","journal-title":"Int J Appl Mech Eng"},{"key":"1167_CR17","doi-asserted-by":"publisher","unstructured":"Grondman I, Bu\u015foniu L, Bab\u00fbska R (2012) Model learning actor-critic algorithms: performance evaluation in a motion control task. In: 51st IEEE conference on decision and control (CDC), pp 5272\u20135277. https:\/\/doi.org\/10.1109\/CDC.2012.6426427","DOI":"10.1109\/CDC.2012.6426427"},{"key":"1167_CR18","doi-asserted-by":"publisher","unstructured":"Grondman I, Vaandrager M, Bu\u015foniu L, Bab\u00fbska R, Schuitema E (2011) Actor-critic control with reference model learning. In: Proceedings of the 18th World congress the international federation of automatic control, pp 14723\u201314728. https:\/\/doi.org\/10.3182\/20110828-6-IT-1002.00759","DOI":"10.3182\/20110828-6-IT-1002.00759"},{"issue":"3","key":"1167_CR19","doi-asserted-by":"publisher","first-page":"291","DOI":"10.1109\/TSMCB.2011.2170565","volume":"42","author":"I Grondman","year":"2012","unstructured":"Grondman I, Vaandrager M, Bu\u015foniu L, Bab\u00fbska R, Schuitema E (2012a) Efficient model learning methods for actor-critic control. 
IEEE Trans Syst Man Cybern B Cybern 42(3):291\u2013602. https:\/\/doi.org\/10.1109\/TSMCB.2011.2170565","journal-title":"IEEE Trans Syst Man Cybern B Cybern"},{"issue":"1","key":"1167_CR20","doi-asserted-by":"publisher","first-page":"88","DOI":"10.1109\/TMECH.2018.2878228","volume":"24","author":"P Hyatt","year":"2019","unstructured":"Hyatt P (2019) Configuration estimation for accurate position control of large-scale soft robots. IEEE\/ASME Trans Mechatron 24(1):88\u201399. https:\/\/doi.org\/10.1109\/TMECH.2018.2878228","journal-title":"IEEE\/ASME Trans Mechatron"},{"issue":"6","key":"1167_CR21","doi-asserted-by":"publisher","first-page":"1185","DOI":"10.1162\/neco.1994.6.6.1185","volume":"6","author":"TMJ Jaakola","year":"1994","unstructured":"Jaakola TMJ, Singh S (1994) On the convergence of stochastic iterative dynamic programming algorithms. Neural Comput 6(6):1185\u20131201. https:\/\/doi.org\/10.1162\/neco.1994.6.6.1185","journal-title":"Neural Comput"},{"issue":"11","key":"1167_CR22","doi-asserted-by":"publisher","first-page":"1238","DOI":"10.1007\/978-3-319-03194-1_2","volume":"32","author":"J Kober","year":"2013","unstructured":"Kober J, Bagnell J, Peters J (2013) Reinforcement learning in robotics: a survey. Int J Robot Res 32(11):1238\u20131274. https:\/\/doi.org\/10.1007\/978-3-319-03194-1_2","journal-title":"Int J Robot Res"},{"issue":"6","key":"1167_CR23","doi-asserted-by":"publisher","first-page":"76","DOI":"10.1109\/MCS.2012.2214134","volume":"32","author":"F Lewis","year":"2012","unstructured":"Lewis F, Vrabie D, Vamvoudakis K (2012) Reinforcement learning and feedback control: using natural decision methods to design optimal adaptive controllers. IEEE Control Syst Mag 32(6):76\u2013105. 
https:\/\/doi.org\/10.1109\/MCS.2012.2214134","journal-title":"IEEE Control Syst Mag"},{"issue":"1","key":"1167_CR24","doi-asserted-by":"publisher","first-page":"155","DOI":"10.1109\/3477.907575","volume":"31","author":"L Luya","year":"2001","unstructured":"Luya L, Gruver W, Zhang Q, Yang Z (2001) Kinematic control of redundant robots and the motion optimizability measure. IEEE Trans Syst Man Cybern Part B Cybern 31(1):155\u2013160. https:\/\/doi.org\/10.1109\/3477.907575","journal-title":"IEEE Trans Syst Man Cybern Part B Cybern"},{"issue":"6","key":"1167_CR25","doi-asserted-by":"publisher","first-page":"2996","DOI":"10.1109\/TMECH.2015.2418793","volume":"20","author":"Y Moon","year":"2015","unstructured":"Moon Y, Seo J, Choi J (2015) Development of new end-effector for proof-of-concept of fully robotic multichannel biopsy. IEEE\/ASME Trans Mechatron 20(6):2996\u20133008. https:\/\/doi.org\/10.1109\/TMECH.2015.2418793","journal-title":"IEEE\/ASME Trans Mechatron"},{"key":"1167_CR26","doi-asserted-by":"publisher","DOI":"10.1007\/b93979","volume-title":"Control of redundant manipulators: theory and experiments","author":"R Patel","year":"2005","unstructured":"Patel R, Shadpey F (2005) Control of redundant manipulators: theory and experiments. Springer, Berlin. https:\/\/doi.org\/10.1007\/b93979"},{"key":"1167_CR27","doi-asserted-by":"publisher","first-page":"271","DOI":"10.1007\/s10846-019-01058-2","volume":"97","author":"A Perrusqu\u00eda","year":"2020","unstructured":"Perrusqu\u00eda A, Yu W (2020) Human-in-the-loop control using euler angles. J Intell Robot Syst 97:271\u2013285. https:\/\/doi.org\/10.1007\/s10846-019-01058-2","journal-title":"J Intell Robot Syst"},{"key":"1167_CR28","doi-asserted-by":"publisher","DOI":"10.1080\/01969722.2020.1758466","author":"A Perrusqu\u00eda","year":"2020","unstructured":"Perrusqu\u00eda A, Yu W (2020) Robot position\/force control in unknown environment using hybrid reinforcement learning. Cybern Syst. 
https:\/\/doi.org\/10.1080\/01969722.2020.1758466","journal-title":"Cybern Syst"},{"key":"1167_CR29","doi-asserted-by":"publisher","unstructured":"Perrusqu\u00eda A, Yu W, Soria A (2019) Large space dimension reinforcement learning for robot position\/force discrete control. In: 2019 6th international conference on control, decision and information technologies (CoDIT 2019), Paris, France. https:\/\/doi.org\/10.1109\/CoDIT.2019.8820575","DOI":"10.1109\/CoDIT.2019.8820575"},{"key":"1167_CR30","doi-asserted-by":"publisher","unstructured":"Perrusqu\u00eda A, Yu W, Soria A (2019) Optimal contact force in unknown environments using reinforcement learning and model-free controllers. In: 16th international conference on electrical engineering, computing science and automatic control (CCE), Mexico city, Mexico. https:\/\/doi.org\/10.1109\/ICEEE.2019.8884518","DOI":"10.1109\/ICEEE.2019.8884518"},{"issue":"2","key":"1167_CR31","doi-asserted-by":"publisher","first-page":"267","DOI":"10.1108\/IR-10-2018-0209","volume":"46","author":"A Perrusqu\u00eda","year":"2019","unstructured":"Perrusqu\u00eda A, Yu W, Soria A (2019) Position\/force control of robots manipulators using reinforcement learning. Ind Robot Int J Robot Res Appl 46(2):267\u2013280. https:\/\/doi.org\/10.1108\/IR-10-2018-0209","journal-title":"Ind Robot Int J Robot Res Appl"},{"issue":"7","key":"1167_CR32","doi-asserted-by":"publisher","first-page":"2920","DOI":"10.1002\/rnc.4911","volume":"30","author":"A Perrusqui\u00eda","year":"2020","unstructured":"Perrusqui\u00eda A, Yu W (2020) Robust control under worst-case uncertainty for unknown nonlinear systems using modified reinforcement learning. Int J Robust Nonlinear Control 30(7):2920\u20132936. 
https:\/\/doi.org\/10.1002\/rnc.4911","journal-title":"Int J Robust Nonlinear Control"},{"issue":"6","key":"1167_CR33","doi-asserted-by":"publisher","first-page":"1147","DOI":"10.1109\/TNNLS.2013.2287890","volume":"25","author":"M Rolf","year":"2014","unstructured":"Rolf M, Steil J (2014) Efficient exploratory learning of inverse kinematics on a bionic elephant trunk. IEEE Trans Neural Netw Learn Syst 25(6):1147\u20131160. https:\/\/doi.org\/10.1109\/TNNLS.2013.2287890","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"key":"1167_CR34","unstructured":"Schulman J, Wolski F, Klimov O (2017) Proximal policy optimization algorithms. arXiv:1707.06347"},{"key":"1167_CR35","unstructured":"Silver D, Lever G, Hess N, Degris T, Wierstra D, Riedmiller M (2014) Deterministic policy gradient algorithms. In: Proceedings of the 31st international conference on machine learning, Beijing, China, vol 32, pp 387\u2013395"},{"key":"1167_CR36","doi-asserted-by":"publisher","DOI":"10.1109\/TFUZZ.2020.2965890","author":"K Sun","year":"2020","unstructured":"Sun K, Liu L, Qiu J, Feng G (2020) Fuzzy adaptive finite-time fault tolerant control for strict-feedback nonlinear systems. IEEE Trans Fuzzy Syst. https:\/\/doi.org\/10.1109\/TFUZZ.2020.2965890","journal-title":"IEEE Trans Fuzzy Syst"},{"key":"1167_CR37","doi-asserted-by":"publisher","DOI":"10.1109\/TFUZZ.2020.2979129","author":"K Sun","year":"2020","unstructured":"Sun K, Qiu J, Karimi H, Fu Y (2020) Event- triggered robust fuzzy adaptive finite-time control of nonlinear systems with prescribed performance. IEEE Trans Fuzzy Syst. https:\/\/doi.org\/10.1109\/TFUZZ.2020.2979129","journal-title":"IEEE Trans Fuzzy Syst"},{"key":"1167_CR38","volume-title":"Reinforcement learning: an introduction","author":"RAB Sutton","year":"1998","unstructured":"Sutton RAB (1998) Reinforcement learning: an introduction. 
MIT Press, Cambridge"},{"key":"1167_CR39","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-03040-6\\_125","author":"T Tamei","year":"2009","unstructured":"Tamei T, Shibata T (2009) Policy gradient learning of cooperative interaction with a robot using user\u2019s biological signals. Int Conf Neural Inf Process (ICONIP). https:\/\/doi.org\/10.1007\/978-3-642-03040-6_125","journal-title":"Int Conf Neural Inf Process (ICONIP)"},{"key":"1167_CR40","doi-asserted-by":"publisher","DOI":"10.1109\/ROBOT.2010.5509336","author":"E Theodorou","year":"2010","unstructured":"Theodorou E, Buchli J, Schaal S (2010) Reinforcement learning of motor skills in high dimensions: a path integral approach. IEEE Int Conf Robot Autom (ICRA). https:\/\/doi.org\/10.1109\/ROBOT.2010.5509336","journal-title":"IEEE Int Conf Robot Autom (ICRA)"},{"key":"1167_CR41","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2011.6094428","author":"D Tuong","year":"2011","unstructured":"Tuong D, Peters J (2011) Learning task-space tracking control with kernels. IEEE\/RSJ Int Conf Intell Robots Syst. https:\/\/doi.org\/10.1109\/IROS.2011.6094428","journal-title":"IEEE\/RSJ Int Conf Intell Robots Syst"},{"key":"1167_CR42","doi-asserted-by":"publisher","unstructured":"Wiering MA, van Hasselt H (2007) Two novel on-policy reinforcement learning algorithms based on TD($$\\lambda$$)-method. In: Proceedings of the 2007 IEEE symposium on approximate dynamic programming and reinforcement learning (ADPRL). https:\/\/doi.org\/10.1109\/ADPRL.2007.368200","DOI":"10.1109\/ADPRL.2007.368200"},{"key":"1167_CR43","doi-asserted-by":"publisher","unstructured":"Wiering MA, van Hasselt H (2009) The QV family compared to other reinforcement learning algorithms. In: 2009 IEEE symposium on adaptive dynamic programming and reinforcement learning. 
https:\/\/doi.org\/10.1109\/ADPRL.2009.4927532","DOI":"10.1109\/ADPRL.2009.4927532"},{"issue":"1","key":"1167_CR44","doi-asserted-by":"publisher","first-page":"160","DOI":"10.1109\/TRA.2003.820932","volume":"20","author":"B Xian","year":"2004","unstructured":"Xian B, de Queiroz M, Dawson D, Walker I (2004) Task-space tracking control of robots manipulators via quaternion feedback. IEEE Trans Robot Autom 20(1):160\u2013167. https:\/\/doi.org\/10.1109\/TRA.2003.820932","journal-title":"IEEE Trans Robot Autom"},{"key":"1167_CR45","doi-asserted-by":"publisher","DOI":"10.1007\/s12369-019-00579-y","author":"W Yu","year":"2019","unstructured":"Yu W, Perrusqu\u00eda A (2019) Simplified stable admittance control using end-effector orientations. Int J Soc Robot. https:\/\/doi.org\/10.1007\/s12369-019-00579-y","journal-title":"Int J Soc Robot"},{"key":"1167_CR46","doi-asserted-by":"publisher","DOI":"10.3390\/robotics6040023","author":"D Zhang","year":"2017","unstructured":"Zhang D, Wei B (2017) On the development of learning control for robotic manipulators. Robotics. https:\/\/doi.org\/10.3390\/robotics6040023","journal-title":"Robotics"},{"issue":"4","key":"1167_CR47","doi-asserted-by":"publisher","first-page":"1359","DOI":"10.1109\/TNNLS.2017.2651402","volume":"29","author":"Y Zheng","year":"2017","unstructured":"Zheng Y, Ma J, Wang L (2017) Consensus of hybrid multi-agent systems. IEEE Trans Neural Netw Learn Syst 29(4):1359\u20131365. https:\/\/doi.org\/10.1109\/TNNLS.2017.2651402","journal-title":"IEEE Trans Neural Netw Learn Syst"},{"issue":"12","key":"1167_CR48","doi-asserted-by":"publisher","first-page":"2012","DOI":"10.1109\/TCSII.2018.2811803","volume":"65","author":"Y Zhu","year":"2018","unstructured":"Zhu Y, Li S, Ma J, Zheng Y (2018) Bipartite consensus in networks of agents with antagonistic interactions and quantization. IEEE Trans Circuits Syst II Express Briefs 65(12):2012\u20132016. 
https:\/\/doi.org\/10.1109\/TCSII.2018.2811803","journal-title":"IEEE Trans Circuits Syst II Express Briefs"}],"container-title":["International Journal of Machine Learning and Cybernetics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s13042-020-01167-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s13042-020-01167-7\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s13042-020-01167-7.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,7,9]],"date-time":"2021-07-09T00:38:02Z","timestamp":1625791082000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s13042-020-01167-7"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,9]]},"references-count":48,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,1]]}},"alternative-id":["1167"],"URL":"https:\/\/doi.org\/10.1007\/s13042-020-01167-7","relation":{},"ISSN":["1868-8071","1868-808X"],"issn-type":[{"value":"1868-8071","type":"print"},{"value":"1868-808X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,7,9]]},"assertion":[{"value":"27 February 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 June 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 July 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}